Mistaken Identity
Over the past week, numerous excited stories have covered a talk given by James Pavur, an Oxford University researcher and Rhodes Scholar, at the Black Hat conference in Las Vegas. With his girlfriend’s consent, Pavur made 150 subject access requests in her name. In what the BBC called a ‘privacy hack’ until they were shamed into changing the headline, some of the organisations that replied failed to carry out any kind of ID check. Pavur’s pitch is that GDPR is inherently flawed, allowing easy access for identity thieves. The idea has already got the IT vendors circling, and the outraged GDPR-denier Roslyn Layton used the story to describe GDPR as a “cybersecurity/identity theft nightmare”. Pavur’s slides are available on the Black Hat website, but so is a more detailed whitepaper written by him and his girlfriend Casey Knerr, and anyone who has pontificated about the pair’s revelations should really take a look at it.
Much has been made of Pavur’s credentials as an Oxford man, but that doesn’t stop the 10-page document containing errors and misconceptions. The authors claim that Marriott and British Airways have already been fined (they haven’t), and that there are only two reasons to refuse a subject access request (ignoring the existence of exemptions in the Data Protection Act 2018). They use ‘information commissioners’ as a blanket term for regulators across Europe, and believe that the likely outcome of a controller rejecting a SAR from an applicant acting suspiciously would be ‘prosecution’. In the UK and most if not all EU countries, this is legally impossible. At the end, their standard SAR letter cites the Data Protection Act 1998, despite the fact that, in context, any DPA is irrelevant and that particular one was repealed more than a year ago.
Such a list of clangers would be bad (though not necessarily unexpected) in a Register article, but despite presenting their case with a sheen of academic rigour, Pavur and Knerr have some serious misconceptions about how GDPR works. It supposedly offers “unprecedented control” to the applicant, despite their experiment utilising a right that has existed in the UK since 1984. They claim GDPR represents a “sea change” in the way EU residents can control, restrict and understand the use of their personal information, even though most of its rights are limited in some way and are rooted firmly in what went before. They claim that “little attention has been paid to the possibility of request abuse”. I’ve been working on Data Protection since the authors were schoolchildren, and I can say for certain that this claim is completely false. SARs made by third parties, especially with malicious intent, have been a routine concern in the public and private sector for decades. Checking ID is instinctive and routine in many organisations, to the point of being restrictive in some places.
Other assertions suggest a lack of experience of how SARs actually work. Because of the perceived danger of twitchy regulators fining organisations for not immediately answering SARs, the authors claim that “it is therefore fairly risky to fail to provide data in response to a SAR, even for a valid purpose”. This year, the ICO has had to enforce on high-profile organisations for failing to answer SARs (it didn’t fine any of them), and is itself happy to refuse SARs it receives from elderly troublemakers. SARs are routinely ignored and refused, but the authors imagine that nobody ever wants to say no for fear of entirely imaginary consequences.
Pavur and Knerr think that panicking controllers will make a mess of the ID check: “we hypothesized that organisations may be tempted to take shortcuts or be distracted by the scope and complexity of the request”. This ignores three factors. First, for many organisations, a SAR is nothing new, and the people dealing with it will have seen hundreds before. Second, the power advantage is with the controller, often a large organisation ranged against a single applicant (and in the UK, facing a regulator unlikely to act on the basis of one SAR complaint). Third, and most important, they don’t factor in the reality that the ID check takes place *outside* the month: the ICO says that until the ID check is made, the request is not valid and the clock is not ticking. A sense of panic when the request arrives – necessary for the authors’ scenario to work – will only be present in those with little experience, and if you’re telling me that people who don’t understand Data Protection tend to cock it up, I have breaking news about where bears shit.
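To make the clock point concrete, here is a minimal sketch of that timing logic in Python (the sar_deadline helper and its date arithmetic are illustrative assumptions of mine, not anything taken from the whitepaper or from ICO guidance):

```python
import calendar
from datetime import date
from typing import Optional

def sar_deadline(request_received: date, id_verified: Optional[date]) -> Optional[date]:
    """Illustrative sketch: the one-month response period runs from the
    point at which the controller has verified the requester's identity,
    not from the day the letter first lands on the doormat. Returns None
    while identity is unverified, because the clock has not started."""
    if id_verified is None:
        return None  # request not yet valid; there is no deadline to miss
    start = max(request_received, id_verified)
    # Roll forward one calendar month, clamping to the last day of the
    # target month (so 31 January rolls to 28/29 February, not 3 March).
    year = start.year + (start.month // 12)
    month = (start.month % 12) + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(start.day, last_day))

# A request received on 1 August but only verified on 15 August is due
# one month from verification, not one month from receipt.
print(sar_deadline(date(2019, 8, 1), None))               # None: not valid yet
print(sar_deadline(date(2019, 8, 1), date(2019, 8, 15)))  # 2019-09-15
```

On that model, a controller faced with a dubious request has no reason to panic: it can ask for ID first, and the month only begins once it is satisfied.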
Another unrealistic idea is that by asking whether data has been inadvertently exposed in a breach (a notion written into the template request), the authors make the organisation afraid that the applicant has knowledge of some actual breach. “We hypothesised that such a belief might cause organisations to overlook identity verification abnormalities”. I can’t speak for every organisation, but in my experience, a breach heightens awareness of DP issues. Making the organisation think that the applicant has inside knowledge of a breach will make most people dot every ‘i’ and cross every ‘t’. Equally, by suggesting that ID be checked through the unlikely option of a secure online portal, the authors hope to make the organisation feel that it is running out of options, especially because they think the portal would have to be sourced within a month. Once again, this is the wrong way around. An applicant who wants to have their ID checked via such a method would either get a flat no, or the controller could sort the portal out first and then have the month to process the request.
A crucial part of the whitepaper is this statement: “No particularly rigorous methodology was employed to select organisations for this study”. Pavur and Knerr say that the 150 businesses operate mainly in the UK and US, the two countries they’re most familiar with. I’m going to stick my neck out and bet that the majority of the businesses that handed over the data without checking are US-based. Only two of the examples in the paper are definitely UK – a rail operator and a “major UK hotel chain”. Many of the examples are plainly US businesses (they cite them as Fortune 100 companies), and one of the most specific examples of sensitive data that they obtained is a Social Security Number, which can only have come from a US institution of some kind.
If you tell me that a significant number of UK businesses, who have been dealing with SARs since 1984, don’t do proper ID checks, that’s a real concern. If you tell me that it’s mainly US companies, so what? Many US companies reject the application of GDPR out of hand, and I have some sympathy for their position, but it’s ridiculous to expect them to be applying unwelcome foreign legislation effectively. This is the risk that you take when you give your data to a US company that isn’t represented in the UK or EU. Pavur and Knerr haven’t released the names of the organisations that failed to check ID, and until they do, there’s not much in the paper to show that this is a problem in the UK, and a lot to suggest that it’s not.
The potential solutions…