We got 1.6 million students’ Google search histories!

We have fantastic news. Google have given us five years’ worth of Google search histories for 1.6 million UK university students, so we can work on improving their learning. Knowing what students have looked for on Google over the last five years will allow us to model their metacognitive skills and learning styles very accurately, so we can make individualised interventions whenever their everyday Google searches reveal a potential misunderstanding of what we’re trying to teach them.

We’ve promised Google we won’t be using this information for anything else, honest. Our agreement with Google says so, though it also says that third parties (like the students themselves) can’t hold us to anything that’s agreed in there. It’s all very exciting. We have never received data from Google before, never done any learning analytics, heck, we haven’t even looked at the web tracking data of our university’s Moodle virtual learning environment. But we think improving university education is extremely important, and we are so smart and successful that we decided we just needed to go ahead and do this.

Hmmm. Maybe that doesn’t work too well. Let’s try again.

Working with Google to build educational breakthrough technology

We’re very proud to announce a ground-breaking five year partnership with Google.

Google searches do a phenomenal job of providing information for students, but it’s a badly dated technology. Plain searches for key words are still commonly used in most UK universities, but too often such searches don’t meet academic needs because they are constructed far away from the frontline of student understanding.

This slow and outdated technology means that important changes in a student’s academic understanding don’t get brought to the attention of the right lecturer in time to prevent further serious confusion. When that information arrives too late, the consequences for students can be severe, and even fatal. At least ten thousand students a year fail in UK universities through entirely preventable causes, and some 40% of students could avoid needing face-to-face tutorial conversations, if only the right lecturer were able to take the right action sooner.

Our partnership aims to change that, by taking a very different approach to building IT for higher education. Together we are creating world-leading technology, in close collaboration with lecturers themselves, to ensure that the right student information gets to the right academics at the right time, reducing preventable academic failure and underachievement.

The five year partnership will build on the successful year-long joint project to build a smartphone app called Mindflows. The app alerts academic teams as soon as a student’s web searches and test results show that they are at risk of developing acute lecture incomprehension (ALI), providing the academics with the contextual academic information they need to give the right treatment before the student’s condition worsens.

When it’s fully built, we believe that this will cut the time taken to alert academics and peer tutors to students in need from hours to a few seconds. And by freeing up academics’ time from juggling multiple computer- and paper-based student monitoring systems, it should redirect over half a million hours per year away from admin and towards direct student teaching at the University of Kent alone.

Think that will be enough? Revolutionary real-time learning analytics, with a strong hint that academics will barely have to do admin ever again, and some large numbers? Let’s stick the bit about the Google search data, just how much we’re getting, and why, in a FAQ at the bottom. People will understand that improving education is the thing that really matters, right?

PS To be 100% clear, this is only a parody. We did not get search histories or any other data from Google. (I co-signed a letter asking for some RtbF info once, but there wasn’t even a reply.) Google DeepMind did get extensive five-year medical histories of 1.6 million NHS patients, initially without those patients’ knowledge, and so far still without consent and with an incomplete justification, which focuses on security when privacy is the issue. DeepMind has no track record of medical data processing, and no history of responsible data sharing with the NHS. The initial deal left DeepMind’s use of the data largely unconstrained. The text above closely mirrors how the recently announced revised deal (which does commit to a much more constrained use of the data) is being presented to the public. Their contract does say that third parties aren’t actually able to hold Google DeepMind and the NHS Royal Free to those constraints.

So isn’t this an unfair comparison? Aren’t Google search histories far more sensitive than medical data? Maybe, but not from the perspective of current and future data protection legislation, where both would be sensitive personal data. Both can be stigmatising, and both present a possible risk of blackmail or discrimination, e.g. in the workplace, in the price of insurance, or simply in online shopping. And judging from the public’s reactions in recent years, people in the UK actually care more about their medical data than about their internet histories.

Finally: why make such a fuss – isn’t making data-based progress in medicine critical? Yes, of course it is. Medical data are crying out for the kinds of things DeepMind and other big data analysts can do: lots of detailed measurement data with limited understanding of causality, and a massive potential impact. They should want to do AI with NHS data – oddly enough, this deal is presented as excluding AI. Then they could ask to do so properly, not taking shortcuts on patient consent or its replacement procedures.

But even if they go about this the correct way, as a country we do need to ask ourselves some more questions about this. Medical data analytics is an area where the NHS has been quietly privatising its operations for years, partly in reaction to its own infamous IT failures. Should we really privatise the mass analysis of our medical data to the world’s largest data sausage factory? Should we do so in the knowledge that Google operates at the scale of a nation state, but does data surveillance of the world without any checks or balances? Should we accept that the methods and outcomes of analysing this rich (choice word) data become Google’s IP? Even if the answer to all these questions is “Yes”, they should all be asked and answered explicitly and publicly before we hand over large amounts of NHS data to Google.

Not transparent, certainly not accountable: Google and the Right To Be Forgotten

I attended an event on the Right to be Forgotten (RtbF) at City University yesterday, organised to launch journalism prof George Brock’s book on the topic. For those of you who don’t know, RtbF refers to the process following the 2014 Google Spain judgement (Costeja), and involves the removal of specific results from Google searches for a specific person’s name, in limited circumstances. Hence the term “delisting” is actually more appropriate than “RtbF”. Just felt I needed to explain that, as George Brock, despite having written the book, still talked of “taking down URLs” several times.

I missed Brock’s presentation that started the event, arriving during the panel discussion. Missing from the panel was Julia Powles, who was sick – but she did provide her RtbF canon on Twitter.

Peter Barron, EMEA head of communications, represented Google. He made two major points that I feel need further rebuttal – in addition to very valid points already made in particular by Paul Bernal and some of the audience.

First, Barron claimed it was wrong for Google to be put in the position of having to balance the various rights on so many RtbF requests. On this point I agree with Google – having them act as “judges”, with under-resourced data protection agencies as the appeal mechanism, is highly unsatisfactory. A significant response from Bernal to this was that RtbF evaluation is yet another algorithm – actually operating in tandem with Google’s other “delisting” algorithms (e.g. for IWF and ©) and indeed with PageRank itself – and so really nothing extraordinary for Google to be doing. Hold on to this thought for my second point, too.

On the train to London, I had been reading Andrew Murray’s IT Law book – I’d made it to the chapter on Governance and in particular his approach of “network communitarianism”. This train of thought leads to a solution for Google’s heavy judging burden: they should be sharing their responsibility with the community! Does anybody have a plan for that? Actually, yes …

Barron claimed Google were keen to be as transparent as they could be on RtbF. In response, Brock rightly dismissed their Transparency Reports as typical big company PR. They provide a drip feed of information about the Google RtbF decision processes, much like Barron did that evening: “57% denied”; “spent convictions play an important role”; “I’ve given Julia Powles info in conversations”.

Over a year ago, 80 academics, led by Ellen P. Goodman and Powles, asked Google for much more, and more specific, information on their RtbF decision processes. (Bernal, Brock, Murray, and I were among the 80.) So far, I am not aware of any significant response from Google; it’s a pity nobody asked Barron yesterday. He did hint at the privacy-sensitivity of the data involved, but it’s a bit rich for Google to be reluctant to share sensitive data with academics when they do expect academics to generously share sensitive data with them.

By offering to analyse the RtbF decision processes, the academics provide a way for Google to offload some of its unwanted responsibility onto the community. Google’s refusal to engage shows they would actually rather have responsibility than accountability. If any ethical discussion of this takes place at Google, it must be within their elusive ethics committee (which is not to be confused with the DeepMind Health independent reviewers or the RtbF advisory council).

Accountability is even more central to my second point. Barron talked at some length about notifications – i.e., when something has been delisted, the publisher of the information is informed of this by Google. I have argued before that this is done in the first place to stir up a censorship storm. I concede Barron’s comment that these storms have subsided a little now (though several newspapers and the BBC still publish delistings – you will understand why I won’t link to those).

Barron’s justification for these notifications sounded quite credible. Delistings interfere with the publishers’ rights, so they deserve to be told. However, we need to examine that very closely. If Google does something, on behalf of an “injured” third party, that removes a publication from certain search results, Google wishes to be accountable to the publisher for the interference with their rights. So what if Google does something on its own behalf that removes a publication from certain search results? Or just moves it so far down the search results that it is effectively deleted? Would Google admit that the outcome of PageRank incurs an accountability to web page publishers for how highly ranked their pages are? And, given that there are no third parties involved, would Google seek to accommodate challenges to ranking outcomes on the basis of publishers’ listing rights being infringed? Of course not.

So Google’s “accountability” through notifications is extremely selective. Google chooses to be “accountable” for something it doesn’t want to be doing and for which it can lay the blame elsewhere. It supports naive journalists in their view that Google search is a public good that presents an objective view of the world which gets distorted by RtbF. It wants accountability to the world for a minor modification on its search outcomes, while shielding the central algorithm that competes with Facebook’s newsfeed algorithm for the role of the least transparent and most powerful gatekeeper of our information society. Transparency and accountability? It’s a sham.

Panama Papers: how much of an insider job?

Last week I was asked by Wired UK to comment on the Panama Papers. Specifically, given how out of date security measures on the Mossack Fonseca (MosFon) website looked, how might the data have escaped?

The story that was published gave an extensive overview of security problems, many more than I would have been able to spot myself, with comments from an impressive list of experts.

My quote in this:

We do know that it was a lot of data, and that it came out gradually. This points at an insider with enough access privileges to get to see all the data, but not enough privileges to be able to copy it all quickly to one disc

sounds a bit at odds with what some of my colleagues said. It’s clear from further discussions that nobody is really sure at this point anyway. I’d like to clarify my reasoning a bit further.

Mossack Fonseca itself told customers that it was an attack on their email servers. This is what ended up reported in El Espanol and subsequently The Register, but neither has details (e.g. was it really an attack on the server, or a phishing attack on staff via email?). The best source I’ve found for what happened is an article in Sueddeutsche Zeitung (SDZ). Particularly from the telling line “Mossack Fonseca created a folder for each shell firm”, I would say an attack on emails going in or out of MosFon is unlikely to have been the direct cause of the leak. Those MosFon-created folders – or any predictable, comprehensive amount of information from them – wouldn’t naturally appear in emails. The SDZ article and other reports contain no indication that the journalists had to deal with scatty information or incomplete files. So what SDZ got was likely complete folders. Otherwise, the report on processing the files would likely have included a huge sorting step at the start.

My overall line on the bad external security of MosFon would be a general judgement on their security measures, along the lines of “if they didn’t even X, then they certainly wouldn’t have Y” – with X being the many basic security issues described in the Wired article, and Y being things like Data Loss Prevention (DLP) and other methods of discovering access anomalies. Or, in a double-bluff scenario (and here comes the quoted bit), there might have been an insider who knew enough to stay under the radar of any DLP. I still don’t have a better explanation for the files coming out over a long period – files grabbed from emails would have been incomplete, but the total volume would have fitted on a single cheap hard disc, so there was no reason to do it slowly, especially when it was risky. A random hacker accidentally finding the security holes wouldn’t have taken those risks.

Note also that any gradual attack (starting via email servers, insecure web services or phishing, and then escalating privileges) should also have been caught by DLP etc. in the final stage. From that perspective, the exfiltration phase should have looked like an insider attack!

A final wild explanation, which shouldn’t make it into a publication as reputable as Wired, is that the different branches of MosFon used some not-quite-secure cloud system to transfer entire customer files between the sites. This would explain the gradual appearance of entire files. Maybe they even emailed complete customer files (unencrypted!) between their different offices, contradicting what I said earlier! But any of this would also imply holes in the journalists’ information for stories on networks of connected shell companies. So maybe not.

Bridge in the ancient internet

(This was originally a facebook note from 2011.)

It was wonderful to see the famous psychologist Martin Seligman on TV (Newsnight) today. The name sounded familiar from a very different context …

In 1994 there was one internet bridge club, called “okbridge”, with a few hundred members. There were a few international top players on there, including Marty Seligman, who often played with Paul Soloway (a world champion, who died in 2007). They would have liked to play against top Dutch players such as world champion Berry Westra – so they asked the only Dutch bridge players they had met on the internet. In my old email archives I found an email I sent to Seligman after a bridge tournament.
Especially the last bit sounds quaint 17 years later.

“Yesterday I played at a tournament, and had the luck to get to play against Berry Westra and Wubbo de Boer (who played with Muller in the world champs team). We only lost 18-12 and were happy with that. We finished our hands rather quickly so I took the opportunity to talk to them about okbridge […] Hans van Staveren DID write about okbridge in the national bridge magazine […] but he never talked to the top players about it. I did my best to explain okbridge, and also Paul Soloway’s and your invitation to come play a match there. My impression was that it seemed like a waste of time to Westra at first, maybe it changed a bit when I told him some US players kept their partnerships trained through okbridge, and mentioned some names of top players on okbridge. Anyway Westra said he was going to be very busy in the near future. De Boer seemed slightly more interested. Neither had read the piece in the national magazine about okbridge, by the way. I refered them to that and to Hans van Staveren for help in borrowing in internet account, I believe they are all near Amsterdam.”

(added 2015: Berry Westra seems to be pretty active on www.berrywestra.nl these days! )

Sustainable data services

Today’s spin development on the Investigatory Powers bill was in the Telegraph, stating that end-to-end encryption would be banned after all. One line from an anonymous Home Office spokesman struck me:

“That means ensuring that companies themselves can access the content of communications on their networks when presented with a warrant, as many of them already do for their own business purposes, for example to target advertising.”

One reading of this is that internet companies leave people no privacy anyway, so there’s no point complaining about intelligence service intrusion. I’ve heard that particular line before from the side of government agencies.

But there’s another way to look at all this. It’s not just end-to-end communication services that “have” customers’ data in such a way that they can’t actually get at it. It’s an entire fast-growing industry of “privacy enhancing technologies”. “Sustainable data services”, you might even call them (see where this is going?). Cloud providers, genomic data analysers — all of them safeguarding people’s data that the UK government might at some point want to throw a warrant at. And there’s a history of using a generous definition of “communications services” if that delivers more surveillance data.

If the government can throw us three different lines of spin per day, I can indulge in a silly speculation. Here it comes. What if the UK government is intending to kill off those internet services that aim to use privacy as a selling point? Maybe Apple specifically, maybe the entire privacy enhancement sector. Makes no sense whatsoever, does it?

Except … the same could have been said for the solar energy industry. A growing industry, good for the economy, good for the environment, good for the carbon targets – and still it got zapped, with companies closing down, jobs lost, and economic capital mindlessly destroyed. Best guess why? Vested interests behind the government. The oil industry and their fracking friends.

So if this government are prepared to sacrifice one healthy, growing branch of the economy to satisfy the vested interests behind the scenes, why not another? And we can speculate further about what these vested interests may be – the securocrats, or even those internet industries that have more or less given up on privacy.

Of course this is a silly speculation. Silly silly silly. Shouldn’t let myself get dragged into conspiracies.

But if the ban on end-to-end encryption remains on the table (see previous post on this blog), I still think the “privacy enhancing technologies” industry is at risk. The next line in the Home Office spokesman’s quote is

These companies’ reputations rest on their ability to protect their users’ data.

That’s so nearly right that it’s really wrong. Their reputations rest on the ability to protect their users’ data so well that even the companies themselves can’t get at it.

Mr Cameron has lost his keys

It’s almost a year now since David Cameron started his attack on encryption: on 25 November 2014 he said (in the debate following the ISC Lee Rigby report) [Hansard, column 764]:

The question we must ask is: are we prepared to have a means of communication —the internet and a number of modern methods— that we are not able to intercept? My answer is clear: we should not accept that. We should legislate to ensure that that is the case.

On the assumption that this refers to confidentiality and thus encryption, Mr Cameron clearly wants to be able to forbid encryption that he cannot decrypt – no matter how many denials and “U-turns” (including today’s) have followed this story. In other words, in Mr Cameron’s view, all decryption keys are ultimately his. The question is: where did he leave them?

Didn’t he have them all along?

If the US government had had their way in the late 1990s, when strong cryptography first came within reach of the general population, they and other “trusted parties” would have safely overseen crypto by keeping copies of our keys. This idea of a “key recovery mechanism” or “key escrow” would have helped us hapless individuals: if we lost our keys we could ask them for a spare copy. For house keys, this works quite well: my neighbours’ sons are indeed quite happy that I have a spare one in escrow. So far, I haven’t been tempted to use this key to find out my neighbours’ secrets: it’s a matter of trust — then again, it’s not my job to convince myself that my neighbours aren’t terrorists. In terms of technical risks (such as safe storage and transmission of keys), key escrow is not all that different from having encryption back doors. In the international context of the internet, the question also arises of which government(s) get those keys. For all of those reasons, it is good that key escrow never took off.

Can’t he just ask nicely?

The last time the UK government flagged up encryption as a problem, in the early 2000s, they ended up with Part 3 of the Regulation of Investigatory Powers Act. This gives government services the power to demand the keys for any encrypted information they come across. Secrecy may be imposed on such requests, and jail may follow when keys are refused – up to five years if the investigation concerns national security or child indecency. Obvious excuses like “I lost the key” (and “you need to prove that I had it in the first place”), or “helping you decrypt my child abuse images would incriminate me”, have already been declared invalid.

So this seems to be sufficient for Cameron’s purposes – five years in jail is surely a serious threat. But Cameron wants more. What he hasn’t admitted so far is that he not only wants to be able to get at all communications, he also doesn’t want the people spied on to be aware of it. That may be a justifiable requirement if the person is part of a larger conspiracy, but more broadly it is an expectation that has become all too natural in this golden age of surveillance.

So if he won’t ask users, where does he expect to find keys?

Cameron’s hope is that encrypted communications of individuals can still be snooped on without them knowing, by going for the internet services that enable these communications. He and his government colleagues have been reassuring us that they don’t want to restrict the use of encryption in electronic commerce. No wonder, and not just because secure e-commerce is crucial for the UK economy. In any case, the intelligence information there is in the transactions, not in the communications, which the internet firms store and, if needed, share anyway. Amazon, for example, are on record as having mostly provided information when asked, and eBay helps government agencies ensure their customers aren’t involved in illegal activity.
Internet services outside the jurisdiction of the UK and its friends are less likely to be helpful, but the government needs to impose draconian internet measures to punish non-compliance in other areas anyway — though EU “net neutrality” rules may yet prevent them.

End-to-end encryption, and keeping the keys?

So for encrypted communications between users and their internet services, certainly those with a UK business presence, Cameron can get all the keys he needs by just asking. How about for end-to-end encryption, where the communication is enabled by an internet service but takes place between two of its users? Joanna Shields gave the most explicit characterisation of the government position on this in the Lords this week:

The Prime Minister did not advocate banning encryption; he expressed concern that many companies are building end-to-end encrypted applications and services and not retaining the keys.
[…]
It is absolutely essential that these companies which understand and build those stacks of technology are able to decrypt that information and provide it to law enforcement in extremis.

As he has done consistently before, Cameron displays a lack of technical sophistication. If the companies “retained” the keys, it would not be called “end-to-end encryption”. The point of “end-to-end” is not just that the data isn’t decrypted in the middle, but foremost that it can’t be decrypted by anyone other than the intended recipient.

So how do the endpoints get a key for end-to-end encryption?

Surely, to set up end-to-end encryption, the enabling service provides the key initially for the two parties to communicate? That would indeed be the simple, obvious solution — but it is based on the assumption that the service can be trusted. If that trust is not absolute, the security model must assume the key can and will be abused (the service might as well hold the key in escrow), and thus the encryption must be considered insecure. With the public’s awareness of surveillance programmes like PRISM, service providers that use privacy as a selling point should not even want to ask for such absolute trust.

Skype, for example, uses the AES encryption system for calls. This means that there is a single key shared between the two parties. Skype avoids saying whether the central service ever knows or retains this key, which is one of the reasons the EFF refuses to call Skype encryption end-to-end.
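To make concrete what a single shared key means, here is a minimal sketch using Python’s cryptography package (the library choice, key handling and message are my own illustrative assumptions, not anything Skype actually does): whoever holds that one AES key – the two callers, or any service that generated or escrowed it – can decrypt everything protected with it.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One symmetric AES key, shared by both ends of the conversation.
# Any party that has ever seen this key can decrypt every message.
key = AESGCM.generate_key(bit_length=256)

nonce = os.urandom(12)  # must be unique for every message under the same key
ciphertext = AESGCM(key).encrypt(nonce, b"hello from one caller", None)

# Decryption needs nothing but the same key and nonce.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"hello from one caller"
```

So whether Skype counts as end-to-end reduces entirely to who generates that key and who ever gets to see it.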

So does the service have to forget the key?

Under normal security assumptions, once something is known to a party, it is never forgotten. So if the service is initially involved in establishing the key, it must be assumed that it will keep and potentially use any knowledge gained from that.

Fortunately, and surprisingly the first time you see it, two parties that want to communicate securely can actually agree a key safely between them, without using a third party and with only insecure communication channels between them. The most famous method for this is Diffie-Hellman key agreement, and protocols based on it are still used in practice – for example in WhatsApp’s end-to-end encryption. In such a context, the central service never knows the key — which is in this case a frequently changing series of AES keys.
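As a toy sketch of the idea in Python (the tiny prime and generator below are purely illustrative assumptions – real protocols use vetted 2048-bit-plus groups or elliptic curves):

```python
import secrets

# Toy Diffie-Hellman key agreement over the integers, for illustration only.
p = 2**127 - 1   # a prime modulus (far too small and special for real use)
g = 3            # a public generator

a = secrets.randbelow(p - 2) + 2   # Alice's private value, never transmitted
b = secrets.randbelow(p - 2) + 2   # Bob's private value, never transmitted

A = pow(g, a, p)   # Alice sends A over the insecure channel
B = pow(g, b, p)   # Bob sends B over the insecure channel

# Each side combines the other's public value with its own private value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # same secret, yet only p, g, A and B were ever visible
```

The service relaying A and B (or any eavesdropper) sees only the public values, never the resulting shared secret – which is why “retaining the keys” is simply not an option for a service that genuinely provides end-to-end encryption.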

Shields probably understands this, and WhatsApp was actually mentioned explicitly in that Lords exchange. She does not talk of “retaining” a key that the service never knew – merely about the service being able to decrypt. But only “in extremis” – and maybe that should really be interpreted as: going beyond what is possible, or asking the service to lie to its users about the true level of security provided.

Overall then, in this last year we have seen the government’s policy statements on encryption become more focused — but hardly more realistic.

Google owns the platform – twice over

As I’m trying to be a responsible cyber security academic, many app updates on my Android phone provide me with a dilemma. When Google Play offers an app update, it doesn’t normally tell me whether it fixes vulnerabilities, or just changes functionality. So, from the security perspective, I should just install the update; from the privacy perspective, however, when the app update wants new permissions I should refuse it.

I’ve gone on record before that I don’t use the Facebook app on my mobile. My phone contacts haven’t given me permission to share their phone numbers with Facebook, and the app somehow wants to read my text messages too.

In practice, I always have some updates remaining in Google Play that don’t get installed because they want permissions added that I don’t think they need. Sometimes that’s not even really from a privacy argument but merely stubbornness.

I/O 2015: “More Granular Permissions”

During the recent Google I/O event, news came through that Android M would have “more granular permissions”. More granular is good: apps shouldn’t need to grab a whole set of permissions to be able to do one little thing. But I tweeted in response saying that I needed more changes to the permissions set-up before I would update (for example) the London Tube Map app, as I don’t want it to access my calendar at all. As far as I’m concerned it’s feature bloat: my use of the app doesn’t require it, and if they insist on adding a feature that does, that feature should really ask for permission as and when it’s needed (which isn’t possible in Android now).

An interesting discussion with the makers of the London Tube Map (@TubeMapLondon) followed. It turned out that the app didn’t actually have a feature that uses the calendar! Rather, they were catering for adverts that might want to add calendar events. My first objection to that was that ads could use apps that already have the appropriate permission to change the calendar, rather than doing it themselves. More importantly though, surely this couldn’t scale? All apps with ads, grabbing all the permissions that all their ads might potentially want? I stuck with not installing the update (it also wants in-app purchases, media, and call info, by the way) and thought no more of it.

Later

Weeks later, on my next visit to London, I used the London Tube Map app again (still the old version, of course). With ads. And suddenly it all became crystal clear. Ads served by … Google. The same Google who give whatever permissions they like to the built-in Android apps that you can’t remove. They own the platform. By serving the ads on third party apps, they own the platform twice over. No wonder they’re a bit arrogant about permissions, and no wonder app builders don’t object too loudly to being forced to ask for permissions they don’t need.

Postscript

Today, the Future of Privacy Forum has a post which suggests Android M will indeed allow what I was hoping for. Apps install with only the essential permissions, and the extra ones can be enabled (or refused) at the time that the extra features need them. Looking forward to that!

The ISC report on “Security and Privacy”: loose thoughts

A lot of activity this week around the ISC report on “Security and Privacy” a.k.a. the first time UK politics takes Snowden seriously (with the exception of some actions of the Home Affairs Committee).

I gave some comments to James Temperton of Wired, which ended up in his piece “Minimal Oversight of GCHQ Hacking is a Scandal”.

Then I wrote up my thoughts specifically on the Bulk Personal Datasets described for the first time in this report – this appeared in The Conversation yesterday, reblogged by phys.org.

I was contacted by Will Yong of Al Jazeera: would I be interested in coming to London to do an interview? It was even easier to say yes as I was travelling through anyway. Some discussion on Snowden media responses, metadata, and more led to two comments broadcast on Listening Post on Saturday morning.

Having read through the whole report on the morning it came out, I tweeted some thoughts then – and you will understand I read it through once more on the way to London! Collected below are loose thoughts on the topics I haven’t published on elsewhere; they include some repeats from the initial tweets, and more.

  • The “individual right to privacy and the collective right to security” already in the 2nd sentence. False dichotomy, and making privacy seem selfish.
  • in vii.a and ix. it talks about “an individual” – but the concept of a thematic 8(1) warrant means it’s wider than that.
  • xii. shows clearly the contradiction: communications data is a “critical capability”, so how can it be non-intrusive? It’s either full of information or it isn’t.
  • xviii. The IPT judgements are “noted” but they stated that GCHQ had acted unlawfully. How come the ISC does not wonder how this could happen on their watch, or how to prevent that happening in the future?
  • footnote 3: Hedging about it, but the idea that the dark web is much larger than the visible web confuses the deep web (i.e. not indexed by Google) and the dark web (i.e. accessible only through Tor).
  • 4. That encryption puts things “beyond the reach of the court” completely ignores RIPA Part 3, on forced disclosure of keys (& or else …)
  • 7. This is basic stuff for my computing ethics course. Electronic surveillance is intrinsically different from steaming open letters in a post office because it can be done on a large scale so much more easily. That’s computers for you!
  • 9. Given that IPT judged GCHQ actions unlawful twice, ISC might think twice about uncritically reporting their July 2013 “nothing to see here” conclusion.
  • 12. ISC were told everything – but nothing in the report corresponds to the Tempora “full take”.
  • Footnote 12: reports the HASC enquiry into police use of RIPA, but conveniently omits the government’s response to that, which was extremely dismissive.
  • 21. I have listened to Paul Bernal, and he made a submission too, so I find it disappointing that they haven’t looked at other ECHR articles.
  • 31. Interesting that the extra powers regarding overseas Communications Service Providers, as required in the DRIP emergency legislation, are apparently felt to be unenforceable.
  • 39. An 8(1) warrant may not be used for data sent prior to the date of signing – footnote: In certain circumstances [redacted]
  • 42. Coming back to the ECHR point. “Thematic” is not defined in statute, but the Home Secretary or another Secretary of State decides how a person gets redefined as a group according to RIPA 81(1). Now if using that for “a high profile event attended by a large group of people” doesn’t have freedom of association impact!?
  • 45. Seems rather arbitrary to consider surveillance that is outside NTAC to be outside RIPA because of that.
  • M. “the point of view that it is acceptable to let some terrorist attacks happen in order to uphold the individual right to privacy” is a sick perversion:
    1. “let” assumes tolerance or even approval;
    2. “in order to” reverses the causality;
    3. (and anyway, it turned out later the quotes given here were taken out of their context, which was: we still don’t believe mass surveillance works, and you still haven’t given us the evidence to change our minds, so …)
  • 107. Note the confusion between “internal” and “external”. Facebook traffic even between UK residents becomes “external”, and then there’s a scramble to fix that later once it becomes clear it really is between UK residents. No plans to do the sensible thing and remove the distinction.
  • Footnote 88 seems to say that 8(4) warrants cause indirect discrimination against any ethnic minority, but that that is okay because terrorism.
  • 118. Caspar Bowden’s point: special rights for UK citizens outside the EU are illegal under ECHR, do they really not realise that?
  • 134.ii. An entire source of communications data is redacted out here. Is this the CD from Tempora full take then?
  • 134.iii. Related communications data from interception is getting retained at this point; I’m pretty sure I saw it getting deleted at a different point in the report.
  • 141. Confusion is caused by commentators using the term “metadata”!
  • 200. “The robustness of the application process means that any applications she receives will already have been rigorously tested, and she told the Committee that as a result she did not refuse many warrantry requests.” !
  • 257.i They really shouldn’t have redacted one of the two subsections on Deliberate interception of lawyer-client communications

Don’t tell me it’s safe!

Guest post by Oliver Florence (final year undergraduate student)

David Cameron has said that a change to legislation concerning encryption is required.

The government would like a means of accessing the content of any communication between citizens of the UK. Cameron’s view is that unless they are able to have a method of encryption with a ‘back door’ that gives them access, his government will make it illegal for civilians to use encryption.

Prior to the digital age, law enforcement agencies were able to have a look through your post or listen in to your telephone calls as a means of keeping you safe. While they do still do this, Cameron has said there is currently no way of accessing the content of encrypted digital information.

The argument presented is that there are situations in which law enforcement ‘need’ access to the communications or data held on an individual’s phone, and are now unable to get that access as a result of modern encryption. Adding to this problem is that an increasing number of mobile handsets are being sold with data encryption enabled as a standard feature.

Cameron is proposing a Government backdoor into encrypted communication, which is not an unfamiliar concept: both the director of the FBI and President Obama have made mention of this type of encryption. It’s important to be clear here: referring to any proposed backdoor-inclusive encryption method as secure is misleading and dangerous.

Understanding encryption in terms of its function is simple; it’s either secure, and no one other than the intended recipient can decrypt and read it, or it’s not. The problem that arises when you start leaving backdoors in encryption is that someone will find and exploit them.

If citizens of the UK are told they must use a new standard of encryption that has a backdoor, but is safe, the majority of users may continue as though they are still safe, which simply will not be true. Whether the problem is a lack of understanding on the part of our representatives, or a purposeful distribution of misinformation is unclear. In either case though, the resulting landscape would leave residents of the UK far more vulnerable to cybercrime.

This change will of course be presented to the public in some complementary ‘anti-terror’ wrapping paper to make it more palatable.

The threat of outlawing encryption is an absurd proposal and a scare tactic. The UK has an ecommerce industry that had a turnover of £44 billion in 2014. Without a safe form of encryption, consumer confidence in the industry would erode, with profound effects on the economy. Also, any transaction carried out using https (your bank, Amazon, any login information) would no longer be encrypted as securely. It is clear that the threat of removing encryption is in no way viable, and its suggestion is a way of whipping up election attention.

I am not suggesting that there isn’t a discussion to be had around how agencies can effectively retain their ability to police in the digital world. Banning or breaking encryption is not how this will be achieved though, and this back and forth is detracting from real progress in the discussion.

This is a guest post by Oliver Florence, final year undergraduate student in Computer Science. Oliver’s research project was “Cyber security current affairs”, relating technical knowledge in cyber security to current affairs and producing stories for a wider audience on that basis.

The Barry Gibb effect (“It oughta be illegal!”)

This is just a short post to introduce a new term into the internet privacy world: the Barry Gibb effect. It relates to the ECJ Google vs Spain judgement (erroneously a.k.a. “The Right To Be Forgotten”, RtbF), and Google’s response to it. I don’t want to set out my views on that in detail here – especially not as most of what I would say has been put across eloquently by Paul Bernal, by Jon Baines, on the Amberhawk Blog, or by Tim Turner already. In brief:

  • the judgement serves a real need but is still imperfect, e.g. in making Google the judge of public interest;
  • Google have brought this upon themselves by refusing to be judged in the “press” category;
  • if Google search had anything to do with freedom of expression, its results would need to be ranked transparently in a way that bears no relation to anyone’s commercial interests;
  • the number of RtbF removal requests is dwarfed by the number of copyright removal ones, so it’s very odd that Google would find it hard to deal with them;
  • Google are deliberately overreacting in order to get the press on their side in rejecting the judgement.

The Barry Gibb effect (#gibbeffect) relates to this latter point. Google appears to have removed webpages from search results well beyond the context (outdated irrelevant information of no public or historic interest) of the judgement. It has then sent notifications of the removal to the owner of the page – particularly to journalists. The ECJ judgement does not appear to include any such obligation. One might even argue that sending such notifications perverts the spirit of the judgement (and might constitute unlawful processing under the DPA). Google seems to be hoping for the page owner to make a fuss. This has two effects: one is drawing attention to the page that someone wanted to hide (this is known as the Streisand effect), and the other is getting a journalist worked up about the “unreasonable” ECJ judgement on Google’s behalf. The latter is the Barry Gibb effect, named in recollection of memorable Streisand/Gibb duets like “Guilty” and “What Kind of Fool”. Notable displays of the Barry Gibb effect so far have been by the BBC journalist Robert Peston (“cast into oblivion”), the Daily Telegraph newspaper, and last week Roy Greenslade of the Guardian and others, about some obscure painter.

Finally, and unrelated … once you have seen the Spooky Men’s Chorale, you will never think of Barry Gibb in the same way again.