Monthly Archives: December 2016

We got 1.6 million students’ Google search histories!

We have fantastic news. Google have given us the last five years of Google search histories for 1.6 million UK university students, so we can work on improving their learning. Knowing what students have looked for on Google over the last five years will allow us to model their metacognitive skills and learning styles very accurately, so we can make individualised interventions when their everyday Google searches reveal potential misunderstandings of what we’re trying to teach them.

We’ve promised Google we won’t be using this information for anything else, honest. Our agreement with Google says so, though it also says that third parties (like the students themselves) can’t hold us to anything that’s agreed in there. It’s all very exciting. We have never received data from Google before, never done any learning analytics, heck, we haven’t even looked at the web tracking data of our university’s Moodle virtual learning environment. But we think improving university education is extremely important, and we are so smart and successful, that we decided we just needed to go ahead and do this.

Hmmm. Maybe that doesn’t work too well. Let’s try again.

Working with Google to build educational breakthrough technology

We’re very proud to announce a ground-breaking five-year partnership with Google.

Google searches do a phenomenal job of providing information for students, but they are badly dated technology. Plain keyword searches are still commonly used in most UK universities, but too often such searches don’t meet academic needs because they are constructed far away from the frontline of student understanding.

This slow and outdated technology means that important changes in a student’s academic understanding don’t get brought to the attention of the right lecturer in time to prevent further serious confusion. When the right action isn’t taken in time, the consequences for students can be severe, even fatal. At least ten thousand students a year fail in UK universities through entirely preventable causes, and some 40% of students could avoid needing face-to-face tutorial conversations, if only the right lecturer were able to take the right action sooner.

Our partnership aims to change that, by taking a very different approach to building IT for higher education. Together we are creating world-leading technology, in close collaboration with lecturers themselves, to ensure that the right student information gets to the right academics at the right time, reducing preventable academic failure and underachievement.

The five-year partnership will build on a successful year-long joint project to build a smartphone app called Mindflows, which alerts academic teams as soon as a student’s web searches and test results show that they are at risk of developing acute lecture incomprehension (ALI), and which provides the academics with the contextual academic information they need to give the right treatment before the student’s condition worsens.

When it’s fully built, we believe that this will cut the time to alert academics and peer tutors to students in need from hours to a few seconds. And by freeing up academics’ time from juggling multiple computer- and paper-based student monitoring systems, it should redirect over half a million hours per year away from admin and towards direct student teaching at the University of Kent alone.

Think that will be enough? Revolutionary real-time learning analytics, with a strong hint that academics will barely have to do admin ever again, and some large numbers? Let’s stick the bit about the Google search data, just how much we’re getting, and why, in a FAQ at the bottom. People will understand that improving education is the thing that really matters, right?

PS To be 100% clear, this is only a parody. We did not get search histories or any other data from Google. (I co-signed a letter asking for some right-to-be-forgotten (RtbF) info once, but there wasn’t even a reply.) Google DeepMind did get extensive five-year medical histories of 1.6 million NHS patients, initially without the knowledge of those patients, and so far still without consent and with an incomplete justification, one that focuses on security when privacy is the issue. DeepMind has no track record of medical data processing, and no history of responsible data sharing with the NHS. The initial deal left DeepMind’s use of the data largely unconstrained. The text above closely mirrors how the recently announced revised deal (which does agree to a much more constrained use of the data) is being presented to the public. Their contract does say that third parties aren’t actually able to hold Google DeepMind and NHS Royal Free to those constraints.

So isn’t this an unfair comparison? Aren’t Google search histories far more sensitive than medical data? Maybe, but not from the perspective of current and future data protection legislation, where both count as sensitive personal data. Both can be stigmatising, and both present a risk of blackmail or discrimination, e.g. in the workplace, in the price of insurance, or simply in online shopping. And judging from the public’s reactions in recent years, people in the UK actually care more about their medical data than their internet histories.

Finally: why make such a fuss? Isn’t making data-based progress in medicine critical? Yes, of course it is. Medical data are crying out for the kinds of things DeepMind and other big-data analysts can do: lots of detailed measurement data, limited understanding of causality, and a massive potential impact. They should want to do AI with NHS data – oddly enough, this deal is presented as excluding AI. Then they could ask to do so properly, without taking shortcuts on patient consent or its replacement procedures.

But even if they go about this the correct way, as a country we do need to ask ourselves some more questions about this. Medical data analytics is an area where the NHS has been quietly privatising its operations for years, partly in reaction to its own infamous IT failures. Should we really privatise the mass analysis of our medical data to the world’s largest data sausage factory? Should we do so in the knowledge that Google operates at the scale of a nation state, but does data surveillance of the world without any checks or balances? Should we accept that the methods and outcomes of analysing this rich (choice word) data become Google’s IP? Even if the answer to all these questions is “Yes”, they should all be asked and answered explicitly and publicly before we hand over large amounts of NHS data to Google.