DeepMind: can we ever trust a machine to diagnose cancer?

Shutterstock

Srivas Chennu, University of Kent

DeepMind has recently announced a fresh collaborative partnership with the UK’s health service, with plans for the artificial intelligence firm to develop machine learning technology to research breast cancer.

DeepMind, a Google subsidiary, is perhaps best known for successfully building AI that is now better than humans at the ancient game of Go. But in recent months – when attempting to apply this tech to serious healthcare issues – it has been on the sidelines of a data breach storm.

In July, DeepMind’s collaboration with London’s Royal Free hospital led to the NHS trust violating the UK’s data protection laws.

The Information Commissioner’s Office (ICO) found that Royal Free’s decision to share 1.6m personally identifiable patient records with DeepMind for the development of Streams – an automated kidney injury detection software – was “legally inappropriate”. DeepMind wasn’t directly criticised by the ICO.

Personal records included patients’ HIV-positive status, as well as details of drug overdoses and abortions. Royal Free’s breach generated considerable media attention at the time, and it means that DeepMind’s latest partnership with an NHS trust will be scrutinised carefully.

It will be working with Cancer Research UK, Imperial College London and the Royal Surrey NHS trust to apply machine learning to mammography screening for breast cancer. This is a laudable aim, and one to be taken very seriously, given DeepMind’s track record. London-based DeepMind emerged from academic research, assisted by Google’s deep pockets. It is now owned by Google’s parent company Alphabet.

Its success has arisen from recruiting some of the best machine learning and AI scientists, organising them into goal-driven teams, and freeing them up from having to teach or apply for funding.

Mind reader

DeepMind appears to have learned from the Royal Free data breach, having “reflected” on its own actions after signing on to work with the trust. It said that the breast cancer dataset it will receive from Royal Surrey is “de-identified”, which should mean that patients’ personal identities won’t be shared.

Another key difference is that the Royal Surrey dataset was explicitly collected for research – indicating that participants gave consent for their data to be shared in this way. DeepMind has also been upfront about its approach to data access, management and security. It has appointed independent reviewers and committed to verifiable data audits, in the hope of building trust and confidence.

Given DeepMind’s continued collaboration with the NHS on a range of research, citizens are rightly concerned about how private corporations might exploit the data they have willingly shared for publicly funded work.

Few details about the Royal Surrey research project – which is in the early stages of development – have been released, but it’s likely that DeepMind will focus on applying deep neural networks for scanning mammogram images to automatically identify signatures of cancerous tissue. This approach would be similar to its Moorfields Eye Hospital project, where DeepMind is building automated machine learning models that can predict macular degeneration and blindness from retinal scans.
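DeepMind has not published details of any mammography models, but the basic operation behind convolutional image analysis, sliding a small filter across an image and flagging regions of strong response, can be shown with a toy sketch in Python. The “scan”, filter and threshold below are all invented for illustration:

```python
import numpy as np

# Toy illustration only -- real mammography models are deep neural networks
# trained on thousands of scans. This just shows the core convolutional idea:
# slide a small filter across an image and flag regions of high response.

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hypothetical 6x6 "scan" with one bright (dense) region
scan = np.zeros((6, 6))
scan[2:4, 2:4] = 1.0

# A 2x2 averaging filter responds most strongly to uniformly bright patches
kernel = np.full((2, 2), 0.25)
response = convolve2d(scan, kernel)

# "Detection": locations where the filter response exceeds a threshold
suspicious = np.argwhere(response > 0.9)
print(suspicious)  # the coordinates of the bright patch
```

A trained network learns thousands of such filters from labelled examples rather than having them hand-designed, but the sliding-window principle is the same.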

DeepMind is probably also exploring the possibility of incorporating novel deep reinforcement learning algorithms to train the machine learning models. The algorithms would then tap into insights from empirical neuroscience research about how the human brain learns from reward and punishment.

Reinforcement learning – which differs from more conventional supervised and unsupervised learning methods used in machine learning – is the technique that enabled DeepMind to train agents that learn to play Go and many other games better than humans.
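DeepMind’s reinforcement learning systems are far more sophisticated, but the core idea, an agent improving its behaviour purely from reward and punishment, can be sketched in a few lines of Python. The corridor environment and all the numbers here are invented for illustration:

```python
import random

# Toy Q-learning sketch (not DeepMind's algorithms): an agent on a
# five-cell corridor learns, purely from reward and punishment, that
# walking right towards the goal cell pays off.

random.seed(0)
N_STATES = 5           # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else -0.01  # small punishment per step
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what is known, sometimes explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate towards reward + future value
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = nxt

# The learned policy: which way to step in each non-goal cell
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

After training, the agent steps right in every cell: it was never told the rule, only rewarded for reaching the goal and penalised for dawdling.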

Apart from these algorithmic advances, DeepMind might find, as it already has with the Streams trial, that there are many technological tweaks that can improve how doctors treat patients, without any need for machine learning at all.

Human nature

From my own experience in applying data analytics to medical diagnostics in neurology, I know that – even if things go well for DeepMind and it manages to build a machine learning model that is excellent at detecting the early signs of breast cancer – it might well face a more practical problem in its application to the real world: interpretability.

‘Inflammatory’ breast cancer, in which tumour cells spread through dermal lymphatic channels (vascular invasion) of the skin.
Shutterstock

The practice of medicine today relies on trust between two humans: a patient and a doctor. The doctor judges the best course of treatment for a patient based on their individual clinical history, weighing up the relative pros and cons of the different options available. The patient implicitly trusts the doctor’s expertise.

But will patients or doctors trust a machine if it produces the same recommendation based on an algorithm? While the use of machine learning has become commonplace in many contexts behind the scenes, its application in healthcare is fraught with the challenge of trust.

If a doctor or patient fails to understand and communicate the rationale behind a recommendation, it might be very difficult to convince either to adopt it. And no machine learning algorithm is likely to be perfect. Both false positives and negatives are of great consequence in the healthcare context.

In the world of policing, AI has surfaced the hidden biases that can creep into machine learning models, reflecting the unfairness embedded in our world. Machines that inherit our prejudices might make for unpopular medical aids.

AI scientists are working on the problems of bias and interpretability, while also working with clinicians to design artificial intelligence that is more transparent about uncertainty.

Beyond the technological advances in AI for improving human health, both ethics and interpretability will play central roles in its acceptance.

Srivas Chennu, Lecturer in eHealth, University of Kent

This article was originally published on The Conversation. Read the original article.


Prizes awarded to outstanding Master’s students

Two Master’s students from the School of Computing have been awarded prizes for outstanding achievements on their degrees.

Philip Adey, MSc Advanced Computer Science, won the prize for Outstanding Performance on the Programme with an overall mark of 87.7%.

Lewis Mckeown, MSc Computer Science with an Industrial Placement, won the prize for Outstanding Performance in the Project, with a score of 95%.

Lewis’ project, entitled ‘Software for a Narrative Generation’, was supervised by Anna Jordanous and Colin Johnson. In the project, Lewis investigated how computational creativity (making software that can do creative things, like write stories) can be used to study how constraints affect creativity: in what way does creativity change if we are required to follow and fit in with more limitations, rules and specifications, or if we are given fewer such constraints? Lewis explored and compared four different types of story-telling software that worked with varying levels of freedom or constraint to generate deliberately surreal stories. The work engaged well with theories about creativity in humans as well as computational creativity.

Both students received a certificate and £100. Congratulations to both of them.


Computer Science student wins Gold at WorldSkills UK

Neelan Thuraisingham, a second-year student at the School of Computing, has won a gold medal at the WorldSkills UK competition.

Neelan, who studies Computer Science (Networks) with a Year in Industry, was selected for the Network Infrastructure Technician national final after winning a national qualifier in Surrey.

Neelan said: ‘When they were announcing the prizes I was hoping for bronze and then silver and then when those prizes went to other people I thought I’d missed out. So it was a huge shock when they announced my name as the gold medal winner. I’m really pleased and would like to pass on my thanks to everyone who supported me.’

WorldSkills UK Competitions bring together apprentices and young people from across the country to compete to be the best in their chosen skill. The national finals were held at The Skills Show, the nation’s largest skills, apprenticeships and careers event, which took place at the NEC Birmingham.

The Network Infrastructure Technician competition focused on all the essential requirements for a successful career as a Network Technician. It consisted of a series of tasks that included the installation, configuration, maintenance and troubleshooting of ICT networks using predominantly Cisco devices. The competition tested the knowledge and skills learned in the Cisco CCNA Network Academy Program.



Explainer: how the latest earphones translate languages

Shutterstock

Ian McLoughlin, University of Kent

In the Hitchhiker’s Guide to The Galaxy, Douglas Adams’s seminal 1978 BBC broadcast (then book, feature film and now cultural icon), one of the many technology predictions was the Babel Fish. This tiny yellow life-form, inserted into the human ear and fed by brain energy, was able to translate to and from any language.

Web giant Google has now seemingly developed its own version of the Babel Fish, called Pixel Buds. These wireless earbuds make use of Google Assistant, a smart application which can speak to, understand and assist the wearer. One of the headline abilities is support for Google Translate, which is said to be able to translate between up to 40 different languages. Impressive technology for under US$200.

So how does it work?

Real-time speech translation consists of a chain of several distinct technologies – each of which has improved rapidly in recent years. The chain, from input to output, goes like this:

  1. Input conditioning: the earbuds pick up background noise and interference, effectively recording a mixture of the user’s voice and other sounds. “Denoising” removes background sounds while a voice activity detector (VAD) is used to turn the system on only when the correct person is speaking (and not someone standing behind you in a queue saying “OK Google” very loudly). Touch control is used to improve the VAD accuracy.
  2. Language identification (LID): this system uses machine learning to identify what language is being spoken within a couple of seconds. This is important because everything that follows is language specific. For language identification, phonetic characteristics alone are insufficient to distinguish languages (language pairs like Ukrainian and Russian, or Urdu and Hindi, are virtually identical in their units of sound, or “phonemes”), so completely new acoustic representations had to be developed.
  3. Automatic speech recognition (ASR): ASR uses an acoustic model to convert the recorded speech into a string of phonemes and then language modelling is used to convert the phonetic information into words. By using the rules of spoken grammar, context, probability and a pronunciation dictionary, ASR systems fill in gaps of missing information and correct mistakenly recognised phonemes to infer a textual representation of what the speaker said.
  4. Natural language processing: NLP performs machine translation from one language to another. This is not as simple as substituting nouns and verbs, but includes decoding the meaning of the input speech, and then re-encoding that meaning as output speech in a different language – with all the nuances and complexities that make second languages so hard for us to learn.
  5. Speech synthesis or text-to-speech (TTS): almost the opposite of ASR, this synthesises natural sounding speech from a string of words (or phonetic information). Older systems used additive synthesis, which effectively meant joining together lots of short recordings of someone speaking different phonemes into the correct sequence. More modern systems use complex statistical speech models to recreate a natural sounding voice.
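As a rough illustration of how these five stages chain together, here is a toy sketch in Python. Every function below is a stand-in stub with made-up behaviour; the real components are large machine-learning systems running on Google’s servers:

```python
# Toy sketch of the five-stage translation chain. Each function is a
# stand-in stub -- the real stages are large machine-learning models.

def denoise(audio):
    """Stage 1: input conditioning -- strip background sounds (stub)."""
    return [s for s in audio if s != "noise"]

def identify_language(audio):
    """Stage 2: LID -- guess the source language acoustically (stub)."""
    return "zh" if "ni3" in audio else "en"

def speech_to_text(audio, lang):
    """Stage 3: ASR -- phonemes to words via acoustic/language models (stub)."""
    lexicon = {("ni3", "hao3"): "你好"}
    return lexicon.get(tuple(audio), " ".join(audio))

def translate(text, src, dst):
    """Stage 4: NLP -- machine translation between languages (stub)."""
    table = {("你好", "zh", "en"): "hello"}
    return table.get((text, src, dst), text)

def text_to_speech(text, lang):
    """Stage 5: TTS -- synthesise audio in the target language (stub)."""
    return f"<{lang} audio: {text}>"

def translate_utterance(audio, target_lang="en"):
    clean = denoise(audio)
    src = identify_language(clean)
    text = speech_to_text(clean, src)
    return text_to_speech(translate(text, src, target_lang), target_lang)

print(translate_utterance(["ni3", "hao3", "noise"]))  # <en audio: hello>
```

In the real product, the heavy lifting in stages 2–5 happens remotely rather than on the earbuds themselves.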

Putting it all together

Now that we have the five blocks of technology in the chain, let’s see how the system would work in practice to translate between two languages, such as Chinese and English.

Once ready to translate, the earbuds first record an utterance, using a VAD to identify when the speech starts and ends. Background noise can be partially removed within the earbuds themselves, or once the recording has been transferred by Bluetooth to a smartphone. It is then compressed to occupy a much smaller amount of data, then conveyed over WiFi, 3G or 4G to Google’s speech servers.

Google’s servers, operating as a cloud, will accept the recording, decompress it, and use LID technology to determine whether the speech is in Chinese or in English.

The speech will then be passed to an ASR system for Chinese, then to an NLP machine translator setup to map from Chinese to English. The output of this will finally be sent to TTS software for English, producing a compressed recording of the output. This is sent back in the reverse direction to be replayed through the earbuds.

This might seem like a lot of stages of communication, but it takes just seconds to happen. And it is necessary – firstly, because the processor in the earbuds is not powerful enough to do translation by itself, and secondly because their memory storage is insufficient to contain the language and acoustic models. Even if a powerful enough processor with enough memory could be squeezed into the earbuds, the complex computer processing would deplete the earbud batteries in a couple of seconds.

Furthermore, companies with these kinds of products (Google, iFlytek and IBM) rely on continuous improvement to correct, refine and improve their translation models. Updating a model is easy on their own cloud servers. It is much more difficult to do when installed in an earbud.

The late Douglas Adams would surely have found the technology behind these real life translating machines amazing – which it is. But computer scientists and engineers will not stop here. The next wave of speech-enabled computing could even be inspired by another fictional device, such as Iron Man’s smart computer, J.A.R.V.I.S (Just Another Rather Very Intelligent System) from the Marvel series. This system would go way beyond translation: it would be able to converse with us, understand what we are feeling and thinking, and anticipate our needs.

Ian McLoughlin, Professor of Computing, Head of School (Medway), University of Kent

This article was originally published on The Conversation. Read the original article.


Year in Computing launches for 2018

Many students are keen to learn the tech skills that will make them stand out to a graduate employer, or simply want to learn more about computing for their own interests.

The School of Computing offers a ‘Year in Computing’ for all Kent undergraduate students*. This extra year can be taken after stage 2 or the final year of a student’s current degree programme.

Students interested in finding out more can come to a kickstart lunch on Wednesday 6 December at 12.30 in SW101, Cornwallis South.

The Year in Computing will especially be of interest to you if:

  • you are interested in studying computing AND your current degree,
  • you would like to get prepared for a career in tech,
  • you are interested in exploring the frontiers of your subject and computing,
  • you want to learn how to be creative with computing.

The Year in Computing’s cohort of students have come from a wide range of academic disciplines and have been very positive about learning computing in addition to their original degree.

In anonymous feedback students listed the aspects of the programme they liked, including:

  • when my code actually works
  • the range of different subjects/information behind Computing
  • learning different programming languages
  • the photo examples in lectures
  • engaging with my lecturers both in and out of classes
  • I really understand the content and don’t feel stupid for asking questions


*with the exception of students from the School of Computing and School of Psychology


New Head of Cyber Security joins School

Shujun Li has joined the University of Kent as a Professor of Cyber Security in the School of Computing. Shujun will head up the School’s Security Research Group and the university-wide Interdisciplinary Research Centre in Cyber Security.

Shujun has a prolific publishing record with over 100 scientific papers published since 2000, including two Best Paper Awards. His research interests are mainly on cyber security, human factors, multimedia computing, digital forensics, and applications of artificial intelligence.

Shujun said, “I am truly excited to start a new chapter of my career at the University of Kent. At both national and international levels, Cyber Security has been recognised by governments and businesses as one of the most important technical subjects for securing a more resilient world for us to live our lives and do business online safely. Based on the University’s excellent research on a number of important topics in Cyber Security, I look forward to working closely with my colleagues here to develop Kent’s Cyber Security research to the next level.”

Shujun received his undergraduate and PhD degrees from the Xi’an Jiaotong University, an elite C9 League university in China. He then became a postdoctoral Research Assistant with the Department of Electronic Engineering, City University of Hong Kong, before becoming a postdoctoral fellow at the Hong Kong Polytechnic University. Shujun then moved to Germany to continue his career, firstly at the FernUniversität in Hagen as an Alexander von Humboldt Research Fellow, and then at the Universität Konstanz as a 5-Year Research Fellow of the Zukunftskolleg (Institute for Advanced Study).

In 2011 he joined the University of Surrey as a Senior Lecturer and later became a Reader. From 2014 to 2017, he was a Deputy Director of the Surrey Centre for Cyber Security (SCCS).


Kent students involved in launch of makerspace studio in Dover

Students from the School of Computing have played an integral role in helping to launch a makerspace studio in Dover that will house cutting-edge technologies such as 3D printers and laser cutting machines.

The Creative Start Up Space was opened on Saturday 28 October at the Future Foundry building by the Mayor of Dover, Cllr Neil Rix. The venue is designed to give young people the opportunity to learn how to use new and emerging technologies to develop new skills and businesses.

Professor Sally Fincher and Senior Makerspace Technician Dan Knox from the School of Computing were integral to the launch, providing support and guidance to Future Foundry about the equipment they should purchase for the space, such as 3D printers.

Students Elliot Carr and Joe Harrison from the School of Computing’s TinkerSoc society took part in the open day by demonstrating how these machines work and what they can be used for, including showcasing some of the ways in which they are used on courses at the University to test and build objects that could have real-world uses.

The Creative Start Up Space will now start accepting people aged 16-30 to sign up for a one-year course and deliver workshops for the wider community. They will continue to work with the School of Computing for help and guidance whenever required.


School hoodies

By popular demand, School hoodies are now available.

You can buy them from the online store for £19.99. They are navy blue zip-up hoodies with the University of Kent Computing logo on the front left breast and ‘School of Computing’ on the back.


Join the School of Computing – four vacancies

Are you looking for a new challenge? The School of Computing is looking for four talented individuals to fill full-time posts in administration, research and lecturing.

Student Experience Manager

(STM08390) Closing date 29 October 2017

Do you have substantial experience of working in an administrative role in the HE Sector? Do you have a background in supervising, leading and training staff?

As a Student Experience Manager you will manage all student-related administration within the School of Computing, ensuring that academic colleagues and students receive excellent support from the administration team.

Research Associate in Computational Neuroscience

(STM0847) Closing date 05 November 2017 (fixed term and full time)

This post is part of the EPSRC project “A Computational Prototype for Electroencephalographic Brain Connectomics” awarded to Dr Srivas Chennu, which sits at the interface of signal processing, machine learning and neuroscience.

This is a fixed term post (start date negotiable) and will provide the successful candidate with a rich collaborative environment: it is positioned within the recently formed Data Science group at the University of Kent, and will involve close collaboration with external partners at the Universities of Cambridge and Liège (Belgium).

Lecturer: Student Success x 2 posts

STM0846, School of Computing, Closing date 05 November 2017 (Fixed term for 3 years)

The School of Computing are seeking to appoint two Lecturers, one to be based at our Canterbury campus and one at our Medway campus.

These are unique academic posts within the School of Computing with a specific focus on underperforming students and their success. The primary purpose of these roles is to lead and deliver educational change within the School that will directly lead to improvements in undergraduate retention and academic attainment.


Student develops news aggregator website while on placement year

Natalie McLaren, a BSc Web Computing with a Year in Industry student, has launched a personalised newsfeed application that she developed with a colleague while on her placement year at digital marketing agency Cyber-duck.

While working at Cyber-duck, Natalie learnt two new technologies and, with her colleague, decided to develop a spare-time project using both.

They developed a news aggregator website called Newscape by fusing the PHP framework Laravel with React.js, which allows users to personalise their newsfeed while also viewing their Twitter feed. The site offers news articles in English and German and imports news from 70 news sites. They hope to expand and add more features as the platform grows and they receive feedback from users.
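Newscape itself is built with Laravel and React.js; purely to illustrate the aggregation-and-personalisation idea in a self-contained way, here is a toy sketch in Python, with invented articles, sources and fields:

```python
from datetime import datetime

# Concept sketch of newsfeed aggregation and personalisation. The articles,
# sources and fields below are made up for illustration only.
ARTICLES = [
    {"title": "Kent wins award", "source": "siteA", "lang": "en",
     "published": datetime(2017, 11, 1)},
    {"title": "Bundestag news", "source": "siteB", "lang": "de",
     "published": datetime(2017, 11, 2)},
    {"title": "Tech roundup", "source": "siteC", "lang": "en",
     "published": datetime(2017, 10, 30)},
]

def personalised_feed(articles, languages, sources=None):
    """Filter by the user's preferred languages/sources, newest first."""
    feed = [a for a in articles
            if a["lang"] in languages
            and (sources is None or a["source"] in sources)]
    return sorted(feed, key=lambda a: a["published"], reverse=True)

# An English-only reader sees only the two English articles, newest first
for item in personalised_feed(ARTICLES, languages={"en"}):
    print(item["title"])
```

A real aggregator would fetch live feeds from each source site and merge them on the server; the filter-and-sort core is the same.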

Natalie said ‘Initially Newscape was just a learning experience as we combined two tools which are not usually used together. Now we are really interested in hearing what users think of the website and what features they would like to see on there in future.’

Newscape is available at https://newscape.herokuapp.com/
