Neelan Thuraisingham, a second-year student at the School of Computing, has won a gold medal at the WorldSkills UK competition.
Neelan, who studies Computer Science (Networks) with a Year in Industry, was selected for the Network Infrastructure Technician national final after winning a national qualifier in Surrey.
Neelan said: ‘When they were announcing the prizes I was hoping for bronze and then silver, and then when those prizes went to other people I thought I’d missed out. So it was a huge shock when they announced my name as the gold medal winner. I’m really pleased and would like to pass on my thanks to everyone who supported me.’
WorldSkills UK Competitions bring together apprentices and young people from across the country to compete to be the best in their chosen skill. The national finals were held at The Skills Show, the nation’s largest skills, apprenticeships and careers event, which took place at the NEC Birmingham.
The Network Infrastructure Technician competition focused on all the essential requirements for a successful career as a Network Technician. It consisted of a series of tasks that included the installation, configuration, maintenance and troubleshooting of ICT networks using predominantly Cisco devices. The competition tested the knowledge and skills learned in the Cisco CCNA Network Academy Program.
In The Hitchhiker’s Guide to the Galaxy, Douglas Adams’s seminal 1978 BBC broadcast (then book, feature film and now cultural icon), one of the many technology predictions was the Babel Fish. This tiny yellow life-form, inserted into the human ear and fed by brain energy, was able to translate to and from any language.
Web giant Google has now seemingly developed its own version of the Babel Fish, called Pixel Buds. These wireless earbuds make use of Google Assistant, a smart application which can speak to, understand and assist the wearer. One of the headline abilities is support for Google Translate, which is said to be able to translate up to 40 different languages. Impressive technology for under US$200.
So how does it work?
Real-time speech translation consists of a chain of several distinct technologies – each of which has seen rapid improvement in recent years. The chain, from input to output, goes like this:
Input conditioning: the earbuds pick up background noise and interference, effectively recording a mixture of the user’s voice and other sounds. “Denoising” removes background sounds, while a voice activity detector (VAD) is used to turn the system on only when the correct person is speaking (and not someone standing behind you in a queue saying “OK Google” very loudly). Touch control is used to improve the VAD accuracy.
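The idea behind a voice activity detector can be sketched in a few lines. The toy version below simply flags audio frames whose average power exceeds a threshold; real detectors (like the one described above) combine spectral features, machine learning and, in the Pixel Buds' case, touch input. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def energy_vad(samples, frame_len=400, threshold=0.01):
    """Label each fixed-length frame of audio as speech (True) or silence.

    A minimal energy-based sketch: average power per frame compared
    against a fixed threshold.
    """
    n_frames = len(samples) // frame_len
    flags = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        energy = np.mean(frame ** 2)   # average power of the frame
        flags.append(bool(energy > threshold))
    return flags

# A silent frame followed by a louder "speech" frame
signal = np.concatenate([np.zeros(400),
                         0.5 * np.sin(np.linspace(0, 100, 400))])
print(energy_vad(signal))  # [False, True]
```

In practice the threshold would adapt to the ambient noise level, which is one reason denoising and voice activity detection are so closely coupled.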
Language identification (LID): this system uses machine learning to identify what language is being spoken, within a couple of seconds. This is important because everything that follows is language specific. For language identification, phonetic characteristics alone are insufficient to distinguish languages (language pairs such as Ukrainian and Russian, or Urdu and Hindi, are virtually identical in their units of sound, or “phonemes”), so completely new acoustic representations had to be developed.
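One classical approach to language identification is to compare the input against per-language frequency profiles of short sound or character sequences. The sketch below is a deliberately tiny, hypothetical version of that idea (the profiles and weights are invented for illustration); production systems learn far richer acoustic representations, as the article notes.

```python
from collections import Counter

# Hypothetical per-language profiles: relative weights of common
# bigrams, as might be learned from training data (invented numbers).
PROFILES = {
    "english": Counter({"th": 5, "he": 4, "in": 3, "er": 3}),
    "german":  Counter({"ch": 5, "ei": 4, "en": 4, "un": 3}),
}

def identify_language(text):
    """Return the language whose bigram profile best matches the input."""
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    scores = {
        lang: sum(bigrams[bg] * weight for bg, weight in profile.items())
        for lang, profile in PROFILES.items()
    }
    return max(scores, key=scores.get)

print(identify_language("the weather here"))  # english
```

This is exactly why phonetically similar language pairs are hard: when two languages share almost the same units, their frequency profiles overlap and a classifier needs deeper cues to separate them.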
Automatic speech recognition (ASR): ASR uses an acoustic model to convert the recorded speech into a string of phonemes and then language modelling is used to convert the phonetic information into words. By using the rules of spoken grammar, context, probability and a pronunciation dictionary, ASR systems fill in gaps of missing information and correct mistakenly recognised phonemes to infer a textual representation of what the speaker said.
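The two ASR stages described above can be caricatured as a lookup plus a tie-break: a pronunciation dictionary maps phoneme sequences to candidate words, and a language model chooses among homophones. Everything in this sketch (the phoneme labels, the dictionary, the word probabilities) is a made-up miniature; real systems score whole sentences probabilistically.

```python
# Toy pronunciation dictionary: phoneme sequences -> candidate words.
# Homophones ("red"/"read") share the same phonemes.
PRONUNCIATIONS = {
    ("r", "eh", "d"): ["red", "read"],
    ("b", "uh", "k"): ["book"],
}

# Hypothetical unigram language model: how likely each word is.
WORD_PROB = {"red": 0.3, "read": 0.5, "book": 0.9}

def decode(phoneme_seqs):
    """Convert a list of phoneme sequences into the most probable words."""
    words = []
    for seq in phoneme_seqs:
        candidates = PRONUNCIATIONS.get(tuple(seq), ["<unk>"])
        # The language-model step: pick the most likely candidate word
        words.append(max(candidates, key=lambda w: WORD_PROB.get(w, 0.0)))
    return " ".join(words)

print(decode([["r", "eh", "d"], ["b", "uh", "k"]]))  # read book
```

The "filling in gaps" behaviour in the article corresponds to the `<unk>` fallback and the probabilistic tie-break: where the acoustic evidence is ambiguous, the language model's priors decide.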
Natural language processing: NLP performs machine translation from one language to another. This is not as simple as substituting nouns and verbs, but includes decoding the meaning of the input speech, and then re-encoding that meaning as output speech in a different language – with all the nuances and complexities that make second languages so hard for us to learn.
Speech synthesis or text-to-speech (TTS): almost the opposite of ASR, this synthesises natural sounding speech from a string of words (or phonetic information). Older systems used additive synthesis, which effectively meant joining together lots of short recordings of someone speaking different phonemes into the correct sequence. More modern systems use complex statistical speech models to recreate a natural sounding voice.
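The older concatenative approach mentioned above is easy to illustrate: keep a database of short recorded units, one per phoneme, and splice them together in order. In this sketch the "recordings" are stand-in sine tones rather than real speech, and the unit names are assumptions for illustration.

```python
import numpy as np

SAMPLE_RATE = 16000

def tone(freq, duration=0.1):
    """A stand-in 'recording' of one phoneme: a short sine tone."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# Hypothetical unit database: one pre-recorded snippet per phoneme
UNITS = {"h": tone(200), "eh": tone(300), "l": tone(250), "ow": tone(350)}

def synthesise(phonemes):
    """Concatenative synthesis: join the stored snippets in sequence."""
    return np.concatenate([UNITS[p] for p in phonemes])

audio = synthesise(["h", "eh", "l", "ow"])
print(len(audio) / SAMPLE_RATE)  # 0.4 seconds of audio
```

The limitation is audible at the joins, which is why modern statistical systems, which generate the waveform from a learned model rather than splicing recordings, sound so much more natural.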
Putting it all together
So now we have the five blocks of technology in the chain, let’s see how the system would work in practice to translate between languages such as Chinese and English.
Once ready to translate, the earbuds first record an utterance, using a VAD to identify when the speech starts and ends. Background noise can be partially removed within the earbuds themselves, or once the recording has been transferred by Bluetooth to a smartphone. It is then compressed to occupy a much smaller amount of data, then conveyed over WiFi, 3G or 4G to Google’s speech servers.
Google’s servers, operating as a cloud, will accept the recording, decompress it, and use LID technology to determine whether the speech is in Chinese or in English.
The speech will then be passed to an ASR system for Chinese, then to an NLP machine translator setup to map from Chinese to English. The output of this will finally be sent to TTS software for English, producing a compressed recording of the output. This is sent back in the reverse direction to be replayed through the earbuds.
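The end-to-end flow just described is, structurally, a chain of function calls. The sketch below shows that shape with stubbed-out stages (every function body here is a placeholder assumption; in the real system each stage is a cloud service):

```python
# Schematic of the translation chain: denoise -> LID -> ASR -> MT -> TTS.
# Each stage is a stub standing in for a cloud-hosted model.

def denoise(recording):
    return recording  # stub: remove background noise

def identify_language(recording):
    return "zh"  # stub: LID decides the recording is Chinese

def speech_to_text(recording, lang):
    return "你好"  # stub: Chinese ASR output

def translate(text, src, dst):
    return "hello"  # stub: Chinese-to-English machine translation

def text_to_speech(text, lang):
    return f"<audio:{text}>"  # stub: compressed English TTS output

def translate_utterance(recording):
    clean = denoise(recording)
    src = identify_language(clean)
    dst = "en" if src == "zh" else "zh"
    text = speech_to_text(clean, src)
    translated = translate(text, src, dst)
    return text_to_speech(translated, dst)

print(translate_utterance(b"..."))  # <audio:hello>
```

Seeing the chain laid out this way also makes the latency argument concrete: each arrow in the chain is a model invocation, and most of them run on remote servers rather than in the earbuds.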
This might seem like a lot of stages of communication, but it takes just seconds to happen. And it is necessary – firstly, because the processor in the earbuds is not powerful enough to do translation by itself, and secondly because their memory storage is insufficient to contain the language and acoustic models. Even if a powerful enough processor with enough memory could be squeezed into the earbuds, the complex computer processing would deplete the earbud batteries in a couple of seconds.
Furthermore, companies with these kinds of products (Google, iFlytek and IBM) rely on continuous improvement to correct, refine and improve their translation models. Updating a model is easy on their own cloud servers. It is much more difficult to do when the model is installed in an earbud.
The late Douglas Adams would surely have found the technology behind these real-life translating machines amazing – which it is. But computer scientists and engineers will not stop here. The next wave of speech-enabled computing could even be inspired by another fictional device, such as Iron Man’s smart computer, J.A.R.V.I.S. (Just Another Rather Very Intelligent System) from the Marvel series. Such a system would go way beyond translation: it would be able to converse with us, understand what we are feeling and thinking, and anticipate our needs.
Shujun has a prolific publishing record with over 100 scientific papers published since 2000, including two Best Paper Awards. His research interests are mainly on cyber security, human factors, multimedia computing, digital forensics, and applications of artificial intelligence.
Shujun said, “I am truly excited to start a new chapter of my career at the University of Kent. At both national and international levels, Cyber Security has been recognised by governments and businesses as one of the most important technical subjects for securing a more resilient world for us to live our lives and do business online safely. Based on the University’s excellent research on a number of important topics in Cyber Security, I look forward to working closely with my colleagues here to develop Kent’s Cyber Security research to the next level.”
Shujun received his undergraduate and PhD degrees from Xi’an Jiaotong University, an elite C9 League university in China. He then became a postdoctoral Research Assistant in the Department of Electronic Engineering, City University of Hong Kong, before becoming a postdoctoral fellow at the Hong Kong Polytechnic University. Shujun then moved to Germany to continue his career, firstly at the FernUniversität in Hagen as an Alexander von Humboldt Research Fellow, and then at the Universität Konstanz as a 5-Year Research Fellow of the Zukunftskolleg (Institute for Advanced Study).
In 2011 he joined the University of Surrey as a Senior Lecturer and later became a Reader. From 2014 to 2017, he was a Deputy Director of the Surrey Centre for Cyber Security (SCCS).
Students from the School of Computing have played an integral role in helping to launch a makerspace studio in Dover that will house cutting-edge technologies such as 3D printers and laser cutting machines.
The Creative Start Up Space was opened on Saturday 28 October at the Future Foundry building by the Mayor of Dover, Cllr Neil Rix. The venue is designed to give young people the opportunity to learn how to use new and emerging technologies to develop new skills and businesses.
Professor Sally Fincher and Senior Makerspace Technician Dan Knox from the School of Computing were integral to the launch by providing support and guidance to Future Foundry about the equipment they should purchase for the space such as 3D printers.
Students Elliot Carr and Joe Harrison from the School of Computing’s TinkerSoc society took part in the open day by demonstrating how these machines work and what they can be used for, including showcasing some of the ways in which they are used on courses at the University to test and build objects that could have real-world uses.
The Creative Start Up Space will now start accepting people aged 16-30 to sign up for a one-year course, and will deliver workshops for the wider community. Future Foundry will continue to work with the School of Computing for help and guidance whenever required.
Do you have substantial experience of working in an administrative role in the HE Sector? Do you have a background in supervising, leading and training staff?
As a Student Experience Manager you will manage all student-related administration within the School of Computing, ensuring that academic colleagues and students receive excellent support from the administration team.
(STM0847) Closing date 05 November 2017, (Fixed term and full time)
This post is part of the EPSRC project “A Computational Prototype for Electroencephalographic Brain Connectomics” awarded to Dr Srivas Chennu, which sits at the interface of signal processing, machine learning and neuroscience.
This is a fixed term post (start date negotiable) and will provide the successful candidate with a rich collaborative environment: it is positioned within the recently formed Data Science group at the University of Kent, and will involve close collaboration with external partners at the Universities of Cambridge and Liège (Belgium).
STM0846, School of Computing, Closing date 05 November 2017 (Fixed term for 3 years)
The School of Computing is seeking to appoint two Lecturers, one to be based at our Canterbury campus and one at our Medway campus.
These are unique academic posts within the School of Computing with a specific focus on underperforming students and their success. The primary purpose of these roles is to lead and deliver educational change within the School that will directly lead to improvements in undergraduate retention and academic attainment.
Natalie McLaren, a BSc Web Computing with a Year in Industry student, has launched a personalised newsfeed application that she developed with a colleague while on her placement year at digital marketing agency Cyber-duck.
While working at Cyber-duck Natalie learnt two new technologies, and with her colleague decided to develop a project in their spare time, using the two languages.
They developed a news aggregator website called Newscape by combining the PHP framework Laravel with React.js, which allows customers to personalise their newsfeed while also viewing their Twitter feed. The site offers news articles in English and German and imports news from 70 news sites. They hope to expand and add more features as the platform grows and they receive feedback from users.
Natalie said: ‘Initially Newscape was just a learning experience as we combined two tools which are not usually used together. Now we are really interested in hearing what users think of the website and what features they would like to see on there in future.’
Congratulations to our newest member of staff, Stefan Marr, who arrived at Canterbury Cathedral on Sunday 24 September after cycling for 12 days from Linz in Austria to Canterbury, to take up his new role as lecturer in the School of Computing.
He is currently working on combining different concurrency models in a safe way. His goal is to ensure that complex systems can use the right abstraction for their various computing needs without introducing bugs caused by the subtle interplay of abstractions.
We welcome Stefan to the school.
You can find out more about his cycle ride at #bikexit17
The innovative Year in Computing programme has been awarded a University teaching prize. The Vice-Chancellor, Professor Karen Cox, presented the award to Colin Johnson, Sally Fincher and Ian Utting at a ceremony on Wednesday 4 October 2017.
The Year in Computing programme is a self-contained year of study offered by the School of Computing to students from any other School in the University. Students who successfully complete the year graduate with their original degree title plus ‘with a Year in Computing’.
During the year students work exclusively within the School of Computing, learning and practising a wide range of fundamental computing topics using the Web as a unifying example and platform. Working with web technologies means that students are exposed to a directly applicable collection of widely used software and skills.
Students on the programme come from across the University’s schools and campuses. Unusually for a computing programme, 40% of the students on the course are women; the typical figure for computing programmes is around 15%.
Programme director Ian Utting said: ‘The programme exemplifies the University’s aims to allow students to broaden their programmes in a flexible way. It promotes links between different areas of study and allows students with broad interests and ambitions to realise them. It also provides work-related skills to support students in their future study, research or careers and this aligns strongly with the government’s UK Digital Strategy’.
Further programme details:
The Year in Computing initiative
The Year in Computing programme is a self-contained year of study at Level 5, offered by the School of Computing to students from any other School in the University. During the year students work exclusively within the School of Computing, ultimately graduating (if successful) with their degree title augmented “with a Year in Computing”.
Who is this aimed at?
The Year in Computing is aimed at undergraduate students on any degree in the University other than those registered with the School of Computing. It is delivered in Canterbury, but may be taken by students from either campus. Students may take it after Stage 2 of their degree, or any subsequent Stage including their final Stage. It is designed both for students who want to “convert” into computing for vocational reasons, and for students who want to integrate computing skills and knowledge into their home degree studies. It has an advantage for students over alternatives such as a conversion MSc: not only does it form part of their undergraduate programme for funding (and potentially visa) purposes, but the results from the Year do not affect the classification of their “home” degree, decreasing their risk. (Marks from the Year do appear in students’ transcripts, giving employers a detailed view of what they have achieved.)
What will students get from this?