Expert comment: Government’s AI tool will help in online terror fight

Feed URL: https://blogs.kent.ac.uk/unikentcomp-news/2018/02/13/expert-comment-governments-ai-tool-will-help-in-online-terror-fight/feed/?withoutcomments=1

In response to the government unveiling an AI tool that can be used to block terrorist content before it is uploaded to the web, Professor of Computing Ian McLoughlin has said that the technology's performance will only improve over time as it continues to learn.

‘While the company behind this algorithm is extremely sensitive about revealing any details, we know that it’s based on machine learning technology. From the graphics shown during the BBC interview we can infer that the tool works on a frame-by-frame basis (and is possibly a WaveNet approach). This means it doesn’t analyse a recording in its entirety, but analyses each individual frame of the video.

‘Because actions and words in a video are very much related to the context of what has happened before (i.e. individual frames are not really important in isolation but in the few seconds of time that forms their context), there needs to be something in the algorithm that ties context of frames together, and that may be key. The biggest benefit of a frame-by-frame analysis would be to detect embedded content, i.e. segments of terrorist propaganda embedded in an otherwise innocuous video. A secondary benefit is being able to operate on real-time data (i.e. material as it is being broadcast).

‘They mentioned that there are more than 1,000 videos, and I suspect they probably used almost all of those for training. For any machine learning system, final performance is related to the inherent ability of the analysis and processing technique, plus the quality and quantity of the training material. As time goes by, performance is clearly likely to improve.

‘On the topic of performance, if 94% of videos were correctly recognised with 99.995% accuracy, the big question is what happened to the 6% that were not. Were those actual terrorist videos that would be missed (false negatives), or legitimate content that was incorrectly flagged (false positives)?

‘The cost of the former is that something dangerous slips through; the cost of the latter is that a human – who would need to review any flagged content anyway – is loaded with additional work. It is important to analyse the errors in any AI system, and this is no exception. However, revealing the characteristics of this performance – i.e. which 60 videos are not captured – and especially revealing what kinds of videos are correctly and incorrectly recognised, would give too many secrets to those who are producing such material.
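To make the trade-off above concrete, the arithmetic implied by the quoted figures can be sketched as follows. Note this is purely illustrative: the platform upload volume is a hypothetical number, and the interpretation of "99.995% accuracy" as a false-positive rate on legitimate content is an assumption, not a published evaluation detail.

```python
# Illustrative back-of-envelope arithmetic only.
# Assumptions (not from any published evaluation):
#   - the "more than 1,000 videos" figure is the test pool size
#   - "99.995% accuracy" is read as the rate at which legitimate
#     content is correctly passed, i.e. a 0.005% false-positive rate
#   - 3,000,000 uploads/day is a hypothetical platform volume

TOTAL_VIDEOS = 1_000                 # quoted terrorist-video pool
DETECTION_RATE = 0.94                # 94% correctly recognised
FALSE_POSITIVE_RATE = 1 - 0.99995    # 0.005% of legitimate uploads

# False negatives: terrorist videos that slip through.
missed = round(TOTAL_VIDEOS * (1 - DETECTION_RATE))

# False positives: legitimate uploads wrongly flagged for human review.
flagged_per_day = round(3_000_000 * FALSE_POSITIVE_RATE)

print(missed)           # 60
print(flagged_per_day)  # 150
```

Even a seemingly tiny false-positive rate translates into a steady stream of flagged legitimate content at platform scale, which is why the human-review cost matters alongside the headline accuracy figure.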

‘In summary, a great result from ASI Data Science. Technology doesn’t stand still, and this will need to be improved as terrorists evolve their approaches, but let’s try to keep the exact technology and performance secret in the meantime.’

The University’s Press Office provides the media with expert comments in response to topical news events. Colleagues who would like to learn more about how to contribute their expertise or how the service works should contact the Press Office on 3985 or pressoffice@kent.ac.uk

