Towards Robust Artificial Intelligence

06 Jan 2020

The field of AI has seen rapid advancements in the past decade, heralding a wide range of new and potential application areas. However, for AI to be adopted more broadly, there needs to be a higher level of trust in the outputs of AI models and a deeper understanding of the rationale behind their assessments. Furthermore, in the face of adversarial AI, where adversaries could modify data to trick an AI system, there is a need for new defensive strategies to ensure the robustness of trained AI models.

Speaking at a DSTA Lecture on 6 January 2020, Dr Cox shared the Lab’s ongoing research into the robustness of AI models, and explained that robustness is especially important for industries with mission-critical applications such as defence.

He also highlighted the performance gap in today’s AI systems. As an example, he described how the accuracy of several object-recognition models fell drastically when tested on ObjectNet – a dataset in which objects are decoupled from the contexts in which they would normally appear – compared with their accuracy on a conventional image dataset such as ImageNet. This illustrated how the datasets currently used to train models may not be sufficient, as they do not take into account different contexts and corner cases, and it underscored the need for new robustness benchmarks for AI systems.
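To make the kind of comparison described above concrete, the sketch below evaluates a pretrained classifier on two locally prepared image folders and reports top-1 accuracy on each. It is a minimal illustration only, not the evaluation from the lecture: the folder paths are hypothetical, and ObjectNet images would first need their labels mapped onto ImageNet class indices.

```python
# Minimal sketch: compare top-1 accuracy of a pretrained classifier on two
# datasets. Paths and folder layout are hypothetical; ObjectNet labels would
# need to be mapped onto ImageNet class indices beforehand.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

def top1_accuracy(folder):
    """Top-1 accuracy of `model` on an ImageFolder-style directory."""
    loader = DataLoader(datasets.ImageFolder(folder, transform=preprocess),
                        batch_size=32)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Hypothetical local folders holding the two test sets.
print("ImageNet-style images:", top1_accuracy("data/imagenet_val"))
print("ObjectNet-style images:", top1_accuracy("data/objectnet_subset"))
```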

Dr Cox discussed two areas of research in robustness. First, he addressed the issue of adversarial robustness: the ability of a system to function despite the presence of maliciously crafted inputs designed to trick it. He explained that even small perturbations in an image can produce different results when the image is fed to a system that has already been trained. Furthermore, such perturbations can be too small for the human eye to detect. With the huge range of adversarial attacks possible, the amount of havoc that an adversary could wreak in a defence context would be enormous.
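As an illustration of how little it can take to change a model’s output, the sketch below uses the widely known fast gradient sign method (FGSM); this was not necessarily the attack discussed in the lecture. The model, input image and label are placeholders, and the perturbation size is controlled by eps.

```python
# Minimal FGSM sketch: a perturbation small enough to be imperceptible can
# still flip a trained classifier's prediction. Model, input and label are
# placeholders; eps controls the perturbation size.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

def fgsm_perturb(image, label, eps=0.01):
    """Return an adversarially perturbed copy of `image` (shape [1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Placeholder input: in practice this would be a real, correctly classified image.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])  # hypothetical true class index

x_adv = fgsm_perturb(x, y)
print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```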

As such, adversarial robustness is something that has to be achieved. He highlighted the need to understand how effective adversarial attacks can be generated, as well as how to interpret, detect and mitigate them. He also discussed attack methods that are proving increasingly successful, an indication that adversarial attacks are fast catching up with defences.

The second area of research that Dr Cox touched on revolved around new approaches to achieving safe and verifiable AI systems. Traditionally, this would be achieved through software testing, but research has shown that comprehensive software testing is not able to guarantee the predictability of AI models.

For instance, research has found that autonomous vehicles would have to be driven hundreds of billions of miles for their reliability to be demonstrated – an effort that would take tens or even hundreds of years and could itself be unsafe. Hence, ways must be found to augment testing-based approaches for deploying systems in open or adversarial environments. Dr Cox shared that combining automated reasoning with deep learning could significantly improve the results of AI models.
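A back-of-the-envelope sketch of why such mileage figures grow so large: to claim, with a given confidence, that a failure rate is below some target after observing zero failures, the required mileage scales inversely with that target rate. The rate and fleet mileage below are illustrative assumptions, not figures from the lecture, and the far stricter goal of demonstrating an improvement over human drivers pushes the requirement higher still.

```python
# Back-of-the-envelope sketch: miles of failure-free driving needed to show,
# with a given confidence, that the failure rate is below a target. Uses the
# standard bound for zero observed failures: miles >= -ln(1 - confidence) / rate.
# The target rate and fleet mileage are illustrative assumptions only.
import math

def miles_required(max_failure_rate_per_mile, confidence=0.95):
    """Failure-free miles needed to claim the rate is below the target."""
    return -math.log(1.0 - confidence) / max_failure_rate_per_mile

target_rate = 1e-8          # assumed: at most one serious failure per 100 million miles
fleet_miles_per_year = 1e6  # assumed: a test fleet drives a million miles a year

needed = miles_required(target_rate)
print(f"Miles of failure-free driving needed: {needed:,.0f}")
print(f"Years at the assumed fleet mileage:   {needed / fleet_miles_per_year:,.0f}")
```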

After the lecture, Dr Cox participated in a Q&A session moderated by Director Cybersecurity Paul Tan.

“The lecture was extremely insightful. Dr Cox provided a comprehensive overview of the current problems surrounding adversarial AI and helped the audience understand the difficulties AI users currently face and will face in the future. I learnt that we should go in-depth into the workings of neural networks to better understand how attacks could be formed as well as how we can defend against them,” said Engineer (Cybersecurity) Shawn Chua.
