Adversarial Learning for Autonomous Systems
As AI solutions become more widespread, there are still outstanding problems to be solved. One of these is the susceptibility of AI architectures to adversarial attacks, which manipulate the inputs to a network by adding a carefully calibrated small perturbation.
Though the change to the input is slight and difficult for a human operator to see, it can have a large effect on the AI's decision. Research in this area is imperative before autonomous systems can be released to work amongst the general public.
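To make the mechanism concrete, the sketch below shows one widely known way such a perturbation can be computed, the Fast Gradient Sign Method (FGSM). It is purely illustrative: the model, label and epsilon budget are placeholders, and FGSM is not necessarily the attack studied in this project.

```python
# Minimal FGSM sketch (illustrative only): nudge the input in the direction
# that increases the model's loss, bounded by a small budget epsilon.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small adversarial perturbation with infinity-norm <= epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the input gradient, then keep pixels in a valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger iterative attacks exist; this single-step example is only meant to illustrate how small the change to the input can be.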
That is why we are working with Dstl to explore ways of mitigating this threat by building solutions that detect and neutralise adversarial attacks. Research by the team has already examined the use of explainability methods to detect adversarial attacks on an AI-controlled UAV carrying out a guidance task.
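As a rough illustration of how explanation-based detection can work in general, the sketch below flags inputs whose attribution pattern deviates strongly from statistics gathered on known-clean inputs. The explain_fn, the z-score statistic and the threshold are illustrative assumptions, not the method developed by the team.

```python
# Illustrative explanation-based detector: compare the attribution pattern of a
# new input against per-feature statistics estimated on known-clean inputs.
import numpy as np

class ExplanationDetector:
    def __init__(self, explain_fn, threshold=3.0):
        self.explain_fn = explain_fn  # maps an input to a flattened attribution vector
        self.threshold = threshold    # mean z-score above which an input is flagged
        self.mean = None
        self.std = None

    def fit(self, clean_inputs):
        """Estimate baseline attribution statistics from known-clean inputs."""
        attrs = np.stack([self.explain_fn(x) for x in clean_inputs])
        self.mean = attrs.mean(axis=0)
        self.std = attrs.std(axis=0) + 1e-8

    def is_suspicious(self, x):
        """Flag inputs whose attributions deviate strongly from the clean baseline."""
        z = np.abs((self.explain_fn(x) - self.mean) / self.std)
        return float(z.mean()) > self.threshold
```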
Explainable AI for Drones
2D/3D Recognition and Scene Understanding
Artificial Intelligence for Detection of Explosive Devices (AIDED)
AIDED is a European H2020 project funded by the European Defence Agency (EDA). It aims to counter explosive ordnance (EO) through the development of an AI-enabled robotic swarm that can be sent out in advance to detect and classify EO threats in the terrain, thereby keeping human soldiers out of harm’s way.
Aside from the devastating loss of human life, the use of improvised explosive devices (IEDs) by adversaries also significantly hampers and slows military operations, as the clearance process is slow, tedious and costly.
AIDED works towards solving this issue by developing advanced AI processing techniques for the rapid and efficient detection and classification of EO threats, supported by automated planning of complex missions.
An uncrewed aerial (or ground) vehicle may use object detection to undertake a variety of tasks. The introduction of deep neural networks has vastly improved object detection capabilities. However, deploying deep networks in the wild is challenging, since they remain opaque, closed-box algorithms.
Hence, the focus of this project is to develop tools that assist developers by providing context behind the detections made by the network, giving them a greater understanding of the operation and limitations of their deep detector. This area of study is known as Explainable AI (XAI).
The explanations are provided in the form of saliency maps, which show how different parts of the input frame contributed to the network’s output. These saliency maps were generated using KernelSHAP. The work was undertaken with funding and support from the Defence Science and Technology Laboratory (Dstl).
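The sketch below shows how a KernelSHAP saliency map of this kind is commonly produced with the open-source shap library: the frame is split into superpixels, superpixels are switched on and off, and the resulting attributions are projected back onto pixels. The image, model_predict function, class index, superpixel count and sample budget are illustrative assumptions, not the project's exact pipeline, and the shape returned by shap_values can vary between shap versions.

```python
# Illustrative KernelSHAP saliency map over superpixels (sketch only).
import numpy as np
import shap
from skimage.segmentation import slic

# Assumed placeholders: `image` is an H x W x 3 float array in [0, 1],
# `model_predict` maps a batch of images to class scores, `class_idx` is the class explained.
segments = slic(image, n_segments=50, compactness=10, start_label=0)
n_segments = segments.max() + 1

def mask_image(zs, background=0.5):
    """Build one image per coalition, greying out superpixels that are switched off."""
    out = np.repeat(image[np.newaxis, ...], zs.shape[0], axis=0)
    for i in range(zs.shape[0]):
        for j in range(zs.shape[1]):
            if zs[i, j] == 0:
                out[i][segments == j] = background
    return out

def f(zs):
    # Score of the class of interest for each masked image.
    return model_predict(mask_image(zs))[:, class_idx]

# KernelSHAP over binary superpixel features; the baseline has every superpixel masked out.
explainer = shap.KernelExplainer(f, np.zeros((1, n_segments)))
shap_values = explainer.shap_values(np.ones((1, n_segments)), nsamples=500)
attribution = np.asarray(shap_values).reshape(-1, n_segments)[0]

# Project per-superpixel attributions back to pixel space as a saliency map.
saliency = np.zeros(segments.shape)
for j in range(n_segments):
    saliency[segments == j] = attribution[j]
```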