Deep Learning based Stereo Depth Completion and Guidance for Autonomous Vehicles

This project aims to develop a deep-learning-based autonomous driving system for urban scenarios. To achieve this objective, the self-driving car must remain robust to dynamic traffic conditions and adverse weather.

The proposed solution comprises a real-time perception system and a novel guidance and decision-making model. To enhance understanding of the surrounding environment, stereo-based depth estimation and semantic segmentation techniques are employed to capture scene geometry and semantic information. These outputs from the perception system are then fed into a Deep Reinforcement Learning (DRL) based guidance model, which generates driving commands to control the vehicle.
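To make the perception-to-guidance interface concrete, the PyTorch sketch below shows one plausible wiring: stereo depth and per-class segmentation scores are concatenated into a single state tensor and mapped to continuous driving commands. The module names, tensor shapes and the two-dimensional action space are illustrative assumptions, not the project's actual architecture.

```python
# Minimal sketch (assumed interface): fuse stereo depth and semantic
# segmentation outputs into a state tensor for a DRL guidance policy.
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    """Maps fused depth + semantic features to continuous driving commands."""
    def __init__(self, n_classes: int = 19, n_actions: int = 2):
        super().__init__()
        # Shared CNN encoder over a (1 + n_classes)-channel input:
        # one stereo-depth channel concatenated with per-class segmentation scores.
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + n_classes, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Policy head producing e.g. (steer, throttle) in [-1, 1].
        self.head = nn.Linear(64, n_actions)

    def forward(self, depth: torch.Tensor, seg_logits: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W) from the stereo network
        # seg_logits: (B, n_classes, H, W) from the segmentation network
        state = torch.cat([depth, seg_logits], dim=1)
        return torch.tanh(self.head(self.encoder(state)))

# Example: one forward pass on dummy perception outputs.
policy = GuidancePolicy()
action = policy(torch.rand(1, 1, 96, 320), torch.rand(1, 19, 96, 320))
```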

Monocular 3D Object Detection for Autonomous Vehicles

Monocular 3D object detection is challenging for autonomous driving due to limited depth information. This work proposes a novel approach that enhances deep networks with depth cues to improve spatial understanding from single images. The method includes:

  • A Feature Enhancement Pyramid Module to fuse multi-scale features and improve contextual awareness.

  • An Auxiliary Dense Depth Estimator to enrich spatial perception without added computational cost.

  • An Augmented Centre Depth Regression using geometric cues.

Experiments demonstrate real-time and accurate performance, offering a promising monocular 3D object detection solution for enhancing autonomous driving perception.
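As a rough illustration of the multi-scale fusion performed by a module such as the Feature Enhancement Pyramid Module, the PyTorch sketch below projects backbone features from three scales to a common channel width, aligns them by upsampling, and sums them. The channel sizes and fusion rule are assumptions for illustration, not the paper's design.

```python
# Hedged sketch of generic multi-scale feature fusion (assumed channel sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeaturePyramidFusion(nn.Module):
    """Fuses backbone features from three scales into one enhanced map."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 convolutions project every scale to a common channel width.
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of maps ordered fine -> coarse, e.g. strides 8, 16, 32.
        target_size = feats[0].shape[-2:]
        fused = 0
        for lateral, f in zip(self.laterals, feats):
            x = lateral(f)
            # Upsample coarser maps so all scales align before summation.
            fused = fused + F.interpolate(x, size=target_size, mode="bilinear",
                                          align_corners=False)
        return self.smooth(fused)

# Example with dummy backbone features at strides 8/16/32.
m = FeaturePyramidFusion()
out = m([torch.rand(1, 256, 48, 160), torch.rand(1, 512, 24, 80),
         torch.rand(1, 1024, 12, 40)])
```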

Robust Guidance Decision-Making for Single-Agent Autonomous Vehicles

This work addresses the vulnerability of Deep Reinforcement Learning (DRL) based single-agent guidance decision-making for autonomous vehicles to adversarial perception attacks.

An efficient gradient-based method is proposed to generate adversarial perturbations, together with a saliency-based detection network that flags attacks on sensor inputs. To ensure safe guidance under such perturbations, a robust DRL framework is developed using Proximal Policy Optimisation (PPO) with a theoretically grounded, multi-objective constrained optimisation strategy.
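For concreteness, the sketch below shows a generic single-step gradient (FGSM-style) perturbation of a policy's observation, i.e. the family of attack that a gradient-based perturbation method belongs to. The toy policy, loss and epsilon are illustrative assumptions, not the proposed attack itself.

```python
# Hedged sketch: single-step gradient perturbation of a DRL observation.
import torch

def gradient_perturbation(policy, obs: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return a perturbed copy of `obs` that pushes the policy away from
    the action it would take on the clean observation."""
    clean_action = policy(obs).detach()
    adv = obs.clone().detach().requires_grad_(True)
    # Ascend the gradient of the action deviation so a small input change
    # causes a large change in the commanded action.
    loss = (policy(adv) - clean_action).pow(2).sum()
    loss.backward()
    return (adv + epsilon * adv.grad.sign()).clamp(0.0, 1.0).detach()

# Example with a toy linear policy over a flattened observation.
toy_policy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 2))
adv_obs = gradient_perturbation(toy_policy, torch.rand(1, 3, 8, 8))
```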

Evaluations in complex roundabout scenarios show the approach significantly improves resilience and driving safety under adversarial conditions.

Robust Guidance Decision-Making for Multi-Agent Autonomous Vehicles

While Multi-Agent Reinforcement Learning (MARL) significantly enhances coordination and system stability compared to single-agent approaches, it also introduces increased vulnerability to adversarial perturbations. These attacks on observation inputs can mislead one or more agents, resulting in unsafe behaviours and potential multi-vehicle collisions.

To address this, Robust Constrained Cooperative Multi-Agent Reinforcement Learning (R-CCMARL) is proposed, which employs a universal policy shared across agents and leverages Mean-Field modelling to effectively manage dynamic multi-agent interactions. A risk estimation network is incorporated to assess long-term safety and inform a constrained optimisation objective that balances robustness and task performance, even under adversarial conditions.
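The PyTorch sketch below shows one common way such a risk-constrained objective can be realised: a Lagrangian multiplier grows while the estimated long-term risk exceeds a safety budget, shifting weight from task reward to safety. The class, names and budget value are assumptions for illustration, not R-CCMARL's actual formulation.

```python
# Hedged sketch of a Lagrangian-style risk constraint on a policy loss.
import torch

class RiskLagrangian:
    def __init__(self, risk_budget: float = 0.1, lr: float = 1e-2):
        self.budget = risk_budget
        # Parameterise lambda as exp(log_lambda) so it stays non-negative.
        self.log_lambda = torch.zeros(1, requires_grad=True)
        self.opt = torch.optim.Adam([self.log_lambda], lr=lr)

    def penalised_loss(self, policy_loss: torch.Tensor, est_risk: torch.Tensor) -> torch.Tensor:
        lam = self.log_lambda.exp().detach()
        # Task objective plus weighted constraint violation.
        return policy_loss + lam * (est_risk - self.budget)

    def update_multiplier(self, est_risk: torch.Tensor) -> None:
        # Gradient ascent on lambda: increase it while the constraint is violated.
        self.opt.zero_grad()
        loss = -(self.log_lambda.exp() * (est_risk.detach() - self.budget))
        loss.backward()
        self.opt.step()

# Example: one multiplier update when estimated risk exceeds the budget.
lag = RiskLagrangian()
lag.update_multiplier(torch.tensor(0.3))  # risk 0.3 > budget 0.1 -> lambda grows
```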

Extensive experiments in CARLA intersection scenarios demonstrate that R-CCMARL maintains high task performance while significantly improving resilience against observation-based attacks.