Artificial Intelligence Techniques for GNC Design, Implementation And Verification

Artificial Intelligence Techniques for GNC Design, Implementation and Verification - EXPRO PLUS (AITIVE-GNC) is a project funded by the European Space Agency (ESA). It proposes a study to establish a formal link between AI-based machine learning (ML) and control-theory-based reasoning and optimisation within a challenging space GNC scenario.

The project will also provide a degree of validation for AI-based ML techniques using robust control theory and other formal methods, and it will develop explainability mechanisms that open up black-box AI-based ML schemes for GNC perception, raising space engineers' trust enough to adopt these schemes.

The objective of the project is to identify mathematical approaches that support the design and verification of next-generation AI-based GNC architectures and functions. More specifically, the focus is on the explainability and robustness of the system, together with means to formally assess these properties, including the Fault Detection, Isolation, and Recovery (FDIR) parts of the architecture. These challenges sit at the intersection of control, AI, and formal verification.
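As a minimal illustration of how robust-control reasoning can be attached to a learned component (not the project's actual toolchain), the Python sketch below upper-bounds the Lipschitz constant of a feed-forward policy network by the product of its layers' spectral norms; the network and its dimensions are hypothetical.

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model: nn.Sequential) -> float:
    """Upper-bound the Lipschitz constant of a feed-forward network by the
    product of the spectral norms of its linear layers (ReLU is 1-Lipschitz,
    so it does not increase the bound)."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Largest singular value of the weight matrix.
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

# Hypothetical controller network: 6 state inputs, 3 actuator commands.
policy = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                       nn.Linear(64, 64), nn.ReLU(),
                       nn.Linear(64, 3))
L = lipschitz_upper_bound(policy)
# If a state perturbation satisfies ||dx|| <= eps, the change in the
# commanded output is at most L * eps — a certified robustness margin.
print(f"Lipschitz upper bound: {L:.2f}")
```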

PRO-ACT – Planetary Robots Deployed for Assembly and Construction Tasks

The key robotic elements, namely the mobile rover IBIS, the six-legged walking robot Mantis, and a mobile gantry, are outlined according to the corresponding mission architecture. The In-Situ Resource Utilisation (ISRU) plant is sized to be representative of a future lunar mission, fitted with grasping points to support robotic manipulation, and designed with the effects of reduced lunar gravity in mind.

The IBIS mobile robot carries a heavy-duty manipulator with interchangeable end effectors and offers long endurance and mobility on moderately uneven terrain. The Mantis hexapod can traverse challenging terrain and use two of its legs as dual manipulators with a limited payload capacity.

The mobile gantry is delivered in a stowed configuration and must be unloaded by the two robots. It can self-assemble into its final configuration with only elementary assistance from the robots, relying on passive mobility.

The project aims to demonstrate the integration of common robotic building blocks that are composed into functional, intelligent robotic agents. Beyond the lunar exploration mission, the transfer of the applied technologies to terrestrial applications will also be evaluated.

Explainable Secure Deep Learning Software for Spacecraft GNC Systems

Artificial Intelligence (AI) algorithms have been widely used to solve a variety of complex real-world problems, attracting significant interest in space engineering due to their promising performance. However, a primary concern regarding the robustness of Deep Learning (DL) techniques is their vulnerability to adversarial attacks. These attacks are often imperceptible to human vision but can significantly influence the decisions made by DL schemes.

The objective of this project is to make onboard adversarial learning transparent, ensuring trustworthy decisions and providing highly precise detection and defensive responses that safeguard space vehicles. To achieve this, we aim to harness Explainable Artificial Intelligence (XAI) together with adversarial attack detection, an approach applicable to the embedded Guidance, Navigation, and Control (GNC) systems of space vehicles that adopt deep learning techniques. More specifically, the project develops DL-based space GNC schemes while exploiting model explainability to detect potential adversarial attacks against them, making the schemes more secure, less vulnerable, and ready for safe adoption in real-world applications.
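As one hedged illustration of explainability-driven attack detection (not necessarily the project's actual mechanism), the sketch below uses input-gradient saliency as the explanation signal and flags inputs whose saliency entropy deviates strongly from statistics calibrated on clean data; the model, threshold, and entropy heuristic are all assumptions.

```python
import torch

def saliency(model, x: torch.Tensor, target: int) -> torch.Tensor:
    """Input-gradient saliency map for one input (the explanation signal)."""
    x = x.clone().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target]   # logit of the predicted class
    score.backward()
    return x.grad.abs()

def looks_adversarial(model, x, target, baseline_entropy, threshold=0.5):
    """Flag an input whose saliency is far more diffuse than the saliency
    statistics collected on clean (trusted) data. 'baseline_entropy' and
    'threshold' are placeholders to be calibrated offline."""
    s = saliency(model, x, target).flatten()
    p = s / (s.sum() + 1e-12)                  # normalise to a distribution
    entropy = -(p * (p + 1e-12).log()).sum().item()
    # Heuristic: adversarial perturbations often spread attribution
    # across the whole image instead of concentrating on the target.
    return entropy - baseline_entropy > threshold
```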

Space Servicing Vehicle close-range Multi-Spectral Camera Engineering Model

Vision-based space relative navigation systems, which use cameras to collect information in the visible spectrum, are affected by changes in the illumination conditions of the target. For instance, when part of the orbit lies in the penumbra or umbra zone, illumination can be very low or absent entirely in full eclipse.

In such cases, a promising solution is to combine infrared cameras with visual cameras. This approach mitigates the limitations of imaging in the visible spectrum, since the Thermal Infrared (TIR) signature depends on the target's temperature and thermal inertia as it cools down and warms up, not on the illumination conditions.

This project aims to develop a vision-based autonomous navigation solution that produces accurate relative pose estimates from images captured during all phases of the rendezvous, regardless of the illumination conditions. By fusing data from the visible and infrared bands, the algorithm aims to output highly accurate relative poses. The system also aims to reach Technology Readiness Level (TRL) 6, making it suitable for space mission environments.
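A minimal sketch of what such visible/TIR fusion could look like, assuming an early-fusion design in which the RGB and TIR channels are concatenated before a shared convolutional backbone; the architecture, layer sizes, and quaternion-plus-translation pose parameterisation below are illustrative, not the project's actual network.

```python
import torch
import torch.nn as nn

class EarlyFusionPoseNet(nn.Module):
    """Fuse a visible (RGB) image and a TIR image at the input and
    regress a relative pose (unit quaternion + translation)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),  # 3 RGB + 1 TIR channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, 7)  # 4 quaternion + 3 translation values

    def forward(self, rgb, tir):
        x = torch.cat([rgb, tir], dim=1)                 # early fusion along channels
        out = self.head(self.backbone(x))
        q = nn.functional.normalize(out[:, :4], dim=1)   # enforce a unit quaternion
        return q, out[:, 4:]
```

Early fusion is only one option; mid- or late-fusion variants that process each band separately before merging features are equally plausible under the project description.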

Trust Through Explainability of AI Based Space Software

In this project, we study the explainability of AI-based algorithms for a lunar landing scenario. The objective of the activity is to develop explainability mechanisms that make these onboard intelligent techniques transparent, ensuring that what they do is safe while meeting the level of performance required by space applications within acceptable uncertainty bounds.

The work is divided into three sub-tasks to achieve XAI-based lunar landing navigation: crater detection, crater identification, and relative navigation. The explainability of the networks is built in at the architecture design stage by introducing an attention mechanism into the networks.

Firstly, the initial state of the lander is assumed to be known, and an attention-based recurrent convolutional neural network (RCNN) structure is designed to address relative pose estimation during the landing phase. Then, the more general "lost in space" case is studied, in which the initial state of the lander is unknown.
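A minimal PyTorch sketch of an attention-based recurrent convolutional architecture of this kind is shown below, assuming single-channel input frames and a quaternion-plus-translation pose output; the layer sizes are placeholders, and the spatial attention map doubles as the explainability artifact.

```python
import torch
import torch.nn as nn

class AttentionRCNNPose(nn.Module):
    """Recurrent convolutional network with spatial attention for
    relative pose estimation over a landing image sequence."""
    def __init__(self, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU())
        self.attn = nn.Conv2d(feat, 1, 1)   # per-pixel attention weights (inspectable)
        self.rnn = nn.LSTM(feat, 128, batch_first=True)
        self.head = nn.Linear(128, 7)       # 4 quaternion + 3 translation values

    def forward(self, frames):              # frames: (B, T, 1, H, W)
        B, T = frames.shape[:2]
        feats = []
        for t in range(T):
            f = self.cnn(frames[:, t])                       # (B, C, h, w)
            a = torch.softmax(self.attn(f).flatten(2), -1)   # attention over pixels
            feats.append((f.flatten(2) * a).sum(-1))         # attended feature (B, C)
        h, _ = self.rnn(torch.stack(feats, dim=1))           # temporal recurrence
        return self.head(h[:, -1])          # pose estimate at the last frame
```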

Hierarchical DRL for CubeSat Guidance and Control

This project presents a hierarchical deep reinforcement learning (HDRL) based autonomous guidance and attitude control system for CubeSats. Leveraging a Hierarchical Actor-Critic (HAC) framework, the system demonstrates the ability to execute short-range satellite rendezvous while maintaining orientation stability in the face of environmental disturbances and actuator noise.
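A minimal sketch of a two-level actor-critic hierarchy of this kind follows, with the high-level actor emitting a subgoal every k steps and the low-level actor tracking it; the state, subgoal, and action dimensions are placeholders, and the critics and training loop are omitted.

```python
import torch
import torch.nn as nn

def mlp(inp, out):
    """Small bounded-output policy network (Tanh keeps commands in [-1, 1])."""
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(),
                         nn.Linear(256, out), nn.Tanh())

class TwoLevelHAC:
    """Two-level hierarchical actor-critic: the high-level actor emits a
    subgoal every k steps; the low-level actor acts to reach that subgoal.
    Dimensions are illustrative placeholders."""
    def __init__(self, state_dim=13, goal_dim=6, action_dim=6, k=10):
        self.high_actor = mlp(state_dim, goal_dim)              # state -> subgoal
        self.low_actor = mlp(state_dim + goal_dim, action_dim)  # (state, subgoal) -> thrust/torque
        self.k = k

    def act(self, state, step, subgoal=None):
        if subgoal is None or step % self.k == 0:
            subgoal = self.high_actor(state)    # re-plan the subgoal every k steps
        action = self.low_actor(torch.cat([state, subgoal]))
        return action, subgoal
```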

To validate the simulated results, a novel hardware-in-the-loop (HIL) testbed was developed, providing the first demonstration of deep reinforcement learning for CubeSat guidance and control in a HIL testing environment. This setup enabled physical validation of the guidance and control scheme, with simulation and HIL results showing that the HDRL controller outperforms both TD3-based schemes and PD controllers.

Deep Neural Network Based LiDAR Navigation for Space Landing Operations

The project is funded by the European Space Agency (ESA) and presents an AI navigation architecture that predicts spacecraft odometry, suitable for space landing operations using a 3D Light Detection and Ranging (LiDAR) sensor. The solution takes advantage of recent advances in deep learning techniques.

The solution leverages convolutional neural networks for feature learning, supporting real-time estimation of a rigid-body transformation. Simulated scenarios use the PANGU (Planet and Asteroid Natural scene Generation Utility) software to generate LiDAR data rendered as three different images (range, slope, and elevation).
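A minimal sketch of a CNN of this kind, assuming two consecutive scans (each rendered as range/slope/elevation channels) stacked at the input and a 6-DoF transform regressed at the output; the layer sizes and axis-angle pose parameterisation are assumptions.

```python
import torch
import torch.nn as nn

class LidarOdometryNet(nn.Module):
    """Estimate the rigid-body transform between two LiDAR scans, each
    rendered as a 3-channel image (range, slope, elevation)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),  # 2 scans x 3 channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose = nn.Linear(64, 6)   # 3 translation + 3 rotation (axis-angle)

    def forward(self, scan_t0, scan_t1):
        x = torch.cat([scan_t0, scan_t1], dim=1)  # stack consecutive scans
        return self.pose(self.features(x))
```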

The hardware implementation and validation include installing and configuring the sensor on a chosen dynamic test-bench platform to obtain realistic LiDAR data from a synthetic representative lunar terrain during a simulated landing trajectory at the CITY Autonomous Systems and Machine Intelligence Lab (ASMIL).

OIBAR: Orbital AI-based Autonomous Refuelling

A spacecraft’s navigation system is of paramount importance for missions involving proximity operations, playing a pivotal role in the success of objectives based on the careful coordination of two or more bodies in space. In particular, new and ambitious rendezvous and docking (RVD) programs involve the execution of precise manoeuvres at close distances with fast reaction times, justifying the need for autonomous decision-making capabilities that run onboard without a ground station in the loop, especially outside of low Earth orbit.

These encompass activities such as on-orbit servicing (OOS), non-cooperative rendezvous (NCRV), and active debris removal (ADR). Orbital AI-based Autonomous Refuelling (OIBAR) represents RAMI’s contribution to the FAIR-SPACE Hub for Future AI & Robotics for Space, led by the University of Surrey and funded by UK Research and Innovation (UKRI) and the UK Space Agency.

This project aims to develop an artificial intelligence (AI)-based solution for space docking applications, enabling autonomous and accurate refuelling of existing satellites or stations in space to sustain their activities for longer durations.

The two main solution components presented in this work are: 1) a deep learning vision-based orbital relative navigation algorithm exploiting RVD video sequences to safely approach and dock with the target body; and 2) an intelligent hardware mechanism carrying out the mechanical docking and refuelling of the target. This integrated software/hardware solution is validated in simulation and experimentally at RAMI’s Autonomous Systems and Machine Intelligence Laboratory (ASMIL) facilities to meet the space standards and performance requirements for this kind of operation.
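As a hedged illustration of one common pattern for deep-learning vision-based relative navigation (not necessarily OIBAR's exact pipeline), the sketch below assumes a learned detector has already regressed the 2D image locations of known 3D keypoints on the target, and recovers the relative pose with classical Perspective-n-Point (PnP) via OpenCV; all numeric values are placeholders.

```python
import numpy as np
import cv2

# Hypothetical example: a DL detector (not shown) regresses the 2D image
# locations of known 3D keypoints on the target body; classical PnP then
# recovers the relative pose. All values below are placeholders.
model_points = np.array([[0.5, 0.5, 0.0], [-0.5, 0.5, 0.0],
                         [-0.5, -0.5, 0.0], [0.5, -0.5, 0.0],
                         [0.0, 0.0, 0.3], [0.2, 0.0, 0.3]])   # target frame, metres
image_points = np.array([[640.0, 360.0], [420.0, 355.0], [425.0, 560.0],
                         [635.0, 565.0], [530.0, 300.0], [570.0, 305.0]])  # pixels, from the detector
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])   # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the target w.r.t. the camera
    print("relative position (m):", tvec.ravel())
```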