1. A Low Power and High Performance Artificial Intelligence Approach to Increase Guidance Navigation and Control Robustness
- Author
Ghiglino, Pablo and Harshe, Mandar
- Subjects
obdp2021, obdp, on-board processing
- Abstract
Onboard computers are expected to perform increasingly processor-intensive tasks autonomously. Examples can be found in a broad range of applications: efficient compression of Earth Observation data, system anomaly and fault detection, data encryption, sensor data fusion, etc. Moreover, the Space Industry's growing interest in Artificial Intelligence (AI) has only exacerbated the problem: vision-based navigation in Space, in-orbit validation of Earth Observation data, and signal-to-noise ratio (SNR) reduction are only some examples of the need for AI in Space. The Space Industry, like many other industries, has focused almost exclusively on standard parallelisation techniques for accelerating processor-intensive algorithms. Parallelisation tools such as OpenMP break down a specific operation within the algorithm, e.g. a matrix multiplication, into smaller pieces that are processed by several parallel threads. This technique ensures a low processing time for the algorithm; however, it offers limited control over CPU usage and, moreover, lacks control over data throughput. In this work the authors propose a well-known data processing paradigm that has recently attracted renewed interest in academia: pipelining. Pipelining, as opposed to parallelisation, breaks an algorithm down into steps or operations and assigns threads to each of these steps. A pipeline resembles an assembly line, where each thread is in charge of one specific stage of the product. Academic research shows significant improvements in the overall performance of the algorithm. Moreover, the authors of this paper used a lock-free data processing framework to implement a novel pipeline framework that enhances onboard processing performance even further. This led to ground-breaking results in terms of CPU usage and data throughput: a 20%-60% reduction in CPU usage and a 2x-8x increase in data processing rate. Two validation experiments are presented by the authors.
First, an algorithm consisting of a number of matrix multiplications performed sequentially; and second, an AI network for asteroid pose estimation using data from a past Space mission. Both experiments were benchmarked against OpenMP for the matrix multiplication example and against TensorFlow Lite for the AI algorithm. In both instances, the lock-free pipelined approach presented here substantially outperformed OpenMP and TensorFlow Lite in terms of both CPU usage and data throughput.
- Published
2021