
Using Reinforcement Learning to Improve Airspace Structuring in an Urban Environment

Authors :
Ribeiro, M.J. (author)
Ellerbroek, Joost (author)
Hoekstra, J.M. (author)
Publication Year :
2022

Abstract

Current predictions of future drone operations estimate traffic densities orders of magnitude higher than any observed in manned aviation. Such densities redirect the focus towards elements that can decrease conflict rate and severity, with special emphasis on airspace structure, an element that has previously been overlooked in distributed environments. This work examines the impact of different airspace structures in multiple traffic scenarios, and how appropriate structures can increase the safety of future drone operations in urban airspace. First, reinforcement learning was used to define optimal heading-range distributions within a layered airspace concept. Second, transition layers were reserved to facilitate vertical deviation between cruising layers and conflict avoidance. The effects of traffic density, non-linear routes, and vertical deviation between layers were tested in an open-source airspace simulation platform. Results show that optimal structuring, catered to the current traffic scenario, improves airspace usage by correctly segmenting aircraft according to their flight routes. The number of conflicts and losses of minimum separation was reduced compared with using a single, uniform airspace structure for all traffic scenarios, thus enabling higher airspace capacity.

Control & Simulation
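
The abstract states that reinforcement learning was used to define heading-range distributions within a layered airspace. The snippet below is a minimal, hypothetical sketch of that general idea only: it assumes an illustrative stub simulator (`simulate_conflicts`), an invented discrete set of candidate heading-range widths, and a simple epsilon-greedy bandit with a conflict-count penalty as the reward, none of which are taken from the paper itself.

```python
# Hypothetical sketch: learn which heading-range width per cruising layer
# minimises conflicts. The environment, action set, and reward are
# illustrative placeholders, not the authors' actual experimental setup.
import random

# Candidate heading-range widths (degrees) a single cruising layer may span.
HEADING_RANGES = [45, 90, 180, 360]


def simulate_conflicts(range_deg, traffic_density):
    """Placeholder for an airspace simulation run.

    Returns a mock conflict count; a real study would fly full traffic
    scenarios in a simulator. Here, wider heading ranges per layer simply
    produce more crossing conflicts on average.
    """
    return max(0.0, random.gauss(traffic_density * range_deg / 360.0, 2.0))


def train(episodes=500, traffic_density=10.0, eps=0.1, alpha=0.1):
    """Epsilon-greedy bandit over heading-range options; reward = -conflicts."""
    q = {r: 0.0 for r in HEADING_RANGES}
    for _ in range(episodes):
        # Explore a random option occasionally, otherwise exploit the best so far.
        if random.random() < eps:
            action = random.choice(HEADING_RANGES)
        else:
            action = max(q, key=q.get)
        reward = -simulate_conflicts(action, traffic_density)
        # Incremental value update towards the observed reward.
        q[action] += alpha * (reward - q[action])
    return q


if __name__ == "__main__":
    values = train()
    best = max(values, key=values.get)
    print(f"Estimated best heading-range width per layer: {best} deg")
```

Under these toy assumptions the learner converges on the narrowest heading range, reflecting the intuition that tighter heading segmentation per layer reduces crossing conflicts; the paper's actual method and results should be consulted for the real formulation.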

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1357879899
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.3390/aerospace9080420