Robust Deep Learning Under Application Induced Data Distortions
- Author
- Sahay, Rajeev
- Subjects
- Electrical engineering not elsewhere classified
- Abstract
Deep learning has been increasingly adopted in a multitude of settings. Yet, its strong performance relies on the data it processes during inference being in-distribution with its training data. Input data encountered during deployment, however, is not guaranteed to be in-distribution with the model's training data and can oftentimes be distorted, either intentionally (e.g., by an adversary) or unintentionally (e.g., by a sensor defect), leading to significant performance degradation. In this dissertation, we develop algorithms for a variety of applications to improve the performance of deep learning models in the presence of distorted data. We begin by designing feature engineering methodologies to increase classification performance in noisy environments. Here, we demonstrate the efficacy of our proposed algorithms on two target detection tasks and show that our framework outperforms a variety of state-of-the-art baselines. Next, we develop mitigation algorithms that improve the performance of deep learning in the presence of adversarial attacks and nonlinear signal distortions. In this context, we demonstrate the effectiveness of our methods on a variety of wireless communications tasks, including automatic modulation classification, power allocation in massive MIMO networks, and signal detection. Finally, we develop an uncertainty quantification framework that produces distributional estimates, as opposed to point predictions, from deep learning models in order to characterize samples with uncertain predictions as well as samples that are out-of-distribution with respect to the model's training data. Our uncertainty quantification framework is evaluated on a hyperspectral image target detection task as well as on a counter unmanned aircraft systems (cUAS) model. Ultimately, our proposed algorithms improve the performance of deep learning in several environments in which the data encountered during inference has been distorted to be out-of-distribution from the training data.
- Published
- 2022
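The record above only summarizes the dissertation; its actual implementations are not reproduced here. As a loosely illustrative sketch of the kind of distributional (rather than point) prediction the abstract's uncertainty quantification framework describes, the PyTorch snippet below uses Monte Carlo dropout to flag uncertain or out-of-distribution inputs. The architecture, sample count, and flagging threshold are all hypothetical and are not drawn from the dissertation itself.

```python
import torch
import torch.nn as nn

# Toy classifier containing dropout, so repeated forward passes are stochastic.
class Classifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Return the mean and standard deviation of the class probabilities
    over n_samples stochastic forward passes (dropout kept active)."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)

model = Classifier()
x = torch.randn(8, 16)  # a batch of 8 hypothetical feature vectors
mean_probs, std_probs = mc_dropout_predict(model, x)

# Large spread across passes suggests an uncertain or out-of-distribution
# input; the 0.2 threshold below is purely illustrative.
uncertain = std_probs.max(dim=-1).values > 0.2
print(mean_probs.shape, uncertain)
```

The design point this sketch illustrates is that the model's output becomes an empirical distribution over predictions, so a downstream system can abstain or escalate on high-variance samples instead of acting on a single confident-looking point estimate.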