95 results for "Clancy, Thomas Charles III"
Search Results
2. Processing of communications signals using machine learning
- Author
-
Virginia Tech Intellectual Properties, Inc., Clancy, Thomas Charles III, and O'Shea, Timothy James
- Abstract
One or more processors control processing of radio frequency (RF) signals using a machine-learning network. The one or more processors receive as input, to a radio communications apparatus, a first representation of an RF signal, which is processed using one or more radio stages, providing a second representation of the RF signal. Observations about, and metrics of, the second representation of the RF signal are obtained. Past observations and metrics are accessed from storage. Using the observations, metrics and past observations and metrics, parameters of a machine-learning network, which implements policies to process RF signals, are adjusted by controlling the radio stages. In response to the adjustments, actions performed by one or more controllers of the radio stages are updated. A representation of a subsequent input RF signal is processed using the radio stages that are controlled based on actions including the updated one or more actions.
- Published
- 2020
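The observe-adjust-act loop this patent abstract describes can be caricatured as a bandit-style controller: observe a metric of the processed signal, keep running estimates in storage, and update the actions the stage controller takes. A minimal sketch, assuming a single tunable gain stage and a scalar quality metric (all names and the reward shape here are illustrative, not the patent's):

```python
import random

def run_controller(radio_stage, metric, gains, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy controller: observe a metric of the processed
    signal, store running estimates, and update the action policy."""
    rng = random.Random(seed)
    q = {g: 0.0 for g in gains}          # value estimate per gain setting
    n = {g: 0 for g in gains}            # visit counts
    for _ in range(steps):
        # explore occasionally, otherwise exploit the best-known setting
        g = rng.choice(gains) if rng.random() < eps else max(q, key=q.get)
        out = radio_stage(g)             # "second representation" of the signal
        r = metric(out)                  # observation/metric of that output
        n[g] += 1
        q[g] += (r - q[g]) / n[g]        # running-average update from storage
    return max(q, key=q.get)

# Hypothetical stage whose output quality peaks at gain 0.5, plus noise.
noise = random.Random(1)
best = run_controller(
    radio_stage=lambda g: g,
    metric=lambda g: -(g - 0.5) ** 2 + noise.gauss(0, 0.01),
    gains=[0.0, 0.25, 0.5, 0.75, 1.0],
)
```

The patent's machine-learning network replaces the table `q` with learned policy parameters, but the feedback structure is the same.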
3. Processing of communications signals using machine learning
- Author
-
Virginia Tech Intellectual Properties, Inc., Clancy, Thomas Charles III, and O'Shea, Timothy James
- Abstract
One or more processors control processing of radio frequency (RF) signals using a machine-learning network. The one or more processors receive as input, to a radio communications apparatus, a first representation of an RF signal, which is processed using one or more radio stages, providing a second representation of the RF signal. Observations about, and metrics of, the second representation of the RF signal are obtained. Past observations and metrics are accessed from storage. Using the observations, metrics and past observations and metrics, parameters of a machine-learning network, which implements policies to process RF signals, are adjusted by controlling the radio stages. In response to the adjustments, actions performed by one or more controllers of the radio stages are updated. A representation of a subsequent input RF signal is processed using the radio stages that are controlled based on actions including the updated one or more actions.
- Published
- 2019
4. A Modest Proposal for Open Market Risk Assessment to Solve the Cyber-Security Problem
- Author
-
O’Shea, Timothy J., Mondl, Adam, and Clancy, Thomas Charles III
- Abstract
We introduce a model for a market based economic system of cyber-risk valuation to correct fundamental problems of incentives within the information technology and information processing industries. We assess the makeup of the current day marketplace, identify incentives, identify economic reasons for current failings, and explain how a market based risk valuation system could improve these incentives to form a secure and robust information marketplace for all consumers by providing visibility into open, consensus based risk pricing and allowing all parties to make well informed decisions.
- Published
- 2016
5. System and method for heterogeneous spectrum sharing between commercial cellular operators and legacy incumbent users in wireless networks
- Author
-
Kumar, Akshay, Mitola, Joseph III, Reed, Jeffrey H., Clancy, Thomas Charles III, Amanna, Ashwin E., McGwier, Robert, and Sengupta, Avik
- Abstract
Described herein are systems and methods for telecommunications spectrum sharing between multiple heterogeneous users, which leverage a hybrid approach combining distributed spectrum sharing, spectrum sensing, and the use of geo-reference databases.
- Published
- 2016
6. A Modest Proposal for Open Market Risk Assessment to Solve the Cyber-Security Problem
- Author
-
Computer Science, O’Shea, Timothy J., Mondl, Adam, and Clancy, Thomas Charles III
- Abstract
We introduce a model for a market based economic system of cyber-risk valuation to correct fundamental problems of incentives within the information technology and information processing industries. We assess the makeup of the current day marketplace, identify incentives, identify economic reasons for current failings, and explain how a market based risk valuation system could improve these incentives to form a secure and robust information marketplace for all consumers by providing visibility into open, consensus based risk pricing and allowing all parties to make well informed decisions.
- Published
- 2016
7. System and method for heterogeneous spectrum sharing between commercial cellular operators and legacy incumbent users in wireless networks
- Author
-
Electrical and Computer Engineering, Business Information Technology, Hume Center for National Security and Technology, Virginia Tech Intellectual Properties, Inc., Federated Wireless, Inc., Kumar, Akshay, Mitola, Joseph III, Reed, Jeffrey H., Clancy, Thomas Charles III, Amanna, Ashwin E., McGwier, Robert, and Sengupta, Avik
- Abstract
Described herein are systems and methods for telecommunications spectrum sharing between multiple heterogeneous users, which leverage a hybrid approach combining distributed spectrum sharing, spectrum sensing, and the use of geo-reference databases.
- Published
- 2016
8. Physical layer orthogonal frequency-division multiplexing acquisition and timing synchronization security
- Author
-
La Pan, Matthew J., Clancy, Thomas Charles III, McGwier, Robert W., Electrical and Computer Engineering, and Hume Center for National Security and Technology
- Subjects
ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS, Data_CODINGANDINFORMATIONTHEORY, security, synchronization, OFDM
- Abstract
Orthogonal frequency-division multiplexing (OFDM) has become the manifest modulation choice for 4G standards. Timing acquisition and carrier frequency offset synchronization are prerequisite to OFDM demodulation and must be performed often. Most of the OFDM methods for synchronization were not designed with security in mind. In particular, we analyze the performance of a maximum likelihood synchronization estimator against highly correlated jamming attacks. We present a series of attacks against OFDM timing acquisition: preamble whitening, the false preamble attack, preamble warping, and preamble nulling. The performance of OFDM synchronization turns out to be very poor against these attacks, and a number of mitigation strategies and security improvements are discussed.
- Published
- 2014
9. A Multi-Tier Wireless Spectrum Sharing System Leveraging Secure Spectrum Auctions
- Author
-
Abdelhadi, Ahmed, Shajaiah, Haya, and Clancy, Thomas Charles III
- Abstract
Secure spectrum auctions can revolutionize the spectrum utilization of cellular networks and satisfy the ever-increasing demand for resources. In this paper, a multi-tier dynamic spectrum sharing system is studied for efficient sharing of spectrum with commercial wireless system providers (WSPs), with an emphasis on federal spectrum sharing. The proposed spectrum sharing system optimizes usage of spectrum resources, manages intra-WSP and inter-WSP interference, and provides an essential level of security, privacy, and obfuscation to enable the most efficient and reliable usage of the shared spectrum. It features an intermediate spectrum auctioneer responsible for allocating resources to commercial WSPs by running secure spectrum auctions. The proposed secure spectrum auction, MTSSA, leverages the Paillier cryptosystem to avoid possible fraud and bid-rigging. Numerical simulations are provided to compare the performance of MTSSA, in the considered spectrum sharing system, with other spectrum auction mechanisms for realistic cellular systems.
- Published
- 2015
- Full Text
- View/download PDF
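MTSSA's fraud resistance rests on the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so an auctioneer can aggregate encrypted bids without decrypting any single one. A textbook-Paillier sketch with toy primes (not the paper's MTSSA protocol, and far too small for real use):

```python
import math, random

def paillier_keygen(p, q):
    """Textbook Paillier with g = n + 1. Toy primes only; a real
    auction would use moduli of 2048 bits or more."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because L(g^lam mod n^2) = lam mod n
    return n, (n, lam, mu)

def encrypt(n, m, rng=random):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be invertible mod n
        r = rng.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(sk, c):
    n, lam, mu = sk               # L(x) = (x - 1) // n
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = paillier_keygen(61, 53)
bids = [120, 75, 300]
total_ct = 1
for b in bids:                    # auctioneer multiplies ciphertexts...
    total_ct = total_ct * encrypt(pk, b) % (pk * pk)
# ...and decrypting the product reveals only the sum, not individual bids
```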
10. Intrusion Detection System for Applications using Linux Containers
- Author
-
Abed, Amr S., Clancy, Thomas Charles III, and Levy, David S.
- Abstract
Linux containers are gaining increasing traction in both individual and industrial use, and as these containers get integrated into mission-critical systems, real-time detection of malicious cyber attacks becomes a critical operational requirement. This paper introduces a real-time host-based intrusion detection system that can be used to passively detect malfeasance against applications within Linux containers running in a standalone or in a cloud multi-tenancy environment. The demonstrated intrusion detection system uses bags of system calls monitored from the host kernel for learning the behavior of an application running within a Linux container and determining anomalous container behavior. Performance of the approach using a database application was measured and results are discussed.
- Published
- 2015
- Full Text
- View/download PDF
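The "bags of system calls" idea can be sketched as frequency vectors compared against a profile learned from benign container runs; a minimal illustration with a hypothetical syscall vocabulary, traces, and threshold (the paper's actual features and detection logic differ):

```python
from collections import Counter

VOCAB = ["read", "write", "open", "close", "socket", "execve"]

def bag(trace):
    """Normalized frequency vector ('bag') of system calls."""
    c, total = Counter(trace), len(trace) or 1
    return [c[s] / total for s in VOCAB]

def distance(u, v):
    """L1 distance between two bags."""
    return sum(abs(a - b) for a, b in zip(u, v))

# Illustrative benign traces monitored from the host kernel.
normal_runs = [
    ["open", "read", "read", "write", "close"],
    ["open", "read", "write", "write", "close"],
]
# Learned profile: per-syscall mean frequency across benign runs.
profile = [sum(col) / len(col) for col in zip(*(bag(t) for t in normal_runs))]

def is_anomalous(trace, threshold=0.5):
    return distance(bag(trace), profile) > threshold
```

A container suddenly issuing `socket`/`execve`-heavy traces lands far from the benign profile and is flagged.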
11. Intrusion Detection System for Applications using Linux Containers
- Author
-
Electrical and Computer Engineering, Computer Science, Hume Center for National Security and Technology, Abed, Amr S., Clancy, Thomas Charles III, and Levy, David S.
- Abstract
Linux containers are gaining increasing traction in both individual and industrial use, and as these containers get integrated into mission-critical systems, real-time detection of malicious cyber attacks becomes a critical operational requirement. This paper introduces a real-time host-based intrusion detection system that can be used to passively detect malfeasance against applications within Linux containers running in a standalone or in a cloud multi-tenancy environment. The demonstrated intrusion detection system uses bags of system calls monitored from the host kernel for learning the behavior of an application running within a Linux container and determining anomalous container behavior. Performance of the approach using a database application was measured and results are discussed.
- Published
- 2015
12. Distributed Storage Systems with Secure and Exact Repair - New Results
- Author
-
Tandon, Ravi, Amuru, SaiDhiraj, Clancy, Thomas Charles III, and Buehrer, R. Michael
- Abstract
Distributed storage systems (DSS) in the presence of a passive eavesdropper are considered in this paper. A typical DSS is characterized by 3 parameters (n, k, d), where a file is stored in a distributed manner across n nodes such that it can be recovered entirely from any k out of n nodes. Whenever a node fails, d ∈ [k, n) nodes participate in the repair process. In this paper, we study the exact repair capabilities of a DSS, where a failed node is replaced with its exact replica. Securing this DSS from a passive eavesdropper capable of wiretapping the repair process of any l < k nodes is the main focus of this paper. Specifically, we characterize the optimal secure-storage vs. exact-repair-bandwidth tradeoff region for the (4, 2, 3) DSS when l = 1 and the (n, n − 1, n − 1) DSS when l = n − 2.
- Published
- 2014
- Full Text
- View/download PDF
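For context, the endpoints of the classical (non-secure) storage-vs-repair-bandwidth tradeoff for an (n, k, d) regenerating code, which the paper's secure region refines, are the standard minimum-storage (MSR) and minimum-bandwidth (MBR) points; a sketch using the textbook formulas (these are the standard cut-set-bound endpoints, not the paper's secure-tradeoff results):

```python
from fractions import Fraction as F

def msr_point(M, k, d):
    """Minimum-storage point: per-node storage alpha = M/k,
    repair bandwidth d*beta with beta = M/(k*(d - k + 1))."""
    return F(M, k), F(d * M, k * (d - k + 1))

def mbr_point(M, k, d):
    """Minimum-bandwidth point: beta = 2M/(k*(2d - k + 1)), and
    per-node storage alpha = d*beta equals the repair bandwidth."""
    bw = F(2 * d * M, k * (2 * d - k + 1))
    return bw, bw

# The paper's (4, 2, 3) system with a file of M = 4 units:
alpha_msr, bw_msr = msr_point(4, 2, 3)   # alpha = 2, bandwidth = 3
alpha_mbr, bw_mbr = mbr_point(4, 2, 3)   # alpha = bandwidth = 12/5
```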
13. Distributed Storage Systems with Secure and Exact Repair - New Results
- Author
-
Electrical and Computer Engineering, Hume Center for National Security and Technology, Tandon, Ravi, Amuru, SaiDhiraj, Clancy, Thomas Charles III, and Buehrer, R. Michael
- Abstract
Distributed storage systems (DSS) in the presence of a passive eavesdropper are considered in this paper. A typical DSS is characterized by 3 parameters (n, k, d), where a file is stored in a distributed manner across n nodes such that it can be recovered entirely from any k out of n nodes. Whenever a node fails, d ∈ [k, n) nodes participate in the repair process. In this paper, we study the exact repair capabilities of a DSS, where a failed node is replaced with its exact replica. Securing this DSS from a passive eavesdropper capable of wiretapping the repair process of any l < k nodes is the main focus of this paper. Specifically, we characterize the optimal secure-storage vs. exact-repair-bandwidth tradeoff region for the (4, 2, 3) DSS when l = 1 and the (n, n − 1, n − 1) DSS when l = n − 2.
- Published
- 2014
14. Physical layer orthogonal frequency-division multiplexing acquisition and timing synchronization security
- Author
-
Electrical and Computer Engineering, Hume Center for National Security and Technology, La Pan, Matthew J., Clancy, Thomas Charles III, and McGwier, Robert W.
- Abstract
Orthogonal frequency-division multiplexing (OFDM) has become the manifest modulation choice for 4G standards. Timing acquisition and carrier frequency offset synchronization are prerequisite to OFDM demodulation and must be performed often. Most of the OFDM methods for synchronization were not designed with security in mind. In particular, we analyze the performance of a maximum likelihood synchronization estimator against highly correlated jamming attacks. We present a series of attacks against OFDM timing acquisition: preamble whitening, the false preamble attack, preamble warping, and preamble nulling. The performance of OFDM synchronization turns out to be very poor against these attacks, and a number of mitigation strategies and security improvements are discussed.
- Published
- 2014
15. Vulnerability of LTE to Hostile Interference
- Author
-
Lichtman, Marc, Reed, Jeffrey H., Clancy, Thomas Charles III, and Norton, Mark
- Abstract
LTE is well on its way to becoming the primary cellular standard, due to its performance and low cost. Over the next decade we will become dependent on LTE, which is why we must ensure it is secure and available when we need it. Unfortunately, like any wireless technology, disruption through radio jamming is possible. This paper investigates the extent to which LTE is vulnerable to intentional jamming, by analyzing the components of the LTE downlink and uplink signals. The LTE physical layer consists of several physical channels and signals, most of which are vital to the operation of the link. By taking into account the density of these physical channels and signals with respect to the entire frame, as well as the modulation and coding schemes involved, we come up with a series of vulnerability metrics in the form of jammer to signal ratios. The “weakest links” of the LTE signals are then identified, and used to establish the overall vulnerability of LTE to hostile interference.
- Published
- 2013
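The intuition behind the density-based vulnerability metrics can be sketched numerically: concentrating jammer power on one sparse physical channel buys a pooling gain over barrage-jamming the whole frame. An illustrative calculation (not the paper's exact J/S derivations, which also account for modulation and coding):

```python
import math

def pooling_gain_db(density):
    """Extra J/S advantage (in dB) a jammer gains by concentrating its
    power on one sparse physical channel or signal instead of
    barrage-jamming the entire frame."""
    return 10 * math.log10(1 / density)

# A control channel occupying ~1% of a frame's resource elements can be
# attacked with roughly 20 dB less total jammer power than the full frame,
# which is why sparse-but-vital channels are the "weakest links".
```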
16. Application of Cybernetics and Control Theory for a New Paradigm in Cybersecurity
- Author
-
Adams, Michael D., Hitefield, Seth D., Hoy, Bruce, Fowler, Michael C., and Clancy, Thomas Charles III
- Abstract
A significant limitation of current cyber security research and techniques is their reactive and applied nature. This leads to a continuous ‘cyber cycle’ of attackers scanning networks, developing exploits and attacking systems, with defenders detecting attacks, analyzing exploits and patching systems. This reactive nature leaves sensitive systems highly vulnerable to attack due to unpatched systems and undetected exploits. Some current research attempts to address this major limitation by introducing systems that implement moving target defense. However, these ideas are typically based on the intuition that a moving target defense will make it much harder for attackers to find and scan vulnerable systems, and not on theoretical mathematical foundations. The continuing lack of fundamental science and principles for developing more secure systems has drawn increased interest into establishing a ‘science of cyber security’. This paper introduces the concept of using cybernetics, an interdisciplinary approach of control theory, systems theory, information theory and game theory applied to regulatory systems, as a foundational approach for developing cyber security principles. It explores potential applications of cybernetics to cyber security from a defensive perspective, while suggesting the potential use for offensive applications. Additionally, this paper introduces the fundamental principles for building non-stationary systems, which is a more general solution than moving target defenses. Lastly, the paper discusses related works concerning the limitations of moving target defense and one implementation based on non-stationary principles.
- Published
- 2013
17. Multipersona Hypovisors: Securing Mobile Devices through High-Performance Light-Weight Subsystem Isolation
- Author
-
Krishan, Neelima, Hitefield, Seth D., Clancy, Thomas Charles III, McGwier, Robert W., and Tront, Joseph G.
- Abstract
We propose and detail a system called multipersona Hypovisors for providing light-weight isolation for enhancing security on Multipersona mobile devices, particularly with respect to the current memory constraints of these devices. Multipersona Hypovisors leverage Linux kernel cGroups and namespaces to establish independent process containers, allowing isolation of the Multipersona process tree from other simultaneous instances of Multipersona and from the hypovisor, an underlying Angstrom-based embedded Linux distribution designed to add additional security to the system. The system incorporates a wide range of data integrity tools in the embedded hypovisor, and an SELinux-enabled kernel for mandatory access control and integrity tools for transparent auditing of running Multipersona instances. A prototype is presented which uses integrity tools external to the Multipersona container to audit it for malicious activity, and also has the ability to support a multipersona environment with multiple encrypted personas existing individually or simultaneously on the device. Two versions are demonstrated: one which allows cold-swapping of personas for high-assurance scenarios, and one that supports hot-swapping. Analysis shows that the hypovisor adds 40-50 MB to the overall memory footprint of the system.
- Published
- 2013
18. Application of Cybernetics and Control Theory for a New Paradigm in Cybersecurity
- Author
-
Electrical and Computer Engineering, Hume Center for National Security and Technology, Adams, Michael D., Hitefield, Seth D., Hoy, Bruce, Fowler, Michael C., and Clancy, Thomas Charles III
- Abstract
A significant limitation of current cyber security research and techniques is their reactive and applied nature. This leads to a continuous ‘cyber cycle’ of attackers scanning networks, developing exploits and attacking systems, with defenders detecting attacks, analyzing exploits and patching systems. This reactive nature leaves sensitive systems highly vulnerable to attack due to unpatched systems and undetected exploits. Some current research attempts to address this major limitation by introducing systems that implement moving target defense. However, these ideas are typically based on the intuition that a moving target defense will make it much harder for attackers to find and scan vulnerable systems, and not on theoretical mathematical foundations. The continuing lack of fundamental science and principles for developing more secure systems has drawn increased interest into establishing a ‘science of cyber security’. This paper introduces the concept of using cybernetics, an interdisciplinary approach of control theory, systems theory, information theory and game theory applied to regulatory systems, as a foundational approach for developing cyber security principles. It explores potential applications of cybernetics to cyber security from a defensive perspective, while suggesting the potential use for offensive applications. Additionally, this paper introduces the fundamental principles for building non-stationary systems, which is a more general solution than moving target defenses. Lastly, the paper discusses related works concerning the limitations of moving target defense and one implementation based on non-stationary principles.
- Published
- 2013
19. Multipersona Hypovisors: Securing Mobile Devices through High-Performance Light-Weight Subsystem Isolation
- Author
-
Computer Science, Krishan, Neelima, Hitefield, Seth D., Clancy, Thomas Charles III, McGwier, Robert W., and Tront, Joseph G.
- Abstract
We propose and detail a system called multipersona Hypovisors for providing light-weight isolation for enhancing security on Multipersona mobile devices, particularly with respect to the current memory constraints of these devices. Multipersona Hypovisors leverage Linux kernel cGroups and namespaces to establish independent process containers, allowing isolation of the Multipersona process tree from other simultaneous instances of Multipersona and from the hypovisor, an underlying Angstrom-based embedded Linux distribution designed to add additional security to the system. The system incorporates a wide range of data integrity tools in the embedded hypovisor, and an SELinux-enabled kernel for mandatory access control and integrity tools for transparent auditing of running Multipersona instances. A prototype is presented which uses integrity tools external to the Multipersona container to audit it for malicious activity, and also has the ability to support a multipersona environment with multiple encrypted personas existing individually or simultaneously on the device. Two versions are demonstrated: one which allows cold-swapping of personas for high-assurance scenarios, and one that supports hot-swapping. Analysis shows that the hypovisor adds 40-50 MB to the overall memory footprint of the system.
- Published
- 2013
20. Machine Learning for Millimeter Wave Wireless Systems: Network Design and Optimization
- Author
-
Zhang, Qianqian, Electrical Engineering, Saad, Walid, Clancy, Thomas Charles III, Hudait, Mantu K., Yang, Yaling, and Xin, Hongliang
- Subjects
Millimeter Wave Communications, MIMO Communications, Performance Optimization, Machine learning, Unmanned Aerial Vehicle
- Abstract
Next-generation cellular systems will rely on millimeter wave (mmWave) bands to meet the increasing demand for wireless connectivity from end user equipment. Given large available bandwidth and small-sized antenna elements, mmWave frequencies can support high communication rates and facilitate the use of multiple-input-multiple-output (MIMO) techniques to increase the wireless capacity. However, the small wavelength of mmWave yields severe path loss and high channel uncertainty. Meanwhile, using a large number of antenna elements requires high energy consumption and heavy communication overhead for MIMO transmissions and channel measurement. To facilitate efficient mmWave communications, in this dissertation, the challenges of energy efficiency and communication overhead are addressed. First, the use of unmanned aerial vehicle (UAV), intelligent signal reflector, and device-to-device (D2D) communications is investigated to improve the reliability and energy efficiency of mmWave communications in the face of blockage. Next, to reduce the communication overhead, new channel modeling and user localization approaches are developed to facilitate MIMO channel estimation by providing prior knowledge of mmWave links. Using advanced mathematical tools from machine learning (ML), game theory, and communication theory, this dissertation develops a suite of novel frameworks using which mmWave communication networks can be reliably deployed and operated in wireless cellular systems, UAV networks, and wearable device networks. For UAV-based wireless communications, a learning framework is developed to predict the cellular data traffic during congestion events, and a new framework for the on-demand deployment of UAVs is proposed to offload the excessive traffic from the ground base stations (BSs) to the UAVs. The results show that the proposed approach enables a dynamic and optimal deployment of UAVs that alleviates the cellular traffic congestion.
Subsequently, a novel energy-efficient framework is developed to reflect mmWave signals from a BS towards mobile users using a UAV-carried intelligent reflector (IR). To optimize the location and reflection coefficient of the UAV-carried IR, a deep reinforcement learning (RL) approach is proposed to maximize the downlink transmission capacity. The results show that the RL-based approach significantly improves the downlink line-of-sight probability and increases the achievable data rate. Moreover, the channel estimation challenge for MIMO communications is addressed using a distributional RL approach, while optimizing an IR-aided downlink multi-user communication. The results show that the proposed method captures the statistical features of MIMO channels, and significantly increases the downlink sum-rate. In addition, in order to capture the characteristics of air-to-ground channels, a data-driven approach is developed, based on a distributed framework of generative adversarial networks, so that each UAV collects and shares mmWave channel state information (CSI) for cooperative channel modeling. The results show that the proposed algorithm enables accurate channel modeling for mmWave MIMO communications over a large temporal-spatial domain. Furthermore, the CSI pattern is analyzed via semi-supervised ML tools to localize the wireless devices in the mmWave networks. Finally, to support D2D communications, a novel framework for mmWave multi-hop transmissions is investigated to improve the performance of the high-rate low-latency transmissions between wearable devices. In a nutshell, this dissertation provides analytical foundations on the ML-based performance optimization of mmWave communication systems, and the anticipated results provide rigorous guidelines for effective deployment of mmWave frequency bands into next-generation wireless systems (e.g., 6G).
Doctor of Philosophy
Different kinds of new smart devices are invented and deployed every year.
Emerging smart city applications, including autonomous vehicles, virtual reality, drones, and Internet-of-things, will require the wireless communication system to support more data transmissions and connectivity. However, existing wireless network (e.g., 5G and Wi-Fi) operates at congested microwave frequency bands and cannot satisfy needs of these applications due to limited resources. Therefore, a different, very high frequency band at the millimeter wave (mmWave) spectrum becomes an inevitable choice to manage the exponential growth in wireless traffic for next-generation communication systems. With abundant bandwidth resources, mmWave frequencies can provide the high transmission rate and support the wireless connectivity for the massive number of devices in a smart city. Despite the advantages of communications at the mmWave bands, it is necessary to address the challenges related to high-frequency transmissions, such as low energy efficiency and unpredictable link states. To this end, this dissertation develops a set of novel network frameworks to facilitate the service deployment, performance analysis, and network optimization for mmWave communications. In particular, the proposed frameworks and efficient algorithms are tailored to the characteristics of mmWave propagation and satisfy the communication requirements of emerging smart city applications. Using advanced mathematical tools from machine learning, game theory, and wireless communications, this dissertation provides a comprehensive understanding of the communication performance over mmWave frequencies in the cellular systems, wireless local area networks, and drone networks. The anticipated results will promote the deployment of mmWave frequencies in next-generation communication systems.
- Published
- 2021
21. New Techniques for Time-Reversal-Based Ultra-wideband Microwave Pulse Compression in Reverberant Cavities
- Author
-
Drikas, Zachary Benjamin, Electrical Engineering, Raman, Sanjay, Ellingson, Steven W., Yu, Guoqiang, Clancy, Thomas Charles III, and Black, Jonathan T.
- Subjects
ultra-wideband (UWB), Ultra-short pulse (USP), time-reversal, reconfigurable cavity, pulse compression, dispersive cavity
- Abstract
Generation of high-peak power, microwave ultra-short pulses (USPs) is desirable for ultra-wideband communications and remote sensing. A variety of microwave USP generators exist today, or are described in the literature, and have benefits and limitations depending on application. A new class of pulse compressors for generating USPs using electromagnetic time reversal (TR) techniques have been developed in the last decade, and are the topic of this dissertation. This dissertation presents a compact TR microwave pulse-compression cavity that has ultra-wide bandwidth (5 GHz – 18 GHz), and employs waveguide feeds for high-peak power output over the entire band. The system uses a time-reversal-based pulse compression scheme with one-bit processing (OBTR) to achieve high compression gain. Results from full-wave simulations are presented as well as measurements showing compression gain exceeding 21.2 dB, 22% efficiency, and measured instantaneous peak output powers reaching 39.2 kW. These are all record results for this type of pulse compressor. Additionally presented is new analysis of variation in compression gain due to impulse response recording time and bandwidth variation, new experimental work on the effect of mode stirrer position on compression gain, and a novel RF switch-based technique for reducing time-sidelobes while using OBTR. Finally, a new technique is presented that uses a reverberant cavity with only one feed connected to an ultra-wideband circulator (6.5 GHz to 17 GHz) to perform TRPC. Prior to this work, TRPC has only been demonstrated in electromagnetics using two or more feeds and a reverberant cavity acting as the time-reversal mirror. This new 1-port technique is demonstrated in both simulation and measurement. The proposed system achieves up to a measured 3 dB increase in compression gain and increased efficiency. Also, a novel application of the random coupling model (RCM) to calculate compression gain is presented. 
The cavity eigenfrequencies are modeled after eigenvalues of random matrices satisfying the Gaussian orthogonal ensembles (GOE) condition. Cavity transfer functions are generated using Monte Carlo simulations, and used to compute the compression gains for many different cavity realizations.
Doctor of Philosophy
Generation of high-peak power, microwave ultra-short pulses (USPs) is desirable for ultra-wideband communications and remote sensing. A variety of microwave USP generators exist today, or are described in the literature, and have benefits and limitations depending on application. A new class of pulse compressors for generating USPs using electromagnetic time reversal (TR) techniques have been developed in the last decade, and are the topic of this dissertation. This dissertation presents a compact TR-based microwave pulse-compression cavity that has unique features that make it optimal for high-power operations, with results from simulations as well as measurements showing improved performance over other similar cavities published in the literature with a record demonstrated peak output power of 39.2 kW. Additionally, new analysis on the operation and optimization of this cavity for increased performance is also presented. Finally, a new technique is presented that uses a cavity with only one feed that acts as both the input and output. This 1-port technique is demonstrated in both simulation and measurement. The proposed system achieves a two-times increase in compression gain over its 2-port counterpart. In conjunction with these measurements and simulations, a novel technique for predicting the performance of these cavities using Monte Carlo simulation is also presented.
- Published
- 2020
22. Distributed Machine Learning for Autonomous and Secure Cyber-physical Systems
- Author
-
Ferdowsi Khosrowshahi, Aidin, Electrical Engineering, Saad, Walid, Woolsey, Craig A., Reed, Jeffrey H., Kekatos, Vasileios, and Clancy, Thomas Charles III
- Subjects
Optimality, Game Theory, Machine learning, Security, Autonomous Cyber-Physical Systems, Robustness, Stability - Abstract
Autonomous cyber-physical systems (CPSs) such as autonomous connected vehicles (ACVs), unmanned aerial vehicles (UAVs), critical infrastructure (CI), and the Internet of Things (IoT) will be essential to the functioning of our modern economies and societies. Therefore, maintaining the autonomy of CPSs as well as their stability, robustness, and security (SRS) in the face of exogenous and disruptive events is a critical challenge. In particular, it is crucial for CPSs to be able to not only operate optimally in the vicinity of a normal state but also to be robust and secure so as to withstand potential failures, malfunctions, and intentional attacks. However, to evaluate and improve the SRS of CPSs, one must overcome many technical challenges such as the unpredictable behavior of a CPS's cyber-physical environment, the vulnerability to various disruptive events, and the interdependency between CPSs. The primary goal of this dissertation is, thus, to develop novel foundational analytical tools that weave together notions from machine learning, game theory, and control theory in order to study, analyze, and optimize the SRS of autonomous CPSs. Toward this overarching goal, this dissertation makes several major contributions. First, a comprehensive control and learning framework is proposed to thwart cyber and physical attacks on ACV networks. This framework brings together new ideas from optimal control and reinforcement learning (RL) to derive a new optimal safe controller for ACVs that maximizes street traffic flow while minimizing the risk of accidents. Simulation results show that the proposed optimal safe controller outperforms current state-of-the-art controllers by maximizing the robustness of ACVs to physical attacks. Furthermore, using techniques from convex optimization and deep RL, a joint trajectory and scheduling policy is proposed for UAV-assisted networks that aims to maintain the freshness of ground node data at the UAV. 
The analytical and simulation results show that the proposed policy can outperform policies such as discretized-state RL and value-based methods in terms of maximizing the freshness of data. Second, in the IoT domain, a novel watermarking algorithm, based on long short-term memory cells, is proposed for dynamic authentication of IoT signals. The proposed watermarking algorithm is coupled with a game-theoretic framework so as to enable efficient authentication in massive IoT systems. Simulation results show that, using our approach, IoT messages can be transmitted from IoT devices with almost 100% reliability. Next, a brainstorming generative adversarial network (BGAN) framework is proposed. It is shown that this framework can learn to generate realistic-looking data in a distributed fashion while preserving the privacy of agents (e.g., IoT devices, ACVs, etc.). The analytical and simulation results show that the proposed BGAN architecture allows heterogeneous neural network designs for agents, works without reliance on a central controller, and has a lower communication overhead compared to other state-of-the-art distributed architectures. Last, but not least, the SRS challenges of interdependent CI (ICI) are addressed. Novel game-theoretic frameworks are proposed that allow the ICI administrator to assign different protection levels to ICI components to maximize the expected ICI security. The mixed-strategy Nash equilibria of the games are derived analytically. Simulation results coupled with theoretical analysis show that, using the proposed games, the administrator can maximize the security level of ICI components. In summary, this dissertation provides major contributions across the areas of CPSs, machine learning, game theory, and control theory with the goal of ensuring SRS across various domains such as autonomous vehicle networks, IoT systems, and ICIs. 
The proposed approaches provide the necessary fundamentals that can lay the foundations of SRS in CPSs and pave the way toward the practical deployment of autonomous CPSs and applications. Doctor of Philosophy In order to deliver innovative technological services to their residents, smart cities will rely on autonomous cyber-physical systems (CPSs) such as cars, drones, sensors, power grids, and other networks of digital devices. Maintaining the stability, robustness, and security (SRS) of those smart city CPSs is essential for the functioning of our modern economies and societies. SRS can be defined as the ability of a CPS, such as an autonomous vehicular system, to operate without disruption in its quality of service. In order to guarantee the SRS of CPSs, one must overcome many technical challenges, such as CPSs' vulnerability to disruptive events like natural disasters or cyber attacks, limited resources, scale, and interdependency. Such challenges must be considered in order to design vehicles that are controlled autonomously and whose motion is robust against unpredictable events in their trajectory, to implement a stable Internet of digital devices that operates with minimal communication delay, and to secure critical infrastructure that provides services such as electricity, gas, and water. The primary goal of this dissertation is, thus, to develop novel foundational analytical tools that weave together notions from machine learning, game theory, and control theory in order to study, analyze, and optimize the SRS of autonomous CPSs, which will eventually improve the quality of service provided by smart cities. To this end, various frameworks and effective algorithms are proposed to enhance the SRS of CPSs and pave the way toward the practical deployment of autonomous CPSs and applications. The results show that the developed solutions can enable a CPS to operate efficiently while maintaining its SRS. 
As such, the outcomes of this research can be used as a building block for the large deployment of smart city technologies that can be of immense benefit to tomorrow's societies.
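The infrastructure-protection games described in this record reduce, in their simplest form, to a matrix game whose mixed-strategy Nash equilibrium follows from an indifference condition. A minimal sketch for a hypothetical zero-sum game over two components (the values vA and vB are invented for illustration, not taken from the dissertation):

```python
# Hypothetical component values (illustrative only).
vA, vB = 3.0, 1.0

# Defender protects A with probability p; an unprotected attacked component
# yields its full value as damage, a protected one yields zero.
# Attacker indifference: (1-p)*vA == p*vB  =>  p = vA / (vA + vB)
p = vA / (vA + vB)          # probability the defender protects component A
q = vB / (vA + vB)          # probability the attacker targets component A

# Check: at equilibrium the attacker is indifferent between its pure actions.
damage_attack_A = (1 - p) * vA    # A is hit while the defender covers B
damage_attack_B = p * vB          # B is hit while the defender covers A
assert abs(damage_attack_A - damage_attack_B) < 1e-12
print(f"protect A with p={p:.2f}; equilibrium expected damage {damage_attack_A:.2f}")
```

The higher-valued component is protected more often, yet in equilibrium the attacker gains nothing by favoring either target; the analytical equilibria in the dissertation generalize this indifference logic to many interdependent components.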
- Published
- 2020
23. Distributed Wireless Resource Management in the Internet of Things
- Author
-
Park, Taehyeun, Electrical Engineering, Saad, Walid, Clancy, Thomas Charles III, Hudait, Mantu K., Bish, Douglas R., and Reed, Jeffrey H.
- Subjects
Wireless Networks, Radio Resource Management, Internet of Things - Abstract
The Internet of Things (IoT) is a promising networking technology that will interconnect a plethora of heterogeneous wireless devices. To support connectivity across a massive-scale IoT, the scarce wireless communication resources must be appropriately allocated among the IoT devices, while considering the technical challenges that arise from the unique properties of the IoT, such as device heterogeneity, strict communication requirements, and limited device capabilities in terms of computation and memory. The primary goal of this dissertation is to develop novel resource management frameworks with which resource-constrained IoT devices can operate autonomously in a dynamic environment. First, a comprehensive overview of the use of various learning techniques for wireless resource management in an IoT is provided, and potential applications for each learning framework are proposed. Moreover, to capture the heterogeneity among IoT devices, a framework based on cognitive hierarchy theory is discussed, and its implementation with learning techniques of different complexities for IoT devices with varying capabilities is analyzed. Next, the problem of dynamic, distributed resource allocation in an IoT is studied when there are heterogeneous messages. In particular, a novel finite-memory multi-state sequential learning framework is proposed to enable diverse IoT devices to reallocate the limited communication resources in a self-organizing manner to satisfy the delay requirement of critical messages, while minimally affecting the delay-tolerant messages. The proposed learning framework is shown to be effective for IoT devices with limited memory and observation capabilities to learn the number of critical messages. The results show that the performance of the learning framework depends on the memory size and observation capability of the IoT devices, and that the learning framework can realize low-delay transmission in a massive IoT. 
Subsequently, the problem of one-to-one association between resource blocks and IoT devices is studied when the IoT devices have only partial information. The one-to-one association is formulated as a Kolkata Paise Restaurant (KPR) game in which an IoT device tries to choose a resource block with the highest gain while avoiding duplicate selections. Moreover, a Nash equilibrium (NE) of the IoT KPR game is shown to coincide with the socially optimal solution. A proposed learning framework for the IoT KPR game is shown to significantly increase the number of resource blocks used for successful transmission compared to a baseline. The KPR game is then extended to consider the age of information (AoI), which is a metric that quantifies the freshness of information from the perspective of the destination. Moreover, to capture heterogeneity in an IoT, non-linear AoI is introduced. To minimize AoI, centralized and distributed approaches to resource allocation are proposed to enable the sharing of limited communication resources, while delivering messages to the destination in a timely manner. Moreover, the proposed distributed resource allocation scheme is shown to converge to an NE and to significantly lower the average AoI compared to a baseline. Finally, the problem of dynamically partitioning the transmit power levels in non-orthogonal multiple access is studied when there are heterogeneous messages. In particular, an optimization problem is formulated to determine the number of power levels for different message types, and an estimation framework is proposed to enable the network base station to adjust the power level partitioning to satisfy the performance requirements. The proposed framework is shown to effectively increase the transmission success probability compared to a baseline. Furthermore, an optimization problem is formulated to increase sum-rate and reliability by adjusting the target received powers. 
Under different fading channels, the optimal target received powers are analyzed, and a tradeoff between reliability and sum-rate is shown. In conclusion, the theoretical and performance analysis of the frameworks proposed in this dissertation will prove essential for implementing appropriate distributed resource allocation mechanisms for dynamic, heterogeneous IoT environments. Doctor of Philosophy The Internet of Things (IoT), which is a network of smart devices such as smart phones, wearable devices, smart appliances, and environment sensors, will transform many aspects of our society with numerous innovative IoT applications. Those applications include interactive education, remote healthcare, smart grids, home automation, intelligent transportation, industrial monitoring, and smart agriculture. With the increasing complexity and scale of an IoT, it becomes more difficult to quickly manage the IoT devices through a cloud, and a centralized management approach may not be viable for certain IoT scenarios. Therefore, distributed solutions are needed to enable IoT devices to fulfill their services and maintain seamless connectivity. Here, IoT device management refers to deciding which devices access the network and which resources (e.g., frequencies) they use. For distributed management of an IoT, the unique challenge is to appropriately allocate scarce communication resources to many IoT devices. With distributed resource management, diverse IoT devices can share the limited communication resources in a self-organizing manner. Distributed resource management overcomes the limitations of centralized resource management by satisfying strict service requirements in a massive, complex IoT. 
Despite the advantages and opportunities of distributed resource management, it is necessary to address the challenges related to an IoT, such as analyzing the intricate interactions of heterogeneous devices, designing viable frameworks for constrained devices, and quickly adapting to a dynamic IoT. Furthermore, distributed resource management must enable IoT devices to communicate with high reliability and low delay. In this regard, this dissertation investigates these critical IoT challenges and introduces novel distributed resource management frameworks for an IoT. In particular, the proposed frameworks are tailored to realistic IoT scenarios and consider different performance metrics. To this end, mathematical frameworks and effective algorithms are developed by significantly extending tools from wireless communication, game theory, and machine learning. The results show that the proposed distributed wireless resource management frameworks can optimize key performance metrics and meet strict communication requirements while coping with device heterogeneity, massive scale, dynamic environments, and scarce wireless resources in an IoT.
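The KPR-game baseline in this record can be put in perspective with a short simulation of fully uncoordinated resource selection. A minimal sketch assuming a collision model in which a block carries a successful transmission only when exactly one device picks it, so for large systems the success fraction tends to 1/e (all sizes are illustrative, and the dissertation's learning framework improves on this baseline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents = n_blocks = 1000   # as many devices as resource blocks
n_rounds = 200

# Baseline: every device picks a resource block uniformly at random each round.
successes = []
for _ in range(n_rounds):
    choices = rng.integers(0, n_blocks, size=n_agents)
    counts = np.bincount(choices, minlength=n_blocks)
    # A block succeeds only if exactly one device selected it (no collision).
    successes.append(np.sum(counts == 1) / n_blocks)

util = np.mean(successes)
print(f"fraction of blocks with a successful transmission: {util:.3f}")
# For n -> infinity this converges to exp(-1): the Poisson(1) probability of k=1.
```

The roughly 37% success rate of blind random choice is the gap that partial-information learning strategies, such as the one proposed for the IoT KPR game, aim to close.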
- Published
- 2020
24. Intelligent Knowledge Distribution for Multi-Agent Communication, Planning, and Learning
- Author
-
Fowler, Michael C., Electrical and Computer Engineering, Williams, Ryan K., Clancy, Thomas Charles III, Tokekar, Pratap, Patterson, Cameron D., and Roan, Michael J.
- Subjects
Markov Decision Processes, Relational Learning, Distributed Decision Making, ComputingMethodologies_ARTIFICIALINTELLIGENCE, Probabilistic Constraint Satisfaction, Multi-agent System, Wireless Communications - Abstract
This dissertation addresses a fundamental question of multi-agent coordination: what information should be sent to whom, and when, with the limited resources available to each agent? Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this dissertation introduces new concepts to enable Intelligent Knowledge Distribution (IKD), including constrained-action POMDPs (CA-POMDPs) and concurrent decentralized (CoDec) POMDPs for an agnostic plug-and-play capability for fully autonomous systems. Each agent runs a CoDec POMDP in which all of the decision making (motion planning, task allocation, asset monitoring, and communication) is separated into concurrent individual MDPs to reduce the combinatorial explosion of the action and state space while maintaining dependencies between the models. We also introduce the CA-POMDP with action-based constraints on partially observable Markov decision processes, rewards driven by the value of information, and probabilistic constraint satisfaction through discrete optimization and Markov chain Monte Carlo analysis. IKD is adapted in real time through machine learning of the actual environmental impacts on the behavior of the system, including collaboration strategies between autonomous agents, the true value of information between heterogeneous systems, observation probabilities, and resource utilization. Doctor of Philosophy This dissertation addresses a fundamental question that arises when multiple autonomous systems in the field, like drone swarms, need to coordinate and share data: what information should be sent to whom, and when, with the limited resources available to each agent? Intelligent Knowledge Distribution is a framework that answers these questions. 
Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this dissertation introduces new concepts to enable Intelligent Knowledge Distribution (IKD), including constrained-action POMDPs and concurrent decentralized (CoDec) POMDPs for an agnostic plug-and-play capability for fully autonomous systems. The IKD model demonstrated its validity as a "plug-and-play" library that manages communications between agents, ensuring the right information is transmitted to the right agent at the right time for mission success.
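The MDP machinery underlying the CoDec decomposition can be illustrated on a toy version of the communication question above: when is it worth spending bandwidth to refresh a peer's picture of the world? A minimal value-iteration sketch with invented transition and reward numbers (not the dissertation's models, and a fully observable MDP rather than a POMDP):

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative numbers):
# states:  0 = peer has fresh information, 1 = peer's picture is stale
# actions: 0 = hold (no cost),             1 = transmit (costs bandwidth, refreshes peer)
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.7, 0.3],   # hold: fresh information decays to stale
     [0.0, 1.0]],  # hold: stale stays stale
    [[1.0, 0.0],   # transmit: peer is refreshed
     [1.0, 0.0]],
])
R = np.array([  # R[a, s] immediate reward
    [1.0, -1.0],   # hold: coordination reward while fresh, penalty when stale
    [0.5, -0.5],   # transmit: same, minus a fixed communication cost of 0.5
])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):                 # value iteration to (near) convergence
    Q = R + gamma * (P @ V)          # Q[a, s]
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)            # best action in each state
print("policy (0=hold, 1=transmit):", policy)
```

With these numbers the optimal policy holds while the peer is fresh and transmits only once the picture goes stale, which is the value-of-information trade-off that IKD makes under far richer constraints.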
- Published
- 2020
25. Differential Dependency Network and Data Integration for Detecting Network Rewiring and Biomarkers
- Author
-
Fu, Yi, Electrical Engineering, Wang, Yue J., Haghighat, Alireza, Zhang, Zhen, Clancy, Thomas Charles III, and Yu, Guoqiang
- Subjects
differential network analysis, molecular data integration, biomarker - Abstract
Rapid advances in high-throughput molecular profiling techniques have enabled large-scale genomics, transcriptomics, and proteomics-based biomedical studies, generating an enormous amount of multi-omics data. Processing and summarizing multi-omics data, modeling interactions among biomolecules, and detecting condition-specific dysregulation using multi-omics data are some of the most important yet challenging analytics tasks. In the case of detecting somatic DNA copy number aberrations using bulk tumor samples in cancer research, normal cell contamination is a significant confounding factor that weakens detection power regardless of the method used. To address this problem, we propose a computational approach, BACOM 2.0, to more accurately estimate the normal cell fraction and accordingly reconstruct DNA copy number signals in cancer cells. Specifically, by introducing allele-specific absolute normalization, BACOM 2.0 can accurately detect deletion types and aneuploidy in cancer cells directly from DNA copy number data. Genes work through complex networks to support cellular processes. Dysregulated genes can cause structural changes in biological networks, also known as network rewiring. Genes with a large number of rewired edges are more likely to be associated with functional alterations leading to phenotype transitions, and hence are potential biomarkers in diseases such as cancers. The differential dependency network (DDN) method was proposed to detect such network rewiring and biomarkers. However, the existing DDN method and software tool have two major drawbacks. First, with imbalanced sample groups, DDN suffers from systematic bias and produces false positive differential dependencies. Second, the computational time of the block coordinate descent algorithm in DDN increases rapidly with the number of involved samples and molecular entities. 
To address the imbalanced sample group problem, we propose a sample-scale-wide normalized formulation to correct the systematic bias and design a simulation study to test its performance. To address the high computational complexity, we propose several strategies to accelerate DDN learning: two reformulated algorithms for block-wise coefficient updating in the DDN optimization problem, a strategy for discarding predictors, and a strategy for parallel computing. Experimental results show that, with the combined accelerating strategies, DDN learning is hundreds of times faster than the original method on medium-sized data. We applied the DDN method to several biomedical omics datasets and detected significant phenotype-specific network rewiring. With a random-graph-based detection strategy, we discovered hub-node-defined biomarkers that helped to generate or validate several novel scientific hypotheses in collaborative research projects. For example, the hub genes detected by the DDN method in proteomics data from artery samples are significantly enriched in the citric acid cycle pathway, which plays a critical role in the development of atherosclerosis. To detect intra-omics and inter-omics network rewiring, we propose a method called multiDDN that uses a multi-layer signaling model to integrate multi-omics data. We adapt the block coordinate descent algorithm, with the accelerating strategies, to solve the multiDDN optimization problem. The simulation study shows that, compared with the DDN method on single omics data, the multiDDN method achieves considerably higher accuracy in detecting network rewiring. We applied the multiDDN method to real multi-omics data from the CPTAC ovarian cancer dataset and detected multiple hub genes associated with histone protein deacetylation that were previously reported in independent ovarian cancer data analyses. 
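The block coordinate descent at the heart of DDN learning operates on l1-regularized regressions, where each per-coordinate update is a closed-form soft-threshold. A minimal single-block sketch of that machinery on synthetic data (the actual DDN formulation couples two conditions and network structure, which this sketch omits):

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form solution of the scalar lasso subproblem."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_b 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                      # running residual
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / (col_sq[j] / n)
            r -= X[:, j] * b[j]
    return b

# Synthetic check: the sparse ground truth is recovered (with the usual slight
# l1 shrinkage) and the noise coefficients collapse toward zero.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 10))
beta = np.zeros(10)
beta[:2] = [3.0, -2.0]
y = X @ beta + 0.1 * rng.standard_normal(500)
b_hat = lasso_cd(X, y, lam=0.1)
print(np.round(b_hat, 2))
```

Each sweep touches one coefficient at a time while keeping the residual current, which is why the per-iteration cost is low; DDN's reformulated block-wise updates and predictor-discarding rules accelerate exactly this inner loop.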
Doctor of Philosophy We witnessed the start of the human genome project decades ago and have stepped into the era of omics since then. Omics are comprehensive approaches for analyzing genome-wide biomolecular profiles. The rapid development of high-throughput technologies enables us to produce an enormous amount of omics data such as genomics, transcriptomics, and proteomics data, leaving researchers swimming in a sea of omics information once unimaginable. Yet the era of omics brings new challenges: to process the huge volumes of data, to summarize the data, to reveal the interactions between entities, to link various types of omics data, and to discover mechanisms hidden behind omics data. In processing omics data, one factor that weakens follow-up data analysis is sample impurity. We call impure tumor samples contaminated by normal cells heterogeneous samples. The genomic signals measured from heterogeneous samples are a mixture of signals from both tumor cells and normal cells. To correct the mixed signals and recover the true signals from pure tumor cells, we propose a computational approach called BACOM 2.0 to estimate the normal cell fraction and correct the genomic signals accordingly. By introducing a novel normalization method that identifies the neutral component in the mixed signals of genomic copy number data, BACOM 2.0 can accurately detect genes' deletion types and abnormal chromosome numbers in tumor cells. In cells, genes connect to other genes and form complex biological networks to perform their functions. Dysregulated genes can cause structural change in biological networks, also known as network rewiring. In a biological network with network rewiring events, a large number of rewired edges linking to a single hub gene suggests concentrated gene dysregulation. 
This hub gene has more impact on the network and hence is more likely to be associated with a functional change of the network, which ultimately leads to abnormal phenotypes such as cancer. Therefore, hub genes linked with network rewiring are potential indicators of disease status, also known as biomarkers. The differential dependency network (DDN) method was proposed to detect network rewiring events and biomarkers from omics data. However, the DDN method still has a few drawbacks. First, for two groups of data with unequal sample sizes, DDN consistently detects false targets of network rewiring. The permutation test, which applies the same method to randomly shuffled samples and is supposed to distinguish true targets from random effects, suffers from the same bias and can let those false targets pass. We propose a new formulation that corrects the mistakes introduced by unequal group sizes and design a simulation study to test the new formulation's correctness. Second, the computing time for solving DDN problems becomes unbearably long when processing omics data with a large number of samples or a large number of genes. We propose several strategies to increase DDN's computation speed, including three redesigned formulas for efficiently updating the results, a rule to preselect predictor variables, and an acceleration technique that utilizes multiple CPU cores simultaneously. In timing tests, the accelerated DDN method is much faster than the original method. To detect network rewiring within the same omics type or between different omics types, we propose a method called multiDDN that uses an integrated model to process multiple types of omics data. We solve the new problem by adapting the block coordinate descent algorithm. Tests on simulated data show that multiDDN outperforms single-omics DDN. 
We applied the DDN or multiDDN method to several omics datasets and detected significant network rewiring associated with diseases. We detected hub nodes from the network rewiring events. These hub genes, as potential biomarkers, help us ask meaningful new questions in related research.
- Published
- 2020
26. A Defense-In-Depth Security Architecture for Software Defined Radio Systems
- Author
-
Hitefield, Seth D., Electrical and Computer Engineering, Clancy, Thomas Charles III, Butt, Ali R., MacKenzie, Allen B., Black, Jonathan T., and Yang, Yaling
- Subjects
Software radio, Security, Wireless Communications, Isolation - Abstract
Modern wireless communications systems are constantly evolving and growing more complex. Recently, there has been a shift towards software defined radios due to the flexibility software implementations provide. This enables an easier development process, longer product lifetimes, and better adaptability for congested environments than conventional hardware systems. However, this shift introduces new attack surfaces where vulnerable implementations can be exploited to disrupt communications or gain unauthorized access to a system. Previous research concerning wireless security mainly focuses on vulnerabilities within protocols rather than in the radios themselves. This dissertation specifically addresses this new threat against software radios and introduces a new security model intended to mitigate it. We also demonstrate example exploits of waveforms which can result in either a denial of service or a compromise of the system from a wireless attack vector. These example exploits target vulnerabilities such as overflows, unsanitized control inputs, and unexpected state changes. We present a defense-in-depth security architecture for software radios that protects the system by isolating components within a waveform into different security zones. Exploits against vulnerabilities within blocks are contained by isolation zones, which protects the rest of the system from compromise. This architecture is inspired by the concept of a microkernel and provides a minimal trusted computing base for developing secure radio systems. Unlike previous security models, our model protects from exploits within the radio protocol stack itself and not just the higher-layer application. Different isolation mechanisms such as containers or virtual machines can be used depending on the security risk imposed by a component and any security requirements. However, adding these isolation environments incurs a performance overhead for applications. 
We perform an analysis of multiple example waveforms to characterize the impact of isolation environments on the overall performance of an application, and demonstrate that the overhead generated by the added isolation can be minimal. Because of this, our defense-in-depth architecture should be applied to real-world, production systems. We finally present an example integration of the model within the GNU Radio framework that can be used to develop any waveform using the defense-in-depth security architecture. Doctor of Philosophy In recent years, wireless devices and communication systems have become a common part of everyday life. Mobile devices are constantly growing more complex, and with the growth in mobile networks and the Internet of Things, an estimated 20 billion devices will be connected in the next few years. Because of this complexity, there has been a recent shift towards using software rather than hardware for the primary functionality of the system. Software enables an easier and faster development process, longer product lifetimes through over-the-air updates, and better adaptability for extremely congested environments. However, these complex software systems can be susceptible to attack through vulnerabilities in the radio interfaces that allow attackers to completely control a targeted device. Much of the existing wireless security research only focuses on vulnerabilities within different protocols rather than considering the possibility of vulnerabilities in the radios themselves. This work specifically focuses on this new threat and demonstrates example exploits of software radios. We then introduce a new security model intended to protect against these attacks. The main goal of this dissertation is to introduce a new defense-in-depth security architecture for software radios that protects the system by isolating components within a waveform into different security zones. 
Exploits against the system are contained within the zones and unable to compromise the overall system. Unlike other security models, our model protects from exploits within the radio protocol stack itself and not just the higher-layer application. Different isolation mechanisms such as containers or virtual machines can be used depending on the security risk imposed by a component and any security requirements for the system. However, adding these isolation environments incurs a performance overhead for applications. We also perform a performance analysis with several example applications and show that the overhead generated by the added isolation can be minimal. Therefore, the defense-in-depth model should be the standard method for architecting wireless communication systems. We finally present a GNU Radio based framework for developing waveforms using the defense-in-depth approach.
- Published
- 2020
27. Mathematical Modeling and Deconvolution for Molecular Characterization of Tissue Heterogeneity
- Author
-
Chen, Lulu, Electrical and Computer Engineering, Wang, Yue J., Lou, Wenjing, Yu, Guoqiang, Baumann, William T., and Clancy, Thomas Charles III
- Subjects
feature selection, tissue heterogeneity, convex analysis, biomarkers, bioinformatics, deconvolution, unsupervised learning - Abstract
Tissue heterogeneity, arising from intermingled cellular or tissue subtypes, significantly obscures the analyses of molecular expression data derived from complex tissues. Existing computational methods performing data deconvolution from mixed subtype signals almost exclusively rely on supervising information, requiring subtype-specific markers, the number of subtypes, or the subtype compositions of individual samples. We develop a fully unsupervised deconvolution method to dissect complex tissues into molecularly distinctive tissue or cell subtypes directly from mixture expression profiles. We implement an R package, deconvolution by Convex Analysis of Mixtures (debCAM), that can automatically detect tissue- or cell-specific markers, determine the number of constituent subtypes, calculate subtype proportions in individual samples, and estimate tissue/cell-specific expression profiles. We demonstrate the performance and biomedical utility of debCAM on gene expression, methylation, and proteomics data. With enhanced data preprocessing and prior knowledge incorporation, the debCAM software tool will allow biologists to perform a deep and unbiased characterization of tissue remodeling in many biomedical contexts. Purified expression profiles from physical experiments provide both ground truth and a priori information that can be used to validate unsupervised deconvolution results or improve supervision for various deconvolution methods. Detecting tissue- or cell-specific expressed markers from purified expression profiles plays a critical role in molecularly characterizing and determining tissue or cell subtypes. Unfortunately, classic differential analysis assumes a convenient test statistic and associated null distribution that is inconsistent with the definition of markers, and thus results in a high false positive rate or low detection power. 
We describe a statistically principled marker detection method, the One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG) test, which estimates a mixture null distribution model by applying novel permutation schemes. Validated on realistic synthetic data sets for both type 1 error and detection power, the OVESEG-test applied to benchmark gene expression data sets detects many known and de novo subtype-specific expressed markers. Subsequent supervised deconvolution results, obtained using markers detected by the OVESEG-test, showed superior performance when compared with popular peer methods. While the current debCAM approach can dissect mixed signals from multiple samples into the 'averaged' expression profiles of subtypes, many subsequent molecular analyses of complex tissues require sample-specific deconvolution where each sample is a mixture of 'individualized' subtype expression profiles. The between-sample variation embedded in sample-specific subtype signals provides critical information for detecting subtype-specific molecular networks and uncovering hidden crosstalk. However, sample-specific deconvolution is an underdetermined and challenging problem because there are more variables than observations. We propose and develop debCAM2.0 to estimate sample-specific subtype signals by nuclear norm regularization, where the hyperparameter value is determined by a random-entry-exclusion cross-validation scheme. We also derive an efficient optimization approach based on ADMM to enable debCAM2.0 application in large-scale biological data analyses. Experimental results on realistic simulation data sets show that debCAM2.0 can successfully recover subtype-specific correlation networks that are otherwise unobtainable using existing deconvolution methods. Doctor of Philosophy Tissue samples are essentially mixtures of tissue or cellular subtypes where the proportions of individual subtypes vary across different tissue samples.
Data deconvolution aims to dissect tissue heterogeneity into biologically important subtypes, their proportions, and their marker genes. The physical solution to mitigate tissue heterogeneity is to isolate pure tissue components prior to molecular profiling. However, these experimental methods are time-consuming and expensive, and may alter the expression values during isolation. Existing literature primarily focuses on supervised deconvolution methods, which require a priori information. This approach has an inherent problem as it relies on the quality and accuracy of the a priori information. In this dissertation, we propose and develop a fully unsupervised deconvolution method - deconvolution by Convex Analysis of Mixtures (debCAM) - that can estimate the mixing proportions and 'averaged' expression profiles of individual subtypes present in heterogeneous tissue samples. Furthermore, we also propose and develop debCAM2.0, which can estimate 'individualized' expression profiles of participating subtypes in complex tissue samples. Subtype-specific expressed markers, or marker genes (MGs), serve as critical a priori information for supervised deconvolution. MGs are exclusively and consistently expressed in a particular tissue or cell subtype, and detecting such unique MGs among many subtypes constitutes a challenging task. We propose and develop a statistically principled method - One Versus Everyone Subtype Exclusively-expressed Genes (OVESEG-test) - for robust detection of MGs from purified profiles of many subtypes.
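The linear mixing model underlying this deconvolution problem can be illustrated with a small sketch. This is my own toy example (hypothetical data, not the debCAM implementation): it shows the geometric insight that convex analysis of mixtures exploits, namely that marker genes, expressed in exactly one subtype, land on the corners of the scatter simplex formed by the normalized mixing proportions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth for a toy problem: 3 subtypes, 8 mixed samples, 20 genes.
K, N, G = 3, 8, 20
A = rng.dirichlet(np.ones(K), size=N)   # N x K mixing proportions (rows sum to 1)
S = rng.uniform(1.0, 10.0, size=(K, G)) # K x G subtype expression profiles

# Plant one ideal marker gene per subtype: expressed in exactly one subtype.
markers = [0, 1, 2]
for k, g in enumerate(markers):
    S[:, g] = 0.0
    S[k, g] = 10.0

X = A @ S                               # N x G observed mixture expression

# CAM insight: after scaling each gene's cross-sample vector to sum to 1,
# marker genes coincide with the corners of the simplex, i.e. the scaled
# columns of the (unknown) mixing matrix A.
Xn = X / X.sum(axis=0, keepdims=True)   # normalize each gene over samples
An = A / A.sum(axis=0, keepdims=True)   # normalized mixing columns (the corners)

for k, g in enumerate(markers):
    assert np.allclose(Xn[:, g], An[:, k])
```

Finding those corner genes from X alone is what lets an unsupervised method recover markers, subtype number, and proportions without supervision.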
- Published
- 2020
28. Silicon-Based PALNA Transmit/Receive Circuits for Integrated Millimeter Wave Phased Arrays
- Author
-
Abdomerovic, Iskren, Electrical Engineering, Raman, Sanjay, Clancy, Thomas Charles III, MacKenzie, Allen B., Haghighat, Alireza, and Yi, Yang
- Subjects
noise figure, PALNA, low noise amplifier, output power, switchless T/R circuits, power amplifier, LNAPA, Hardware_INTEGRATEDCIRCUITS, millimeter-wave frequencies, phased arrays, power added efficiency, bidirectional T/R circuits, transmit/receive circuits - Abstract
Phased array element RF front ends typically use single pole double throw (SPDT) switches or circulators with high isolation to prevent leakage of transmit energy into the receiver circuits. However, as phased-array designs scale to the millimeter-wave range, with high degrees of integration, the physical size and performance degradations associated with switches and circulators can present challenges in meeting system performance and size/weight/power (SWAP) requirements. This work demonstrates a loss-aware methodology for analysis and design of switchless transmit/receive (T/R) circuits. The methodology provides design insights and a practical, generally applicable approach for solving the multi-variable optimization problem of switchless power amplifier/low-noise amplifier (PALNA) matching networks, which present optimal matching impedances to both the power amplifier (PA) and the low noise amplifier (LNA) while maximizing power transfer efficiency and minimizing dissipative losses in each (transmit or receive) mode of operation. Three PALNA example designs at W-band are presented in this dissertation, each following a distinct design methodology. The first example design in 32SOI CMOS leverages PA and LNA circuits that already include 50 Ω matching networks at both input and output. The second example design in 8XP SiGe develops the PA and LNA circuits and integrates the PA output and LNA input matching networks into the PALNA matching network that connects the PA and the LNA. The third design in 32SOI CMOS leverages the loss-aware PALNA design methodology to develop a PALNA that achieves a simulated maximum power added efficiency of 18 % in transmit and noise figure of 7.5 dB in receive at 94 GHz, which is beyond the published state of the art for T/R circuits.
In addition, for comparison purposes, this dissertation also presents an efficient, switch-based T/R circuit design in 32SOI CMOS technology, which achieves a simulated maximum power added efficiency of 15 % in transmit and noise figure of 6.5 dB in receive at 94 GHz, which is also beyond the published state of the art for T/R circuits. Doctor of Philosophy In military and commercial applications, phased arrays are devices primarily used to achieve focusing and steering of transmitted or received electromagnetic energy. Phased arrays consist of many elements, each with the ability to both transmit and receive radio frequency (RF) signals. Each element incorporates a power amplifier (PA) for transmit and a low noise amplifier (LNA) for receive, which are typically connected using a single pole double throw (SPDT) switch or a circulator with high isolation to prevent leakage of transmit energy into the receiver circuits. However, as phased arrays exploit the latest technological advances in circuit integration and their frequencies of operation increase, physical size and performance degradations associated with switches and circulators can present challenges in meeting system performance and size/weight/power (SWAP) requirements. This dissertation provides a loss-aware methodology for analysis and design of switchless transmit/receive (T/R) circuits where the switches and circulators are replaced by carefully designed power amplifier/low-noise amplifier (PALNA) impedance matching networks. In the switchless T/R circuits, the design goals of maximum power efficiency and minimum noise in transmit and receive, respectively, are achieved through impedance matching that is optimal and low-loss in both modes of operation simultaneously. Three distinct PALNA example designs at W-band are presented in this dissertation, each following a distinct design methodology. With each new design, lessons learned are leveraged and design methodologies are enhanced.
The first example design leverages already available PA and LNA circuits and connects them using 50 Ω transmission lines whose lengths are designed to guarantee an optimum impedance match in the receive and transmit modes of operation. The second example design develops new PA and LNA circuits and connects them using 50 Ω transmission lines whose lengths are designed to simultaneously achieve optimum impedance matching for maximum power efficiency in the transmit mode of operation and lowest noise in the receive mode of operation. The third design leverages a loss-aware PALNA design methodology, a multi-variable optimization procedure, to develop a PALNA that achieves a simulated maximum power added efficiency of 18 % in transmit and noise figure of 7.5 dB in receive at 94 GHz, which is beyond the published state of the art for T/R circuits. In addition, for comparison purposes with the third PALNA design, this dissertation also presents an efficient, switch-based T/R circuit design, which achieves a simulated maximum power added efficiency of 15 % in transmit and noise figure of 6.5 dB in receive at 94 GHz, which is also beyond the published state of the art for T/R circuits.
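The two figures of merit quoted above can be made concrete with a short sketch (illustrative values of my own choosing, not taken from the dissertation's designs): power added efficiency for the transmit path, and Friis' cascade formula showing why every dB of passive matching-network loss ahead of the LNA adds a dB to the receive noise figure, which is exactly the tension a loss-aware PALNA methodology manages.

```python
import math

def pae(p_out_w, p_in_w, p_dc_w):
    """Power added efficiency: fraction of DC power converted to *added* RF power."""
    return (p_out_w - p_in_w) / p_dc_w

def cascaded_nf_db(loss_db, lna_nf_db):
    """System noise figure of a passive loss followed by an LNA.

    A passive network's noise figure equals its insertion loss, and its gain
    is 1/loss, so Friis' formula F_total = F1 + (F2 - 1)/G1 collapses to
    F_total = L * F_lna, i.e. loss adds dB-for-dB to the receive NF.
    """
    loss = 10 ** (loss_db / 10)
    lna_f = 10 ** (lna_nf_db / 10)
    f_total = loss + (lna_f - 1) * loss
    return 10 * math.log10(f_total)

# Hypothetical transmit path: 50 mW out, 5 mW in, 250 mW DC -> 18 % PAE.
print(pae(0.050, 0.005, 0.250))
# 1 dB of front-end loss ahead of a 6.5 dB LNA -> 7.5 dB system NF.
print(cascaded_nf_db(1.0, 6.5))
```

This is why minimizing dissipative loss in the shared matching network directly buys back both transmit efficiency and receive noise figure.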
- Published
- 2020
29. Power-Performance-Predictability: Managing the Three Cornerstones of Resource Constrained Real-Time System Design
- Author
-
Mukherjee, Anway, Electrical and Computer Engineering, Chantem, Thidapat, Gerdes, Ryan M., Yu, Guoqiang, Clancy, Thomas Charles III, and Tilevich, Eli
- Subjects
Android, Voltage-Frequency Scaling, Trusted Execution, Real-Time Systems - Abstract
This dissertation explores several challenges that plague the hardware-software co-design of popular resource-constrained real-time embedded systems. We specifically tackle existing real-world problems and address them through design solutions that are highly scalable and practically feasible, as verified through implementation on real-world hardware. We address the problem of poor battery life in mobile embedded devices caused by side-by-side execution of multiple applications in split-screen mode. Existing industry solutions either restrict the number of applications that can run simultaneously, limit their functionality, or increase the hardware capacity of the battery associated with the system. We exploit the gap in research on the performance-power trade-off in smartphones to propose an integrated energy management solution that judiciously minimizes the system-wide energy consumption with negligible effect on its quality of service (QoS). Another important real-world requirement in today's interconnected world is the need for security. In the domain of real-time computing, it is not only necessary to secure the system but also to maintain its timeliness. Some example security mechanisms that may be used in a hard real-time system include, but are not limited to, security keys, protection of intellectual property (IP) of firmware and application software, one time password (OTP) for software certification on-the-fly, and authenticated computational off-loading. Existing design solutions require expensive, custom-built hardware with long time-to-market or time-to-deployment cycles. A readily available alternative is the use of a trusted execution environment (TEE) on commercial off-the-shelf (COTS) embedded processors. However, utilizing a TEE creates multiple challenges from a real-time perspective, which include additional time overhead resulting in possible deadline misses.
In addition, trusted execution may adversely affect the deterministic execution of the system, as tasks running inside a TEE may need to communicate with other tasks that are executing on the native real-time operating system. We propose three different solutions to address the need for a new task model that can capture the complex relationship between performance and predictability for real-time tasks that require secure execution inside a TEE. We also present novel task assignment and scheduling frameworks for real-time trusted execution on COTS processors to improve task set schedulability. We extensively assess the pros and cons of our proposed approaches in comparison to state-of-the-art techniques, on custom-built real-world hardware for feasibility and in simulated environments to test our solutions' scalability. Doctor of Philosophy Today's real-world problems demand real-time solutions. These solutions need to be practically feasible, and scale well with increasing end user demands. They also need to maintain a balance between system performance and predictability, while achieving minimum energy consumption. A recent example of a technological design problem involves ways to improve the battery lifetime of mobile embedded devices, for example, smartphones, while still achieving the required performance objectives. For instance, smartphones that run Android OS have the capability to run multiple applications concurrently using a newly introduced split-screen mode of execution, where applications can run side-by-side at the same time on screen while using the same shared resources (e.g., CPU, memory bandwidth, peripheral devices). While this can improve the overall performance of the system, it can also lead to increased energy consumption, thereby directly affecting the battery life. Another technological design problem involves ways to protect confidential proprietary information from being siphoned out of devices by external attackers.
Let us consider a surveillance unmanned aerial vehicle (UAV) as an example. The UAV must perform sensitive tasks, such as obtaining coordinates of interest for surveillance, within a given time duration, also known as the task deadline. However, an attacker may learn how the UAV communicates with ground control, and take control of the UAV, along with the sensitive information it carries. Therefore, it is crucial to protect such sensitive information from access by an unauthorized party, while maintaining the system's task deadlines. In this dissertation, we explore these two real-world design problems in depth, observe the challenges associated with them, and present several solutions to tackle the issues. We extensively assess the pros and cons of our proposed approaches in comparison to state-of-the-art techniques, on custom-built real-world hardware and in simulated environments to test our solutions' scalability.
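The interaction between TEE overhead and deadlines can be sketched with a classical utilization-based EDF schedulability test. This is a minimal illustration of my own (the dissertation proposes richer task models and scheduling frameworks): each task that enters the TEE is charged a world-switch overhead on entry and exit, inflating its worst-case execution time.

```python
def edf_schedulable(tasks, switch_overhead):
    """Utilization-based EDF test for implicit-deadline periodic tasks:
    schedulable iff sum(C_i / T_i) <= 1. Tasks are (wcet, period, uses_tee)
    tuples; a TEE task pays a world-switch overhead on entry and on exit,
    modeled here by inflating its worst-case execution time."""
    utilization = 0.0
    for wcet, period, uses_tee in tasks:
        cost = wcet + (2 * switch_overhead if uses_tee else 0.0)
        utilization += cost / period
    return utilization <= 1.0

# Hypothetical task set: (WCET, period, runs inside TEE?)
tasks = [(2.0, 10.0, False), (3.0, 15.0, True), (5.0, 20.0, False)]
print(edf_schedulable(tasks, 1.0))    # modest switch cost: still schedulable
print(edf_schedulable(tasks, 10.0))   # heavy switch cost: deadline misses
```

The same task set flips from schedulable to unschedulable purely on the size of the TEE switch overhead, which is why overhead-aware task models matter.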
- Published
- 2019
30. Robust Speech Filter And Voice Encoder Parameter Estimation using the Phase-Phase Correlator
- Author
-
Azad, Abul K., Electrical Engineering, Mili, Lamine M., Clancy, Thomas Charles III, Zaghloul, Amir I., MacKenzie, Allen B., and Ramakrishnan, Naren
- Subjects
impulsive noise, estimator variance curve, Robust statistics, estimator bias curve, phase-phase correlator, auto-regressive parameter estimation, mixed-excitation linear prediction, speech processing - Abstract
In recent years, linear prediction voice encoders have become very efficient in terms of computing execution time and channel bandwidth usage while providing, in the absence of impulsive noise, natural-sounding synthetic speech signals. This good performance has been achieved via the use of maximum likelihood parameter estimation of an auto-regressive model of order ten that best fits the speech signal under the assumption that the signal and the noise are Gaussian stochastic processes. However, this method breaks down in the presence of impulse noise, which is common in practice, resulting in harsh or non-intelligible audio signals. In this dissertation, we propose a robust estimator of correlation, the Phase-Phase correlator, which is able to cope with impulsive noise. Utilizing this correlator, we develop a Robust Mixed Excitation Linear Prediction encoder that provides improved audio quality for voiced, unvoiced, and transition speech segments. This is achieved by applying a statistical test to robust Mahalanobis distances for identifying the outliers in the corrupted speech signal, which are then replaced with filtered signals. Simulation results reveal that the proposed method outperforms, in variance, bias, and breakdown point, three other robust approaches based on the arcsin law, the polarity coincidence correlator, and the median-of-ratio estimator, without sacrificing the encoder bandwidth efficiency and the compression gain while remaining compatible with real-time applications. Furthermore, in the presence of impulsive noise, the perceptual quality of speech from the proposed encoder also outperforms the state of the art in terms of mean opinion score. Doctor of Philosophy Impulsive noise is a natural phenomenon in everyday experience. Impulsive noise can be analogous to discontinuities or a drastic change in the natural progression of events.
Specifically, in this research the disrupting events can occur in signals such as speech, power transmission, stock market data, and communication systems. Sudden power outages due to lightning, maintenance, or other catastrophic events are some of the reasons why we may experience performance degradation in our electronic devices. Another example of impulsive noise occurs when we play an old, damaged vinyl record, which results in annoying clicking sounds. At the time instance of each click, the true music or speech, or simply the audible waveform, is completely destroyed. Yet another example of impulse noise is a sudden crash in the stock market; a sudden dive in the market can destroy the regression and future predictions. Unfortunately, in the presence of impulsive noise, classical methods are unable to filter out the impulse corruptions. The intended filtering objective of this dissertation is specific, but not limited, to speech signal processing. Specifically, we study different filter models to determine the optimum method of eliminating impulsive noise in speech. Note that the optimal filter model differs for time series signal models such as speech, stock market data, and power systems. In our studies we have shown that our speech filter method outperforms the state-of-the-art algorithms. Another major contribution of our research is a speech compression algorithm that is robust to impulse noise in speech. In digital signal processing, a compression method entails representing the same signal with less data while conveying the same message as the original signal. For example, the human vocal system produces sounds in the range of approximately 60 Hz to 3500 Hz; in other words, speech can occupy approximately 4000 Hz of frequency space. So the challenge is: can we compress speech into one half of that space, or even less?
This is a very attractive proposition because frequency space is limited, but wireless service providers desire to serve as many users as possible without sacrificing quality and ultimately to maximize the bottom line. Encoding impulse-corrupted speech produces harsh-sounding synthesized audio. We have shown that if the encoding is done with the proposed method, the synthesized audio quality is far superior to the state of the art.
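The idea of bounding an impulse's influence on a correlation estimate can be illustrated with one of the baselines this work compares against, the polarity coincidence correlator (this sketch is my own illustration of that baseline, not the proposed Phase-Phase correlator): only the signs of the samples enter the estimate, so a huge outlier carries no more weight than any ordinary sample.

```python
import math
import random

def pcc(x, y):
    """Polarity coincidence correlator: a classical robust correlation
    estimate using only sample signs. For zero-mean Gaussian data,
    E[sign(x) * sign(y)] = (2/pi) * arcsin(rho), inverted below."""
    s = sum((1 if a >= 0 else -1) * (1 if b >= 0 else -1) for a, b in zip(x, y))
    return math.sin(math.pi / 2 * s / len(x))

# Correlated Gaussian pair with true rho = 0.8.
random.seed(1)
n, rho = 20000, 0.8
x, y = [], []
for _ in range(n):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    x.append(u)
    y.append(rho * u + math.sqrt(1 - rho ** 2) * v)

x[0] = 1e6          # a single massive impulse...
est = pcc(x, y)     # ...barely moves the estimate away from 0.8
```

A sample-moment correlation estimate would be wrecked by the planted impulse; the sign-based estimate stays near the true value because each sample's influence is bounded.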
- Published
- 2019
31. Analysis of Firmware Security in Embedded ARM Environments
- Author
-
Brown, Dane Andrew, Electrical and Computer Engineering, Clancy, Thomas Charles III, Schaumont, Patrick R., Black, Jonathan T., Yang, Yaling, and Gerdes, Ryan M.
- Subjects
Embedded ,Firmware ,Security - Abstract
Modern enterprise-grade systems with virtually unlimited resources have many options when it comes to implementing state of the art intrusion prevention and detection solutions. These solutions are costly in terms of energy, execution time, circuit board area, and capital. Sustainable Internet of Things devices and power-constrained embedded systems are thus forced to make suboptimal security trade-offs. One such trade-off is the design of architectures which prevent execution of injected shell code, yet have allowed Return Oriented Programming (ROP) to emerge as a more reliable way to execute malicious code following attacks. ROP is a method used to take over the execution of a program by causing the return address of a function to be modified through an exploit vector, then returning to small segments of otherwise innocuous code located in executable memory one after the other to carry out the attacker's aims. We show that the Tiva TM4C123GH6PM microcontroller, which utilizes an ARM Cortex-M4F processor, can be fully controlled with this technique. Firmware code is pre-loaded into a ROM on Tiva microcontrollers which can be subverted to erase and rewrite the flash memory where the program resides. That same firmware is searched for a Turing-complete gadget set which allows for arbitrary execution. We then design and evaluate a method for verifying the integrity of firmware on embedded systems, in this case Solid State Drives (SSDs). Some manufacturers make firmware updates available, but their proprietary protections leave end users unable to verify the authenticity of the firmware post installation. This means that attackers who are able to get a malicious firmware version installed on a victim SSD are able to operate with full impunity, as the owner will have no tools for detection.
We have devised a method for performing side channel analysis of the current drawn by an SSD, which can compare its behavior while running genuine firmware against its behavior when running modified firmware. We train a binary classifier with samples of both versions and are able to consistently discriminate between genuine firmware and modified firmware, even despite changes in external factors such as temperature and supplied power. Doctor of Philosophy To most consumers and enterprises, a computer is the desktop or laptop device they use to run applications or write reports. Security for these computers has been a top priority since the advent of the Internet and the security landscape has matured considerably since that time. Yet, these consumer-facing computers are outnumbered several times over by embedded computers and microcontrollers which power ubiquitous systems in industrial control, home automation, and the Internet of Things. Unfortunately, the security landscape for these embedded systems is in relative infancy. Security controls designed for consumer and enterprise computers are often poorly suited for embedded systems due to constraints such as power, memory, processing, and real-time performance demands. This research considers the unique constraints of embedded systems and analyzes their security in a practical way. We begin by exploring the mechanism and extent to which a device can be compromised. We show that a technique known as Return Oriented Programming (ROP) can be used to bypass some of the process control protections in place and that there can be enough existing code in the firmware to allow an attacker to execute code at will. This leads naturally to the question of how embedded computers can be secured. One important security assurance is the knowledge that a device is running legitimate firmware. This can be difficult for a device owner to verify due to proprietary protections put in place by manufacturers.
However, we contribute a method to detect modifications to firmware on embedded systems, particularly Solid State Drives. This is done through an analysis of the current drawn during drive operations with best-practice data classification techniques. The findings of this research indicate that current embedded devices present a larger surface area for attack, less sophistication required for attack, and a larger quantity of devices vulnerable to attack. Even though these findings should raise concern, we also found that there are practical methods for detecting attack via monitoring and analysis.
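The gadget-hunting step of a ROP analysis can be sketched in a few lines. This toy scanner is my own illustration (not the authors' tooling): it looks through a raw firmware image for Thumb `pop {..., pc}` instructions, the classic gadget terminator on ARM Cortex-M, since that instruction loads the program counter from the attacker-controlled stack.

```python
import struct

def find_pop_pc_gadgets(firmware: bytes, window: int = 3):
    """Scan raw firmware for Thumb 'pop {..., pc}' instructions (0xBDxx).

    Each hit ends a potential ROP gadget: the instruction pops the program
    counter from the stack, handing control to the next attacker-chosen
    address. Returns (offset, preceding_halfwords) pairs; a real tool would
    disassemble backwards from each hit to recover gadget semantics.
    """
    gadgets = []
    for off in range(0, len(firmware) - 1, 2):     # Thumb code is halfword-aligned
        (hw,) = struct.unpack_from("<H", firmware, off)
        if hw & 0xFF00 == 0xBD00:                  # POP with PC in register list
            start = max(0, off - 2 * window)
            body = [struct.unpack_from("<H", firmware, o)[0]
                    for o in range(start, off, 2)]
            gadgets.append((off, body))
    return gadgets

# Tiny synthetic "firmware": movs r0, #1 (0x2001) ; pop {r4, pc} (0xBD10)
blob = struct.pack("<HH", 0x2001, 0xBD10)
print(find_pop_pc_gadgets(blob))
```

Chaining enough such fragments from a fixed ROM image is what yields the Turing-complete gadget set described in the abstract.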
- Published
- 2019
32. Multi-layer Optimization Aspects of Deep Learning and MIMO-based Communication Systems
- Author
-
Erpek, Tugba, Electrical Engineering, Clancy, Thomas Charles III, Raman, Sanjay, MacKenzie, Allen B., Mili, Lamine M., Wang, Yue J., Lou, Wenjing, and Buehrer, Richard M.
- Subjects
Interference Channel, Deep learning (Machine learning), Channel Access, Machine learning, Multiple Input Multiple Output (MIMO), Rate Maximization, Network Control, Data_CODINGANDINFORMATIONTHEORY, Autoencoder, Multi-User MIMO, Multiple Access Channel, Computer Science::Information Theory - Abstract
This dissertation addresses multi-layer optimization aspects of multiple input multiple output (MIMO) and deep learning-based communication systems. The initial focus is on rate optimization for multi-user MIMO (MU-MIMO) configurations; specifically, the multiple access channel (MAC) and the interference channel (IC). First, the ergodic sum rates of MIMO MAC and IC configurations are determined by jointly integrating the error and overhead effects due to channel estimation (training) and feedback into the rate optimization. Then, we investigate methods that increase the achievable rate for the parallel Gaussian IC (PGIC), a special case of the MIMO IC where there is no interference between multiple antenna elements. We derive a generalized iterative waterfilling algorithm for power allocation that maximizes the ergodic achievable rate. We verify the sum rate improvement with our proposed scheme through extensive simulation tests. Next, we introduce a novel physical layer scheme for single user MIMO spatial multiplexing systems based on unsupervised deep learning using an autoencoder. Both transmitter and receiver are designed as feedforward neural networks (FNN) and constellation diagrams are optimized to minimize the symbol error rate (SER) based on the channel characteristics. We first evaluate the SER in the presence of a constant Rayleigh-fading channel as a performance upper bound. Then, we quantize the Gaussian distribution and train the autoencoder with multiple quantized channel matrices. The channel is provided as an input to both the transmitter and the receiver. The performance exceeds that of conventional communication systems when the autoencoder is trained and tested with both single and multiple channels, and the performance gain is sustained after accounting for the channel estimation error. Moreover, we evaluate the performance with an increasing number of quantization points and when there is a difference between training and test channels.
We show that the performance loss is minimal when training is performed with a sufficiently large number of quantization points and channels. Finally, we develop a distributed and decentralized MU-MIMO link selection and activation protocol that enables MU-MIMO operation in wireless networks. We verify the performance gains with the proposed protocol in terms of average network throughput. Doctor of Philosophy Multiple Input Multiple Output (MIMO) wireless systems include multiple antennas at both the transmitter and receiver, and they are widely used today in cellular and wireless local area network systems to increase robustness, reliability, and data rate. Multi-user MIMO (MU-MIMO) configurations include the multiple access channel (MAC), where multiple transmitters communicate simultaneously with a single receiver; the interference channel (IC), where multiple transmitters communicate simultaneously with their intended receivers; and the broadcast channel (BC), where a single transmitter communicates simultaneously with multiple receivers. Channel state information (CSI) is required at the transmitter to precode the signal and mitigate interference effects. This requires CSI to be estimated at the receiver and transmitted back to the transmitter in a feedback loop. Errors occur during both channel estimation and feedback processes. We initially analyze the achievable rate of MAC and IC configurations when both channel estimation and feedback errors are taken into account in the capacity formulations. We treat the errors associated with channel estimation and feedback as additional noise. Next, we develop methods to maximize the achievable rate for the IC by using interference cancellation techniques at the receivers when the interference is very strong. We consider the parallel Gaussian IC (PGIC), a special case of the MIMO IC where there is no interference between multiple antenna elements.
We develop a power allocation scheme that maximizes the ergodic achievable rate of the communication systems. We verify the performance improvement with our proposed scheme through simulation tests. Standard optimization techniques are used to determine the fundamental limits of MIMO communications systems. However, there is still a gap between current operational systems and these limits due to the complexity of these solutions and limitations in their assumptions. Next, we introduce a novel physical layer scheme for MIMO systems based on machine learning; specifically, unsupervised deep learning using an autoencoder. An autoencoder consists of an encoder and a decoder that compress and decompress data, respectively. We design both the encoder and the decoder as feedforward neural networks (FNNs). In our case, the encoder performs transmitter functionalities such as modulation and error correction coding, and the decoder performs receiver functionalities such as demodulation and decoding as part of the communication system. The channel is included as an additional layer between the encoder and decoder. By incorporating the channel effects in the design process of the autoencoder and jointly optimizing the transmitter and receiver, we demonstrate performance gains over conventional MIMO communication schemes. Finally, we develop a distributed and decentralized MU-MIMO link selection and activation protocol that enables MU-MIMO operation in wireless networks. We verify the performance gains with the proposed protocol in terms of average network throughput.
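The generalized iterative waterfilling algorithm mentioned above builds on classic single-user waterfilling over parallel Gaussian subchannels. As a sketch of that building block (my own illustrative code, not the dissertation's algorithm; the per-user iteration and interference terms are omitted), each subchannel receives power p_i = max(0, mu - 1/g_i), with the water level mu set so the powers meet the budget:

```python
def waterfill(gains, total_power):
    """Classic waterfilling over parallel Gaussian subchannels.

    Allocates p_i = max(0, mu - 1/g_i), where g_i is the subchannel
    power gain (channel gain over noise) and the water level mu is chosen
    so the allocated powers sum to the budget. Weak subchannels whose
    inverse gain sits above the water level get zero power.
    """
    inv = sorted(1.0 / g for g in gains)
    # Try water levels that keep the k best subchannels active.
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:          # all k candidate channels get positive power
            return mu, [max(0.0, mu - 1.0 / g) for g in gains]
    return 0.0, [0.0] * len(gains)

# Strong, medium, and weak subchannels sharing one unit of power:
mu, powers = waterfill([1.0, 0.5, 0.1], 1.0)
print(mu, powers)   # the weakest subchannel is shut off entirely
```

An iterative multi-user version repeats this per user, folding each user's received interference into the effective subchannel gains until the allocations converge.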
- Published
- 2019
33. Solutions for Internet of Things Security Challenges: Trust and Authentication
- Author
-
McGinthy, Jason M., Electrical and Computer Engineering, Clancy, Thomas Charles III, Michaels, Alan J., Hicks, Matthew, MacKenzie, Allen B., and Saad, Walid
- Subjects
Authentication, Lightweight, Internet of Things, Security, Standardization - Abstract
The continuing growth of Internet-connected devices presents exciting opportunities for future technology. These Internet of Things (IoT) products are being manufactured and interleaved with many everyday activities, which creates a growing security concern. Sensors will collect previously unimaginable amounts of private and public data and transmit all of it through an easily observable wireless medium in order for other devices to perform data analytics. As more and more devices are produced, many lack a strong security foundation in the rush to be "first to market." Moreover, current security techniques are based on protocols that were designed for more-capable devices such as desktop computers and cellular phones that have ample power, computational ability, and memory storage. Due to IoT's technological infancy, there are many security challenges without proper solutions. As IoT continues to grow, special considerations and protections must be in place to properly secure this data and protect the privacy of its users. This dissertation highlights some of the major challenges related to IoT and prioritizes their impacts to help identify the gaps that must be filled. Focusing on these high priority concerns, solutions are presented that are tailored to IoT's constraints. A security feature-based framework is developed to characterize classes of devices and help manage the heterogeneous nature of IoT devices and networks. A novel physical device authentication method is presented to show its feasibility in IoT devices and networks. Additional low-power techniques are designed and evaluated to help identify different security features available to IoT devices as presented in the aforementioned framework. Doctor of Philosophy The Internet has been gaining a foothold in our everyday lives. Smart homes, smart cars, and smart cities are becoming less science fiction and more everyday realities.
In order to increase the public’s general quality of life, this new Internet of Things (IoT) technological revolution is adding billions of devices around us. These devices aim to collect unforeseen amounts of data to help better understand environments and improve numerous aspects of life. However, IoT technology is still in its infancy, so many challenges still remain. One major issue in IoT is the questionable security of many devices. Recent cyber attacks have highlighted the shortcomings of many IoT devices. Many of these device manufacturers simply wanted to be the first in a niche market, ignoring the importance of security. Proper security implementation in IoT has only been done by a minority of designers and manufacturers. Therefore, this document proposes a secure design on which all IoT devices can be based. Numerous security techniques are presented and shown to properly protect the data that will pass through many of these devices. The proposed work aims at an overall security solution that overcomes the current shortfalls of IoT devices, lessening the concern for IoT’s future use in our everyday lives.
- Published
- 2019
34. Exploring the Vulnerabilities of Traffic Collision Avoidance Systems (TCAS) Through Software Defined Radio (SDR) Exploitation
- Author
-
Berges, Paul Martin, Electrical and Computer Engineering, Gerdes, Ryan M., Reed, Jeffrey H., and Clancy, Thomas Charles III
- Subjects
TCAS, Security, SDR, Cyber-Physical, SSR - Abstract
Traffic Collision Avoidance Systems (TCAS) are safety-critical systems that are deployed on most commercial aircraft in service today. However, TCAS transactions were not designed to account for malicious actors. While in the past it may have been infeasible for an attacker to craft arbitrary radio signals, attackers today have access to open-source digital signal processing software like GNU Radio and inexpensive Software Defined Radios (SDRs). Therefore, this thesis presents motivation, through analytical and experimental means, for further investigation into TCAS from a security perspective. Methods for analyzing TCAS both qualitatively and quantitatively from an adversarial perspective are presented, and an experimental attack is developed in GNU Radio within a well-defined threat model. Master of Science Since 1993, the Federal Aviation Administration (FAA) has required that many commercial turbine-powered aircraft be outfitted with an on-board mid-air collision mitigation system. This system is known as the Traffic Collision Avoidance System (TCAS) in the United States, and it is known as the Airborne Collision Avoidance System (ACAS) in other parts of the world. TCAS/ACAS is a type of safety-critical system, which means that implementations need to be highly tolerant to system failures because their operation directly affects the safety of the on-board passengers and crew. However, while safety-critical systems are tolerant to failures, the designers of these systems only account for failures that occur in a cooperative environment; these engineers fail to account for “bad actors” who want to attack the weaknesses of these systems, or they assume that attacking such a system is infeasible. Therefore, to demonstrate how safety-critical systems like TCAS/ACAS are vulnerable to such bad actors, this thesis presents a method for manipulating TCAS/ACAS in favor of a bad actor. 
To start, a method for qualitatively and quantitatively analyzing the system’s vulnerabilities is presented. Then, using Software Defined Radios (SDRs) together with free and open-source signal processing software, which combine the flexibility of software with the power of wireless communication, this thesis shows how an actor can craft wireless signals such that they appear to come from an aircraft on a collision course with a target.
- Published
- 2019
35. Security of Cyber-Physical Systems with Human Actors: Theoretical Foundations, Game Theory, and Bounded Rationality
- Author
-
Sanjab, Anibal Jean, Electrical Engineering, Saad, Walid, Clancy, Thomas Charles III, De La Ree, Jaime, Yao, Danfeng (Daphne), and Dhillon, Harpreet Singh
- Subjects
Prospect Theory, Game Theory, Cyber-Physical Systems Security, Internet of Things, Smart Grids, Unmanned Aerial Vehicles - Abstract
Cyber-physical systems (CPSs) are large-scale systems that seamlessly integrate physical and human elements via a cyber layer that enables connectivity, sensing, and data processing. Key examples of CPSs include smart power systems, smart transportation systems, and the Internet of Things (IoT). This wide-scale cyber-physical interconnection introduces various operational benefits and promises to transform cities, infrastructure, and networked systems into more efficient, interactive, and interconnected smart systems. However, this ubiquitous connectivity leaves CPSs vulnerable to menacing security threats as evidenced by the recent discovery of the Stuxnet worm and the Mirai malware, as well as the latest reported security breaches in a number of CPS application domains such as the power grid and the IoT. Addressing these culminating security challenges requires a holistic analysis of CPS security which necessitates: 1) Determining the effects of possible attacks on a CPS and the effectiveness of any implemented defense mechanism, 2) Analyzing the multi-agent interactions -- among humans and automated systems -- that occur within CPSs and which have direct effects on the security state of the system, and 3) Recognizing the role that humans and their decision making processes play in the security of CPSs. Based on these three tenets, the central goal of this dissertation is to enhance the security of CPSs with human actors by developing fool-proof defense strategies founded on novel theoretical frameworks which integrate the engineering principles of CPSs with the mathematical concepts of game theory and human behavioral models. Towards realizing this overarching goal, this dissertation presents a number of key contributions targeting two prominent CPS application domains: the smart electric grid and drone systems. 
In smart grids, first, a novel analytical framework is developed which generalizes the analysis of a wide set of security attacks targeting the state estimator of the power grid, including observability and data injection attacks. This framework provides a unified basis for solving a broad set of known smart grid security problems. Indeed, the developed tools allow a precise characterization of optimal observability and data injection attack strategies which can target the grid as well as the derivation of optimal defense strategies to thwart these attacks. For instance, the results show that the proposed framework provides an effective and tractable approach for the identification of the sparsest stealthy attacks as well as the minimum sets of measurements to defend for protecting the system. Second, a novel game-theoretic framework is developed to derive optimal defense strategies to thwart stealthy data injection attacks on the smart grid, launched by multiple adversaries, while accounting for the limited resources of the adversaries and the system operator. The analytical results show the existence of a diminishing effect of aggregated multiple attacks which can be leveraged to successfully secure the system; a novel result which leads to more efficiently and effectively protecting the system. Third, a novel analytical framework is developed to enhance the resilience of the smart grid against blackout-inducing cyber attacks by leveraging distributed storage capacity to meet the grid's critical load during emergency events. In this respect, the results demonstrate that the potential subjectivity of storage units' owners plays a key role in shaping their energy storage and trading strategies. As such, financial incentives must be carefully designed, while accounting for this subjectivity, in order to provide effective incentives for storage owners to commit the needed portions of their storage capacity for possible emergency events. 
Next, the security of time-critical drone-based CPSs is studied. In this regard, a stochastic network interdiction game is developed which addresses pertinent security problems in two prominent time-critical drone systems: drone delivery and anti-drone systems. Using the developed network interdiction framework, the optimal path selection policies for evading attacks and minimizing mission completion times, as well as the optimal interdiction strategies for effectively intercepting the paths of the drones, are analytically characterized. Using advanced notions from Nobel-prize winning prospect theory, the developed framework characterizes the direct impacts of humans' bounded rationality on their chosen strategies and the achieved mission completion times. For instance, the results show that this bounded rationality can lead to mission completion times that significantly surpass the desired target times. Such deviations from the desired target times can lead to detrimental consequences primarily in drone delivery systems used for the carriage of emergency medical products. Finally, a generic security model for CPSs with human actors is proposed to study the diffusion of threats across the cyber and physical realms. This proposed framework can capture several application domains and allows a precise characterization of optimal defense strategies to protect the critical physical components of the system from threats emanating from the cyber layer. The developed framework accounts for the presence of attackers that can have varying skill levels. The results show that considering such differing skills leads to defense strategies which can better protect the system. In a nutshell, this dissertation presents new theoretical foundations for the security of large-scale CPSs, that tightly integrate cyber, physical, and human elements, thus paving the way towards the wide-scale adoption of CPSs in tomorrow's smart cities and critical infrastructure. Ph. D. 
Enhancing the efficiency, sustainability, and resilience of cities, infrastructure, and industrial systems is contingent on their transformation into more interactive and interconnected smart systems. This has led to the emergence of what is known as cyber-physical systems (CPSs). CPSs are widescale distributed and interconnected systems integrating physical components and humans via a cyber layer that enables sensing, connectivity, and data processing. Some of the most prominent examples of CPSs include the smart electric grid, smart cities, intelligent transportation systems, and the Internet of Things. The seamless interconnectivity between the various elements of a CPS introduces a wealth of operational benefits. However, this wide-scale interconnectivity and ubiquitous integration of cyber technologies render CPSs vulnerable to a range of security threats as manifested by recently reported security breaches in a number of CPS application domains. Addressing these culminating security challenges requires the development and implementation of fool-proof defense strategies grounded in solid theoretical foundations. To this end, the central goal of this dissertation is to enhance the security of CPSs by advancing novel analytical frameworks which tightly integrate the cyber, physical, and human elements of a CPS. The developed frameworks and tools enable the derivation of holistic defense strategies by: a) Characterizing the security interdependence between the various elements of a CPS, b) Quantifying the consequences of possible attacks on a CPS and the effectiveness of any implemented defense mechanism, c) Modeling the multi-agent interactions in CPSs, involving humans and automated systems, which have a direct effect on the security state of the system, and d) Capturing the role that human perceptions and decision making processes play in the security of CPSs. 
The developed tools and performed analyses integrate the engineering principles of CPSs with the mathematical concepts of game theory and human behavioral models and introduce key contributions to a number of CPS application domains such as the smart electric grid and drone systems. The introduced results enable strengthening the security of CPSs, thereby paving the way for their wide-scale adoption in smart cities and critical infrastructure.
- Published
- 2018
36. Modeling and Analysis of Non-Linear Dependencies using Copulas, with Applications to Machine Learning
- Author
-
Karra, Kiran, Electrical Engineering, Mili, Lamine M., Clancy, Thomas Charles III, Ramakrishnan, Naren, Yu, Guoqiang, and Raman, Sanjay
- Subjects
big data, probability, Machine learning, copula, stochastic - Abstract
Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, there is a large disconnect between the copula modeling and the machine learning communities. Copulas are stochastic models that capture the full dependence structure between random variables and allow flexible modeling of multivariate joint distributions. Elidan was the first to recognize this disconnect, and introduced copula-based models to the ML community that demonstrated orders-of-magnitude better performance than non-copula-based models [Elidan, 2013]. However, the limitation of these models is that they are only applicable to continuous random variables, whereas real-world data is often naturally modeled jointly as continuous and discrete. This report details our work in bridging this gap of modeling and analyzing data that is jointly continuous and discrete using copulas. Our first research contribution details modeling of jointly continuous and discrete random variables using the copula framework with Bayesian networks, termed Hybrid Copula Bayesian Networks (HCBN) [Karra and Mili, 2016], a continuation of Elidan’s work on Copula Bayesian Networks [Elidan, 2010]. In this work, we extend the theorems proved by Nešlehová [2007] from bivariate to multivariate copulas with discrete and continuous marginal distributions. Using the multivariate copula with discrete and continuous marginal distributions as a theoretical basis, we construct an HCBN that can model all possible permutations of discrete and continuous random variables for parent and child nodes, unlike the popular conditional linear Gaussian network model. Finally, we demonstrate on numerous synthetic datasets and a real life dataset that our HCBN compares favorably, from a modeling and flexibility viewpoint, to other hybrid models including the conditional linear Gaussian and the mixture of truncated exponentials models. 
Our second research contribution then deals with the analysis side, and discusses how one may use copulas for exploratory data analysis. To this end, we introduce a nonparametric copula-based index for detecting the strength and monotonicity structure of linear and nonlinear statistical dependence between pairs of random variables or stochastic signals. Our index, termed the Copula Index for Detecting Dependence and Monotonicity (CIM), satisfies several desirable properties of measures of association, including Rényi’s properties, the data processing inequality (DPI), and consequently self-equitability. Synthetic data simulations reveal that the statistical power of CIM compares favorably to other state-of-the-art measures of association that are proven to satisfy the DPI. Simulation results with real-world data reveal CIM’s unique ability to detect the monotonicity structure among stochastic signals and to find interesting dependencies in large datasets. Additionally, simulations show that CIM performs favorably compared to estimators of mutual information when discovering Markov network structure. Our third research contribution deals with how to assess an estimator’s performance in the scenario where multiple estimates of the strength of association between random variables need to be rank ordered. More specifically, we introduce a new property of estimators of the strength of statistical association, which helps characterize how well an estimator will perform in scenarios where dependencies between continuous and discrete random variables need to be rank ordered. The new property, termed the estimator response curve, is easily computable and provides a marginal-distribution-agnostic way to assess an estimator’s performance. It overcomes notable drawbacks of current metrics of assessment, including statistical power, bias, and consistency. 
We utilize the estimator response curve to test various measures of the strength of association that satisfy the data processing inequality (DPI), and show that the CIM estimator’s performance compares favorably to the kNN, vME, AP, and HMI estimators of mutual information. The estimators which were identified as suboptimal, according to the estimator response curve, perform worse than the more optimal estimators when tested with real-world data from four different areas of science, all with varying dimensionalities and sizes. Ph. D. Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, many of the traditional machine learning techniques rely on linear statistical techniques and models. For example, the correlation coefficient, a widely used construct in modern data analysis, is only a measure of linear dependence and cannot fully capture non-linear interactions. In this dissertation, we aim to address some of these gaps, and how they affect machine learning performance, using the mathematical construct of copulas. Our first contribution deals with accurate probabilistic modeling of real-world data, where the underlying data is both continuous and discrete. We show that even though the copula construct has some limitations with respect to discrete data, it is still amenable to modeling large real-world datasets probabilistically. Our second contribution deals with the analysis of non-linear datasets. Here, we develop a new measure of statistical association that can handle discrete, continuous, or combinations of such random variables that are related by any general association pattern. We show that our new metric satisfies several desirable properties and compare its performance to other measures of statistical association. 
Our final contribution attempts to provide a framework for understanding how an estimator of statistical association will affect end-to-end machine learning performance. Here, we develop the estimator response curve, a new way to characterize the performance of an estimator of statistical association. We then show that the estimator response curve can help predict how well an estimator performs in algorithms which require statistical associations to be rank ordered.
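The gap between linear correlation and copula-based dependence that motivates this dissertation can be illustrated with a short sketch. The CIM index itself is not reproduced here; as a stand-in, the example below uses the Spearman rank correlation (the simplest copula-based measure of monotone association, computed from ranks, i.e. empirical copula coordinates) and compares it with the Pearson coefficient on a strictly monotone but highly non-linear relationship. All parameter values are illustrative.

```python
import numpy as np

def pearson(x, y):
    """Ordinary (linear) correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Rank correlation: Pearson correlation of the rank transforms,
    which depend only on the copula of (x, y), not the marginals."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)
y = np.exp(5 * x)   # strictly monotone but strongly non-linear in x

# Pearson under-reports the (perfect) monotone dependence; Spearman does not.
print(pearson(x, y), spearman(x, y))
```

Because `exp` is strictly increasing, the ranks of `y` equal the ranks of `x`, so the rank-based measure reports perfect association while the linear coefficient does not.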
- Published
- 2018
37. Analysis of Jamming-Vulnerabilities of Modern Multi-carrier Communication Systems
- Author
-
Mahal, Jasmin Ara, Electrical Engineering, Clancy, Thomas Charles III, Saad, Walid, Silva, Luiz A., Roan, Michael J., and McGwier, Robert W.
- Subjects
OFDMA, SC-FDMA, Anti-jamming, MISO, Pilot-spoofing, Channel estimation, Physical Layer Security, Jamming, Multi-carrier Systems, OFDM - Abstract
The ever-increasing demand for private and sensitive data transmission over wireless networks has made security a crucial concern in current and future large-scale, dynamic, and heterogeneous wireless communication systems. To address this challenge, wireless researchers have worked continuously to analyze jamming threats and develop improved countermeasures. In this research, we have analyzed the jamming vulnerabilities of the leading multi-carrier communication systems, Orthogonal Frequency Division Multiplexing (OFDM) and Single-Carrier Frequency Division Multiple Access (SC-FDMA). To lay the necessary theoretical groundwork, we first derived analytical BER expressions for BPSK/QPSK, and analytical upper and lower bounds for 16-QAM, for OFDMA and SC-FDMA using Pilot Symbol Assisted Channel Estimation (PSACE) techniques in a Rayleigh slow-fading channel, taking into account channel estimation error as well as the pilot-jamming effect. From there, we propose novel attacks on the Cyclic Prefix (CP) of SC-FDMA. The associated countermeasures developed prove very effective at restoring the system. We are the first to consider the effect of frequency-selectivity and fading correlation of the channel on the achievable rates of the legitimate system under a pilot-spoofing attack. With respect to jamming mitigation techniques, our approaches focus more on Anti-Jamming (AJ) techniques than on Low Probability of Intercept (LPI) methods. The Channel State Information (CSI) of the two transceivers and the CSI between the jammer and the target play critical roles in ensuring the effectiveness of jamming and nulling attacks. Although the current literature is rich with channel estimation techniques between two legitimate transceivers, it does not have much to offer in the area of channel estimation from the jammer's perspective. 
In this dissertation, we have proposed novel, computationally simple, deterministic, and optimal blind channel estimation techniques for PSK-OFDM as well as QAM-OFDM that estimate the jammer's channel to the target precisely in high Signal-to-Noise Ratio (SNR) environments from a single OFDM symbol, and thus perform well in mobile radio channels. We have also presented a feasibility analysis of estimating the transceiver channel from the jammer's perspective at both the transmitter and receiver sides of the underlying OFDM system. Ph. D.
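The pilot-jamming effect that the derived BER expressions account for can be sketched numerically. The example below (illustrative parameter values, not the dissertation's simulation setup) performs least-squares pilot-symbol-assisted channel estimation over a block of pilot tones in a Rayleigh channel, and shows how a jammer concentrating power on the pilot tones inflates the channel-estimation mean-squared error.

```python
import numpy as np

rng = np.random.default_rng(1)

def cgauss(shape):
    """Unit-variance circularly-symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

trials, n_pilots = 500, 64
snr_db, jnr_db = 20.0, 10.0              # per-tone SNR and jammer-to-noise ratio (assumed)
noise_var = 10 ** (-snr_db / 10)         # pilot power normalized to 1
jam_var = noise_var * 10 ** (jnr_db / 10)

h = cgauss((trials, n_pilots))           # Rayleigh channel gain at each pilot tone
x = np.ones(n_pilots)                    # known BPSK pilot symbols (all +1 for simplicity)

def ls_mse(jamming):
    """MSE of the least-squares pilot estimate H_hat = Y / X."""
    w = np.sqrt(noise_var) * cgauss((trials, n_pilots))
    j = np.sqrt(jam_var) * cgauss((trials, n_pilots)) if jamming else 0.0
    y = h * x + w + j
    h_hat = y / x
    return np.mean(np.abs(h_hat - h) ** 2)

mse_clean, mse_jammed = ls_mse(False), ls_mse(True)
print(mse_clean, mse_jammed)
```

With unit pilot power, the clean MSE sits near the noise variance, while pilot jamming adds the jammer's per-tone power directly to the estimation error, which then propagates into equalization and BER.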
- Published
- 2018
38. Processing of communications signals using machine learning
- Author
-
Virginia Tech Intellectual Properties, Inc., Clancy, Thomas Charles III, and O'Shea, Timothy James
- Subjects
H04L25/0252, G06N3/0454, H04L25/03165, H04B17/101, G06N3/08, H04L25/0254, H04B17/24, H04L2025/03464, G06N5/046, H04B17/3912, H04B17/373, H04B17/3913, G06N20/00 - Abstract
One or more processors control processing of radio frequency (RF) signals using a machine-learning network. The one or more processors receive as input, to a radio communications apparatus, a first representation of an RF signal, which is processed using one or more radio stages, providing a second representation of the RF signal. Observations about, and metrics of, the second representation of the RF signal are obtained. Past observations and metrics are accessed from storage. Using the observations, metrics and past observations and metrics, parameters of a machine-learning network, which implements policies to process RF signals, are adjusted by controlling the radio stages. In response to the adjustments, actions performed by one or more controllers of the radio stages are updated. A representation of a subsequent input RF signal is processed using the radio stages that are controlled based on actions including the updated one or more actions.
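The closed loop the claim describes — observe a metric of the processed signal, combine it with stored past observations, and adjust a parameter controlling a radio stage — can be sketched in miniature. The example below is a hypothetical stand-in for the learned policy, not the patented method: a single gain parameter of an AGC-like stage is adapted from a running history of power observations toward a target output power.

```python
import numpy as np

rng = np.random.default_rng(3)

TARGET_POWER = 1.0
history = []        # storage for past observations/metrics
gain = 1.0          # controllable parameter of the radio stage

def radio_stage(samples, gain):
    """The controlled stage: here, simply a variable-gain amplifier."""
    return gain * samples

for step in range(50):
    # first representation: weak complex-baseband input (amplitude assumed)
    rf_in = 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    rf_out = radio_stage(rf_in, gain)               # second representation
    power = np.mean(np.abs(rf_out) ** 2)            # observed metric
    history.append(power)
    # "policy" update: combine the new observation with stored past ones
    recent = np.mean(history[-2:])
    gain *= (TARGET_POWER / recent) ** 0.25         # damped multiplicative step

print(gain, history[-1])
```

The stored history smooths the observation noise, and the fractional exponent damps the update so the loop settles at the gain that drives the output power to the target.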
- Published
- 2018
39. OneSwitch Data Center Architecture
- Author
-
Sehery, Wile Ali, Electrical and Computer Engineering, Clancy, Thomas Charles III, Mili, Lamine M., Chantem, Thidapat, Gerdes, Ryan M., and Chen, Ing-Ray
- Subjects
SDN, OpenFlow, Clos, Supermarket, Flow Optimization, Flow-Commodity, Data Center, Load Balancing - Abstract
In the last two decades, data center networks have evolved to become a key element in improving levels of productivity and competitiveness for different types of organizations. Traditionally, data center networks have been constructed with three layers of switches: Edge, Aggregation, and Core. Although this Three-Tier architecture has worked well in the past, it poses a number of challenges for current and future data centers. Data centers today have evolved to support dynamic resources such as virtual machines and storage volumes from any physical location within the data center. This has led to highly volatile and unpredictable traffic patterns. Also, the emergence of "Big Data" applications that exchange large volumes of information has created large, persistent flows that need to coexist with other traffic flows. The Three-Tier architecture and current routing schemes are no longer sufficient for achieving high bandwidth utilization. Data center networks should be built in a way that adequately supports virtualization and cloud computing technologies. Data center networks should provide services such as simplified provisioning, workload mobility, dynamic routing and load balancing, and equidistant bandwidth and latency. As data center networks have evolved, the Three-Tier architecture has proven to be a challenge not only in terms of complexity and cost, but also in supporting many new data center applications. In this work we propose OneSwitch: a switch architecture for the data center. OneSwitch is backward compatible with current Ethernet standards and uses an OpenFlow central controller, a Location Database, a DHCP Server, and a Routing Service to build an Ethernet fabric that appears as one switch to end devices. 
This allows the data center to use switches in scale-out topologies to support hosts in a plug-and-play manner as well as provide much-needed services such as dynamic load balancing, intelligent routing, seamless mobility, and equidistant bandwidth and latency. PHD
- Published
- 2018
40. Software-Defined Radio Implementation of Two Physical Layer Security Techniques
- Author
-
Ryland, Kevin Sherwood, Electrical Engineering, Clancy, Thomas Charles III, Dietrich, Carl B., and Buehrer, R. Michael
- Subjects
Artificial Noise, Software radio, Alamouti STBC, Over-the-Air, Physical Layer Security, Data_CODINGANDINFORMATIONTHEORY - Abstract
This thesis discusses the design of two Physical Layer Security (PLS) techniques on Software Defined Radios (SDRs). PLS is a classification of security methods that take advantage of physical properties in the waveform or channel to secure communication. These schemes can be used to directly obfuscate the signal from eavesdroppers, or even generate secret keys for traditional encryption methods. Over the past decade, advancements in Multiple-Input Multiple-Output systems have expanded the potential capabilities of PLS while the development of technologies such as the Internet of Things has provided new applications. As a result, this field has become heavily researched, but is still lacking implementations. The design work in this thesis attempts to alleviate this problem by establishing SDR designs geared towards Over-the-Air experimentation. The first design involves a 2x1 Multiple-Input Single-Output system where the transmitter uses Channel State Information from the intended receiver to inject Artificial Noise (AN) into the receiver's nullspace. The AN is consequently not seen by the intended receiver, however, it will interfere with eavesdroppers experiencing independent channel fading. The second design involves a single-carrier Alamouti coding system with pseudo-random phase shifts applied to each transmit antenna, referred to as Phase-Enciphered Alamouti Coding (PEAC). The intended receiver has knowledge of the pseudo-random sequence and can undo these phase shifts when performing the Alamouti equalization, while an eavesdropper without knowledge of the sequence will be unable to decode the signal. Master of Science
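The first design's core idea — injecting artificial noise (AN) into the nullspace of the intended receiver's channel — can be sketched in a few lines. The example below is a numerical illustration with synthetic flat-fading channels, not the thesis's SDR implementation: a maximum-ratio beamformer carries the data from the 2x1 MISO transmitter, an orthogonal vector carries the AN, and the AN therefore vanishes at the intended receiver while striking an independently faded eavesdropper.

```python
import numpy as np

rng = np.random.default_rng(7)

def cgauss(shape):
    """Unit-variance circularly-symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = cgauss(2)                           # intended receiver's 2x1 MISO channel (known at TX)
g = cgauss(2)                           # eavesdropper's channel (independent fading)

w_s = h.conj() / np.linalg.norm(h)      # maximum-ratio beamformer for the data
w_n = np.array([-h[1], h[0]])           # spans the nullspace of h: h @ w_n == 0
w_n /= np.linalg.norm(w_n)

s = rng.choice([-1.0, 1.0], 1000)       # BPSK data symbols
z = cgauss(1000)                        # artificial-noise stream

x = np.outer(w_s, s) + np.outer(w_n, z) # 2 x N transmitted block
y_bob = h @ x                           # intended receiver sees only the data
y_eve = g @ x                           # eavesdropper sees data plus AN

an_at_bob = np.mean(np.abs((h @ w_n) * z) ** 2)
an_at_eve = np.mean(np.abs((g @ w_n) * z) ** 2)
print(an_at_bob, an_at_eve)
```

Because `w_n` is exactly orthogonal to `h`, the intended receiver can demodulate the BPSK data untouched by the AN, while the eavesdropper's independent channel leaks a non-trivial AN power into its observation.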
- Published
- 2018
41. Learning from Data in Radio Algorithm Design
- Author
-
O'Shea, Timothy James, Electrical Engineering, Clancy, Thomas Charles III, McGwier, Robert W., Reed, Jeffrey H., Ramakrishnan, Naren, and Raman, Sanjay
- Subjects
radio, modulation, coding, software radio, Machine learning, deep learning, communications system design, neural networks, physical layer, sensing - Abstract
Algorithm design methods for radio communications systems are poised to undergo a massive disruption over the next several years. Today, such algorithms are typically designed manually using compact analytic problem models. However, they are shifting increasingly to machine learning-based methods using approximate models with high degrees of freedom, jointly optimized over multiple subsystems, and using real-world data to drive designs which may have no simple, compact probabilistic analytic form. Over the past five years, this change has already begun occurring at a rapid pace in several fields. Computer vision led this shift, demonstrating that low-level features and entire end-to-end systems could be learned directly from complex imagery datasets when a powerful collection of optimization methods, regularization methods, architecture strategies, and efficient implementations was used to train large models with high degrees of freedom. Within this work, we demonstrate that this same class of end-to-end deep neural network-based learning can be adapted effectively for physical layer radio systems in order to optimize sensing, estimation, and waveform synthesis systems to achieve state-of-the-art levels of performance in numerous applications. First, we discuss the background and fundamental tools used, then discuss effective strategies and approaches to model design and optimization. Finally, we explore a series of applications across estimation, sensing, and waveform synthesis where we apply this approach to reformulate classical problems and illustrate the value and impact this approach can have on several key radio algorithm design problems. Ph. D.
- Published
- 2017
42. New Method for Directional Modulation Using Beamforming: Applications to Simultaneous Wireless Information and Power Transfer and Increased Secrecy Capacity
- Author
-
Yamada, Randy Matthew, Electrical Engineering, Mili, Lamine M., Steinhardt, Allan O., Buehrer, R. Michael, Clancy, Thomas Charles III, and Gugercin, Serkan
- Subjects
Beamforming, Broadcast Channel, Array, Secrecy, Physical Layer Security, Directional Modulation - Abstract
The proliferation of connected embedded devices has driven wireless communications into commercial, military, industrial, and personal systems. It is unreasonable to expect privacy and security to be inherent in these networks given the spatial density of these devices, limited spectral resources, and the broadcast nature of wireless communications systems. Communications for these systems must have sufficient information capacity and secrecy capacity while typically maintaining small size, light weight, and minimized power consumption. With increasing crowding of the electromagnetic spectrum, interference must be leveraged as an available resource. This work develops a new beamforming method for direction-dependent modulation that provides wireless communications devices with enhanced physical layer security and the ability to simultaneously communicate and harvest energy by exploiting co-channel interference. We propose a method that optimizes a set of time-varying array steering vectors to enable direction-dependent modulation, thus exploiting a new degree of freedom in the space-time-frequency paradigm. We formulate steering vector selection as a convex optimization problem for rapid computation given arbitrarily positioned array antenna elements. We show that this method allows us to spectrally separate co-channel interference from an information-bearing signal in the analog domain, enabling the energy from the interference to be diverted for harvesting during the digitization and decoding of the information-bearing signal. We also show that this method provides wireless communications devices with not only enhanced information capacity, but also enhanced secrecy capacity in a broadcast channel. By using the proposed method, we can increase the overall channel capacity in a broadcast system beyond the current state-of-the-art for wireless broadcast channels, which is based on static coding techniques. 
Further, we also increase the overall secrecy capacity of the system by enabling secrecy for each user in the system. In practical terms, this results in higher-rate, confidential messages delivered to multiple devices in a broadcast channel for a given power constraint. Finally, we corroborate these claims with simulation and experimental results for the proposed method. PHD
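The direction-dependent modulation idea above can be sketched numerically. The toy below is not the dissertation's convex-optimization method; it simply draws a fresh random weight vector each symbol period and rescales it so the array response toward the intended receiver always equals the transmitted constellation point, while the response toward any other angle varies symbol to symbol. Array size, spacing, and angles are illustrative assumptions.

```python
import cmath
import math
import random

def steering(n_elem, theta, d=0.5):
    # Steering vector of a uniform linear array; spacing d is in wavelengths.
    return [cmath.exp(2j * math.pi * d * n * math.sin(theta)) for n in range(n_elem)]

def weights_for_symbol(symbol, theta0, n_elem, rng):
    # Draw random element weights, then rescale so the array response
    # toward theta0 equals the intended constellation point `symbol`.
    a0 = steering(n_elem, theta0)
    w = [cmath.exp(2j * math.pi * rng.random()) for _ in range(n_elem)]
    resp = sum(wi * ai for wi, ai in zip(w, a0))
    return [wi * symbol / resp for wi in w]

rng = random.Random(0)
theta0, theta_eve = 0.0, math.radians(40)
bob, eve = [], []
for _ in range(8):  # fresh (time-varying) weights every symbol period
    w = weights_for_symbol(1 + 0j, theta0, 4, rng)
    bob.append(sum(wi * ai for wi, ai in zip(w, steering(4, theta0))))
    eve.append(sum(wi * ai for wi, ai in zip(w, steering(4, theta_eve))))
```

The intended direction sees a clean, constant symbol; an eavesdropper off boresight sees a scrambled, time-varying constellation, which is the physical-layer-security effect the abstract describes.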
- Published
- 2017
43. Spectrum Opportunity Duration Assurance: A Primary-Secondary Cooperation Approach for Spectrum Sharing Systems
- Author
-
Sohul, Munawwar Mahmud, Electrical and Computer Engineering, Reed, Jeffrey H., Rahman, Saifur, Roan, Michael J., MacKenzie, Allen B., and Clancy, Thomas Charles III
- Subjects
Dynamic spectrum access ,Spectrum sharing ,Spectrum Access System (SAS) ,Primary-Secondary cooperation ,Spectrum opportunity duration assurance (SODA) - Abstract
Applications that depend on the radio spectrum face a severe scarcity of the resource. To address this issue, future wireless systems require new network architectures and new approaches to spectrum management. Spectrum sharing has emerged as a promising solution to the radio frequency (RF) spectrum bottleneck. Although spectrum sharing is intended to provide flexible use of the spectrum, existing approaches such as TV White Space [1] and the Citizens Broadband Radio Service (CBRS) [2] have a relatively fixed sharing framework. This fixed structure limits the applicability of these architectures to other bands, where various new users must co-exist with different types of legacy users. Specifically, an important aspect of sharing that has not been explored enough is cooperation between the resource owner and the opportunistic user. Also, in a shared spectrum system the users do not have any information about the availability and duration of spectrum opportunities. This lack of understanding about the shared spectrum has led the research community to explore a number of core spectrum sharing tasks, such as opportunity detection, dynamic opportunity scheduling, and interference protection for the primary users. This report proposes a Primary-Secondary Cooperation Framework that gives all involved parties the flexibility to choose the level of cooperation that satisfies their different objective priorities. The cooperation framework allows the exchange of a probabilistic assurance, the Spectrum Opportunity Duration Assurance (SODA), between primary and secondary operations to improve the overall spectrum sharing experience for both parties. This capability will give spectrum sharing architectures new flexibility to handle evolutions in technologies, regulations, and the requirements of new bands being transitioned from fixed to shared usage. 
In this dissertation we first look into the regulatory aspect of spectrum sharing. We analyze the Federal Communications Commission's (FCC) initiatives regarding commercial use of the 150 MHz spectrum block in the 3.5 GHz band. This analysis results in a Spectrum Access System (SAS) architecture and a list of required functionalities. We then address the nature of primary-secondary cooperation in spectrum sharing and propose generating probabilistic assurances for spectrum opportunities. We use the generated assurance to observe the impact of cooperation from the perspective of spectrum sharing system management. We propose incorporating primary user cooperation into the auctioning and resource allocation procedures used to manage spectrum opportunities. We also analyze the improvement in spectrum sharing experience, from the perspective of both primary and secondary users, that results from cooperation. We propose interference avoidance schemes that involve cooperation to improve the achievable quality of service. Primary-secondary cooperation has the potential to significantly influence the mechanisms and outcomes of spectrum sharing systems. Both primary and secondary operations can benefit from cooperation in a sharing scenario. Based on their priorities, the users may decide on the level of cooperation in which they are willing to participate. Access to information about the availability and usability of a spectrum opportunity also enables efficient opportunity management and improved sharing performance for both primary and secondary users. Thus, offering assurances about the availability and duration of spectrum opportunities through primary-secondary cooperation will significantly improve the overall spectrum sharing experience. 
The research reported in this dissertation is expected to provide a fundamental analytical framework for characterizing and quantifying the implications of primary-secondary cooperation in a spectrum sharing context. It analyzes the technical challenges in modeling different levels of cooperation and their impact on the spectrum sharing experience. We hope that this dissertation will establish the fundamentals of spectrum sharing that allow the involved parties to participate in sharing mechanisms suited to their objective priorities. PHD
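The core SODA exchange above can be illustrated with a minimal probabilistic model. This sketch assumes (purely for illustration; the dissertation's model is not reproduced here) that the primary's idle period is exponentially distributed, so the assurance that an opportunity lasts at least a requested duration has a closed form, and the primary can solve for the longest duration it can assure at a target confidence.

```python
import math

def soda_assurance(idle_rate, duration):
    # P(the primary stays idle for at least `duration`) under a memoryless
    # (exponential) idle-time model with mean idle time 1/idle_rate.
    return math.exp(-idle_rate * duration)

def max_assured_duration(idle_rate, confidence):
    # Longest opportunity duration assurable with probability at least
    # `confidence`: solve exp(-idle_rate * T) = confidence for T.
    return -math.log(confidence) / idle_rate

p = soda_assurance(idle_rate=0.2, duration=2.0)          # about 0.67
t = max_assured_duration(idle_rate=0.2, confidence=0.9)  # about 0.53
```

A secondary user could then schedule transmissions no longer than `t`, trading opportunity length against the confidence level negotiated with the primary.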
- Published
- 2017
44. Securing Cloud Containers through Intrusion Detection and Remediation
- Author
-
Abed, Amr Sayed Omar, Electrical and Computer Engineering, Clancy, Thomas Charles III, Rakha, Hesham A., Yang, Yaling, Azab, Mohamed Mahmoud Mahmoud, and Reed, Jeffrey H.
- Subjects
Deep learning (Machine learning) ,Container Security ,Anomaly Detection ,Security in Cloud Computing ,Behavior Modeling ,Intrusion Detection - Abstract
Linux containers are gaining increasing traction in both individual and industrial use. As these containers get integrated into mission-critical systems, real-time detection of malicious cyber attacks becomes a critical operational requirement. However, little research has been conducted in this area. This research introduces an anomaly-based intrusion detection and remediation system for container-based clouds. The introduced system monitors system calls between the container and the host server to passively detect malfeasance against applications running in cloud containers. We started by applying a basic memory-based machine learning technique to model the container behavior. The same technique was also extended to learn the behavior of a distributed application running in a number of cloud-based containers. In addition to monitoring the behavior of each container independently, the system used prior knowledge for more informed detection. We then studied the feasibility and effectiveness of applying a more sophisticated deep learning technique to the same problem, using a recurrent neural network to model the container behavior. We evaluated the system using a typical web application hosted in two containers, one for the front-end web server and one for the back-end database server. The system has shown promising results for both machine learning techniques. Finally, we describe a number of incident handling and remediation techniques to be applied upon attack detection. Ph. D.
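A system-call anomaly detector of the kind described above can be sketched with a STIDE-style sliding-window model: remember every n-gram of system calls seen during normal operation, then flag traces containing unseen n-grams. This is a generic stand-in for the abstract's "memory-based machine learning technique" (the dissertation's exact model and its RNN variant are not reproduced); the traces and window size are illustrative.

```python
def ngrams(calls, n):
    # All length-n sliding windows over a system-call trace.
    return [tuple(calls[i:i + n]) for i in range(len(calls) - n + 1)]

class SyscallAnomalyDetector:
    # Remembers every n-gram observed during (assumed-benign) training,
    # then scores a trace by its fraction of never-seen n-grams.
    def __init__(self, n=3):
        self.n, self.known = n, set()

    def train(self, trace):
        self.known.update(ngrams(trace, self.n))

    def mismatch_rate(self, trace):
        grams = ngrams(trace, self.n)
        if not grams:
            return 0.0
        return sum(1 for g in grams if g not in self.known) / len(grams)

det = SyscallAnomalyDetector(n=3)
det.train(["open", "read", "write", "close", "open", "read", "close"])
normal = det.mismatch_rate(["open", "read", "write", "close"])       # 0.0
attack = det.mismatch_rate(["open", "execve", "socket", "connect"])  # 1.0
```

In a deployment, the mismatch rate over a recent window would be compared against a threshold to trigger the remediation actions the abstract mentions.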
- Published
- 2017
45. Differential Network Analysis based on Omic Data for Cancer Biomarker Discovery
- Author
-
Zuo, Yiming, Electrical and Computer Engineering, Yu, Guoqiang, Ressom, Habtom W., Lou, Wenjing, Clancy, Thomas Charles III, and Wang, Yue J.
- Subjects
differential network analysis ,differential expression analysis ,cancer biomarker discovery - Abstract
Recent advances in high-throughput techniques enable the generation of large amounts of omic data such as genomics, transcriptomics, proteomics, metabolomics, and glycomics. Typically, differential expression analysis (e.g., Student's t-test, ANOVA) is performed to identify biomolecules (e.g., genes, proteins, metabolites, glycans) with significant changes at the individual level between biologically disparate groups (disease cases vs. healthy controls) for cancer biomarker discovery. However, differential expression analyses on independent studies of the same clinical types of patients often lead to different sets of significant biomolecules, with only a few in common. This may be attributed to the fact that biomolecules are members of strongly intertwined biological pathways and are highly interactive with each other. Without considering these interactions, differential expression analysis can lead to biased results. Network-based methods provide a natural framework for studying the interactions between biomolecules. Commonly used data-driven network models include relevance networks, Bayesian networks, and Gaussian graphical models. In addition to data-driven network models, there are many publicly available databases, such as STRING, KEGG, Reactome, and ConsensusPathDB, from which one can extract various types of interactions to build knowledge-driven networks. While both data- and knowledge-driven networks have their pros and cons, an approach that incorporates prior biological knowledge from publicly available databases into a data-driven network model is desirable for more robust and biologically relevant network reconstruction. Recently, there has been growing interest in differential network analysis, where a connection in the network represents a statistically significant change in the pairwise interaction between two biomolecules across groups. 
From the rewiring interactions shown in differential networks, biomolecules whose connectivity is strongly altered between distinct biological groups can be identified. These biomolecules might play an important role in the disease under study. In fact, differential expression and differential network analyses investigate omic data from two complementary perspectives: the former focuses on changes at the individual-biomolecule level between groups, while the latter concentrates on changes at the pairwise level. Therefore, an approach that can integrate differential expression and differential network analyses is likely to discover more reliable and powerful biomarkers. To achieve these goals, we start by proposing a novel data-driven network model (LOPC) to reconstruct sparse biological networks. The sparse networks contain only direct interactions between biomolecules, which helps researchers focus on the more informative connections. We then propose a novel method (dwgLASSO) to incorporate prior biological knowledge into the data-driven network model and build biologically relevant networks. Differential network analysis is applied to the networks constructed for the biologically disparate groups to identify cancer biomarker candidates. Finally, we propose a novel network-based approach (INDEED) that integrates differential expression and differential network analyses to identify more reliable and powerful cancer biomarker candidates. INDEED is further expanded as INDEED-M to utilize omic data at different levels of the human biological system (e.g., transcriptomics, proteomics, metabolomics), which we believe is promising for increasing our understanding of cancer. Matlab and R packages for the proposed methods are developed and available on GitHub (https://github.com/Hurricaner1989) to share with the research community. Ph. D.
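The differential-network idea above can be reduced to a minimal sketch: estimate a pairwise association per group, and call an edge "rewired" when the association shifts substantially between groups. This toy uses plain Pearson correlation and a fixed shift threshold in place of the statistical tests and sparse models (LOPC, dwgLASSO, INDEED) the abstract names; the gene data are invented.

```python
import math

def pearson(x, y):
    # Sample Pearson correlation between two equal-length expression profiles.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def differential_edges(case, ctrl, genes, thresh=1.0):
    # An edge is "rewired" when the pairwise correlation shifts by more
    # than `thresh` between groups (a toy stand-in for a significance test).
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            g1, g2 = genes[i], genes[j]
            shift = abs(pearson(case[g1], case[g2]) - pearson(ctrl[g1], ctrl[g2]))
            if shift > thresh:
                edges.append((g1, g2))
    return edges

# Invented profiles: A-B flips from correlated to anti-correlated in cases.
ctrl = {"A": [1, 2, 3, 4], "B": [1, 2, 3, 4], "C": [4, 1, 3, 2]}
case = {"A": [1, 2, 3, 4], "B": [4, 3, 2, 1], "C": [4, 1, 3, 2]}
rewired = differential_edges(case, ctrl, ["A", "B", "C"])
```

Genes appearing in many rewired edges are the high-connectivity-change candidates the abstract targets as biomarkers.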
- Published
- 2017
46. Fundamentals of Cache Aided Wireless Networks
- Author
-
Sengupta, Avik, Electrical and Computer Engineering, Clancy, Thomas Charles III, Tandon, Ravi, Chen, Ing-Ray, Yang, Yaling, and Reed, Jeffrey H.
- Subjects
Wireless Networks ,Content Delivery ,Caching - Abstract
Caching at the network edge has emerged as a viable solution for alleviating the severe capacity crunch in content-centric next-generation 5G wireless networks by leveraging localized content storage and delivery. Caching generally works in two phases: (i) a storage phase, in which parts of popular content are pre-fetched and stored in caches at the network edge during times of low network load, and (ii) a delivery phase, in which content is distributed to users at times of high network load by leveraging the locally stored content. Cache-aided networks therefore have the potential to leverage storage at the network edge to increase bandwidth efficiency. In this dissertation we ask the following question: what are the theoretical and practical guarantees offered by cache-aided networks for reliable content distribution while minimizing transmission rates and increasing network efficiency? We furnish an answer by identifying fundamental Shannon-type limits for cache-aided systems. To this end, we first consider a cache-aided network where the cache storage phase is assisted by a central server and users can demand multiple files in each transmission interval. To service these demands, we consider two delivery models: (i) centralized content delivery, where demands are serviced by the central server; and (ii) device-to-device-assisted distributed delivery, where demands are satisfied by leveraging the collective content of user caches. For such cache-aided networks, we develop a new technique for characterizing information-theoretic lower bounds on the fundamental storage-rate trade-off. Furthermore, using the new lower bounds, we establish the optimal storage-rate trade-off to within a constant multiplicative gap and show that, for the case of multiple demands per user, treating each set of demands independently is order-optimal. 
To address the concerns of privacy in multicast content delivery over such cache-aided networks, we introduce the problem of caching with secure delivery. We propose schemes that achieve information-theoretic security in cache-aided networks and show that the achievable rate is within a constant multiplicative factor of the information-theoretic optimal secure rate. We then extend our theoretical analysis to the wireless domain by studying a cloud- and cache-aided wireless network from the perspective of low-latency content distribution. To this end, we define a new performance metric, the normalized delivery time (NDT), which captures the worst-case delivery latency. We propose achievable schemes that aim to minimize the NDT and derive information-theoretic lower bounds which show that the proposed schemes achieve optimality to within a constant multiplicative factor of 2 for all values of the problem parameters. Finally, we consider the problem of caching and content distribution in a multi-small-cell heterogeneous network from a reinforcement learning perspective for the case when content popularity is unknown. We propose a novel topology-aware, learning-aided collaborative caching algorithm and show that collaboration among multiple small cells for cache-aided content delivery outperforms local caching in most network topologies of practical interest. The results presented in this dissertation show definitively that cache-aided systems appreciably increase network efficiency and are a viable solution for the ever-evolving capacity demands of the wireless communications landscape. Ph. D.
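For context on the storage-rate trade-off discussed above, the classic centralized coded-caching delivery rate (the Maddah-Ali/Niesen formula for K users, N files, and cache size M files) can be compared against conventional local caching. This is background to the dissertation's setting, not its own bounds for multiple demands or secure delivery, which are not reproduced here.

```python
def coded_caching_rate(K, N, M):
    # Centralized coded-caching delivery rate, in units of one file,
    # valid at the points where t = K*M/N is an integer.
    t = K * M / N
    assert t == int(t), "formula shown only for integer t = K*M/N"
    return K * (1 - M / N) / (1 + t)

def uncoded_rate(K, N, M):
    # Conventional local caching: each user still fetches the uncached part.
    return K * (1 - M / N)

r_coded = coded_caching_rate(K=10, N=10, M=5)  # 5/6 of a file
r_local = uncoded_rate(K=10, N=10, M=5)        # 5 files
```

With 10 users each caching half the library, coded multicast delivery cuts the server load from 5 file transmissions to 5/6, which is the kind of global caching gain the dissertation's bounds characterize.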
- Published
- 2016
47. Multi-Platform Molecular Data Integration and Disease Outcome Analysis
- Author
-
Youssef, Ibrahim Mohamed, Electrical and Computer Engineering, Yu, Guoqiang, Wang, Yue J., Ressom, Habtom W., Lu, Chang-Tien, and Clancy, Thomas Charles III
- Subjects
Cox proportional hazards model ,intratumor vascular heterogeneity ,molecular data integration ,survival analysis - Abstract
One of the most common measures of clinical outcome is survival time. Accurately linking cancer molecular profiling with survival outcomes advances the clinical management of cancer. However, existing survival analysis relies heavily on statistical evidence from a single level of data, without paying much attention to the integration of interacting multi-level data and the underlying biology. Advances in genomic techniques provide unprecedented power to characterize cancer tissue in a more complete manner than before, opening the opportunity to design biologically informed and integrative approaches for survival analysis. Many cancer tissues have been profiled for gene expression levels and genomic variants (such as copy number alterations, sequence mutations, DNA methylation, and histone modification). However, it is not clear how to integrate gene expression and genomic variants to achieve better prediction and understanding of cancer survival. To address this challenge, we propose two approaches for data integration that both biologically and statistically boost the feature selection process for proper detection of the true predictors of survival. The first approach is data-driven yet biologically informed. Consistent with the biological hierarchy from DNA to RNA, we prioritize each survival-relevant feature with two separate scores, predictive and mechanistic. With mRNA expression levels in mind, predictive features are those mRNAs whose variation in expression level is associated with the survival outcome, and mechanistic features are those mRNAs whose variation in expression level is associated with genomic variants (copy number alterations (CNAs) in this study). Further, we propose simultaneously integrating information from both the predictive model and the mechanistic model through our new approach GEMPS (Gene Expression as a Mediator for Predicting Survival). 
Applied to two cancer types (ovarian and glioblastoma multiforme), our method achieved better prediction power than peer methods. Gene set enrichment analysis confirms that the genes utilized for the final survival analysis are biologically important and relevant. The second approach is a generic mathematical framework to biologically regularize the Cox proportional hazards model that is widely used in survival analysis. We propose a penalty function that both links the mechanistic model to the clinical model and reflects the biological downstream regulatory effect of the genomic variants on the mRNA expression levels of the target genes. Fast and efficient optimization principles such as coordinate descent and majorization-minimization are adopted to infer the coefficients of the Cox model predictors. Through this model, we extend the regulator-target gene relationship to a new one: the regulator-target-outcome relationship of a disease. Assessed via a simulation study and analysis of two real cancer data sets, the proposed method showed better performance in selecting the true predictors and predicting survival. The proposed method gives insightful and meaningful interpretability to the selected model due to the biological linking of the mechanistic and clinical models. Other important forms of clinical outcome are monitoring angiogenesis (the formation of new blood vessels necessary for a tumor to nourish itself and sustain its existence) and assessing therapeutic response. This can be done through dynamic imaging, in which a series of images at different time instances are acquired for a specific tumor site after injection of a contrast agent. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive tool for examining tumor vasculature patterns based on accumulation and washout of the contrast agent. 
DCE-MRI gives an indication of tumor vasculature permeability, which in turn indicates the tumor's angiogenic activity. Observing this activity over time can reflect the tumor's drug responsiveness and the efficacy of the treatment plan. However, due to the limited resolution of imaging scanners, a partial-volume effect (PVE) occurs: signals from two or more tissues combine to produce a single image concentration value within a pixel, leading to inaccurate estimates of the pharmacokinetic parameters. A multi-tissue compartmental modeling (CM) technique supported by convex analysis of mixtures (CAM) is used to mitigate the PVE by clustering pixels and constructing a simplex whose vertices are of a single compartment type. CAM uses the identified pure-volume pixels to estimate the kinetics of the tissues under investigation. We propose an enhanced version of CAM-CM to identify pure-volume pixels more accurately. This includes considering the neighborhood effect on each pixel and using a barycentric coordinate system to identify more pure-volume pixels and to test those identified by CAM-CM. Tested on simulated DCE-MRI data, the enhanced CAM-CM achieved better performance in terms of accuracy and reproducibility. Ph. D.
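The biologically regularized Cox model described in this entry can be outlined in miniature. The sketch below computes the standard Cox partial log-likelihood (assuming distinct event times) and adds a quadratic penalty pulling the coefficients toward a mechanistic estimate; this penalty form is a hypothetical stand-in for the dissertation's linking term, and `beta_mech`, `lam`, and the tiny dataset are invented for illustration.

```python
import math

def cox_partial_loglik(beta, X, times, events, lam=0.0, beta_mech=None):
    # Cox partial log-likelihood over subjects with covariates X, observed
    # times, and event indicators (1 = event, 0 = censored); distinct event
    # times assumed. Optional quadratic penalty toward beta_mech.
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for pos, i in enumerate(order):
        if not events[i]:
            continue
        eta_i = sum(b * x for b, x in zip(beta, X[i]))
        # Risk set: everyone still under observation at subject i's event time.
        risk = sum(math.exp(sum(b * x for b, x in zip(beta, X[j])))
                   for j in order[pos:])
        ll += eta_i - math.log(risk)
    if beta_mech is not None:
        ll -= lam * sum((b - m) ** 2 for b, m in zip(beta, beta_mech))
    return ll

# With beta = 0 every subject has equal hazard, so the partial likelihood
# reduces to 1/|risk set| at each event: here (1/3) * (1/2) * (1/1).
ll0 = cox_partial_loglik([0.0], [[1], [0], [1]], [1, 2, 3], [1, 1, 1])
```

Maximizing this penalized objective (e.g., by coordinate descent, as the abstract notes) trades pure statistical fit against agreement with the mechanistic regulator-target model.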
- Published
- 2016
48. System and method for heterogenous spectrum sharing between commercial cellular operators and legacy incumbent users in wireless networks
- Author
-
Virginia Tech Intellectual Properties, Inc., FEDERATED WIRELESS, INC., Reed, Jeffrey H., Clancy, Thomas Charles III, Mitola, III, Joseph, Amanna, Ashwin, McGwier, Robert W., Kumar, Akshay, and Sengupta, Avik
- Subjects
H04W72/02 ,H04W72/0453 ,H04W72/12 ,H04W28/16 ,H04W72/04 ,H04W88/08 ,H04W16/14 ,H04W84/042 ,H04W72/08 ,H04B17/318 ,H04W64/00 - Abstract
Described herein are systems and methods for telecommunications spectrum sharing between multiple heterogeneous users, which leverage a hybrid approach combining distributed spectrum sharing, spectrum sensing, and the use of geo-reference databases.
- Published
- 2016
49. Antifragile Communications
- Author
-
Lichtman, Marc Louis, Electrical and Computer Engineering, Reed, Jeffrey H., De La Ree, Jaime, Clancy, Thomas Charles III, Roan, Michael J., and Sweeney, Dennis G.
- Subjects
Wireless System Security ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Machine learning ,Cognitive radio networks ,Jamming ,Electronic Warfare ,Jammer Exploitation - Abstract
Jamming is an ongoing threat that plagues wireless communications in contested areas. Unfortunately, jamming complexity and sophistication will continue to increase over time. The traditional approach to addressing the jamming threat is to harden radios, such that they sacrifice communications performance for more advanced jamming protection. To provide an escape from this trend, we investigate the previously unexplored area of jammer exploitation. This dissertation develops the concept of antifragile communications, defined as the capability for a communications system to improve in performance due to a system stressor or harsh condition. Antifragility refers to systems that increase in capability, resilience, or robustness as a result of disorder (e.g., chaos, uncertainty, stress). An antifragile system is fundamentally different from one that is resilient (i.e., able to recover from failure) and robust (i.e., able to resist failure). We apply the concept of antifragility to wireless communications through several novel strategies that all involve exploiting a communications jammer. These strategies can provide an increase in throughput, efficiency, connectivity, or covertness, as a result of the jamming attack itself. Through analysis and simulation, we show that an antifragile gain is possible under a wide array of electronic warfare scenarios. Throughout this dissertation we provide guidelines for realizing these antifragile waveforms. Other major contributions of this dissertation include the development of a communications jamming taxonomy, feasibility study of reactive jamming in a SATCOM-type scenario, and a reinforcement learning-based reactive jamming mitigation strategy, for times when an antifragile approach is not practical. Most of the jammer exploitation strategies described in this dissertation fall under the category of jammer piggybacking, meaning the communications system turns the jammer into an unwitting relay. 
We study this jammer piggybacking approach under a variety of reactive jamming behaviors, with emphasis on the sense-and-transmit type. One piggybacking approach involves transmitting using a specialized FSK waveform, tailored to exploit a jammer that channelizes a block of spectrum and selectively jams active subchannels. To aid in analysis, we introduce a generalized model for reactive jamming, applicable to both repeater-based and sensing-based jamming behaviors. Despite being limited to electronic warfare scenarios, we hope that this work can pave the way for further research into antifragile communications. Ph. D.
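The jammer-piggybacking effect described above can be caricatured with a one-line link model. This is a deliberately stylized illustration, not the dissertation's analysis: it assumes a sense-and-transmit jammer rebroadcasts the sensed signal, so the receiver can treat the jammer as an unwitting relay and noncoherently combine its energy with the direct path. The `relay_gain` value is an invented illustrative parameter.

```python
import math

def capacity(snr):
    # Shannon capacity in bits/s/Hz for a unit-bandwidth AWGN link.
    return math.log2(1 + snr)

def received_snr(p_tx, jammer_on, relay_gain=2.0):
    # Toy piggybacking model: when the reactive jammer fires, it contributes
    # relay_gain x the direct-path power instead of acting as interference.
    return p_tx + (relay_gain * p_tx if jammer_on else 0.0)

c_clear = capacity(received_snr(1.0, jammer_on=False))   # 1.0 bit/s/Hz
c_jammed = capacity(received_snr(1.0, jammer_on=True))   # 2.0 bits/s/Hz
```

Throughput increasing *because* the jammer is active is exactly the antifragile gain the dissertation defines; real scenarios of course depend on waveform design and jammer behavior rather than a fixed gain.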
- Published
- 2016
50. Efficient Resource Allocation Schemes for Wireless Networks with Diverse Quality-of-Service Requirements
- Author
-
Kumar, Akshay, Electrical Engineering, Clancy, Thomas Charles III, Tandon, Ravi, Taaffe, Michael R., Reed, Jeffrey H., and Hsiao, Michael S.
- Subjects
Cross-Layer Optimization ,Dynamic Resource Allocation ,M2M Communication ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Delay-Optimal Scheduler ,Quality-of-Service ,Distributed Storage - Abstract
Quality-of-Service (QoS) to users is a critical requirement of resource allocation in wireless networks and has drawn significant research attention over a long time. However, QoS requirements differ vastly across wireless network paradigms. At one extreme, we have a millimeter-wave small-cell network for streaming data that requires very high throughput and low latency. At the other end, we have Machine-to-Machine (M2M) uplink traffic with low throughput and low latency. In this dissertation, we investigate and solve QoS-aware resource allocation problems for diverse wireless paradigms. We first study cross-layer dynamic spectrum allocation in an LTE macro-cellular network with fractional frequency reuse to improve the spectral efficiency for cell-edge users. We show that the resultant optimization problem is NP-hard and propose a low-complexity layered spectrum allocation heuristic that strikes a balance between rate maximization and fairness of allocation. Next, we develop an energy-efficient downlink power control scheme for an energy-harvesting small-cell base station equipped with a local cache and wireless backhaul, and study the tradeoff between the cache size and the energy harvesting capabilities. We next analyze file read latency in Distributed Storage Systems (DSS). We propose a heterogeneous DSS model wherein the stored data is categorized into multiple classes based on the arrival rate of read requests, fault tolerance for storage, and so on. Using a queuing-theoretic approach, we establish bounds on the average read latency for different scheduling policies. We also show that erasure coding in DSS serves the dual purpose of reducing read latency and increasing energy efficiency. Lastly, we investigate the problem of delay-efficient packet scheduling in the M2M uplink with heterogeneous traffic characteristics. We classify the uplink traffic into multiple classes and propose a proportionally fair, delay-efficient heuristic packet scheduler. 
Using a queuing theoretic approach, we next develop a delay optimal multiclass packet scheduler and later extend it to joint medium access control and packet scheduling for M2M uplink. Using extensive simulations, we show that the proposed schedulers perform better than state-of-the-art schedulers in terms of average delay and packet delay jitter. PHD
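A multiclass delay-aware scheduler of the flavor described above can be sketched as a weighted head-of-line rule: serve the class whose oldest queued packet, scaled by a delay-sensitivity weight, scores highest. This is a generic heuristic in the spirit of the abstract, not the dissertation's delay-optimal scheduler; the traffic classes, weights, and timestamps are invented.

```python
def pick_next(queues, now, weights):
    # Serve the class with the largest weighted head-of-line waiting time.
    # `queues` maps class name -> list of packet arrival times (oldest first).
    best, best_score = None, float("-inf")
    for cls, arrivals in queues.items():
        if arrivals:
            score = weights[cls] * (now - arrivals[0])
            if score > best_score:
                best, best_score = cls, score
    return best

# Two M2M traffic classes: delay-critical alarms vs. periodic meter reads.
queues = {"alarm": [0.5], "meter": [0.0, 0.1]}
weights = {"alarm": 10.0, "meter": 1.0}
served = pick_next(queues, now=1.0, weights=weights)  # "alarm"
```

The weights encode per-class delay requirements, so a young alarm packet can pre-empt an older meter read; with equal weights the rule degenerates to plain oldest-first service.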
- Published
- 2016