12 results for "Guy Katz"
Search Results
2. Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey.
- Author
- Perez-Cerrolaza, Jon, Abella, Jaume, Borg, Markus, Donzella, Carlo, Cerquides, Jesús, Cazorla, Francisco J., Englund, Cristofer, Tauber, Markus, Nikolakopoulos, George, and Flores, Jose Luis
- Subjects
- ARTIFICIAL neural networks, ARTIFICIAL intelligence, MACHINE learning, NATURAL language processing, HIGH performance computing, SOFTWARE verification, MIDDLEWARE
- Published
- 2024
- Full Text
- View/download PDF
3. Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses.
- Author
- Qayyum, Adnan, Butt, Muhammad Atif, Ali, Hassan, Usman, Muhammad, Halabi, Osama, Al-Fuqaha, Ala, Abbasi, Qammer H., Imran, Muhammad Ali, and Qadir, Junaid
- Subjects
- ARTIFICIAL intelligence, ARTIFICIAL neural networks, MACHINE learning, COMPUTER vision, INFORMATION technology, AVATARS (Virtual reality), BLOCKCHAINS
- Published
- 2024
- Full Text
- View/download PDF
4. Deep Reinforcement Learning Verification: A Survey.
- Author
- LANDERS, MATTHEW and DORYAB, AFSANEH
- Subjects
- DEEP reinforcement learning, REINFORCEMENT learning, ARTIFICIAL neural networks, DEEP learning, OPERANT conditioning
- Abstract
- Deep reinforcement learning (DRL) has proven capable of superhuman performance on many complex tasks. To achieve this success, DRL algorithms train a decision-making agent to select the actions that maximize some long-term performance measure. In many consequential real-world domains, however, optimal performance is not enough to justify an algorithm's use; for example, sometimes a system's robustness, stability, or safety must be rigorously ensured. Thus, methods for verifying DRL systems have emerged. These algorithms can guarantee a system's properties over an infinite set of inputs, but the task is not trivial. DRL relies on deep neural networks (DNNs), which are often referred to as "black boxes" because examining their structures does not elucidate their decision-making processes. Moreover, the sequential nature of the problems DRL is used to solve poses significant scalability challenges. Finally, because DRL environments are often stochastic, verification methods must account for probabilistic behavior. To address these complications, a new subfield has emerged. In this survey, we establish the foundations of DRL and DRL verification, define a taxonomy for DRL verification methods, describe approaches for dealing with stochasticity, characterize considerations related to writing specifications, enumerate common testing tasks and environments, and detail opportunities for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Taxonomy of Machine Learning Safety: A Survey and Primer.
- Author
- SINA MOHSENI, HAOTAO WANG, CHAOWEI XIAO, ZHIDING YU, ZHANGYANG WANG, and JAY YADAWA
- Subjects
- SURGERY safety measures, INDUSTRIAL safety, TAXONOMY, AUTONOMOUS vehicles, SAFETY
- Abstract
- The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities and limitations, such as interpretability, verifiability, and performance. Research explores different approaches to improve ML dependability by proposing new models and training techniques to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks. However, there is a missing connection between ongoing ML research and well-established safety principles. In this article, we present a structured and comprehensive review of ML techniques to improve the dependability of ML algorithms in uncontrolled open-world settings. From this review, we propose the Taxonomy of ML Safety, which maps state-of-the-art ML techniques to key engineering safety strategies. Our taxonomy presents a safety-oriented categorization of ML techniques to provide guidance for improving the dependability of ML design and development. The proposed taxonomy can serve as a safety checklist to aid designers in improving the coverage and diversity of safety strategies employed in any given ML system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. A Survey of Algorithmic Recourse: Contrastive Explanations and Consequential Recommendations.
- Author
- KARIMI, AMIR-HOSSEIN, BARTHE, GILLES, SCHÖLKOPF, BERNHARD, and VALERA, ISABEL
- Subjects
- MACHINE learning, EXPLANATION, DECISION making, LITERATURE reviews, FAIRNESS
- Abstract
- Machine learning is increasingly used to inform decision making in sensitive situations where decisions have consequential effects on individuals’ lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of said technologies. In this work, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavorably treated by automated decision-making systems. We first perform an extensive literature review, and align the efforts of many authors by presenting unified definitions, formulations, and solutions to recourse. Then, we provide an overview of prospective research directions that the community may pursue, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Machine Learning for Computer Systems and Networking: A Survey.
- Author
- KANAKIS, MARIOS EVANGELOS, KHALILI, RAMIN, and WANG, LIN
- Subjects
- COMPUTER networks, COMPUTER systems, COMPUTER vision, NETWORK PC (Computer), NATURAL language processing, MACHINE learning
- Abstract
- Machine learning (ML) has become the de facto approach for various scientific domains such as computer vision and natural language processing. Despite recent breakthroughs, however, ML has only recently made its way into the fundamental challenges of computer systems and networking. This article sheds light on recent literature that calls for machine learning-based solutions to traditional problems in computer systems and networking. To this end, we first introduce a taxonomy based on a set of major research problem domains. Then, we present a comprehensive review per domain, where we compare the traditional approaches against the machine learning-based ones. Finally, we discuss the general limitations of machine learning for computer systems and networking, including lack of training data, training overhead, real-time performance, and explainability, and reveal future research directions targeting these limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Adversarial Machine Learning in Image Classification: A Survey Toward the Defender’s Perspective.
- Author
- RESENDE MACHADO, GABRIEL, SILVA, EUGÊNIO, and GOLDSCHMIDT, RONALDO RIBEIRO
- Subjects
- ARTIFICIAL neural networks, COMPUTER vision, DEEP learning, CLASSIFICATION, MACHINE learning, DRIVERLESS cars, MATHEMATICAL optimization
- Abstract
- Deep Learning algorithms have achieved state-of-the-art performance for Image Classification. For this reason, they have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that those algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed recently in the literature. However, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have demonstrated to be ineffective against adaptive attackers. Thus, this article aims to provide all readers with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from the defender’s perspective. It introduces novel taxonomies for categorizing adversarial attacks and defenses, and discusses possible reasons for the existence of adversarial examples. In addition, relevant guidance is provided to assist researchers when devising and evaluating defenses. Finally, based on the reviewed literature, the article suggests some promising paths for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges.
- Author
- ASHMORE, ROB, CALINESCU, RADU, and PATERSON, COLIN
- Subjects
- MACHINE learning, POLITICAL agenda, ACQUISITION of data
- Abstract
- Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic, and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence, and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our article provides a comprehensive survey of the state of the art in the assurance of ML, i.e., in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e., of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The article begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain.
- Author
- ROSENBERG, ISHAI, SHABTAI, ASAF, ELOVICI, YUVAL, and ROKACH, LIOR
- Subjects
- INTERNET security, MACHINE learning, DEEP learning, CYBERTERRORISM, INSTRUCTIONAL systems
- Abstract
- In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence and the attacker's goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Adversarial Machine Learning in Image Classification: A Survey Toward the Defender’s Perspective.
- Author
- RESENDE MACHADO, GABRIEL, SILVA, EUGÊNIO, and GOLDSCHMIDT, RONALDO RIBEIRO
- Abstract
- Deep Learning algorithms have achieved state-of-the-art performance for Image Classification. For this reason, they have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that those algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed recently in the literature. However, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have demonstrated to be ineffective against adaptive attackers. Thus, this article aims to provide all readers with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from the defender’s perspective. It introduces novel taxonomies for categorizing adversarial attacks and defenses, and discusses possible reasons for the existence of adversarial examples. In addition, relevant guidance is provided to assist researchers when devising and evaluating defenses. Finally, based on the reviewed literature, the article suggests some promising paths for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Adversarial Examples on Object Recognition: A Comprehensive Survey.
- Author
- SERBAN, ALEX, POLL, ERIK, and VISSER, JOOST
- Abstract
- Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: small perturbations of inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network's sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature conjectures about why they exist and how this phenomenon can be mitigated. In this article, we discuss the impact of adversarial examples on the security, safety, and robustness of neural networks. We start by introducing the hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning models. Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF