43 results for "Guy Katz"
Search Results
2. Reluplex: a calculus for reasoning about deep neural networks
- Author
-
Clark Barrett, Kyle D. Julian, David L. Dill, Guy Katz, and Mykel J. Kochenderfer
- Subjects
Artificial neural network, Computer science, Activation function, Theoretical Computer Science, Verification procedure, Airborne collision avoidance system, Simplex algorithm, Hardware and Architecture, Scalability, Deep neural networks, Artificial intelligence, Software
- Abstract
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks that could be verified previously.
- Published
- 2021
- Full Text
- View/download PDF
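The key technical move described in the abstract above, extending the simplex method to the non-convex ReLU activation function, rests on ReLU being piecewise linear. As an illustrative sketch (not the authors' Reluplex implementation), each ReLU constraint splits into two linear cases that a simplex-style solver can branch on and solve independently:

```python
def relu(x):
    """Rectified Linear Unit: the non-convex activation the paper handles."""
    return max(0.0, x)

# A constraint y = relu(x) is non-convex, but it decomposes into two
# linear cases, which is what lets a simplex-style procedure handle it:
#   active:   x >= 0 and y == x
#   inactive: x <= 0 and y == 0
def relu_cases(x_lo, x_hi):
    """Enumerate the linear cases consistent with input bounds [x_lo, x_hi].
    When both cases are reachable, a solver must case-split (branch)."""
    cases = []
    if x_hi >= 0:
        cases.append(("active", max(x_lo, 0.0), x_hi))    # y == x on this range
    if x_lo <= 0:
        cases.append(("inactive", x_lo, min(x_hi, 0.0)))  # y == 0 on this range
    return cases

print(relu_cases(-1.0, 2.0))  # two cases reachable -> branching required
print(relu_cases(1.0, 2.0))   # ReLU provably active -> purely linear
```

The point of the paper's contribution is precisely to avoid eagerly enumerating all such case splits, which grow exponentially with the number of ReLU nodes.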
3. Global optimization of objective functions represented by ReLU networks
- Author
-
Haoze Wu, Christopher A. Strong, Guy Katz, Kyle D. Julian, Mykel J. Kochenderfer, Aleksandar Zeljić, and Clark Barrett
- Subjects
Computer Science - Machine Learning (cs.LG), Theoretical computer science, Artificial neural network, Computer science, Verification procedure, Domain (software engineering), Mathematics - Optimization and Control (math.OC), Artificial Intelligence, Global optimization, Software
- Abstract
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts. Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures. Verification algorithms address this need and provide formal guarantees about a neural network by answering "yes or no" questions. For example, they can answer whether a violation exists within certain bounds. However, individual "yes or no" questions cannot answer qualitative questions such as "what is the largest error within these bounds"; the answers to these lie in the domain of optimization. Therefore, we propose strategies to extend existing verifiers to perform optimization and find: (i) the most extreme failure in a given input region and (ii) the minimum input perturbation required to cause a failure. A naive approach using a bisection search with an off-the-shelf verifier results in many expensive and overlapping calls to the verifier. Instead, we propose an approach that tightly integrates the optimization process into the verification procedure, achieving better runtime performance than the naive approach. We evaluate our approach implemented as an extension of Marabou, a state-of-the-art neural network verifier, and compare its performance with the bisection approach and MIPVerify, an optimization-based verifier. We observe complementary performance between our extension of Marabou and MIPVerify.
- Published
- 2021
- Full Text
- View/download PDF
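The "naive approach" this abstract contrasts against can be sketched in a few lines. Here `can_exceed` stands in for a hypothetical verifier oracle answering a single yes/no query ("can the network output exceed t?"); the names are illustrative, not Marabou's actual API:

```python
def bisect_max(can_exceed, lo, hi, tol=1e-6):
    """Approximate the maximum network output on a region via repeated
    yes/no verifier queries (bisection search)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if can_exceed(mid):   # each call is a full, expensive verification query
            lo = mid
        else:
            hi = mid
    return lo

# toy stand-in for a verifier: pretend the true maximum output is 0.75
oracle = lambda t: t < 0.75
print(round(bisect_max(oracle, 0.0, 1.0), 3))
```

Each iteration re-explores much of the same search space, which is exactly the overlap the paper's integrated approach avoids.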
4. Verifying learning-augmented systems
- Author
-
Yafim Kazak, Michael Schapira, Tomer Eliyahu, and Guy Katz
- Subjects
Model checking, Job scheduler, Artificial neural network, Computer science, Distributed computing, Deep learning, Network congestion, Scalability, Reinforcement learning, Artificial intelligence, Formal verification
- Abstract
The application of deep reinforcement learning (DRL) to computer and networked systems has recently gained significant popularity. However, the obscurity of decisions by DRL policies renders it hard to ascertain that learning-augmented systems are safe to deploy, posing a significant obstacle to their real-world adoption. We observe that specific characteristics of recent applications of DRL to systems contexts give rise to an exciting opportunity: applying formal verification to establish that a given system provably satisfies designer/user-specified requirements, or to expose concrete counter-examples. We present whiRL, a platform for verifying DRL policies for systems, which combines recent advances in the verification of deep neural networks with scalable model checking techniques. To exemplify its usefulness, we employ whiRL to verify natural requirements from recently introduced learning-augmented systems for three real-world environments: Internet congestion control, adaptive video streaming, and job scheduling in compute clusters. Our evaluation shows that whiRL is capable of guaranteeing that natural requirements from these systems are satisfied, and of exposing specific scenarios in which other basic requirements are not.
- Published
- 2021
- Full Text
- View/download PDF
5. AI Verification : First International Symposium, SAIV 2024, Montreal, QC, Canada, July 22–23, 2024, Proceedings
- Author
-
Guy Avni, Mirco Giacobbe, Taylor T. Johnson, Guy Katz, Anna Lukina, Nina Narodytska, and Christian Schilling
- Subjects
- Artificial intelligence
- Abstract
This LNCS volume constitutes the proceedings of the First International Symposium on AI Verification, SAIV 2024, held in Montreal, QC, Canada, during July 22–23, 2024. The scope of the topics was broadly categorized into two groups. The first group, formal methods for artificial intelligence, comprised: formal specifications for systems with AI components; formal methods for analyzing systems with AI components; formal synthesis methods of AI components; testing approaches for systems with AI components; statistical approaches for analyzing systems with AI components; and approaches for enhancing the explainability of systems with AI components. The second group, artificial intelligence for formal methods, comprised: AI methods for formal verification; AI methods for formal synthesis; AI methods for safe control; and AI methods for falsification.
- Published
- 2024
6. Towards combining deep learning, verification, and scenario-based programming
- Author
-
Achiya Elyasaf and Guy Katz
- Subjects
Software, Artificial neural network, Computer science, Deep learning, Scenario-based programming, Artificial intelligence, Drone, Domain (software engineering)
- Abstract
Deep learning (DL) [4] is dramatically changing the world of software. The rapid improvement in deep neural network (DNN) technology now enables engineers to train models that achieve superhuman results, often surpassing algorithms that have been carefully hand-crafted by domain experts [19, 20]. There is even an intensifying trend of incorporating DNNs in safety-critical systems, e.g. as controllers for autonomous vehicles and drones [1, 12].
- Published
- 2021
- Full Text
- View/download PDF
7. An SMT-Based Approach for Verifying Binarized Neural Networks
- Author
-
Clark Barrett, Guy Amir, Haoze Wu, and Guy Katz
- Subjects
Artificial neural network, Computer science, Deep learning, Machine learning, Software system, Artificial intelligence, Formal verification, Efficient energy use
- Abstract
Deep learning has emerged as an effective approach for creating modern software systems, with neural networks often surpassing hand-crafted systems. Unfortunately, neural networks are known to suffer from various safety and security issues. Formal verification is a promising avenue for tackling this difficulty, by formally certifying that networks are correct. We propose an SMT-based technique for verifying binarized neural networks, a popular kind of neural network where some weights have been binarized in order to render the neural network more memory- and energy-efficient, and quicker to evaluate. One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components. Neural network verification is computationally very difficult, and so we propose here various optimizations, integrated into our SMT procedure as deduction steps, as well as an approach for parallelizing verification queries. We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
- Published
- 2021
- Full Text
- View/download PDF
8. Augmenting Deep Neural Networks with Scenario-Based Guard Rules
- Author
-
Guy Katz
- Subjects
Guard (information security), Scenario-based modeling, Recurrent neural network, Computer science, Deep learning, Deep neural networks, Artificial intelligence
- Abstract
Deep neural networks (DNNs) are becoming widespread, and can often outperform manually-created systems. However, these networks are typically opaque to humans, and may demonstrate undesirable behavior in corner cases that were not encountered previously. In order to mitigate this risk, one approach calls for augmenting DNNs with hand-crafted override rules. These override rules serve to prevent the DNN from making certain decisions, when certain criteria are met. Here, we build on this approach and propose to bring together DNNs and the well-studied scenario-based modeling paradigm, by encoding override rules as simple and intuitive scenarios. We demonstrate that the scenario-based paradigm can render override rules more comprehensible to humans, while keeping them sufficiently powerful and expressive to increase the overall safety of the model. We propose a method for applying scenario-based modeling to this new setting, and apply it to multiple DNN models. (This paper substantially extends the paper titled “Guarded Deep Learning using Scenario-Based Modeling”, published in Modelsward 2020 [47]. Most notably, it includes an additional case study, extends the approach to recurrent neural networks, and discusses various aspects of the proposed paradigm more thoroughly).
- Published
- 2021
- Full Text
- View/download PDF
9. Minimal Modifications of Deep Neural Networks using Verification
- Author
-
Ben Goldberger, Joseph Keshet, Guy Katz, and Yossi Adi
- Subjects
Computer science, Deep neural networks, Artificial intelligence
- Abstract
Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior, while leaving other aspects of its behavior unchanged (e.g., if a bug is discovered and needs to be fixed, without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN's behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
- Published
- 2020
- Full Text
- View/download PDF
10. Verifying Recurrent Neural Networks Using Invariant Inference
- Author
-
Yuval Jacoby, Guy Katz, and Clark Barrett
- Subjects
Correctness, Artificial neural network, Computer science, Reliability (computer networking), Complex system, Inference, Recurrent neural network, Artificial intelligence, State (computer science)
- Abstract
Deep neural networks are revolutionizing the way complex systems are developed. However, these automatically-generated networks are opaque to humans, making it difficult to reason about them and guarantee their correctness. Here, we propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks. Recurrent neural networks play a key role in, e.g., speech recognition, and their verification is crucial for guaranteeing the reliability of many critical systems. Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems. Experiments with a proof-of-concept implementation of our approach demonstrate that it performs orders-of-magnitude better than the state of the art.
- Published
- 2020
- Full Text
- View/download PDF
11. Guarded Deep Learning using Scenario-Based Modeling
- Author
-
Guy Katz
- Subjects
Scenario-based modeling, Computer science, Deep learning, Machine learning, Computer Science - Software Engineering (cs.SE), Software deployment, Deep neural networks, Artificial intelligence
- Abstract
Deep neural networks (DNNs) are becoming prevalent, often outperforming manually-created systems. Unfortunately, DNN models are opaque to humans, and may behave in unexpected ways when deployed. One approach for allowing safer deployment of DNN models calls for augmenting them with hand-crafted override rules, which serve to override decisions made by the DNN model when certain criteria are met. Here, we propose to bring together DNNs and the well-studied scenario-based modeling paradigm, by expressing these override rules as simple and intuitive scenarios. This approach can lead to override rules that are comprehensible to humans, but are also sufficiently expressive and powerful to increase the overall safety of the model. We describe how to extend and apply scenario-based modeling to this new setting, and demonstrate our proposed technique on multiple DNN models. (This is a preprint version of the paper that appeared at Modelsward 2020.)
- Published
- 2020
- Full Text
- View/download PDF
12. Simplifying Neural Networks Using Formal Verification
- Author
-
Clark Barrett, Adi Malca, Sumathi Gokulanathan, Alexander Feldsher, and Guy Katz
- Subjects
Artificial neural network, Computer science, Machine learning, Deep neural networks, Artificial intelligence, Formal verification
- Abstract
Deep neural network (DNN) verification is an emerging field, with diverse verification engines quickly becoming available. Demonstrating the effectiveness of these engines on real-world DNNs is an important step towards their wider adoption. We present a tool that can leverage existing verification engines in performing a novel application: neural network simplification, through the reduction of the size of a DNN without harming its accuracy. We report on the workflow of the simplification process, and demonstrate its potential significance and applicability on a family of real-world DNNs for aircraft collision avoidance, whose sizes we were able to reduce by as much as 10%.
- Published
- 2020
- Full Text
- View/download PDF
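One concrete way verification-style reasoning can shrink a network, sketched here as a hedged illustration rather than the tool's actual algorithm: use interval bounds to find ReLU neurons that are provably inactive over the entire input range, since such neurons can be pruned without changing the network's outputs. For simplicity, every input is assumed to share the same bounds `[x_lo, x_hi]`:

```python
def dead_relu_neurons(weights, biases, x_lo, x_hi):
    """Return indices of ReLU neurons whose pre-activation upper bound is <= 0
    for all inputs in [x_lo, x_hi]^n; these neurons always output 0 and can
    be removed from the network."""
    dead = []
    for i, (w_row, b) in enumerate(zip(weights, biases)):
        # interval upper bound of w_row . x + b over the input box
        hi = b + sum(w * (x_hi if w > 0 else x_lo) for w in w_row)
        if hi <= 0:
            dead.append(i)
    return dead

# neuron 1's pre-activation is at most -0.5 on [0, 1]^2, so it is always off
W = [[1.0, 1.0], [-1.0, -1.0]]
b = [0.0, -0.5]
print(dead_relu_neurons(W, b, 0.0, 1.0))  # [1]
```

A verification engine can certify such facts with tighter, sound bounds than this naive interval pass; the sketch only conveys why size reduction need not harm accuracy.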
13. Verifying Deep-RL-Driven Systems
- Author
-
Yafim Kazak, Michael Schapira, Clark Barrett, and Guy Katz
- Subjects
Correctness, Artificial neural network, Computer science, Distributed computing, Deep learning, Network congestion, Reinforcement learning, Resource management, Artificial intelligence, Routing, Formal verification
- Abstract
Deep reinforcement learning (RL) has recently been successfully applied to networking contexts including routing, flow scheduling, congestion control, packet classification, cloud resource management, and video streaming. Deep-RL-driven systems automate decision making, and have been shown to outperform state-of-the-art handcrafted systems in important domains. However, the (typical) non-explainability of decisions induced by the deep learning machinery employed by these systems renders reasoning about crucial system properties, including correctness and security, extremely difficult. We show that despite the obscurity of decision making in these contexts, verifying that deep-RL-driven systems adhere to desired, designer-specified behavior, is achievable. To this end, we initiate the study of formal verification of deep RL and present Verily, a system for verifying deep-RL-based systems that leverages recent advances in verification of deep neural networks. We employ Verily to verify recently-introduced deep-RL-driven systems for adaptive video streaming, cloud resource management, and Internet congestion control. Our results expose scenarios in which deep-RL-driven decision making yields undesirable behavior. We discuss guidelines for building deep-RL-driven systems that are both safer and easier to verify.
- Published
- 2019
- Full Text
- View/download PDF
14. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks
- Author
-
Divya Gopinath, Clark Barrett, Corina S. Păsăreanu, and Guy Katz
- Subjects
Artificial neural network, Computer science, Machine learning, Data-driven, Airborne collision avoidance system, Robustness (computer science), Control theory, Artificial intelligence, Cluster analysis, MNIST database
- Abstract
Deep neural networks have achieved impressive results in many complex applications, including classification tasks for image and speech recognition, pattern analysis, and perception in self-driving vehicles. However, it has been observed that even highly trained networks are very vulnerable to adversarial perturbations. Adding minimal changes to inputs that are correctly classified can lead to wrong predictions, raising serious security and safety concerns. Existing techniques for checking robustness against such perturbations only consider searching locally around a few individual inputs, providing limited guarantees. We propose DeepSafe, a novel approach for automatically assessing the overall robustness of a neural network. DeepSafe applies clustering over known labeled data and leverages off-the-shelf constraint solvers to automatically identify and check safe regions in which the network is robust, i.e., all the inputs in the region are guaranteed to be classified correctly. We also introduce the concept of targeted robustness, which ensures that the neural network is guaranteed not to misclassify inputs within a region to a specific target (adversarial) label. We evaluate DeepSafe on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu) and on the well-known MNIST network. For these networks, DeepSafe identified many regions which were safe, and also found adversarial perturbations of interest.
- Published
- 2018
- Full Text
- View/download PDF
15. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
- Author
-
Guy Katz, Mykel J. Kochenderfer, Kyle D. Julian, David L. Dill, and Clark Barrett
- Subjects
Artificial neural network, Computer science, Activation function, Airborne collision avoidance system, Simplex algorithm, Computer engineering, Satisfiability modulo theories, Scalability, Deep neural networks, Artificial intelligence
- Abstract
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
- Published
- 2017
- Full Text
- View/download PDF
16. Machine Learning in Metaverse Security: Current Solutions and Future Challenges.
- Author
-
Yazan Otoum, Navya Gottimukkala, Neeraj Kumar, and Amiya Nayak
- Subjects
Artificial intelligence, Artificial neural networks, Machine learning, Generative artificial intelligence, Cognitive robotics, Intrusion detection systems (computer security), Avatars (virtual reality)
- Published
- 2024
- Full Text
- View/download PDF
17. Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey.
- Author
-
Jon Perez-Cerrolaza, Jaume Abella, Markus Borg, Carlo Donzella, Jesús Cerquides, Francisco J. Cazorla, Cristofer Englund, Markus Tauber, George Nikolakopoulos, and Jose Luis Flores
- Subjects
Artificial neural networks, Artificial intelligence, Machine learning, Natural language processing, High performance computing, Software verification, Middleware
- Published
- 2024
- Full Text
- View/download PDF
18. Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses.
- Author
-
Adnan Qayyum, Muhammad Atif Butt, Hassan Ali, Muhammad Usman, Osama Halabi, Ala Al-Fuqaha, Qammer H. Abbasi, Muhammad Ali Imran, and Junaid Qadir
- Subjects
Artificial intelligence, Artificial neural networks, Machine learning, Computer vision, Information technology, Avatars (virtual reality), Blockchains
- Published
- 2024
- Full Text
- View/download PDF
19. Risk of Stochastic Systems for Temporal Logic Specifications.
- Author
-
Lars Lindemann, Lejun Jiang, Nikolai Matni, and George J. Pappas
- Subjects
Stochastic systems, System failures, Logic, Stochastic processes, Artificial intelligence, Discrete-time filters
- Abstract
The wide availability of data coupled with the computational advances in artificial intelligence and machine learning promise to enable many future technologies such as autonomous driving. While there has been a variety of successful demonstrations of these technologies, critical system failures have repeatedly been reported. Even if rare, such system failures pose a serious barrier to adoption without a rigorous risk assessment. This article presents a framework for the systematic and rigorous risk verification of systems. We consider a wide range of system specifications formulated in signal temporal logic (STL) and model the system as a stochastic process, permitting discrete-time and continuous-time stochastic processes. We then define the STL robustness risk as the risk of lacking robustness against failure. This definition is motivated as system failures are often caused by missing robustness to modeling errors, system disturbances, and distribution shifts in the underlying data generating process. Within the definition, we permit general classes of risk measures and focus on tail risk measures such as the value-at-risk and the conditional value-at-risk. While the STL robustness risk is in general hard to compute, we propose the approximate STL robustness risk as a more tractable notion that upper bounds the STL robustness risk. We show how the approximate STL robustness risk can accurately be estimated from system trajectory data. For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly. We illustrate our verification algorithm in the autonomous driving simulator CARLA and show how a least risky controller can be selected among four neural network lane-keeping controllers for five meaningful system specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. RobOT: Robustness-Oriented Testing for Deep Learning Systems.
- Author
-
Jingyi Wang, Jialuo Chen, Youcheng Sun, and Xingjun Ma
- Subjects
Deep learning, Artificial intelligence, Software engineering, Robust control, Quality assurance
- Abstract
Recently, there has been significant growth of interest in applying software engineering techniques to the quality assurance of deep learning (DL) systems. One popular direction is deep learning testing, where adversarial examples (a.k.a. bugs) of DL systems are found either by fuzzing or by guided search with the help of certain testing metrics. However, recent studies have revealed that the neuron coverage metrics commonly used by existing DL testing approaches are not correlated with model robustness, nor are they an effective measure of confidence in model robustness after testing. In this work, we address this gap by proposing a novel testing framework called Robustness-Oriented Testing (RobOT). A key part of RobOT is a quantitative measurement of 1) the value of each test case in improving model robustness (often via retraining), and 2) the convergence quality of the model robustness improvement. RobOT utilizes the proposed metric to automatically generate test cases valuable for improving model robustness. The proposed metric is also a strong indicator of how well robustness improvement has converged through testing. Experiments on multiple benchmark datasets confirm the effectiveness and efficiency of RobOT in improving DL model robustness, with a 67.02% increase in adversarial robustness, 50.65% higher than the state-of-the-art work DeepGini.
- Published
- 2021
- Full Text
- View/download PDF
21. ReluDiff: Differential Verification of Deep Neural Networks.
- Author
-
Brandon Paulsen, Jingbo Wang, and Chao Wang
- Subjects
Software engineering, Computer software development, Artificial neural networks, Energy consumption, Artificial intelligence
- Abstract
As deep neural networks are increasingly being deployed in practice, their efficiency has become an important issue. While there are compression techniques for reducing a network's size, energy consumption, and computational requirements, they only demonstrate empirically that there is no loss of accuracy, and lack formal guarantees about the compressed network, e.g., in the presence of adversarial examples. Existing verification techniques such as Reluplex, ReluVal, and DeepPoly provide formal guarantees, but they are designed for analyzing a single network rather than the relationship between two networks. To fill the gap, we develop a new method for differential verification of two closely related networks. Our method consists of a fast but approximate forward interval analysis pass followed by a backward pass that iteratively refines the approximation until the desired property is verified. We have two main innovations. During the forward pass, we exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks. Then, in the backward pass, we leverage the gradient differences to more accurately compute the most beneficial refinement. Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup and prove many more properties than existing tools.
- Published
- 2020
- Full Text
- View/download PDF
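The abstract's central point, that bounding the *difference* between two closely related networks directly is much tighter than bounding each network separately and subtracting, can be seen already for a single weight. This is an illustrative sketch with toy one-weight "networks", not ReluDiff's interval analysis:

```python
def interval_mul(w, lo, hi):
    """Interval bound of w * x for x in [lo, hi]."""
    a, b = w * lo, w * hi
    return (min(a, b), max(a, b))

def naive_diff(w1, w2, lo, hi):
    """Bound each network separately, then subtract the intervals: loose."""
    l1, h1 = interval_mul(w1, lo, hi)
    l2, h2 = interval_mul(w2, lo, hi)
    return (l1 - h2, h1 - l2)

def direct_diff(w1, w2, lo, hi):
    """Bound the difference (w1 - w2) * x directly: exploits similarity."""
    return interval_mul(w1 - w2, lo, hi)

print(naive_diff(1.0, 0.5, -1.0, 1.0))   # (-1.5, 1.5)
print(direct_diff(1.0, 0.5, -1.0, 1.0))  # (-0.5, 0.5)
```

When the two networks are nearly identical (e.g., original vs. compressed), the direct difference interval shrinks toward zero while the naive one does not, which is the structural similarity the paper's forward pass exploits.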
22. Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing.
- Author
-
Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, and Peixin Zhang
- Subjects
Artificial neural networks, Statistical hypothesis testing, Artificial intelligence, Computer science, Software engineering
- Abstract
Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples: by transforming a normal sample with some carefully crafted, human-imperceptible perturbations, even highly accurate DNNs can be made to produce wrong decisions. Multiple defense mechanisms have been proposed that aim to hinder the generation of such adversarial samples. However, a recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples if we impose random mutations on the DNN. We thus first propose a measure of 'sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity. We then integrate statistical hypothesis testing and model mutation testing to check whether an input sample is likely to be normal or adversarial at runtime by measuring its sensitivity. We evaluated our approach on the MNIST and CIFAR10 datasets. The results show that our approach detects adversarial samples generated by state-of-the-art attacking methods efficiently and accurately.
- Published
- 2019
- Full Text
- View/download PDF
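The sensitivity measure at the heart of this abstract can be sketched with a toy 1-D classifier; the real work mutates trained DNNs and adds statistical hypothesis testing, so everything below is an illustrative assumption:

```python
def sensitivity(base_model, mutants, x):
    """Fraction of mutated models whose label on x differs from the base
    model's label. Adversarial inputs tend to score higher than normal ones."""
    base = base_model(x)
    flips = sum(1 for m in mutants if m(x) != base)
    return flips / len(mutants)

# toy classifier: label = (x > threshold); "mutation" jitters the threshold
base = lambda x: x > 0.5
mutants = [lambda x, t=t: x > t for t in (0.45, 0.48, 0.52, 0.55)]

print(sensitivity(base, mutants, 0.9))   # far from the decision boundary: 0.0
print(sensitivity(base, mutants, 0.51))  # near the boundary (adversarial-like): 0.5
```

An input sitting near the decision boundary, as adversarial samples typically do, flips under small model mutations far more often than a clearly classified input, which is the signal the runtime detector thresholds on.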
23. Formal Methods for Industrial Critical Systems : 29th International Conference, FMICS 2024, Milan, Italy, September 9–11, 2024, Proceedings
- Author
-
Anne E. Haxthausen and Wendelin Serwe
- Subjects
- Compilers (Computer programs), Software engineering, Application software, Artificial intelligence, Computer science, Computer engineering, Computer networks
- Abstract
This book constitutes the proceedings of the 29th International Conference on Formal Methods for Industrial Critical Systems, FMICS 2024, held in Milan, Italy, during September 9–13, 2024. The 14 full papers included in this book were carefully reviewed and selected from 22 submissions. These papers have been organized in the following topical sections: Real-Time Systems/Robotics; Semantics and Verification; Case Studies; Neural Networks.
- Published
- 2024
24. Model-Driven Engineering and Software Development : 11th International Conference, MODELSWARD 2023, Lisbon, Portugal, February 19–21, 2023, Revised Selected Papers
- Author
-
Francisco José Domínguez Mayo, Luís Ferreira Pires, and Edwin Seidewitz
- Subjects
- Software engineering, Computer systems, Artificial intelligence
- Abstract
This book constitutes the refereed post-proceedings of the 11th International Conference on Model-Driven Engineering and Software Development, MODELSWARD 2023, which took place in Lisbon, Portugal, during February 19–21, 2023. The 8 full papers included in this book were carefully reviewed and selected from 41 submissions. The papers are categorized under the following topical sections: Applications and System Development; and Modeling Languages, Tools and Architectures.
- Published
- 2024
25. Bridging the Gap Between AI and Reality : First International Conference, AISoLA 2023, Crete, Greece, October 23–28, 2023, Proceedings
- Author
-
Bernhard Steffen
- Subjects
- Computer science, Software engineering, Computers, Special purpose, Computer systems, Artificial intelligence
- Abstract
This book constitutes the proceedings of the First International Conference on Bridging the Gap between AI and Reality, AISoLA 2023, which took place in Crete, Greece, in October 2023. The papers included in this book focus on the following topics: The nature of AI-based systems; ethical, economic and legal implications of AI-systems in practice; ways to make controlled use of AI via the various kinds of formal methods-based validation techniques; dedicated applications scenarios which may allow certain levels of assistance; and education in times of deep learning.
- Published
- 2023
26. Computer Aided Verification : 35th International Conference, CAV 2023, Paris, France, July 17–22, 2023, Proceedings, Part I
- Author
-
Constantin Enea and Akash Lal
- Subjects
- Artificial intelligence, Computer software--Verification--Congresses
- Abstract
The open access proceedings set LNCS 13964, 13965, and 13966 constitutes the refereed proceedings of the 35th International Conference on Computer Aided Verification, CAV 2023, which was held in Paris, France, in July 2023. The 67 full papers presented in these proceedings were carefully reviewed and selected from 261 submissions. They have been organized in topical sections as follows: Part I: Automata and logic; concurrency; cyber-physical and hybrid systems; synthesis; Part II: Decision procedures; model checking; neural networks and machine learning; Part III: Probabilistic systems; security and quantum systems; software verification.
- Published
- 2023
27. Model-Driven Engineering and Software Development : 9th International Conference, MODELSWARD 2021, Virtual Event, February 8–10, 2021, and 10th International Conference, MODELSWARD 2022, Virtual Event, February 6–8, 2022, Revised Selected Papers
- Author
-
Luís Ferreira Pires, Slimane Hammoudi, and Edwin Seidewitz
- Subjects
- Software engineering, Computer systems, Computers, Special purpose, Programming languages (Electronic computers), Computer programming, Artificial intelligence
- Abstract
This book constitutes the refereed post-proceedings of the 9th and 10th International Conferences on Model-Driven Engineering and Software Development, MODELSWARD 2021 and MODELSWARD 2022, which were held virtually due to the COVID-19 crisis on February 8–10, 2021 and February 6–8, 2022. The 11 full papers included in this book were carefully reviewed and selected from 121 submissions. The purpose of the International Conference on Model-Driven Engineering and Software Development is to provide a platform for researchers, engineers, and academics, as well as industrial professionals from all over the world, to present their research results and development activities in using models and model-driven engineering techniques for system development.
- Published
- 2023
28. Automated Technology for Verification and Analysis : 20th International Symposium, ATVA 2022, Virtual Event, October 25–28, 2022, Proceedings
- Author
-
Ahmed Bouajjani, Lukáš Holík, and Zhilin Wu
- Subjects
- Software engineering, Computer engineering, Computer networks, Computers, Computer science, Artificial intelligence
- Abstract
This book constitutes the refereed proceedings of the 20th International Symposium on Automated Technology for Verification and Analysis, ATVA 2022, held as a virtual event in October 2022. The symposium is dedicated to promoting research in theoretical and practical aspects of automated analysis, verification and synthesis by providing an international venue for researchers to present new results. The 21 regular papers presented together with 5 tool papers and 1 invited paper were carefully reviewed and selected from 81 submissions. The papers are divided into the following topical sub-headings: reinforcement learning; program analysis and verification; SMT and verification; automata and applications; active learning; probabilistic and stochastic systems; synthesis and repair; and verification of neural networks.
- Published
- 2022
29. Computer Aided Verification : 33rd International Conference, CAV 2021, Virtual Event, July 20–23, 2021, Proceedings, Part II
- Author
-
Alexandra Silva and K. Rustan M. Leino
- Subjects
- Artificial intelligence, Computer software--Verification--Congresses
- Abstract
This open access two-volume set LNCS 12759 and 12760 constitutes the refereed proceedings of the 33rd International Conference on Computer Aided Verification, CAV 2021, held virtually in July 2021. The 63 full papers presented together with 16 tool papers and 5 invited papers were carefully reviewed and selected from 290 submissions. The papers were organized in the following topical sections: Part I: invited papers; AI verification; concurrency and blockchain; hybrid and cyber-physical systems; security; and synthesis. Part II: complexity and termination; decision procedures and solvers; hardware and model checking; logical foundations; and software verification.
- Published
- 2021
30. Automated Technology for Verification and Analysis : 19th International Symposium, ATVA 2021, Gold Coast, QLD, Australia, October 18–22, 2021, Proceedings
- Author
-
Zhe Hou and Vijay Ganesh
- Subjects
- Software engineering, Artificial intelligence, Computers, Computer engineering, Computer networks
- Abstract
This book constitutes the refereed proceedings of the 19th International Symposium on Automated Technology for Verification and Analysis, ATVA 2021, held in Gold Coast, Australia in October 2021. The symposium is dedicated to promoting research in theoretical and practical aspects of automated analysis, verification and synthesis by providing an international venue for the researchers to present new results. The 19 regular papers presented together with 4 tool papers and 1 invited paper were carefully reviewed and selected from 75 submissions. The papers are divided into the following topical sub-headings: Automata Theory; Machine learning for Formal Methods; Theorem Proving and Tools; Model Checking; Probabilistic Analysis; Software and Hardware Verification; System Synthesis and Approximation; and Verification of Machine Learning.
- Published
- 2021
31. Integration of Constraint Programming, Artificial Intelligence, and Operations Research : 18th International Conference, CPAIOR 2021, Vienna, Austria, July 5–8, 2021, Proceedings
- Author
-
Peter J. Stuckey
- Subjects
- Computer science—Mathematics, Artificial intelligence, Computer engineering, Computer networks, Computer science, Software engineering
- Abstract
This volume, LNCS 12735, constitutes the proceedings of the 18th International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, CPAIOR 2021, which was held in Vienna, Austria, in 2021. Due to the COVID-19 pandemic, the conference was held online. The 30 regular papers presented were carefully reviewed and selected from a total of 75 submissions. The conference program included a Master Class on the topic 'Explanation and Verification of Machine Learning Models'.
- Published
- 2021
32. Model-Driven Engineering and Software Development : 8th International Conference, MODELSWARD 2020, Valletta, Malta, February 25–27, 2020, Revised Selected Papers
- Author
-
Slimane Hammoudi, Luís Ferreira Pires, and Bran Selić
- Subjects
- Software engineering, Computer systems, Computers, Special purpose, Programming languages (Electronic computers), Computer programming, Artificial intelligence
- Abstract
This book constitutes thoroughly revised and selected papers from the 8th International Conference on Model-Driven Engineering and Software Development, MODELSWARD 2020, held in Valletta, Malta, in February 2020. The 15 revised and extended papers presented in this volume were carefully reviewed and selected from 66 submissions. They present recent research results and development activities in using models and model-driven engineering techniques for software development. The papers are organized in topical sections on methodologies, processes and platforms; applications and software development; modeling languages, tools and architectures.
- Published
- 2021
33. Tools and Algorithms for the Construction and Analysis of Systems : 27th International Conference, TACAS 2021, Held As Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021, Luxembourg City, Luxembourg, March 27 – April 1, 2021, Proceedings, Part I
- Author
-
Jan Friso Groote and Kim Guldstrand Larsen
- Subjects
- System analysis--Congresses, Artificial intelligence, System design--Congresses, Computer software--Verification--Congresses
- Abstract
This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The total of 41 full papers presented in the proceedings was carefully reviewed and selected from 141 submissions. The volume also contains 7 tool papers, 6 tool demo papers, and 9 SV-Comp competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.
- Published
- 2021
34. Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops : DECSoS 2020, DepDevOps 2020, USDAI 2020, and WAISE 2020, Lisbon, Portugal, September 15, 2020, Proceedings
- Author
-
António Casimiro, Frank Ortmeier, Erwin Schoitsch, Friedemann Bitsch, and Pedro Ferreira
- Subjects
- Computer engineering, Computer networks, Artificial intelligence, Application software, Cryptography, Data encryption (Computer science), Expert systems (Computer science)
- Abstract
This book constitutes the proceedings of the workshops held in conjunction with SAFECOMP 2020, the 39th International Conference on Computer Safety, Reliability and Security, Lisbon, Portugal, September 2020. The 26 regular papers included in this volume were carefully reviewed and selected from 45 submissions; the book also contains one invited paper. The workshops included in this volume are: DECSoS 2020: 15th Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems; DepDevOps 2020: First International Workshop on Dependable Development-Operation Continuum Methods for Dependable Cyber-Physical Systems; USDAI 2020: First International Workshop on Underpinnings for Safe Distributed AI; and WAISE 2020: Third International Workshop on Artificial Intelligence Safety Engineering. The workshops were held virtually due to the COVID-19 pandemic.
- Published
- 2020
35. NASA Formal Methods : 12th International Symposium, NFM 2020, Moffett Field, CA, USA, May 11–15, 2020, Proceedings
- Author
-
Ritchie Lee, Susmit Jha, Anastasia Mavridou, and Dimitra Giannakopoulou
- Subjects
- Software engineering, Computer engineering, Computer networks, Computer science, Artificial intelligence, Computer simulation
- Abstract
This book constitutes the proceedings of the 12th International Symposium on NASA Formal Methods, NFM 2020, held in Moffett Field, CA, USA, in May 2020. The 20 full and 5 short papers presented in this volume were carefully reviewed and selected from 62 submissions. The papers are organized in the following topical sections: learning and formal synthesis; formal methods for DNNs; high assurance systems; requirement specification and testing; validation and solvers; solvers and program analysis; verification and timed systems; autonomy and other applications; and hybrid and cyber-physical systems. The conference was held virtually due to the COVID-19 pandemic. The chapter “Verifying a Solver for Linear Mixed Integer Arithmetic in Isabelle/HOL” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
- Published
- 2020
36. Automated Technology for Verification and Analysis : 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19–23, 2020, Proceedings
- Author
-
Dang Van Hung and Oleg Sokolsky
- Subjects
- Artificial intelligence, Computers, Special purpose, Computer systems, Natural language processing (Computer science), Data structures (Computer science), Information theory
- Abstract
This book constitutes the refereed proceedings of the 18th International Symposium on Automated Technology for Verification and Analysis, ATVA 2020, held in Hanoi, Vietnam, in October 2020. The 27 regular papers presented together with 5 tool papers and 2 invited papers were carefully reviewed and selected from 75 submissions. The symposium is dedicated to promoting research in theoretical and practical aspects of automated analysis, verification and synthesis by providing an international venue for the researchers to present new results. The papers focus on neural networks and machine learning; automata; logics; techniques for verification, analysis and testing; model checking and decision procedures; synthesis; and randomization and probabilistic systems.
- Published
- 2020
37. Computer Safety, Reliability, and Security : SAFECOMP 2019 Workshops, ASSURE, DECSoS, SASSUR, STRIVE, and WAISE, Turku, Finland, September 10, 2019, Proceedings
- Author
-
Alexander Romanovsky, Elena Troubitsyna, Ilir Gashi, Erwin Schoitsch, and Friedemann Bitsch
- Subjects
- Computer engineering, Computer networks, Artificial intelligence, Data protection, Cryptography, Data encryption (Computer science), Software engineering, Computer vision
- Abstract
This book constitutes the proceedings of the workshops held in conjunction with SAFECOMP 2019, the 38th International Conference on Computer Safety, Reliability and Security, in September 2019 in Turku, Finland. The 32 regular papers included in this volume were carefully reviewed and selected from 43 submissions; the book also contains two invited papers. The workshops included in this volume are: ASSURE 2019: 7th International Workshop on Assurance Cases for Software-Intensive Systems; DECSoS 2019: 14th ERCIM/EWICS/ARTEMIS Workshop on Dependable Smart Embedded and Cyber-Physical Systems and Systems-of-Systems; SASSUR 2019: 8th International Workshop on Next Generation of System Assurance Approaches for Safety-Critical Systems; STRIVE 2019: Second International Workshop on Safety, securiTy, and pRivacy In automotiVe systEms; and WAISE 2019: Second International Workshop on Artificial Intelligence Safety Engineering.
- Published
- 2019
38. Formal Methods – The Next 30 Years : Third World Congress, FM 2019, Porto, Portugal, October 7–11, 2019, Proceedings
- Author
-
Maurice H. ter Beek, Annabelle McIver, and José N. Oliveira
- Subjects
- Software engineering, Compilers (Computer programs), Computer science, Machine theory, Algorithms, Artificial intelligence
- Abstract
This book constitutes the refereed proceedings of the 23rd Symposium on Formal Methods, FM 2019, held in Porto, Portugal, in the form of the Third World Congress on Formal Methods, in October 2019. The 44 full papers presented together with 3 invited presentations were carefully reviewed and selected from 129 submissions. The papers are organized in topical sections named: Invited Presentations; Verification; Synthesis Techniques; Concurrency; Model Checking Circus; Model Checking; Analysis Techniques; Specification Languages; Reasoning Techniques; Modelling Languages; Learning-Based Techniques and Applications; Refactoring and Reprogramming; I-Day Presentations.
- Published
- 2019
39. Computer Aided Verification : 31st International Conference, CAV 2019, New York City, NY, USA, July 15-18, 2019, Proceedings, Part I
- Author
-
Isil Dillig and Serdar Tasiran
- Subjects
- Computer science, Logic design, Software engineering, Computer logic, Artificial intelligence, Computer industry, Computer programs--Verification--Congresses
- Abstract
This open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems; runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.
- Published
- 2019
40. Computer Aided Verification : 29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I
- Author
-
Rupak Majumdar and Viktor Kunčak
- Subjects
- Computer science, Software engineering, Computer simulation, Computers, Professions, Electronic digital computers—Evaluation, Artificial intelligence
- Abstract
The two-volume set LNCS 10426 and LNCS 10427 constitutes the refereed proceedings of the 29th International Conference on Computer Aided Verification, CAV 2017, held in Heidelberg, Germany, in July 2017. The total of 50 full and 7 short papers presented together with 5 keynotes and tutorials in the proceedings was carefully reviewed and selected from 191 submissions. The CAV conference series is dedicated to the advancement of the theory and practice of computer-aided formal analysis of hardware and software systems. The conference covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation.
- Published
- 2017
41. Hardware and Software: Verification and Testing : 13th International Haifa Verification Conference, HVC 2017, Haifa, Israel, November 13-15, 2017, Proceedings
- Author
-
Ofer Strichman and Rachel Tzoref-Brill
- Subjects
- Software engineering, Computer science, Compilers (Computer programs), Machine theory, Artificial intelligence, Computer networks
- Abstract
This book constitutes the refereed proceedings of the 13th International Haifa Verification Conference, HVC 2017, held in Haifa, Israel, in November 2017. The 13 revised full papers presented together with 4 poster and 5 tool demo papers were carefully reviewed and selected from 45 submissions. They are dedicated to advancing the state of the art and state of the practice in verification and testing, and discuss future directions of testing and verification for hardware, software, and complex hybrid systems.
- Published
- 2017
42. Transactions on Computational Collective Intelligence XVI
- Author
-
Ryszard Kowalczyk and Ngoc Thanh Nguyen
- Subjects
- Artificial intelligence, Computational intelligence, Software engineering, Computer simulation, Computer networks
- Abstract
These transactions publish research in computer-based methods of computational collective intelligence (CCI) and their applications in a wide range of fields such as the semantic web, social networks, and multi-agent systems. TCCI strives to cover new methodological, theoretical and practical aspects of CCI understood as the form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural). The application of multiple computational intelligence technologies, such as fuzzy systems, evolutionary computation, neural systems, consensus theory, etc., aims to support human and other collective intelligence and to create new forms of CCI in natural and/or artificial systems. This 16th issue contains 8 regular papers selected via a peer-review process.
- Published
- 2014
43. Logic for Programming, Artificial Intelligence, and Reasoning : 19th International Conference, LPAR-19, Stellenbosch, South Africa, December 14-19, 2013, Proceedings
- Author
-
Ken McMillan, Aart Middeldorp, and Andrei Voronkov
- Subjects
- Software engineering, Artificial intelligence, Computer science, Machine theory, Computer programming, Compilers (Computer programs)
- Abstract
This book constitutes the proceedings of the 19th International Conference on Logic for Programming, Artificial Intelligence and Reasoning, LPAR-19, held in December 2013 in Stellenbosch, South Africa. The 44 regular papers and 8 tool descriptions and experimental papers included in this volume were carefully reviewed and selected from 152 submissions. The series of International Conferences on Logic for Programming, Artificial Intelligence and Reasoning (LPAR) is a forum where year after year, some of the most renowned researchers in the areas of logic, automated reasoning, computational logic, programming languages and their applications come to present cutting-edge results, to discuss advances in these fields and to exchange ideas in a scientifically emerging part of the world.
- Published
- 2013