1,186 results for "Formal equivalence checking"
Search Results
2. Efficient Formal Verification of Galois-Field Arithmetic Circuits Using ZDD Representation of Boolean Polynomials
- Author
-
Akira Ito, Naofumi Homma, and Rei Ueno
- Subjects
Polynomial, Gröbner basis, Binary decision diagram, Computer science, Formal equivalence checking, Netlist, Electrical and Electronic Engineering, Arithmetic, Formal methods, Computer Graphics and Computer-Aided Design, Equivalence (measure theory), Formal verification, Software - Abstract
In this study, we present a new formal method for verifying the functionality of Galois-field (GF) arithmetic circuits. Assuming that the input–output relation (i.e., the specification of a GF arithmetic circuit) can be represented as polynomials over GF(2), the proposed method formally checks the equivalence between GF polynomials derived from a netlist and the specification. To verify the equivalence efficiently, we employ a zero-suppressed binary decision diagram (ZDD) to represent polynomials over GF(2). Even though polynomial reduction is the most time-consuming part of verification (i.e., equivalence checking), our new algorithm can efficiently reduce the GF polynomials, represented as ZDDs, derived from the target netlist. The proposed algorithm derives the polynomials representing all intermediate nodes (i.e., the outputs of all gates) in order from the primary inputs to the primary outputs, in accordance with the reverse topological traversal order. We demonstrated the efficiency and effectiveness of the proposed method via a set of experimental verifications. In particular, we confirmed that the proposed method can verify practical GF multipliers (including those used in standardized elliptic curve cryptography) approximately 30 times faster on average, and at most 170 times faster, than the best conventional method.
- Published
- 2022
- Full Text
- View/download PDF
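The core idea of the record above can be illustrated without an actual ZDD package: a Boolean polynomial over GF(2) is just a set of monomials, and reduction exploits the fact that coefficients cancel modulo 2. The following sketch uses plain Python sets as a stand-in for ZDDs (all function and variable names here are illustrative, not the paper's) and derives and checks the polynomials of a half-adder netlist:

```python
# Boolean polynomials over GF(2) as sets of monomials, where each
# monomial is a frozenset of variable names. Addition is symmetric
# difference, since coefficients live in GF(2) and cancel in pairs.

def poly_add(p, q):
    # GF(2) addition: monomials appearing an even number of times cancel.
    return p ^ q

def poly_mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}   # x*x = x, and duplicate products cancel mod 2
    return out

def var(name):
    return {frozenset([name])}

# Half-adder netlist: s = a XOR b, c = a AND b.
a, b = var("a"), var("b")
s = poly_add(a, b)     # polynomial a + b
c = poly_mul(a, b)     # polynomial ab

# Specification polynomials for sum and carry, written directly.
assert s == {frozenset(["a"]), frozenset(["b"])}
assert c == {frozenset(["a", "b"])}
```

A ZDD stores the same monomial set compactly with shared subgraphs; the set-based version above only shows the algebra that the reduction algorithm operates on.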
3. Advanced Equivalence Checking for Quantum Circuits
- Author
-
Lukas Burgholzer and Robert Wille
- Subjects
Quantum Physics, Theoretical computer science, Computer science, Formal equivalence checking, Design flow, Emerging Technologies (cs.ET), Quantum entanglement, Computer Graphics and Computer-Aided Design, Superposition principle, Qubit, Quantum algorithm, Electrical and Electronic Engineering, Quantum, Software, Quantum computer, Abstraction (linguistics) - Abstract
Quantum computing will change the way we tackle certain problems. It promises to dramatically speed up many chemical, financial, and machine-learning applications. However, to capitalize on those promises, complex design flows composed of steps such as compilation, decomposition, or mapping need to be employed before a conceptual quantum algorithm can be executed on an actual device. This results in descriptions at various levels of abstraction which may significantly differ from each other. The complexity of the underlying design problems makes it necessary not only to provide efficient solutions for the single steps, but also to verify that the originally intended functionality is preserved throughout all levels of abstraction. This motivates methods for equivalence checking of quantum circuits. However, most existing methods are inspired by the classical realm and have merely been extended to support quantum circuits (i.e., circuits which do not only rely on 0's and 1's, but also employ superposition and entanglement). In this work, we propose an advanced methodology which takes the different paradigms of quantum circuits not as a burden, but as an opportunity. In fact, the proposed methodology explicitly utilizes characteristics unique to quantum computing in order to overcome the shortcomings of existing approaches. We show that, by exploiting the reversibility of quantum circuits, complexity can be kept feasible in many cases. Moreover, we show that, in contrast to the classical realm, simulation is very powerful in verifying quantum circuits. Experimental evaluations confirm that the resulting methodology allows one to conduct equivalence checking dramatically faster than ever before; in many cases a single simulation run is sufficient. An implementation of the proposed methodology is publicly available at https://iic.jku.at/eda/research/quantum_verification/. (14 pages, 10 figures)
- Published
- 2021
- Full Text
- View/download PDF
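The reversibility argument in the abstract above can be shown in a few lines: two circuits G and G' are equivalent exactly when composing G with the adjoint of G' yields the identity. Below is a minimal single-qubit sketch with plain-list matrices; it is our own illustration of the principle, not the authors' tool:

```python
def matmul(A, B):
    # Dense product of two square matrices given as nested lists.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):
    # Adjoint (conjugate transpose); the inverse of a unitary gate.
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_identity(A, tol=1e-9):
    n = len(A)
    return all(abs(A[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

H = [[1 / 2 ** 0.5, 1 / 2 ** 0.5],
     [1 / 2 ** 0.5, -1 / 2 ** 0.5]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def compose(gates):
    # Apply gates in circuit order (first gate acts first).
    U = [[1, 0], [0, 1]]
    for g in gates:
        U = matmul(g, U)
    return U

# H·X·H realizes Z, so the two "circuits" below are equivalent:
# checking reduces to showing G · G'† is the identity.
G  = compose([H, X, H])
Gp = Z
assert is_identity(matmul(G, dagger(Gp)))
```

Real tools represent the composed unitary with decision diagrams instead of dense matrices, which is what keeps the check feasible beyond a few qubits.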
4. A Game Characterization for Contrasimilarity
- Author
-
Benjamin Bisping and Luisa Montanari
- Subjects
Computer Science - Logic in Computer Science, F.3.2, F.4.1, Computer science, Programming language, Formal equivalence checking, HOL, Characterization (mathematics), Computer Science and Game Theory - Abstract
We present the first game characterization of contrasimilarity, the weakest form of bisimilarity. The game is finite for finite-state processes and can thus be used for contrasimulation equivalence checking, of which no tool has been capable to date. A machine-checked Isabelle/HOL formalization backs our work and enables further use of contrasimilarity in verification contexts. (In Proceedings EXPRESS/SOS 2021, arXiv:2108.09624)
- Published
- 2021
- Full Text
- View/download PDF
5. From Lustre to Simulink
- Author
-
Christophe Garion, Xavier Thirioux, Pierre-Loïc Garoche, and Hamza Bourbouh
- Subjects
Control and Optimization, Computer Networks and Communications, Lustre (programming language), Computer science, Formal equivalence checking, Toolchain, Human-Computer Interaction, Imperative programming, Artificial Intelligence, Hardware and Architecture, Embedded system, Formal specification, Model-based design, Code generation, Formal verification - Abstract
Model-based design is now unavoidable when building embedded systems and, more specifically, controllers. Among the available model languages, the synchronous dataflow paradigm, as implemented in languages such as MATLAB Simulink or ANSYS SCADE, has become predominant in critical embedded-system industries. Both of these frameworks are used not only to design the controller itself but also to provide code-generation means, enabling faster deployment to target and easier V&V activities performed earlier in the design process, at the model level. Synchronous models also ease the definition of formal specifications through the use of synchronous observers, attaching requirements to the model in the very same language, mastered by engineers and tooled with simulation means or code generation. However, few works address the automatic synthesis of MATLAB Simulink annotations from lower-level models or code. This article presents a compilation process from Lustre models to genuine MATLAB Simulink, without the need to rely on external C functions or MATLAB functions. This translation is based on the modular compilation of Lustre to imperative code and preserves the hierarchy of the input Lustre model within the generated Simulink one. We implemented the approach and used it to validate a compilation toolchain, mapping Simulink to Lustre and then to C, thanks to equivalence testing and checking. This backward compilation from Lustre to Simulink also makes it possible to automatically produce Simulink components modeling specifications, proof arguments, or test-case coverage criteria.
- Published
- 2021
- Full Text
- View/download PDF
6. Constraint Solving for Synthesis and Verification of Threshold Logic Circuits
- Author
-
Nian-Ze Lee and Jie-Hong R. Jiang
- Subjects
Computer science, Formal equivalence checking, Computer Graphics and Computer-Aided Design, Multiplexer, Logic synthesis, Logic gate, Subset sum problem, Electrical and Electronic Engineering, Conjunctive normal form, Boolean function, Algorithm, Software, Logic programming - Abstract
Threshold logic (TL) circuits are gaining increasing attention due to their feasible realization with emerging technologies and their strong ties to neural-network applications. In this work, we devise techniques for automatic synthesis and verification of TL circuits based on constraint solving. For synthesis, we formulate a fundamental operation to collapse TL functions and derive a necessary and sufficient condition of collapsibility for a linear combination of two TL functions. An approach based on solving the subset sum problem is proposed for fast circuit transformation. For verification, we propose a procedure to convert a TL function to a multiplexer (MUX) tree and to pseudo-Boolean (PB) constraints, for formal Boolean and PB reasoning, respectively. Experiments on synthesis show that the collapse operation further reduces gate counts of synthesized TL circuits by an average of 18%. Experiments on verification demonstrate good scalability of the MUX-based method for equivalence checking of synthesized TL circuits, and the efficiency of PB constraint conversion in cases where the conjunctive normal form (CNF) conversion and MUX-tree conversion suffer from memory explosion.
- Published
- 2021
- Full Text
- View/download PDF
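The MUX-tree conversion mentioned above rests on Shannon expansion of a threshold function: cofactoring on an input with weight w turns TL(w, T) into a multiplexer between two smaller threshold functions with thresholds T - w and T. A toy sketch (names are ours; checked exhaustively against direct evaluation):

```python
from itertools import product

def tl_eval(weights, T, xs):
    # Direct evaluation of a threshold-logic gate: fires when the
    # weighted input sum reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, xs)) >= T)

def mux_tree(weights, T, xs):
    # Shannon expansion underlying the MUX-tree conversion: the first
    # input selects between the cofactors with thresholds T - w and T.
    if not weights:
        return int(T <= 0)        # constant leaf of the tree
    w, rest = weights[0], weights[1:]
    if xs[0]:
        return mux_tree(rest, T - w, xs[1:])
    return mux_tree(rest, T, xs[1:])

# 3-input majority as a threshold gate: weights (1, 1, 1), threshold 2.
W, T = (1, 1, 1), 2
for xs in product((0, 1), repeat=3):
    assert tl_eval(W, T, xs) == mux_tree(W, T, xs)
```

The real procedure builds the tree symbolically (sharing isomorphic subtrees) rather than evaluating it per input vector, but the recursion is the same.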
7. Equivalence checking of quantum finite-state machines
- Author
-
Qisheng Wang, Mingsheng Ying, and Junyi Liu
- Subjects
Formal Languages and Automata Theory (cs.FL), Computer Networks and Communications, Computer science, Minimisation (clinical trials), Computation Theory & Mathematics, Theoretical Computer Science, Quantum, PSPACE, Quantum Physics, Finite-state machine, Efficient algorithm, Applied Mathematics, Formal equivalence checking, Algebra, Computational Theory and Mathematics - Abstract
In this paper, we introduce the model of quantum Mealy machines and study their equivalence checking and minimisation problems. Two efficient algorithms are developed, one for checking equivalence of two states in the same machine and one for checking equivalence of two machines. As an application, they are used in equivalence checking of quantum circuits. Moreover, the minimisation problem is proved to be in $\textbf{PSPACE}$.
- Published
- 2021
- Full Text
- View/download PDF
8. Counterexample-guided correlation algorithm for translation validation
- Author
-
Shubhani Gupta, Abhishek Rose, and Sorav Bansal
- Subjects
Source code, Assembly language, Computer science, Formal equivalence checking, Translation validation, Identification (information), Scalability, Executable, Safety, Risk, Reliability and Quality, Algorithm, Software, Counterexample - Abstract
Automatic translation validation across the unoptimized intermediate representation (IR) of the original source code and the optimized executable assembly code is a desirable capability, and has the potential to compete with existing approaches to verified compilation such as CompCert. A difficult subproblem is the automatic identification of the correlations across the transitions between the two programs' respective locations. We present a counterexample-guided algorithm to identify these correlations in a robust and scalable manner. Our algorithm has both theoretical and empirical advantages over prior work in this problem space.
- Published
- 2020
- Full Text
- View/download PDF
9. Parallel Combinational Equivalence Checking
- Author
-
Alan Mishchenko, Renato P. Ribas, Vinicius N. Possani, and Andre I. Reis
- Subjects
Correctness, Speedup, Computer science, Formal equivalence checking, Parallel computing, Computer Graphics and Computer-Aided Design, Reduction (complexity), Logic synthesis, Parallel processing (DSP implementation), Electronic design automation, Electrical and Electronic Engineering, Equivalence (measure theory), Software - Abstract
Combinational equivalence checking (CEC) has been widely applied to ensure design correctness after logic synthesis and technology-dependent optimization in digital integrated-circuit design. CEC runtime is often critical for large designs, even when advanced techniques are employed. Three complementary ways of enabling parallelism in CEC are proposed, addressing different design and verification scenarios. Experimental results demonstrate speedups of up to 63x when comparing the proposed approach to a single-threaded implementation of a similar CEC engine. A practical impact of such a speedup, for instance, is a runtime reduction from 19 hours to only 18 minutes when checking equivalence of AND-inverter graphs comprising more than 20 million nodes. The proposed solution therefore presents great potential for improving current electronic design automation environments.
- Published
- 2020
- Full Text
- View/download PDF
10. Efficient Equivalence Checking Technique for Some Classes of Finite-State Machines
- Author
-
Vladimir A. Zakharov
- Subjects
Discrete mathematics, Finite-state machine, language equation, Computer science, Formal equivalence checking, Pushdown automaton, Information technology, decision procedure, prefix-free language, transducer, Deterministic finite automaton, Regular language, biautomaton, two-tape automaton, equivalence checking, simple grammar, Equivalence (measure theory), Time complexity - Abstract
Finite transducers, two-tape automata, and biautomata are related computational models descended from the concept of the finite-state automaton. In these models an automaton controls two heads that read or write symbols on the tapes in one-way mode. The computations of these three types of automata show many common features, and it is surprising that the methods for analyzing the behavior of automata developed for one of these models do not find suitable utilization in the others. The goal of this paper is to develop a uniform technique for building polynomial-time equivalence checking algorithms for some classes of automata (finite transducers, two-tape automata, biautomata, single-state pushdown automata) which exhibit certain features of deterministic or unambiguous behavior. This new technique reduces equivalence checking of automata to solvability checking of certain systems of equations over semirings of languages or transductions. It turns out that such checking can be performed by the variable elimination technique, which relies on some combinatorial and algebraic properties of prefix-free regular languages. The main results obtained in this paper are as follows: 1. Using the algebraic approach, a new algorithm for checking the equivalence of states of deterministic finite automata is constructed; the time complexity of this algorithm is O(n log n). 2. A new class of prefix-free finite transducers is distinguished, and it is shown that the developed algebraic approach provides equivalence checking of transducers from this class in quadratic time (for real-time prefix-free transducers) and cubic time (for prefix-free transducers with ε-transitions) relative to the sizes of the analysed machines. 3. It is shown that the equivalence problem for deterministic two-tape finite automata can be reduced to the same problem for prefix-free finite transducers and solved in cubic time relative to the size of the analysed machines. 4. In the same way it is proved that the equivalence problem for deterministic finite biautomata can be solved in cubic time relative to the sizes of the analysed machines. 5. By means of the developed approach, an efficient equivalence checking algorithm is constructed for the class of simple grammars corresponding to deterministic single-state pushdown automata.
- Published
- 2020
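For context on result 1 above: equivalence of states of deterministic finite automata is classically handled by union-find-based procedures in the same near-linear complexity class. The sketch below is that classical approach (our own code, not the paper's algebraic algorithm): merge the two start states, propagate merges along matching letters, and fail if an accepting state ever merges with a rejecting one.

```python
def dfa_equivalent(delta1, final1, s1, delta2, final2, s2, alphabet):
    # Union-find over the (disjoint) state sets of both machines;
    # states are tagged (1, q) / (2, q) to keep them apart.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False       # already known equivalent: prune
        parent[rx] = ry
        return True

    stack = [(s1, s2)]
    union((1, s1), (2, s2))
    while stack:
        q1, q2 = stack.pop()
        if (q1 in final1) != (q2 in final2):
            return False       # a merged pair disagrees on acceptance
        for a in alphabet:
            if union((1, delta1[q1][a]), (2, delta2[q2][a])):
                stack.append((delta1[q1][a], delta2[q2][a]))
    return True

# DFA over {a, b} accepting words with an even number of a's ...
d1 = {0: {"a": 1, "b": 0}, 1: {"a": 0, "b": 1}}
f1 = {0}
# ... and an equivalent DFA with a redundant extra state.
d2 = {0: {"a": 1, "b": 0}, 1: {"a": 2, "b": 1}, 2: {"a": 1, "b": 2}}
f2 = {0, 2}
assert dfa_equivalent(d1, f1, 0, d2, f2, 0, "ab")
```

The paper's contribution is reaching this complexity through language-equation solving rather than through union-find; the sketch only shows the baseline behavior being matched.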
11. Verification of Scheduling of Conditional Behaviors in High-Level Synthesis
- Author
-
Chandan Karfa and Ramanuj Chouksey
- Subjects
Very-large-scale integration, Correctness, Computer science, Formal equivalence checking, Scheduling (computing), Software bug, Hardware and Architecture, High-level synthesis, Electrical and Electronic Engineering, Equivalence (formal languages), Equivalence (measure theory), Algorithm, Software, Register-transfer level - Abstract
High-level synthesis (HLS) translates behaviors written in high-level languages like C/C++ into register-transfer-level (RTL) designs. Due to its complexity, proving the correctness of an HLS tool is prohibitively expensive. Translation validation is the process of proving that the target code is a correct translation of the source program being compiled. The path-based equivalence checking (PBEC) method is a widely used translation validation method for verifying the scheduling phase of HLS. The existing PBEC methods cannot handle the significant control-structure modification that occurs in the efficient scheduling of conditional behaviors, and hence produce false-negative results. In this article, we identify some scenarios involving path merge/split where the state-of-the-art PBEC approaches fail to show the equivalence even though the behaviors are equivalent. We propose a value-propagation-based PBEC method along with a new cutpoint selection scheme to overcome this limitation. Our method can also handle the scenario where adjacent conditional blocks (CBs) having an equivalent conditional expression are combined into one CB. Experimental results demonstrate the usefulness of our method over the existing methods.
- Published
- 2020
- Full Text
- View/download PDF
12. Application specified soft-error failure rate analysis using sequential equivalence checking techniques
- Author
-
Sikun Li, Tun Li, Hai Wan, and Qinhan Yu
- Subjects
Multidisciplinary, Sequential logic, Soft error, Computer engineering, Computer science, Formal equivalence checking, Benchmark (computing), Overhead (computing), Failure rate, Node (circuits), Formal methods - Abstract
Soft errors have become a critical challenge as a result of technology scaling. Existing circuit-hardening techniques commonly incur prohibitive overhead in performance, area, and power, yet evaluating the influence of soft errors in flip-flops (FFs) on circuit failure is a difficult verification problem. Here, we propose a novel flip-flop soft-error failure rate analysis methodology using a formal method with respect to application behaviors, and introduce approaches and optimization techniques that implement it using Sequential Equivalence Checking (SEC). The proposed method combines the completeness advantage of formal-technique-based approaches with the accuracy advantage of application behaviors to differentiate the vulnerability of components. As a result, the FFs in a circuit are sorted by their failure rates, and designers can use this information to perform optimal hardening of selected sequential components against soft errors. Experimental results on an implementation of a SpaceWire end node and the largest ISCAS'89 benchmark sequential circuits indicate the feasibility and potential scalability of our approach. A case study on an instruction decoder of a practical 32-bit microprocessor demonstrates the applicability of our method.
- Published
- 2020
- Full Text
- View/download PDF
13. An Analysis of the Authors of A Dream of Red Mansions Based on Equivalence Checking and Feature Clustering
- Subjects
Feature (computer vision), Computer science, Formal equivalence checking, General Materials Science, Pattern recognition, Artificial intelligence, Dream, Cluster analysis - Published
- 2020
- Full Text
- View/download PDF
14. Architecture of a Machine Code Deductive Verification System
- Subjects
Loop invariant, Source code, Correctness, Computer science, Programming language, Formal equivalence checking, Formal methods, Formal specification, General Earth and Planetary Sciences, Compiler, Machine code, General Environmental Science - Abstract
In recent years, ISP RAS has been developing a system for machine (binary) code deductive verification. The motivation is rather clear: modern compilers, such as GCC and Clang/LLVM, are not free of bugs; thereby, it is not superfluous (at least for safety- and security-critical components) to check the correctness of the generated code. The key feature of the suggested approach is the ability to reuse source-code-level formal specifications (pre- and postconditions, loop invariants, lemma functions, etc.) at the machine code level. The tool is highly automated: provided that the target instruction set is formalized, it disassembles the machine code, extracts its semantics, adapts the high-level specifications, and generates the verification conditions. The system utilizes a number of third-party components including a source code analyzer (Frama-C), a machine code analyzer (MicroTESK), and an SMT solver (CVC4). The modular design enables replacing one component with another when switching an input format and/or a verification engine. In this paper, we discuss the tool architecture, describe our implementation, and present a case study on verifying the memset C library function.
- Published
- 2020
- Full Text
- View/download PDF
15. Deep Integration of Circuit Simulator and SAT Solver
- Author
-
He-Teng Zhang, Luca Amaru, Jie-Hong R. Jiang, Robert K. Brayton, and Alan Mishchenko
- Subjects
Speedup, Logic synthesis, Interfacing, Computer science, Formal equivalence checking, Electronic design automation, Parallel computing, Solver, Boolean satisfiability problem, Formal verification - Abstract
The paper addresses a key aspect of efficient computation in logic synthesis and formal verification, namely, the integration of a circuit simulator and a Boolean satisfiability (SAT) solver. A novel way of interfacing these is proposed, along with a fast preprocessing step to detect easy SAT instances and a new hybrid SAT solver, which is more robust for hardware designs than state-of-the-art CNF-based solvers. The proposed integration enables a 10x speedup in essential computation engines widely used in industrial EDA tools, including SAT sweeping, combinational and sequential equivalence checking, and computing structural choices for technology mapping. The speedup does not lead to a loss in quality because the computed equivalences are canonical.
- Published
- 2021
- Full Text
- View/download PDF
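SAT sweeping, one of the engines the integration above accelerates, interleaves the two components exactly as described: cheap random simulation buckets nodes into candidate-equivalence classes, and only surviving candidates reach the solver. A toy sketch (node functions are our own; exhaustive checking stands in for the SAT call):

```python
import random
from itertools import product

# AIG-like nodes of a small circuit, as Boolean functions of (x, y, z).
nodes = {
    "n1": lambda x, y, z: x & y,
    "n2": lambda x, y, z: y & x,              # equivalent to n1
    "n3": lambda x, y, z: x | (y & z),
    "n4": lambda x, y, z: (x | y) & (x | z),  # equivalent to n3
}

# Random simulation: nodes that differ on any pattern cannot be
# equivalent, so their signatures land in different buckets.
random.seed(0)
patterns = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(16)]
buckets = {}
for name, f in nodes.items():
    sig = tuple(f(*p) for p in patterns)      # simulation signature
    buckets.setdefault(sig, []).append(name)
candidates = [group for group in buckets.values() if len(group) > 1]

def prove(f, g):
    # Stand-in for the SAT call: exhaustively refute the miter of f and g.
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=3))

proven = [(a, b) for group in candidates
          for a, b in zip(group, group[1:])
          if prove(nodes[a], nodes[b])]
```

The division of labor is the point: simulation rules out almost all non-equivalences for free, so the expensive solver only sees pairs that are very likely equivalent.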
16. Utilization of Machine Learning In RTL-GL Signals Correlation
- Author
-
Muhammad Ziad Muhammad Ghazy, Medhat Ashraf Alhaddad, Abanoub Ghadban Helmy, Seif Eldin Mohamed Hussein, Nagy Raouf Nagy, and Ahmed H. Yousef
- Subjects
Signal processing, Functional verification, Computer science, Formal equivalence checking, Design flow, Process (computing), Machine learning, Clock domain crossing, Logic gate, Electronic design automation, Artificial intelligence - Abstract
Verification is an important part of the Electronic Design Automation (EDA) design flow and currently takes a considerable amount of time. During the synthesis process, different optimizations are applied to the Register-Transfer-Level (RTL) code to optimize the power, area, and speed of the circuit. These optimizations change the names of signals at the gate level. Automatic signal mapping can improve the verification process and can be used to guide functional verification activities between the two representations, using clock domain crossing (CDC) analysis at both RTL and gate level, as well as formal sequential equivalence checking. Machine learning has applications everywhere in our lives, and EDA is no exception: it can help EDA tools improve their results by learning from their previous choices, decreasing the amount of time spent in the verification process and increasing productivity. In this paper, we discuss the methods used for signal correlation, propose a modification to one of them, and present a new technique using machine learning to solve the problem of signal correlation between the RTL and the gate level.
- Published
- 2021
- Full Text
- View/download PDF
17. Aquila
- Author
-
Jiaqi Gao, Yanqing Chen, Mengjing Ma, Minlan Yu, Feng Yan, Mengqi Liu, Ennan Zhai, Jie Lu, Yu Zhou, Ming Zhang, Ming Tang, Li Dai, Xionglie Wei, Bingchuan Tian, Chen Tian, and Hongqiang Harry Liu
- Subjects
Computer engineering, Computer science, Scalability, Formal equivalence checking, Forwarding plane, Enhanced Data Rates for GSM Evolution, Usable, Representation (mathematics), Formal methods, Expression (mathematics) - Abstract
This paper presents Aquila, the first practically usable verification system for Alibaba's production-scale programmable data planes. Aquila addresses four challenges in building a practically usable verifier: (1) specification complexity; (2) verification scalability; (3) bug localization; and (4) verifier self-validation. Specifically, first, Aquila proposes a high-level language that facilitates easy expression of specifications, reducing lines of specification code tenfold compared to the state of the art. Second, Aquila constructs a sequential encoding algorithm to circumvent the exponential growth of states associated with the upscaling of data plane programs to production level. Third, Aquila adopts an automatic and accurate bug localization approach that can narrow down suspects based on reported violations and pinpoint the culprit by simulating a fix for each suspect. Fourth and finally, Aquila can perform self-validation based on a refinement proof, which involves the construction of an alternative representation and subsequent equivalence checking. To date, Aquila has been used in the verification of our production-scale programmable edge networks for over half a year, and it has successfully prevented many potential failures resulting from data plane bugs.
- Published
- 2021
- Full Text
- View/download PDF
18. Lightweight Equivalence Checking of Code Transformation for Code Pointer Integrity
- Author
-
Gyuho Lee, Jaegwan Yu, Kyungmin Bae, Jaeseo Lee, and Tae-Hyoung Choi
- Subjects
Programming language ,Computer science ,Pointer (computer programming) ,Formal equivalence checking ,Translation validation ,computer.software_genre ,Code transformation ,computer - Published
- 2019
- Full Text
- View/download PDF
19. Formal Modeling and Verification of PCHB Asynchronous Circuits
- Author
-
Sudarshan K. Srinivasan, Ashiq A. Sakib, and Scott C. Smith
- Subjects
Correctness, Computer science, Liveness, Formal equivalence checking, Deadlock, Hardware and Architecture, Asynchronous communication, Robustness (computer science), Embedded system, Logic gate, Scalability, Electrical and Electronic Engineering, Formal verification, Software, Electronic circuit - Abstract
Precharge half buffer (PCHB) is one of the major quasi-delay insensitive (QDI) asynchronous design paradigms, which has been utilized in several commercial applications due to its low power and inherent robustness. In industry, QDI circuits are often synthesized from a synchronous specification using custom synthesis tools. Design validation of the implemented QDI circuits mostly relies on extensive simulation, which may fail to detect corner-case bugs, especially in complex designs. Hence, a formal verification scheme for PCHB circuits is much needed. In this article, we present a formal verification methodology for PCHB circuits synthesized from a Boolean/synchronous specification, which is based on equivalence checking and can guarantee both safety (full functional correctness) and liveness (absence of deadlock). The approach is fast, scalable, and applicable to combinational as well as sequential PCHB circuits. We demonstrate the method using several multipliers, multiply and accumulate circuits (MACs), and IEEE International Symposium on Circuits and Systems (ISCAS) benchmarks.
- Published
- 2019
- Full Text
- View/download PDF
20. Efficient Equivalence-Checking Algorithms for Procedural Programs in Progressive Semigroup Gateway Models
- Author
-
V. V. Podymov
- Subjects
Polynomial, Control and Optimization, Semigroup, Computer science, Formal equivalence checking, Gateway (computer program), Decidability, Human-Computer Interaction, Computational Mathematics, State (computer science), Equivalence (measure theory), Algorithm - Abstract
The problem of program equivalence, checking whether programs have the same (equivalent) behavior in a given model, is investigated. The considered models of programs with "gateway" procedures were proposed relatively recently, and almost nothing is known about solving the equivalence problem for them. An approach is proposed for solving the consistent-halting problem for programs without procedures. Combined with known results, it establishes decidability, and in many gateway models polynomial-time decidability, of the equivalence problem for programs with procedures, and allows the corresponding decision algorithms to be constructed.
- Published
- 2019
- Full Text
- View/download PDF
21. Equivalence Checking and Compaction of n-input Majority Terms Using Implicants of Majority
- Author
-
Mahesh Balakrishnan, Rajeswari Devadoss, and Kolin Paul
- Subjects
OR gate, Logical equivalence, Exploit, Computer science, Formal equivalence checking, Implicant, Logic synthesis, Electrical and Electronic Engineering, Arithmetic, Boolean function, Equivalence (measure theory) - Abstract
Recent advances in nanotechnology have led to the emergence of energy-efficient circuit technologies, such as spintronics and domain-wall magnets, that use Majority gates as their primary logic elements. For logic synthesis methods targeted at such technologies to be effective and efficient, they need to be able to use, manipulate, and exploit large Majority terms in their synthesis flow. One of the problems that turns up in such an endeavor is determining the equivalence of two n-input Majority terms. As Majority gates implement more complex Boolean functions than traditional AND/OR gates, this is a non-trivial problem, and one that must be solved before proceeding to harder problems involving networks of Majority gates. We provide an algorithm based on prime implicants as a solution to this problem. In addition, we provide an algorithm that compacts an n-input Majority term to an equivalent Majority term with the fewest inputs. In this quest, we extend the concept of implicants to two cases: 1-implicants and prime 1-implicants, which imply that a function evaluates to '1', and 0-implicants and prime 0-implicants, which imply that it evaluates to '0'. We exploit the properties of Majority to create an efficient algorithm that generates the sums of all prime 1-implicants and all prime 0-implicants of an n-input Majority term. As Majority and Threshold functions have been shown to be logically equivalent, our algorithms are applicable to Threshold functions as well. Being based on implicants of Majority, our algorithms improve on the known naive algorithms for equivalence checking and compaction of threshold logic terms.
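For the unweighted n-input Majority function (n odd), the prime 1-implicants have a simple closed form: every product of exactly ceil(n/2) distinct variables. A brute-force sketch (ours, not the paper's algorithm) can confirm that the sum of these implicants reproduces Majority:

```python
from itertools import combinations, product

def maj(bits):
    # n-input Majority: 1 iff more than half of the inputs are 1.
    return int(sum(bits) > len(bits) // 2)

def prime_1_implicants(n):
    # For unweighted Majority (n odd), each product of exactly ceil(n/2)
    # variables forces the output to 1, and no smaller product does.
    k = n // 2 + 1
    return list(combinations(range(n), k))

def sum_of_implicants(bits, implicants):
    # Evaluate the sum (OR) of the product terms on a given assignment.
    return int(any(all(bits[i] for i in term) for term in implicants))

def check(n):
    terms = prime_1_implicants(n)
    return all(sum_of_implicants(bits, terms) == maj(bits)
               for bits in product((0, 1), repeat=n))
```

For n = 3 this yields the familiar MAJ3 = ab + bc + ca with three prime 1-implicants; the paper's contribution is generating such implicant sums efficiently for large and weighted terms rather than by enumeration.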
- Published
- 2019
- Full Text
- View/download PDF
22. Simple Fixpoint Iteration To Solve Parity Games
- Author
-
Dijk, Tom van, Rubbens, Bob, Leroux, Jérôme, Raskin, Jean-François, and Formal Methods and Tools
- Subjects
FOS: Computer and information sciences ,Computer Science - Logic in Computer Science ,Computer Science::Computer Science and Game Theory ,Theoretical computer science ,Computer science ,lcsh:Mathematics ,Formal equivalence checking ,D.2.4 ,Fixed point ,lcsh:QA1-939 ,F.4.1 ,lcsh:QA75.5-76.95 ,Logic in Computer Science (cs.LO) ,Parity game ,Reactive synthesis ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,lcsh:Electronic computers. Computer science ,Parity (mathematics) ,Time complexity - Abstract
A naive way to solve the model-checking problem of the mu-calculus uses fixpoint iteration. Traditionally, however, mu-calculus model checking is solved by a linear-time reduction to a parity game, which is then solved using one of the many algorithms for parity games. We now consider solving parity games directly by naive fixpoint iteration. Several fixpoint algorithms for parity games have been proposed in the literature. In this work, we introduce an algorithm that relies on the notion of a distraction, which offers a novel perspective for understanding parity games. We then show that this algorithm is in fact identical to two earlier published fixpoint algorithms for parity games, and thus that these earlier algorithms are the same. Furthermore, we modify our algorithm to only partially recompute deeper fixpoints after updating a higher set, and show that this modification enables a simple method for obtaining winning strategies. The resulting algorithm is simple to implement and offers good performance on practical parity games. We demonstrate this empirically using games derived from model checking, equivalence checking, and reactive synthesis, and show that our fixpoint algorithm is the fastest solution for model-checking games., In Proceedings GandALF 2019, arXiv:1909.05979
- Published
- 2019
23. Verification at RTL Using Separation of Design Concerns
- Author
-
Rouwaida Kanj, Maya H. Safieddine, Fadi A. Zaraket, Wolfgang Roesner, and Ali El-Zein
- Subjects
Computer science ,business.industry ,Formal equivalence checking ,Inference ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,020202 computer hardware & architecture ,Transformation (function) ,Embedded system ,Logic gate ,0202 electrical engineering, electronic engineering, information engineering ,Hardware design languages ,Electrical and Electronic Engineering ,business ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,Formal verification ,Software ,Register-transfer level - Abstract
Design-for-test, logic built-in self-test, memory technology mapping, and clocking concerns require team-months of verification time because they traditionally happen at the gate level. We present a novel concern-oriented methodology that enables automatic insertion of these concerns at the register-transfer level, where verification is easier. The methodology involves three main phases: 1) flip-flop inference and instantiation algorithms that handle parametric register-transfer-level (RTL) modules; 2) transformations that take entry RTL and produce RTL modules in which memory elements are separated from functionality; and 3) a concern-weaving tool that automatically inserts memory-related design concerns, implemented in recipe files, into the RTL modules. The transformation is proved sound and validated by formal equivalence checking. We implemented the methodology in a tool that is currently used in an industrial setting, where it reduced design verification time by more than 40%. The methodology is also effective with open-source embedded system frameworks.
- Published
- 2019
- Full Text
- View/download PDF
24. Formal Verification of Digital Circuits Using Simulator with Mathematical Foundation
- Author
-
Abdul Moeed Khan, Wilayat Khan, Noman Shahid, Basim Azam, and Ahtisham Shaheen
- Subjects
010302 applied physics ,Digital electronics ,021110 strategic, defence & security studies ,Engineering drawing ,Computer science ,business.industry ,Formal equivalence checking ,0211 other engineering and technologies ,Foundation (engineering) ,Digital logic circuits ,02 engineering and technology ,General Medicine ,computer.software_genre ,01 natural sciences ,Theorem provers ,0103 physical sciences ,Computer Aided Design ,business ,computer ,Formal verification ,Hardware_LOGICDESIGN - Abstract
To ease the hardware design process, circuits are normally designed in description languages such as Verilog and VHDL. Their correctness is normally checked by exhaustive simulation in simulators such as Icarus and VCS. Neither the description languages Verilog/VHDL nor the simulators Icarus/VCS have mathematical foundations; hence they are not reliable and cannot be used to mathematically prove the correctness of circuit designs. Hardware description languages with a mathematical (formal) foundation, such as VeriFormal, on the other hand, are more reliable and trustworthy and can be used for robust design. In this paper, we report the results of formally verifying two simple hardware circuits designed in the formal description language VeriFormal. Using the VeriFormal simulator and the accompanying type checker, we prove the reliability properties of type safety, functional correctness, and functional equivalence for the digital circuits.
- Published
- 2019
- Full Text
- View/download PDF
25. Verification and Synthesis of Clock-Gated Circuits
- Author
-
Yu-Yun Dai and Robert Brayton
- Subjects
Model checking ,Digital electronics ,Sequential logic ,Computer science ,business.industry ,Formal equivalence checking ,Clock gating ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,020202 computer hardware & architecture ,Linear temporal logic ,Logic gate ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,business ,Algorithm ,Software ,Hardware_LOGICDESIGN ,Electronic circuit - Abstract
To reduce dynamic power dissipation in digital circuits, a dependency graph (DG) is derived for a sequential circuit to accomplish verification and synthesis of clock-gated circuits. The DG is used recursively to derive sufficient conditions under which a given bank of flops (flip-flops) can legally be clock gated (disabled). These conditions are expressed as linear temporal logic (LTL)/past LTL (PLTL) properties, which can be used to create hardware monitors and justified by hardware model checkers. For sequential equivalence checking (SEC), LTL/PLTL properties are formulated to be proved on a clock-gated circuit ( R ) derived from a "golden" circuit ( G ). If these sufficient conditions can be proved on R, then the clock-gating structures are proved redundant and can be removed. This creates a simplified circuit ( R' ) and makes the SEC task easier. Experiments were performed on a set of benchmarks. It was observed that since the properties are expressed in terms of control signals, which are the only signals appearing in the DG, they are quite easy to prove on R because the DG abstracts away complicated arithmetic logic. Similarly, the miter between G and R' is usually proved easily by model-checking methods because G and R' are more similar in their sequential behaviors than G and R. The proposed formulation is extended to provide a systematic and automatic method for sequential clock-gating synthesis. Experiments showed that the DG-based framework for synthesis gave encouraging results.
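The core legality condition can be seen on a one-flop toy model (our illustration, not the paper's formulation): gating the clock when the enable is low is legal precisely when the golden design's recirculating mux would have reloaded the flop with its current value anyway, so both models compute the same next state.

```python
# Toy legality check for clock gating: a single flop with enable `en`.
# Names and structure are illustrative, not taken from the paper.

def golden_next(q, en, d):
    # Golden circuit: clock always ticks; a recirculating mux feeds the flop.
    return (d & en) | (q & (1 - en))

def gated_next(q, en, d):
    # Clock-gated circuit: when en=0 the clock is blocked and q holds.
    return d if en else q

def gating_is_legal():
    # Gating is legal iff both models agree on every (q, en, d).
    return all(golden_next(q, en, d) == gated_next(q, en, d)
               for q in (0, 1) for en in (0, 1) for d in (0, 1))
```

The paper expresses this condition as LTL/PLTL properties over control signals so a model checker can discharge it on real designs, where the state space cannot be enumerated.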
- Published
- 2019
- Full Text
- View/download PDF
26. Equivalence Checking for Superconducting RSFQ Logic Circuits
- Author
-
Huang Junying, Zhimin Zhang, and Rongliang Fu
- Subjects
Structure (mathematical logic) ,Digital electronics ,Computer science ,business.industry ,Formal equivalence checking ,01 natural sciences ,Computer Science::Hardware Architecture ,Computer Science::Emerging Technologies ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Computer Science::Logic in Computer Science ,Rapid single flux quantum ,Satisfiability modulo theories ,Logic gate ,Component (UML) ,0103 physical sciences ,Arithmetic ,010306 general physics ,business ,Hardware_LOGICDESIGN ,Electronic circuit - Abstract
Equivalence checking is a key component of the verification methodology for digital circuit designs. In this paper, we propose an equivalence checking framework for superconducting rapid single-flux-quantum (RSFQ) logic circuits, which include acyclic circuits and bit-slice-based cyclic circuits. It consists of a structure checker and a logic checker. The structure checker checks whether the circuit meets the design rules of superconducting RSFQ logic; the logic checker checks whether two RSFQ gate-level circuits implement the same logic function. For the logic checker, we propose a logic equivalence checking method based on logic-cone partitioning: the circuit network is simplified layer by layer and iteratively partitioned into logic cones, each of which is verified by an SMT solver. The experimental results show the feasibility of our approach on superconducting RSFQ logic circuits.
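The per-cone check amounts to building a miter between two implementations of the same cone and searching for a distinguishing input. A minimal stand-in sketch (exhaustive enumeration replaces the SMT solver; the cone functions are invented for illustration):

```python
from itertools import product

def cone_a(a, b, c):
    # Reference cone: a 3-input XOR.
    return (a ^ b) ^ c

def cone_b(a, b, c):
    # Same function rebuilt from AND/OR/NOT gates.
    t = (a | b) & (1 - (a & b))          # XOR(a, b)
    return (t | c) & (1 - (t & c))       # XOR(t, c)

def miter_counterexample():
    # A miter asserts cone_a != cone_b; any satisfying input disproves
    # equivalence. Here we enumerate instead of calling an SMT solver.
    for a, b, c in product((0, 1), repeat=3):
        if cone_a(a, b, c) != cone_b(a, b, c):
            return (a, b, c)             # distinguishing input found
    return None                          # cones are equivalent
```

Partitioning into cones keeps each solver query small, which is the point of the layer-by-layer simplification described above.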
- Published
- 2021
- Full Text
- View/download PDF
27. EPEX
- Author
-
Lucas Klemmer and Daniel Große
- Subjects
Functional verification ,Programming language ,Computer science ,Formal equivalence checking ,Ranging ,02 engineering and technology ,computer.software_genre ,020202 computer hardware & architecture ,Instruction set ,Core (game theory) ,RISC-V ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Reference model ,computer ,Formal verification - Abstract
Verifying processors has been and still is a major challenge. Intensive research has therefore led to advanced verification solutions, ranging from ISS-based reference models and (cross-level) simulation down to formal verification at the RTL. During verification of the processor implementation at the Instruction Set Architecture (ISA) level, test stimuli, i.e., test programs, are needed. They are either created manually or with the aid of sophisticated test program generators. However, significant effort is required to produce thorough test programs. In this paper, we devise a novel approach for processor verification by Equivalent Program EXecution (EPEX). Our approach is based on a new form of equivalence checking: instead of comparing the architectural states of two models that execute the same program P, we derive a second, equivalent program P^ from P (with respect to a formal ISA model) and check that executing P and P^ produces equal architectural states on the same design. We show that EPEX can easily be used in a simulation-based verification environment and broadens existing tests automatically. In a RISC-V case study using different core configurations of the well-known VexRiscv core, we demonstrate the bug-finding capabilities of EPEX.
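The EPEX idea can be sketched on a toy two-instruction ISA (the ISA, programs, and interpreter below are invented for this sketch): derive an equivalent program P_hat from P, run both on the same machine model, and compare the resulting architectural states.

```python
# Toy register machine: 32-bit registers, two ops. A real EPEX flow derives
# P_hat from a formal ISA model; here the rewrite (x+x -> x<<1) is hand-picked.

def run(program, regs):
    regs = dict(regs)
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF
        elif op == "shl":
            regs[dst] = (regs[a] << b) & 0xFFFFFFFF
    return regs

P     = [("add", "r1", "r0", "r0")]   # r1 = r0 + r0
P_hat = [("shl", "r1", "r0", 1)]      # r1 = r0 << 1, equivalent wrt this ISA

def equivalent_on(values):
    # Same design (the interpreter), two programs: architectural states match.
    return all(run(P, {"r0": v, "r1": 0}) == run(P_hat, {"r0": v, "r1": 0})
               for v in values)
```

A divergence between the two runs on a real core would indicate a bug, since a correct implementation must give both programs the same architectural effect.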
- Published
- 2021
- Full Text
- View/download PDF
28. PEQCHECK: Localized and Context-aware Checking of Functional Equivalence
- Author
-
Marie-Christine Jakobs
- Subjects
Soundness ,Functional specification ,Programming language ,business.industry ,Computer science ,Formal equivalence checking ,Context (language use) ,computer.software_genre ,Software ,Code refactoring ,Code segment ,Code (cryptography) ,business ,computer - Abstract
A refactoring must preserve the program's functionality. However, not all refactorings are correct, so preservation of the functionality must be checked. Since programs are rarely formally specified, we use the original program as the functional specification and check whether the original and refactored programs are functionally equivalent. More concretely, our PEQCHECK technique follows a common approach and reduces equivalence checking to program verification. To increase efficiency, PEQCHECK generates several verification tasks: one per refactored code segment, rather than one per function as typically done in prior work. Additionally, PEQCHECK takes the context of the code segments into account: for example, only modified, live variables need to be equivalent, and read-only variables can be shared between the original and refactored code segments. We proved the soundness of our PEQCHECK technique and implemented it in a prototype tool. Our evaluation shows that the localized checking of PEQCHECK can indeed be beneficial.
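Localized checking can be illustrated as follows (our toy example, not from the paper): only the refactored segment is compared, and only its modified live variable must agree; the read-only inputs are shared between the two versions.

```python
from itertools import product

def segment_original(x, y):
    # Original segment: repeated addition. x and y are read-only context;
    # s is the only modified live variable that must match after refactoring.
    s = 0
    for _ in range(y):
        s += x
    return s

def segment_refactored(x, y):
    # Refactored segment: the loop replaced by a multiplication.
    return x * y

def segments_equivalent(domain):
    # Exhaustive check over a small domain stands in for program verification.
    return all(segment_original(x, y) == segment_refactored(x, y)
               for x, y in product(domain, repeat=2))
```

Checking just this segment, instead of every function that happens to call it, is what keeps the generated verification tasks small.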
- Published
- 2021
- Full Text
- View/download PDF
29. EqBench: A Dataset of Equivalent and Non-equivalent Program Pairs
- Author
-
Yi Li, Julia Rubin, and Sahar Badihi
- Subjects
Java ,Programming language ,business.industry ,Computer science ,String (computer science) ,Formal equivalence checking ,computer.software_genre ,Software ,Benchmark (computing) ,Programming constructs ,business ,computer ,Equivalence (measure theory) ,computer.programming_language - Abstract
Equivalence checking techniques help establish whether two versions of a program exhibit the same behavior. The majority of popular techniques for formally proving/refuting equivalence are evaluated on small and simplistic benchmarks, omitting "difficult" programming constructs, such as non-linear arithmetic, loops, floating-point arithmetic, and string and array manipulation. This hinders efficient evaluation of these techniques and the ability to establish their practical applicability in real scenarios. This paper addresses this gap by contributing EqBench – the largest and most comprehensive benchmark for equivalence checking analysis, which contains 147 equivalent and 125 non-equivalent cases, in both C and Java languages. We believe EqBench can facilitate a more realistic evaluation of equivalence checking techniques, assessing their individual strength and weaknesses. EqBench is publicly available at: https://osf.io/93s5b/.
- Published
- 2021
- Full Text
- View/download PDF
30. A Formal Approach to Identifying Hardware Trojans in Cryptographic Hardware
- Author
-
Naofumi Homma, Rei Ueno, and Akira Ito
- Subjects
business.industry ,Binary decision diagram ,Computer science ,Advanced Encryption Standard ,Formal equivalence checking ,Datapath ,Netlist ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,Elliptic curve cryptography ,business ,Encryption ,Formal methods ,Computer hardware - Abstract
This paper presents a new formal method for detecting and identifying hardware Trojans (HTs) inserted into the datapath of cryptographic hardware based on Galois-field arithmetic, such as that used in the Advanced Encryption Standard and elliptic curve cryptography. To detect HTs, our method first performs equivalence checking between the specification, given as Galois-field polynomials (or the reference circuit of the cryptographic hardware), and polynomials representing the input-output relations of a gate-level netlist. Our method exploits zero-suppressed binary decision diagrams (ZDDs) for efficient verification. Once an HT is found, the proposed method detects the trigger condition of the HT using the characteristics of ZDDs. It also identifies the location of the HT using a novel computer algebra method. Our experimental results show that the proposed method can verify netlists and identify HT trigger conditions and locations on a 233-bit multiplier commonly used in elliptic curve cryptography within 1.8 seconds. In addition, we show that if a reference circuit is given, the proposed method can detect a realistic HT inserted into entire Advanced Encryption Standard hardware, including control logic, in approximately three seconds.
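A scaled-down illustration of the idea (exhaustive comparison stands in for the ZDD-based polynomial reduction; the netlist and Trojan are ours): a gate-level GF(2^2) multiplier is checked against its specification, multiplication modulo x^2 + x + 1, and a rare-trigger Trojan shows up as a spec mismatch on exactly its trigger input.

```python
from itertools import product

def gf4_spec(a, b):
    # Specification: carry-less multiply of two 2-bit field elements,
    # then reduce x^2 -> x + 1 (modulus 0b111).
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:
        p ^= 0b111
    return p

def netlist(a1, a0, b1, b0):
    # Gate-level multiplier: over GF(2), + is XOR and * is AND.
    c0 = (a0 & b0) ^ (a1 & b1)
    c1 = (a1 & b0) ^ (a0 & b1) ^ (a1 & b1)
    return c1, c0

def trojan_netlist(a1, a0, b1, b0):
    # Same netlist with an inserted HT: an all-ones trigger flips c0.
    c1, c0 = netlist(a1, a0, b1, b0)
    trigger = a1 & a0 & b1 & b0
    return c1, c0 ^ trigger

def mismatch_inputs(impl):
    # Equivalence check of implementation vs. specification polynomial.
    bad = []
    for a, b in product(range(4), repeat=2):
        c1, c0 = impl((a >> 1) & 1, a & 1, (b >> 1) & 1, b & 1)
        if (c1 << 1) | c0 != gf4_spec(a, b):
            bad.append((a, b))
    return bad
```

The set of mismatching inputs directly exposes the trigger condition, which is what the ZDD characteristics recover symbolically for circuits far too large to enumerate.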
- Published
- 2021
- Full Text
- View/download PDF
31. Approximate Equivalence Checking of Noisy Quantum Circuits
- Author
-
Yuan Feng, Xin Hong, Xiangzhen Zhou, Mingsheng Ying, and Sanjiang Li
- Subjects
FOS: Computer and information sciences ,Quantum Physics ,Noise measurement ,Computer science ,Formal equivalence checking ,Quantum noise ,FOS: Physical sciences ,Approximation algorithm ,Quantum circuit ,Computer Science - Data Structures and Algorithms ,Data Structures and Algorithms (cs.DS) ,Electronic design automation ,Quantum Physics (quant-ph) ,Equivalence (measure theory) ,Algorithm ,Quantum - Abstract
We study the fundamental design automation problem of equivalence checking in the NISQ (Noisy Intermediate-Scale Quantum) computing realm, where quantum noise is inevitably present. The notion of approximate equivalence of (possibly noisy) quantum circuits is defined based on the Jamiolkowski fidelity, which measures the average distance between output states of two super-operators when the input is chosen at random. By employing tensor network contraction, we present two algorithms, aimed at situations where the number of noise sources varies, for computing the fidelity between an ideal quantum circuit and its noisy implementation. The effectiveness of our algorithms is demonstrated by experiments on benchmarks of real NISQ circuits. When compared with the state-of-the-art implementation incorporated in Qiskit, experimental results show that the proposed algorithms outperform it in both efficiency and scalability.
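For a one-qubit example the Jamiolkowski fidelity can be computed directly, with no tensor-network machinery. This sketch is our own construction, under two stated assumptions: the ideal circuit is a unitary U (so its Choi state is pure and the fidelity reduces to sum_k |Tr(U^dagger K_k)/d|^2 over the noisy channel's Kraus operators K_k), and the noise is a textbook depolarizing channel with probability p, for which the fidelity is 1 - 3p/4.

```python
import math

# 2x2 complex matrices as nested tuples; enough machinery for one qubit.

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def dagger(A):
    return tuple(tuple(A[j][i].conjugate() for j in range(2)) for i in range(2))

def trace(A):
    return A[0][0] + A[1][1]

def scale(s, A):
    return tuple(tuple(s * x for x in row) for row in A)

def jamiolkowski_fidelity(U, kraus, d=2):
    # Valid when the ideal circuit is unitary: F = sum_k |Tr(U^dag K_k)/d|^2.
    return sum(abs(trace(matmul(dagger(U), K)) / d) ** 2 for K in kraus)

H = tuple(tuple(x / math.sqrt(2) for x in row) for row in ((1, 1), (1, -1)))
X = ((0, 1), (1, 0))
Y = ((0, -1j), (1j, 0))
Z = ((1, 0), (0, -1))

p = 0.2  # depolarizing probability (illustrative value)
noisy_H = [scale(math.sqrt(1 - 3 * p / 4), H)] + [
    scale(math.sqrt(p / 4), matmul(P, H)) for P in (X, Y, Z)]
```

With p = 0.2 the fidelity evaluates to 1 - 3p/4 = 0.85; the paper's algorithms compute the same quantity for many-qubit circuits where the Choi matrices can no longer be written down explicitly.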
- Published
- 2021
32. A General Equivalence Checking Framework for Multivalued Logic
- Author
-
Chun-Yao Wang, Pei-Pei Chen, Sheng-Hsiu Wei, Chia-Chun Lin, Hsin-Ping Yen, and Yung-Chih Chen
- Subjects
010302 applied physics ,Theoretical computer science ,Computer science ,020208 electrical & electronic engineering ,Formal equivalence checking ,Process (computing) ,CPU time ,02 engineering and technology ,01 natural sciences ,Set (abstract data type) ,Development (topology) ,Application-specific integrated circuit ,Encoding (memory) ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Electronic design automation ,Hardware_LOGICDESIGN - Abstract
Logic equivalence checking is a critical task in the ASIC design flow. Owing to rapid developments in nanotechnology-based devices, efficient implementations of multivalued logic are becoming practical, and many synthesis algorithms for ternary logic have been proposed. In this paper, we present an equivalence checking framework for multivalued logic that exploits modern SAT solvers. Furthermore, a structural conflict-driven clause learning (SCDCL) technique is proposed to accelerate the SAT solving process; SCDCL deploys strategies that cut off search space for the SAT algorithms. The experimental results show that the proposed SCDCL technique reduces SAT-solver CPU time by 42% on average over a set of industrial benchmarks.
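Casting multivalued equivalence checking onto Boolean SAT solvers requires a Boolean encoding of each multivalued signal. A toy version of that step (the two-bit encoding and gate-level min() below are our illustration, not the paper's encoding): ternary AND is min on {0, 1, 2}, and the bit-level formula a SAT solver would see is checked against the direct ternary semantics.

```python
from itertools import product

# Two-bit encoding of ternary values: (hi, lo) with 00 -> 0, 01 -> 1, 10 -> 2.
ENC = {0: (0, 0), 1: (0, 1), 2: (1, 0)}
DEC = {v: k for k, v in ENC.items()}

def ternary_and(a, b):
    # Ternary AND is min over {0, 1, 2}.
    return min(a, b)

def encoded_and(a_hi, a_lo, b_hi, b_lo):
    # Gate-level min() on the encoding, expressed with AND/OR as a SAT
    # solver would see it.
    out_hi = a_hi & b_hi
    out_lo = (a_lo & b_lo) | (a_lo & b_hi) | (a_hi & b_lo)
    return out_hi, out_lo

def encodings_agree():
    return all(DEC[encoded_and(*ENC[a], *ENC[b])] == ternary_and(a, b)
               for a, b in product((0, 1, 2), repeat=2))
```

Once every multivalued gate has such a Boolean image, the equivalence problem becomes an ordinary miter that CDCL-style solvers, and the SCDCL variant above, can attack.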
- Published
- 2021
- Full Text
- View/download PDF
33. Model checking large design spaces: Theory, tools, and experiments
- Author
-
Rohit Dureja
- Subjects
Model checking ,Programming language ,Design space exploration ,Computer science ,Formal equivalence checking ,Formal methods ,computer.software_genre ,computer - Published
- 2021
- Full Text
- View/download PDF
34. CoCEC: An Automatic Combinational Circuit Equivalence Checker Based on the Interactive Theorem Prover
- Author
-
Farrukh Aslam Khan, Adi Alhudhaif, Abdelouahid Derhab, and Wilayat Khan
- Subjects
Model checking ,Combinational logic ,021110 strategic, defence & security studies ,Multidisciplinary ,General Computer Science ,Article Subject ,Programming language ,Computer science ,Formal equivalence checking ,Hardware description language ,Proof assistant ,0211 other engineering and technologies ,QA75.5-76.95 ,02 engineering and technology ,computer.software_genre ,Automated theorem proving ,Electronic computers. Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,State (computer science) ,computer ,Equivalence (measure theory) ,computer.programming_language - Abstract
Checking the equivalence of two Boolean functions, or of combinational circuits modeled as Boolean functions, is often required when reliable and correct hardware components are needed. The most common approaches to equivalence checking are based on simulation and model checking, which are constrained by the well-known memory and state explosion problems. Furthermore, such tools are often not user-friendly, making it tedious to check the equivalence of large formulas or circuits. An alternative is to use mathematical tools, called interactive theorem provers, to prove the equivalence of two circuits; however, this requires human effort and expertise to write multiple output functions and carry out an interactive proof of their equivalence. In this paper, we (1) define two simple gate-level hardware description languages, one formal and the other informal, (2) design and develop a formal automatic combinational circuit equivalence checker (CoCEC) tool, and (3) test and evaluate our tool. CoCEC is based on the human-assisted theorem prover Coq, yet it checks the equivalence of circuit descriptions purely automatically through a human-friendly user interface. It either returns a machine-readable proof (term) of the circuits' equivalence or a counterexample of their inequivalence. The interface enables users to enter or load two circuit descriptions written in an easy and natural style. It automatically proves, in a few seconds, the equivalence of circuits with as many as 45 variables (3.5 × 10^13 states). CoCEC has a mathematical foundation; it is reliable, quick, and easy to use. The tool is intended for digital logic circuit designers, logicians, students, and faculty in digital logic design courses.
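The interface contract described above, either a proof of equivalence or a counterexample of inequivalence, can be mimicked in a few lines (our stand-in: exhaustive enumeration replaces CoCEC's machine-checked Coq proof terms, and circuits are plain Python functions over named Boolean inputs):

```python
from itertools import product

def check_equivalence(f, g, names):
    # Returns None if f and g agree on every assignment, otherwise the
    # first counterexample assignment found.
    for bits in product((0, 1), repeat=len(names)):
        env = dict(zip(names, bits))
        if f(**env) != g(**env):
            return env                 # counterexample of inequivalence
    return None                        # circuits are equivalent

xor_a = lambda x, y: x ^ y
xor_b = lambda x, y: (x | y) & (1 - (x & y))   # XOR from AND/OR/NOT
bad   = lambda x, y: x | y                      # not equivalent to XOR
```

Enumeration is exponential in the variable count, which is exactly why CoCEC's symbolic, proof-producing approach matters at 45 variables.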
- Published
- 2021
- Full Text
- View/download PDF
35. Program Equivalence Checking for the Facilitation of Quantum Offloading
- Author
-
Jukka K. Nurminen, Jon Speer, Department of Computer Science, and Empirical Software Engineering research group
- Subjects
Computer science ,Programming language ,Semantics (computer science) ,Shor's algorithm ,Formal equivalence checking ,Optimizing compiler ,0102 computer and information sciences ,02 engineering and technology ,computer.software_genre ,113 Computer and information sciences ,01 natural sciences ,Program Equivalence Checking ,Semantic equivalence ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science::Programming Languages ,020201 artificial intelligence & image processing ,Quantum algorithm ,Quantum Computing ,Shor's Algorithm ,Programmer ,computer ,Quantum computer ,Quantum Offloading - Abstract
Computational offloading involves the transfer of computational tasks to a separate device. We apply this concept to quantum computing, whereby particular algorithms (i.e., "quantum algorithms") are automatically recognized and executed on a quantum computer. We propose a method that utilizes program equivalence checking to discern between code suited for execution on a conventional computer and on a quantum computer. This process involves comparing a quantum algorithm's implementation with code written by a programmer; semantic equivalence between the two implies that the programmer's code should be executed on a quantum computer instead of a conventional computer. Using a novel compiler optimization verification tool named CORK, we test for semantic equivalence between a portion of Shor's algorithm (the "prototype") and various modified versions of this code (representing arbitrary code written by a programmer). Some of the modified versions are intended to be semantically equivalent to the prototype, while others are semantically inequivalent. Our approach correctly determines semantic equivalence or inequivalence in a majority of cases.
- Published
- 2021
36. Detecting call indirection obfuscation through equivalence checking in android environment
- Author
-
Fabio Martinelli, Tiziano Marinaro, Antonella Santone, and Francesco Mercaldo
- Subjects
Indirection ,Computer science ,Equivalence checking ,Obfuscation ,Formal methods ,Formal equivalence checking ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,computer.software_genre ,Computer security ,Malware ,Signature (logic) ,Android ,Security ,General Earth and Planetary Sciences ,Android application ,Android (operating system) ,computer ,General Environmental Science - Abstract
Current antimalware relies on so-called signature-based detection, which requires that a threat be widespread before the antimalware can recognize it. Moreover, even when a malware sample is correctly recognized, applying even trivial obfuscation techniques makes it easy to bypass the antimalware's detection mechanism. In this paper we propose a method to detect whether an Android application is obfuscated with the call-indirection obfuscation technique by exploiting formal equivalence checking. In the experimental analysis we show the effectiveness of the proposed approach for detecting call-indirection obfuscation, using two obfuscation tools.
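Call indirection itself is easy to picture with a small behavioral model (function names are invented for this sketch, and Python stands in for Android bytecode): the obfuscator reroutes a direct call through a freshly added wrapper method, perturbing the signature-relevant call graph while leaving behavior, and hence equivalence-checking results, unchanged.

```python
def payload(x):
    # The method whose call the obfuscator wants to hide.
    return x * 31 + 7

def original(x):
    return payload(x)          # direct call, as in the unobfuscated app

def _wrapper(x):               # method inserted by the obfuscator
    return payload(x)

def obfuscated(x):
    return _wrapper(x)         # call routed through the wrapper

def behaviorally_equivalent(samples):
    # Equivalence checking sees through the indirection: both versions
    # compute the same function.
    return all(original(v) == obfuscated(v) for v in samples)
```

This is precisely why an equivalence-based detector succeeds where signature matching fails: the two versions have different signatures but provably identical behavior.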
- Published
- 2021
37. Scaling client-specific equivalence checking via impact boundary search
- Author
-
Federico Mora, Vincent Hui, Nick Feng, and Marsha Chechik
- Subjects
Set (abstract data type) ,Theoretical computer science ,Scale (ratio) ,Computational complexity theory ,Computer science ,Formal equivalence checking ,Scalability ,Perspective (graphical) ,0202 electrical engineering, electronic engineering, information engineering ,Boundary (topology) ,020207 software engineering ,02 engineering and technology ,Scaling - Abstract
Client-specific equivalence checking (CSEC) is a technique proposed previously to perform impact analysis of changes to downstream components (libraries) from the perspective of an unchanged system (client). Existing analysis techniques, whether general (regression verification, equivalence checking) or special-purpose, when applied to CSEC either require users to provide specifications or do not scale. We propose a novel solution to the CSEC problem, called 2clever, that searches the control flow of a program for impact boundaries. We evaluate a prototype implementation of 2clever on a comprehensive set of benchmarks and conclude that it performs well compared to the state of the art.
- Published
- 2020
- Full Text
- View/download PDF
38. Validating GCSE in the scheduling of high-level synthesis
- Author
-
Long Yu, Jie Cheng, Yun Kang, Haitao Yang, Yongyang Hu, and Jian Hu
- Subjects
Computer science ,Formal equivalence checking ,Scheduling (production processes) ,02 engineering and technology ,Parallel computing ,020202 computer hardware & architecture ,Finite state machine with datapath ,High-level synthesis ,VHDL ,0202 electrical engineering, electronic engineering, information engineering ,Verilog ,Common subexpression elimination ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,computer ,Data-flow analysis ,computer.programming_language - Abstract
High-level synthesis (HLS) compiles an algorithmic description (C or C++) into a digital hardware implementation (VHDL or Verilog) through a sequence of transformations. However, this complex compilation process may introduce errors into the produced register-transfer-level (RTL) implementation. Global common subexpression elimination (GCSE), a commonly used code-motion technique in the scheduling step of HLS, is an error-prone and complex process that needs to be validated. In this paper, we propose an equivalence checking method that validates GCSE when non-common variables are used in the rest of the code, by enhancing the path equivalence criteria. The source and target programs are modeled as Finite State Machines with Datapath (FSMDs), which are essentially Control and Data Flow Graphs (CDFGs). The experimental results show that our method can indeed validate GCSE with non-common variables used in the rest of the code, a case not handled in existing work.
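The transformation being validated can be shown on a tiny source/target pair (our example, with illustrative variable names): GCSE hoists the shared a+b into a temporary that is then reused later in the code, and the checker confirms the transformed program matches the source on every input.

```python
from itertools import product

def source(a, b, c):
    # Before GCSE: a+b is computed twice.
    x = (a + b) * c
    y = (a + b) + c
    return x, y

def after_gcse(a, b, c):
    # After GCSE: the common subexpression is computed once into t,
    # and t is reused in the rest of the code.
    t = a + b
    x = t * c
    y = t + c
    return x, y

def transformation_valid(domain):
    # Exhaustive comparison over a small domain stands in for the
    # FSMD-based path equivalence check.
    return all(source(a, b, c) == after_gcse(a, b, c)
               for a, b, c in product(domain, repeat=3))
```

In the paper's setting the two versions are FSMDs rather than straight-line code, so the equivalence must be established path by path rather than by enumeration.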
- Published
- 2020
- Full Text
- View/download PDF
39. Formal Verification of Non-Functional Strategies of System-Level Power Management Architecture in Modern Processors
- Author
-
Reza Sharafinejad, Bijan Alizadeh, and Tooraj Nikoubin
- Subjects
Power management ,Consistency (database systems) ,Software bug ,Control theory ,Computer science ,Formal equivalence checking ,Formal verification ,Power domains ,Reliability engineering ,Power (physics) - Abstract
As the complexity of low-power designs grows, simultaneously verifying both the design functionality and the consistency of power management controllers with the low-level power intent is a big challenge. This paper presents a method that addresses this problem for complicated processors with tens of power domains. To ensure that the functionality of the processor does not change after power management controllers are inserted, an efficient equivalence check is performed between the low-power implementation model and its specification model. However, this kind of verification is not sufficient because of the non-functional behavior of system-level power management strategies. Therefore, the proposed method also checks the consistency between the power management unit (PMU) and the Unified Power Format (UPF) description using high-level power rules extracted from the UPF. The experimental results show that the proposed method helps designers not only create a correct high-level power management controller but also identify low-power functional and non-functional bugs in their designs.
- Published
- 2020
- Full Text
- View/download PDF
40. ARDiff: scaling program equivalence checking via iterative abstraction and refinement of common code
- Author
-
Sahar Badihi, Yi Li, Julia Rubin, and Faridah Akinotcho
- Subjects
Correctness ,Theoretical computer science ,Computer science ,Formal equivalence checking ,020207 software engineering ,02 engineering and technology ,Symbolic execution ,computer.software_genre ,Set (abstract data type) ,Program analysis ,Code refactoring ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,computer ,Equivalence (measure theory) - Abstract
Equivalence checking techniques help establish whether two versions of a program exhibit the same behavior. The majority of popular techniques for formally proving/refuting equivalence relies on symbolic execution – a static analysis approach that reasons about program behaviors in terms of symbolic input variables. Yet, symbolic execution is difficult to scale in practice due to complex programming constructs, such as loops and non-linear arithmetic. This paper proposes an approach, named ARDiff, for improving the scalability of symbolic-execution-based equivalence checking techniques when comparing syntactically-similar versions of a program, e.g., for verifying the correctness of code upgrades and refactoring. Our approach relies on a set of novel heuristics to determine which parts of the versions’ common code can be effectively pruned during the analysis, reducing the analysis complexity without sacrificing its effectiveness. Furthermore, we devise a new equivalence checking benchmark, extending existing benchmarks with a set of real-life methods containing complex math functions and loops. We evaluate the effectiveness and efficiency of ARDiff on this benchmark and show that it outperforms existing method-level equivalence checking techniques by solving 86% of all equivalent and 55% of non-equivalent cases, compared with 47% to 69% for equivalent and 38% to 52% for non-equivalent cases in related work.
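The pruning intuition behind ARDiff can be sketched on a toy scale. The functions below are hypothetical, and exhaustive checking over a small domain stands in for the symbolic execution the paper actually uses:

```python
# Toy sketch of the idea behind ARDiff-style checking (not the tool
# itself): when two versions share syntactically identical code, only
# the differing fragments need to be compared. Exhaustive evaluation
# over a small domain stands in for symbolic execution here.

def v1(x):
    y = x * x + 1          # common code
    return y + 2 * x       # differing fragment, version 1

def v2(x):
    y = x * x + 1          # common code (identical, can be pruned)
    return y + x + x       # differing fragment, version 2 (refactored)

def equivalent(f, g, domain):
    """Check input-output equivalence of f and g over a finite domain."""
    return all(f(x) == g(x) for x in domain)

# Pruning intuition: since `y` is computed identically in both versions,
# it suffices to compare the differing expressions `y + 2*x` vs `y + x + x`.
```

On the sampled domain the refactored version is confirmed equivalent; ARDiff's heuristics decide which common fragments can be safely abstracted away during the symbolic analysis.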
- Published
- 2020
- Full Text
- View/download PDF
41. ICCAD-2020 CAD contest in X-value equivalence checking and benchmark suite
- Author
-
Kei-Yong Khoo, Chi-An Wu, Chih-Jen Hsu, and Ching-Yi Huang
- Subjects
Heuristic ,Computer science ,Ask price ,Programming language ,Suite ,Formal equivalence checking ,Benchmark (computing) ,CAD ,Heuristics ,CONTEST ,computer.software_genre ,computer - Abstract
Equivalence checking is the practical industrial solution for signing off the digital functionality of large-scale circuits. However, when a design contains implicit and explicit X-values, the complexity of equivalence checking increases, and heuristics used in binary-value equivalence checking may not be applicable to X-value equivalence checking. The goal of the ICCAD 2020 CAD contest is to solicit good algorithms and heuristics for solving X-value equivalence checking. In this contest, we provide benchmark suites for contestants to evaluate their programs. We hope the contest results will improve industrial applications and attract more research interest.
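Why X-values break binary-value heuristics can be shown with a tiny three-valued evaluator. This is an illustrative sketch, not the contest's reference semantics:

```python
# Toy three-valued (0/1/X) gate evaluation illustrating why X-values
# complicate equivalence checking. 'X' denotes an unknown value.

def and3(a, b):
    if a == 0 or b == 0:
        return 0            # a controlling 0 masks any X
    if a == 1 and b == 1:
        return 1
    return 'X'

def not3(a):
    return 'X' if a == 'X' else 1 - a

# Two netlists that agree under binary values but differ under X:
def f_ref(a):
    return and3(a, not3(a))   # a AND NOT a: 0 for binary a, X if a is X

def f_opt(a):
    return 0                  # "optimized" to constant 0
```

For binary inputs both netlists return 0, so a binary-value checker would call them equivalent; under X-value semantics `f_ref('X')` is `'X'` while `f_opt('X')` is `0`, so they are not X-equivalent.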
- Published
- 2020
- Full Text
- View/download PDF
42. Multi-thread Simulation-based Equivalence Checking between SLM and RTL
- Author
-
Yun Kang, Yongyang Hu, Haitao Yang, Jian Hu, Guilin Chen, Guanwu Wang, and Kang Wang
- Subjects
System level modeling ,Computer engineering ,Computer science ,Formal equivalence checking ,Thread (computing) ,Simulation based ,Register-transfer level - Abstract
The ever-increasing complexity of digital systems is addressed by System-Level Modeling (SLM). The implementation starts from a high-level SLM description and, following a top-down refinement approach, is refined towards a corresponding register-transfer level (RTL) model. However, the refinement process is very complex and error-prone. Therefore, it is crucial to guarantee functional consistency between the SLM and RTL models. This paper presents a multi-thread simulation-based method to check the equivalence between SLM and RTL. The method utilizes multi-thread simulation to improve the efficiency of traditional simulation-based equivalence checking. The promising experimental results demonstrate the effectiveness and efficiency of the proposed method.
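The multi-thread scheme can be sketched as follows. The two models, the stimulus set, and the chunking strategy are illustrative assumptions; workers split the stimuli and each compares the high-level and RTL-like models' outputs.

```python
# Sketch of multi-threaded simulation-based equivalence checking between
# a high-level model and an RTL-like model (models are illustrative).
from concurrent.futures import ThreadPoolExecutor

def slm_model(x):
    return (x * 3 + 7) & 0xFF          # high-level "golden" model

def rtl_model(x):
    return ((x << 1) + x + 7) & 0xFF   # refined model: x*3 as shift-and-add

def check_chunk(stimuli):
    """Return the stimuli on which the two models disagree."""
    return [x for x in stimuli if slm_model(x) != rtl_model(x)]

def parallel_equiv(stimuli, workers=4):
    # Interleaved chunking spreads the stimuli evenly across workers.
    chunks = [stimuli[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [m for part in pool.map(check_chunk, chunks) for m in part]

mismatches = parallel_equiv(list(range(256)))
```

An empty `mismatches` list means no counterexample was found in the stimulus set. (For CPU-bound pure-Python models a process pool would parallelize better; threads simply mirror the multi-thread setup described above.)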
- Published
- 2020
- Full Text
- View/download PDF
43. Formal Verification of Arithmetic RTL: Translating Verilog to C++ to ACL2
- Author
-
David M. Russinoff
- Subjects
FOS: Computer and information sciences ,Computer Science - Symbolic Computation ,Computer Science - Logic in Computer Science ,Sequential logic ,Correctness ,Computer science ,Proof assistant ,Formal equivalence checking ,ACL2 ,Symbolic Computation (cs.SC) ,Logic in Computer Science (cs.LO) ,Computer Science::Hardware Architecture ,Verilog ,Arithmetic ,computer ,Formal verification ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,computer.programming_language ,Integer (computer science) - Abstract
We present a methodology for formal verification of arithmetic RTL designs that combines sequential logic equivalence checking with interactive theorem proving. An intermediate model of a Verilog module is hand-coded in Restricted Algorithmic C (RAC), a primitive subset of C augmented by the integer and fixed-point register class templates of Algorithmic C. The model is designed to be as abstract and compact as possible, but sufficiently faithful to the RTL to allow efficient equivalence checking with a commercial tool. It is then automatically translated to the logic of ACL2, enabling a mechanically checked proof of correctness with respect to a formal architectural specification. In this paper, we describe the RAC language, the translation process, and some techniques that facilitate formal analysis of the resulting ACL2 code., In Proceedings ACL2 2020, arXiv:2009.12521
- Published
- 2020
44. Formal Verification of GCSE in the Scheduling of High-level Synthesis: Work-in-Progress
- Author
-
Haitao Yang, Jian Hu, Yun Kang, Wentao Wang, Long Yu, Yongyang Hu, and Jie Cheng
- Subjects
010302 applied physics ,Computer science ,Programming language ,Formal equivalence checking ,02 engineering and technology ,Work in process ,computer.software_genre ,01 natural sciences ,020202 computer hardware & architecture ,Scheduling (computing) ,SystemC ,High-level synthesis ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Common subexpression elimination ,Compiler ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,Formal verification ,computer ,computer.programming_language - Abstract
High-level synthesis entails applying a sequence of transformations to compile a high-level description of a hardware design (e.g., in C/C++/SystemC) into a register-transfer level (RTL) implementation. However, errors may be introduced into the RTL implementation by the high-level synthesis compiler due to the complex and error-prone compilation process. Global common subexpression elimination (GCSE) is a commonly used code motion technique in the scheduling step of high-level synthesis. In this paper, we present an equivalence checking method to verify GCSE in the scheduling of high-level synthesis by enhancing the path-equivalence criteria. The initial experimental results demonstrate that our method can indeed verify GCSE, which has not been properly addressed in the past.
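What GCSE does, and what an equivalence check over it must establish, can be shown with a toy before/after pair. Sampled evaluation stands in for the paper's path-equivalence criteria:

```python
# Toy illustration of global common subexpression elimination (GCSE):
# the scheduled version computes a shared subexpression once and reuses
# it. Equivalence is confirmed here by sampled evaluation; the paper
# uses enhanced path-equivalence criteria instead.

def original(a, b, c):
    x = (a + b) * c
    y = (a + b) * c + a     # (a + b) * c recomputed
    return x, y

def after_gcse(a, b, c):
    t = (a + b) * c         # common subexpression hoisted and reused
    return t, t + a

def equivalent(dom):
    """Compare both versions over the cube dom x dom x dom."""
    return all(original(a, b, c) == after_gcse(a, b, c)
               for a in dom for b in dom for c in dom)
```

The transformed schedule evaluates `(a + b) * c` once instead of twice, and `equivalent(range(-5, 6))` confirms the input-output behavior is preserved.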
- Published
- 2020
- Full Text
- View/download PDF
45. Verifying Results of the IBM Qiskit Quantum Circuit Compilation Flow
- Author
-
Lukas Burgholzer, Rudy Raymond, and Robert Wille
- Subjects
Quantum circuit ,Quantum Physics ,Computer engineering ,Computer science ,Formal equivalence checking ,FOS: Physical sciences ,Quantum algorithm ,Equivalence (formal languages) ,IBM ,Quantum Physics (quant-ph) ,Quantum ,Electronic circuit ,Quantum computer - Abstract
Realizing a conceptual quantum algorithm on an actual physical device necessitates the algorithm's quantum circuit description to undergo certain transformations in order to adhere to all constraints imposed by the hardware. In this regard, the individual high-level circuit components are first synthesized to the supported low-level gate-set of the quantum computer, before being mapped to the target's architecture, utilizing several optimizations in order to improve the compilation result. Specialized tools for this complex task exist, e.g., IBM's Qiskit, Google's Cirq, Microsoft's QDK, or Rigetti's Forest. However, to date, the circuits resulting from these tools are hardly verified, which is mainly due to the immense complexity of checking if two quantum circuits indeed realize the same functionality. In this paper, we propose an efficient scheme for quantum circuit equivalence checking---specialized for verifying results of the IBM Qiskit quantum circuit compilation flow. To this end, we combine characteristics unique to quantum computing, e.g., its inherent reversibility, and certain knowledge about the compilation flow into a dedicated equivalence checking strategy. Experimental evaluations confirm that the proposed scheme allows even large circuit instances with tens of thousands of operations to be verified within seconds or even less, whereas state-of-the-art techniques frequently time-out or require substantially more runtime. A corresponding open source implementation of the proposed method is publicly available at https://github.com/iic-jku/qcec., 10 pages, to be published at International Conference on Quantum Computing and Engineering (QCE20)
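The reversibility argument the abstract alludes to can be sketched in a few lines for single-qubit circuits: two circuits U1 and U2 are equivalent up to global phase iff U2†·U1 is a phase multiple of the identity. This pure-Python sketch is not the QCEC implementation, which uses decision diagrams to scale:

```python
# Minimal sketch of the reversibility idea behind quantum equivalence
# checking: U1 and U2 are equivalent up to global phase iff
# U2-dagger * U1 = e^{i*phi} * I. Single-qubit gates as 2x2 matrices.

H = [[1 / 2**0.5, 1 / 2**0.5],
     [1 / 2**0.5, -1 / 2**0.5]]          # Hadamard
Z = [[1, 0], [0, -1]]                     # Pauli-Z
X = [[0, 1], [1, 0]]                      # Pauli-X

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def circuit(gates):
    """Multiply up the unitary of a gate list, applied left to right."""
    U = [[1, 0], [0, 1]]
    for g in gates:
        U = matmul(g, U)
    return U

def equivalent_up_to_phase(U1, U2, tol=1e-9):
    P = matmul(dagger(U2), U1)            # should be e^{i*phi} * I
    phase = P[0][0]
    if abs(abs(phase) - 1) > tol:
        return False
    return all(abs(P[i][j] - (phase if i == j else 0)) < tol
               for i in range(2) for j in range(2))
```

The textbook identity H·Z·H = X makes `circuit([H, Z, H])` and `circuit([X])` equivalent, while `circuit([H])` and `circuit([X])` are correctly rejected.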
- Published
- 2020
46. Formal Verification of Completion-Completeness for NCL Circuits
- Author
-
Sudarshan K. Srinivasan, Scott C. Smith, and Son N. Le
- Subjects
050101 languages & linguistics ,Computer science ,Programming language ,Pipeline (computing) ,05 social sciences ,Formal equivalence checking ,02 engineering and technology ,Formal methods ,computer.software_genre ,Completeness (logic) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,computer ,Formal verification ,Hardware_LOGICDESIGN ,Electronic circuit - Abstract
Ensuring completion-completeness is required for delay-insensitivity when utilizing bit-wise completion to pipeline NCL circuits composed of input-incomplete logic functions. Hence, this work presents an automated formal method to detect NCL circuits that are not completion-complete.
- Published
- 2020
- Full Text
- View/download PDF
47. Exploiting Dual-Rail Register Invariants for Equivalence Verification of NCL Circuits
- Author
-
Son N. Le, Scott C. Smith, and Sudarshan K. Srinivasan
- Subjects
Computer science ,Formal equivalence checking ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Hardware_INTEGRATEDCIRCUITS ,Invariant (mathematics) ,Formal methods ,Formal verification ,Algorithm ,Equivalence (measure theory) ,Electronic circuit - Abstract
Equivalence checking is one of the most scalable and useful verification techniques in industry. NULL Convention Logic (NCL) circuits utilize dual-rail signals (i.e., two wires to represent one bit of DATA), where the wires are inverses of each other during a DATA wavefront. In this paper, a technique that exploits this invariant at NCL register boundaries is proposed to improve the efficiency of equivalence verification of NCL circuits.
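The dual-rail invariant itself is easy to state in code. The encoding below (rail pair per bit, 01/10 as DATA, 00 as NULL, 11 illegal) is an illustrative sketch of the invariant the paper exploits, not the authors' verification flow:

```python
# Sketch of the NCL dual-rail register invariant: each bit uses two
# rails; during a DATA wavefront the rails are inverses of each other
# (01 or 10), 00 is NULL, and 11 must never occur.

def is_data(rail0, rail1):
    return rail0 != rail1            # exactly one rail asserted

def is_null(rail0, rail1):
    return rail0 == 0 and rail1 == 0

def register_invariant_holds(word):
    """Every bit is either a valid DATA value or NULL (never 11)."""
    return all(is_data(r0, r1) or is_null(r0, r1) for r0, r1 in word)

data_word = [(0, 1), (1, 0), (0, 0)]     # two DATA bits and one NULL bit
illegal_word = [(1, 1), (0, 1)]          # 11 violates the invariant
```

Asserting this invariant at register boundaries lets an equivalence checker prune the unreachable 11 rail combinations, which is the efficiency gain the paper reports.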
- Published
- 2020
- Full Text
- View/download PDF
48. Equivalence Checking Methods for Analog Circuits Using Continuous Reachable Sets
- Author
-
Markus Olbrich, Malgorzata Rechmal-Lesse, Niklas Kochdumper, Lars Hedrich, and Ahmad Tarraf
- Subjects
0209 industrial biotechnology ,Analogue electronics ,Computer science ,Circuit design ,Formal equivalence checking ,02 engineering and technology ,020202 computer hardware & architecture ,Domain (software engineering) ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Trajectory ,Netlist ,Hardware design languages ,Algorithm ,Abstraction (linguistics) - Abstract
In this paper, we propose a new methodology for equivalence checking in the AMS domain. On top of comparing two existing approaches, we define a new methodology using continuous reachable sets. The approaches are illustrated using two existing abstract modeling techniques. Moreover, the generated models are compared against a conformant model that captures the measured system behavior of the real circuit. These modeling methodologies yield hybrid automata (HAs), which, along with the netlist, are compared using different equivalence checking methods, demonstrating verification techniques at different abstraction levels. Finally, we discuss the methodologies with respect to efficient and safe circuit design.
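A reachable-set comparison can be sketched on a one-dimensional example. The circuit, parameters, and the crude interval comparison below are illustrative assumptions, far simpler than the paper's hybrid-automaton machinery: an interval of initial voltages is propagated through a discretized RC discharge for two models, and the resulting reachable intervals are compared.

```python
# Toy interval reachability sketch for AMS equivalence checking:
# propagate an interval of initial voltages through a forward-Euler
# discretization of v' = -v/(R*C) and compare the reachable sets of
# two models. Parameters are illustrative.

def reach(v_lo, v_hi, rc, dt=1e-4, steps=100):
    """Forward-Euler interval propagation of v' = -v/rc."""
    for _ in range(steps):
        v_lo = v_lo * (1 - dt / rc)
        v_hi = v_hi * (1 - dt / rc)
    return (v_lo, v_hi)

def intervals_match(a, b, tol=1e-3):
    """Crude set comparison: reachable intervals agree within tol."""
    return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

spec = reach(0.9, 1.1, rc=1e-2)        # specification model, R*C = 10 ms
impl = reach(0.9, 1.1, rc=1e-2)        # implementation with matching R*C
impl_bad = reach(0.9, 1.1, rc=2e-2)    # mismatched time constant
```

A matching implementation yields the same reachable interval as the specification, while the mismatched time constant produces a visibly different set, the kind of behavioral divergence a reachability-based equivalence check detects.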
- Published
- 2020
- Full Text
- View/download PDF
49. The Power of Simulation for Equivalence Checking in Quantum Computing
- Author
-
Robert Wille and Lukas Burgholzer
- Subjects
Theoretical computer science ,Computer science ,Computation ,Formal equivalence checking ,Design flow ,0202 electrical engineering, electronic engineering, information engineering ,02 engineering and technology ,Representation (mathematics) ,Equivalence (measure theory) ,Realization (systems) ,Quantum ,020202 computer hardware & architecture ,Quantum computer - Abstract
The rapid rate of progress in the physical realization of quantum computers sparked the development of elaborate design flows for quantum computations on such devices. Each stage of these flows comes with its own representation of the intended functionality. Ensuring that each design step preserves this intended functionality is of utmost importance. However, existing solutions for equivalence checking of quantum computations heavily struggle with the complexity of the underlying problem and, thus, no conclusions on the equivalence may be reached with reasonable effort in many cases. In this work, we uncover the power of simulation for equivalence checking in quantum computing. We show that, in contrast to classical computing, it is in general not necessary to compare the complete representation of the respective computations. Even small errors frequently affect the entire representation and, thus, can be detected within a couple of simulations. The resulting equivalence checking flow substantially improves upon the state of the art by drastically accelerating the detection of errors or providing a highly probable estimate of the operations’ equivalence.
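The key observation, that a small error tends to corrupt the outputs for many inputs, can be demonstrated with a toy reversible-circuit stand-in for full quantum simulation. The circuits and the injected error below are illustrative assumptions:

```python
# Toy stand-in for the paper's observation: a single miswired gate
# changes the circuit's behavior on most basis states, so a handful of
# random simulations almost surely exposes the error, without ever
# building the full unitary.

def cnot(state, ctrl, tgt):
    """Apply a CNOT to a computational basis state encoded as an int."""
    if (state >> ctrl) & 1:
        state ^= 1 << tgt
    return state

def run(gates, state):
    for ctrl, tgt in gates:
        state = cnot(state, ctrl, tgt)
    return state

n = 4
good = [(0, 1), (1, 2), (2, 3)]          # intended CNOT cascade
bad = [(0, 1), (2, 1), (2, 3)]           # one gate miswired

# Exhaustive count: how many of the 16 basis states expose the error?
mismatches = [s for s in range(2 ** n)
              if run(good, s) != run(bad, s)]
```

Here 12 of the 16 basis states already distinguish the two circuits, so even a couple of randomly chosen simulation runs would refute equivalence with high probability, which is precisely the leverage the paper exploits.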
- Published
- 2020
- Full Text
- View/download PDF
50. The Last Mile: High-Assurance and High-Speed Cryptographic Implementations
- Author
-
Tiago Oliveira, Adrien Koutsos, José B. Almeida, Pierre-Yves Strub, Vincent Laporte, Manuel Barbosa, Gilles Barthe, Benjamin Grégoire, Institute for Systems and Computer Engineering, Technology and Science [Porto] (INESC TEC), Faculdade de Ciências da Universidade do Porto (FCUP), Universidade do Porto, Institute IMDEA Software [Madrid], Max Planck Institute for Security and Privacy [Bochum] (MPI Security and Privacy), Sûreté du logiciel et Preuves Mathématiques Formalisées (STAMP), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Laboratoire Spécification et Vérification (LSV), Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Ecole Normale Supérieure Paris-Saclay (ENS Paris Saclay), Proof techniques for security protocols (PESTO), Inria Nancy - Grand Est, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Department of Formal Methods (LORIA - FM), Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS), Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS), Universidade do Porto = University of Porto, and Universidade do Minho
- Subjects
FOS: Computer and information sciences ,Correctness ,Computer Science - Cryptography and Security ,Computer science ,Cryptography ,02 engineering and technology ,computer.software_genre ,[INFO.INFO-CR]Computer Science [cs]/Cryptography and Security [cs.CR] ,0202 electrical engineering, electronic engineering, information engineering ,Implementation ,Equivalence (measure theory) ,computer.programming_language ,Science & Technology ,[INFO.INFO-PL]Computer Science [cs]/Programming Languages [cs.PL] ,Assembly language ,Programming language ,business.industry ,Proof assistant ,Formal equivalence checking ,020207 software engineering ,Automated theorem proving ,020201 artificial intelligence & image processing ,Compiler ,business ,Low-level programming language ,computer ,Cryptography and Security (cs.CR) ,Engenharia e Tecnologia::Engenharia Eletrotécnica, Eletrónica e Informática - Abstract
We develop a new approach for building cryptographic implementations. Our approach goes the last mile and delivers assembly code that is provably functionally correct, protected against side-channels, and as efficient as handwritten assembly. We illustrate our approach using ChaCha20Poly1305, one of the two ciphersuites recommended in TLS 1.3, and deliver formally verified vectorized implementations which outperform the fastest non-verified code.We realize our approach by combining the Jasmin framework, which offers in a single language features of high-level and low-level programming, and the EasyCrypt proof assistant, which offers a versatile verification infrastructure that supports proofs of functional correctness and equivalence checking. Neither of these tools had been used for functional correctness before. Taken together, these infrastructures empower programmers to develop efficient and verified implementations by "game hopping", starting from reference implementations that are proved functionally correct against a specification, and gradually introducing program optimizations that are proved correct by equivalence checking.We also make several contributions of independent interest, including a new and extensible verified compiler for Jasmin, with a richer memory model and support for vectorized instructions, and a new embedding of Jasmin in EasyCrypt., This work is partially supported by project ONR N00014-19-1-2292. Manuel Barbosa was supported by grant SFRH/BSAB/143018/2018 awarded by FCT. This work was partially funded by national funds via FCT in the context of project PTDC/CCI-INF/31698/2017.
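The "game hopping" idea, proving each optimization step equivalent to the reference it replaces, can be illustrated on a toy scale. The rotation function below is a hypothetical stand-in (32-bit rotations of this kind appear in ChaCha20), and exhaustive testing over sampled inputs stands in for the EasyCrypt equivalence proofs the paper actually constructs:

```python
# Toy illustration of step-wise equivalence between a reference and an
# "optimized" implementation (the paper proves such steps in EasyCrypt;
# sampled checking here is only a sketch).

MASK = 0xFFFFFFFF

def ref_rotl32(x, n):
    """Reference 32-bit left rotation, specification style."""
    return ((x << n) | (x >> (32 - n))) & MASK

def opt_rotl32(x, n):
    """Rewritten variant: mask applied before the OR instead of after."""
    x &= MASK
    return ((x << n) & MASK) | (x >> (32 - n))

def step_equivalent(inputs):
    """Check the optimization step on all rotations over sampled inputs."""
    return all(ref_rotl32(x, n) == opt_rotl32(x, n)
               for x in inputs for n in range(1, 32))
```

Each such verified hop lets the final, aggressively optimized assembly inherit the functional-correctness proof of the readable reference implementation.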
- Published
- 2020
- Full Text
- View/download PDF