13 results for "Robert C. Armstrong"
Search Results
2. Digital/Analog Cosimulation using CocoTB and Xyce
- Authors: A. Smith, Jackson R. Mayo, Peter E. Sholander, Richard Louis Schiek, Robert C. Armstrong, and Ting Mei
- Subjects: business.industry, Digital analog, business, Computer hardware
- Published: 2018
3. Survey of Existing Tools for Formal Verification
- Authors: Jackson R. Mayo, Matthew H. Wong, Ratish J. Punnoose, and Robert C. Armstrong
- Subjects: Set (abstract data type), Open source, Workflow, Work (electrical), Computer science, Formal specification, Systems engineering, Formal methods, Formal verification
- Abstract: Formal methods have come into wide use because of their effectiveness in verifying "safety and security" requirements of digital systems, a set of requirements for which testing is mostly ineffective. Formal methods are routinely used in the design and verification of high-consequence digital systems in industry. This report outlines our work in assessing the capabilities of commercial and open source formal tools and the ways in which they can be leveraged in digital design workflows. [See the illustrative sketch after this entry.]
- Published: 2014
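As a concrete taste of what the surveyed tools automate, the sketch below uses the open-source Z3 SMT solver's Python bindings to check a small bit-twiddling claim exhaustively over all 32-bit inputs. The property and the code are illustrative assumptions of this write-up, not an example taken from the report.

```python
# Illustrative sketch only (assumes `pip install z3-solver`); not from the report.
from z3 import BitVec, Or, Solver, sat

x = BitVec("x", 32)

# Claim: x & (x - 1) == 0 exactly when x is zero or a power of two.
bit_trick = (x & (x - 1)) == 0
meaning = Or(x == 0, (x & -x) == x)   # "zero or a power of two", loop-free

s = Solver()
s.add(bit_trick != meaning)           # ask for any input where the two disagree

if s.check() == sat:
    print("counterexample:", s.model()[x])
else:                                 # unsat: no disagreement exists
    print("property proved for all 2**32 inputs")
```

If the solver reports unsat, the claim holds for every one of the 2^32 inputs, the kind of exhaustive guarantee the abstract notes testing cannot provide.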
4. Leveraging Formal Methods and Fuzzing to Verify Security and Reliability Properties of Large-Scale High-Consequence Systems
- Authors: Jackson R. Mayo, Joseph R. Ruthruff, Robert C. Armstrong, Benjamin Garry Davis, and Ratish J. Punnoose
- Subjects: Engineering, Class (computer programming), business.industry, Scale (chemistry), Research community, Systems engineering, Fuzz testing, business, Formal methods, Reliability (statistics)
- Abstract: Formal methods describe a class of system analysis techniques that seek to prove specific properties about analyzed designs, or locate flaws compromising those properties. As an analysis capability, these techniques are the subject of increased interest from both internal and external customers of Sandia National Laboratories. Given this lab's other areas of expertise, Sandia is uniquely positioned to advance the state-of-the-art with respect to several research and application areas within formal methods. This research project was a one-year effort funded by Sandia's Cyber Security S&T Investment Area in its Laboratory Directed Research & Development program to investigate the opportunities for formal methods to impact Sandia's present mission areas, more fully understand the needs of the research community in the area of formal methods and where Sandia can contribute, and clarify from those potential research paths those that would best advance the mission-area interests of Sandia. The accomplishments from this project reinforce the utility of formal methods in Sandia, particularly in areas relevant to Cyber Security, and set the stage for continued Sandia investments to ensure this capability is utilized and advanced within this laboratory to serve the national interest. [See the illustrative fuzzing sketch after this entry.]
- Published: 2012
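The abstract pairs formal methods with fuzzing; as a hedged illustration of the latter, the sketch below throws random byte strings at a deliberately flawed toy parser until one crashes it. The parser and its planted bug are hypothetical, invented for this example.

```python
# Illustrative only: a minimal random fuzzer in the spirit the abstract describes.
# The target parser is a hypothetical stand-in, not code from the report.
import random

def parse_packet(data: bytes) -> int:
    """Toy parser with a planted flaw: it trusts a length field in the header."""
    if len(data) < 2:
        raise ValueError("too short")
    declared_len = data[0]
    payload = data[1 : 1 + declared_len]
    # Flaw: indexes as if the payload were as long as the header claims.
    return payload[declared_len - 1] if declared_len else 0

def fuzz(trials: int = 100_000, seed: int = 0) -> None:
    rng = random.Random(seed)
    for i in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse_packet(blob)
        except ValueError:
            pass                      # expected, well-formed rejection
        except IndexError as exc:     # unexpected crash: a finding
            print(f"trial {i}: crash on input {blob!r}: {exc}")
            return

if __name__ == "__main__":
    fuzz()
```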
5. COMPOSE-HPC: A Transformational Approach to Exascale
- Authors: Wael R. Elwasif, David E. Bernholdt, Ajay Panyala, Tamara L. Dahlgren, T. Epperly, Samantha S. Foley, Benjamin A. Allan, Robert C. Armstrong, Geoffrey C. Hulette, Sriram Krishnamoorthy, Daniel Chavarría-Miranda, Adrian Prantl, and Matthew J. Sottile
- Subjects: Computer architecture, Computer science, Programming language, computer.software_genre, computer
- Published: 2012
6. The theory of diversity and redundancy in information system security : LDRD final report
- Authors: Lyndon G. Pierson, Robert C. Armstrong, Andrea Mae Walker, Jackson R. Mayo, Mark Dolan Torgerson, and Benjamin A. Allan
- Subjects: Trusted computing base, Security service, Computer science, Software security assurance, Distributed computing, Security through obscurity, Redundancy (engineering), Computer security model, Security testing, Security information and event management
- Abstract: The goal of this research was to explore first principles associated with mixing of diverse implementations in a redundant fashion to increase the security and/or reliability of information systems. Inspired by basic results in computer science on the undecidable behavior of programs and by previous work on fault tolerance in hardware and software, we have investigated the problem and solution space for addressing potentially unknown and unknowable vulnerabilities via ensembles of implementations. We have obtained theoretical results on the degree of security and reliability benefits from particular diverse system designs, and mapped promising approaches for generating and measuring diversity. We have also empirically studied some vulnerabilities in common implementations of the Linux operating system and demonstrated the potential for diversity to mitigate these vulnerabilities. Our results provide foundational insights for further research on diversity and redundancy approaches for information systems. [See the illustrative sketch after this entry.]
- Published: 2010
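As a minimal sketch of the redundancy-through-diversity idea, assuming nothing about the report's actual constructions, the Python below runs three hypothetical diverse implementations of one function and majority-votes their answers, so a single faulty or compromised variant is outvoted and flagged.

```python
# Illustrative only: majority voting over diverse implementations of one function.
# The three "variants" are hypothetical stand-ins, not taken from the report.
from collections import Counter

def variant_a(x: int) -> int:
    return x * x

def variant_b(x: int) -> int:
    return x ** 2

def variant_c(x: int) -> int:          # faulty or compromised variant
    return x * x + (1 if x == 42 else 0)

def vote(x: int) -> int:
    """Run the diverse variants; accept the majority answer, flag disagreement."""
    answers = [f(x) for f in (variant_a, variant_b, variant_c)]
    value, count = Counter(answers).most_common(1)[0]
    if count < len(answers):
        print(f"disagreement at x={x}: {answers} -> majority says {value}")
    return value

assert vote(7) == 49
assert vote(42) == 1764    # the faulty variant is outvoted
```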
7. Peer-to-peer architectures for exascale computing : LDRD final report
- Authors: Robert C. Armstrong, Yevgeniy Vorobeychik, Don W. Rudish, Ronald G. Minnich, and Jackson R. Mayo
- Subjects: Computer architecture, Computer science, Distributed computing, Peer-to-peer, computer.software_genre, computer, Exascale computing
- Published: 2010
8. Approaches for scalable modeling and emulation of cyber systems : LDRD final report
- Authors: Jackson R. Mayo, Robert C. Armstrong, Ronald G. Minnich, and Don W. Rudish
- Subjects: Emulation, business.product_category, Computer science, business.industry, Distributed computing, Botnet, computer.software_genre, Computer cluster, Cyberterrorism, Internet access, Malware, The Internet, business, Communications protocol, computer
- Abstract: The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of approximately 10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with approximately 10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers. [See the illustrative sketch after this entry.]
- Published: 2009
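A hedged, toy-scale analogue of the experiment described above: instead of roughly 10^6 virtualized OS instances, each "node" below is just an integer, and a gossip process stands in for malware-like spread. All parameters are invented for illustration.

```python
# Illustrative toy only: each "node" is an integer, not a virtualized OS instance
# as in the report. A gossip process stands in for distributed malware spread.
import random

def gossip_rounds(n_nodes: int = 100_000, contacts: int = 4, seed: int = 0) -> int:
    """Rounds until a rumor seeded at node 0 reaches every node."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n_nodes:
        # Every informed node pings a few random peers each round.
        informed |= {rng.randrange(n_nodes)
                     for _ in range(contacts * len(informed))}
        rounds += 1
    return rounds

# Even at modest scale the qualitative behavior already emerges: fast exponential
# spread, then a slow tail as the last stragglers are reached.
print("rounds to full spread:", gossip_rounds())
```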
9. Notes on 'Modeling, simulation and analysis of complex networked systems'
- Authors: Robert C. Armstrong and Jackson R. Mayo
- Subjects: Modeling and simulation, Computer science, business.industry, Artificial intelligence, Software engineering, business, Field (computer science), Course (navigation)
- Abstract: This is meant as a place to put commentary on the whitepaper and is intended to be largely ad hoc. Because the whitepaper describes a potential program in DOE ASCR, and because it concerns many researchers in the field, these notes are meant to be extendable by anyone willing to put in the effort. Criticisms of the contents of the notes themselves are, of course, also welcome.
- Published: 2009
10. Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet
- Authors: Robert C. Armstrong, Helgi Adalsteinsson, Ann C. Gentile, Don W. Rudish, Jamie A. Van Randwyk, Ronald G. Minnich, Ken Chiang, Levi Lloyd, and Keith Vanderveen
- Subjects: World Wide Web, Engineering, Emulation, SIMPLE (military communications protocol), business.industry, Scale (chemistry), The Internet, business, Software engineering
- Abstract: We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet". We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale Internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.
- Published: 2008
11. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report
- Authors: Keith Vanderveen, Michael E. Goldsby, Jackson R. Mayo, Arnab Bhattacharyya, and Robert C. Armstrong
- Subjects: Theoretical computer science, Mathematical model, Computer science, Robustness (computer science), Trade offs, Complex system, Complex variables, Cellular automaton
- Abstract: The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity. [See the illustrative coarse-graining sketch after this entry.]
- Published: 2008
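As an illustrative sketch of computational coarse-graining applied to one of the idealized systems the abstract names, the Python below evolves an elementary cellular automaton and then "renormalizes" the final configuration by majority vote over blocks of three cells. The rule number and block size are arbitrary assumptions of this example, not choices from the report.

```python
# Illustrative only: block coarse-graining of a 1-D cellular automaton, in the
# spirit of the renormalization the abstract describes. Rule 90 and block size 3
# are arbitrary assumptions.
import random

def ca_step(cells: list[int], rule: int = 90) -> list[int]:
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def coarse_grain(cells: list[int], block: int = 3) -> list[int]:
    """Majority vote over non-overlapping blocks: the renormalized configuration."""
    return [
        int(sum(cells[i : i + block]) * 2 > block)
        for i in range(0, len(cells), block)
    ]

rng = random.Random(1)
fine = [rng.randrange(2) for _ in range(243)]        # 3**5 cells
for _ in range(10):
    fine = ca_step(fine)
coarse = coarse_grain(fine)
print("fine cells:", len(fine), "-> coarse cells:", len(coarse))
```

Comparing the dynamics of the coarse chain against a CA run directly at the coarse scale is the kind of predictivity check the abstract reports quantifying.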
12. Parallel computing in enterprise modeling
- Authors: Zach Heath, Benjamin A. Allan, Keith Vanderveen, Jaideep Ray, Robert C. Armstrong, Michael E. Goldsby, and Max S. Shneider
- Subjects: Class (computer programming), Data model, Information model, Computer science, Principal (computer security), Plug-in, Parallel computing, Discrete event simulation, computer.software_genre, computer, Enterprise modelling, Domain (software engineering)
- Abstract: This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity. [See the illustrative discrete-event sketch after this entry.]
- Published: 2008
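For readers unfamiliar with the simulation class the abstract defines, here is a minimal serial discrete-event core built on a priority queue. It is a sketch of the general technique only, not the report's Parallel Particle Data Model (which distributes such entities across processors); all names are invented.

```python
# Illustrative only: a minimal serial discrete-event core of the kind the
# abstract groups under "entity-based simulation". Hypothetical example, not
# code from the report.
import heapq

def run(events, until: float) -> None:
    """Pop events in time order; handlers may schedule follow-on events."""
    heapq.heapify(events)
    while events:
        time, seq, handler = heapq.heappop(events)
        if time > until:
            break
        handler(time, events)

def make_arrival(entity_id: int, interval: float):
    seq = entity_id  # unique tie-breaker so the heap never compares handlers
    def arrive(now: float, events) -> None:
        print(f"t={now:5.1f}: entity {entity_id} acts")
        heapq.heappush(events, (now + interval, seq, arrive))
    return arrive

# Three entities acting on different periods; the emergent schedule interleaves.
initial = [(0.0, i, make_arrival(i, interval=3.0 + i)) for i in range(3)]
run(initial, until=10.0)
```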
13. Smart sensor technology for joint test assembly flights
- Authors: Jason L. Dimkoff, Donald A. Sheaffer, Nina M. Berry, Rene Lynn Bierbaum, Adele Beatrice Doser, Travis Jay Deyle, Carmen M. Pancerella, Edward J. Walsh, Robert C. Armstrong, and Kenneth D. Marx
- Subjects: Focus (computing), Task (computing), Engineering, Software, Computer engineering, business.industry, Computation, Embedded system, Reliability (computer networking), Joint (building), Electric power, business, Variety (cybernetics)
- Abstract: The world relies on sensors to perform a variety of tasks, from the mundane to the sophisticated. Currently, processors associated with these sensors are sufficient only to handle rudimentary logic tasks. Though multiple sensors are often present in such devices, there is insufficient processing power for situational understanding. Until recently, no processors that met the electrical power constraints for embedded systems were powerful enough to perform sophisticated computations. Sandia performs many expensive tests using sensor arrays. Improving the efficacy, reliability and information content resulting from these sensor arrays is of critical importance. With the advent of powerful commodity processors for embedded use, a new opportunity to do just that has presented itself. This report describes work completed under Laboratory-Directed Research and Development (LDRD) Project 26514, Task 1. The goal of the project was to demonstrate the feasibility of using embedded processors to increase the amount of useable information derived from sensor arrays while improving the believability of the data. The focus was on a system of importance to Sandia: Joint Test Assemblies for ICBM warheads. Topics discussed include: (1) two electromechanical systems to provide data, (2) sensors used to monitor those systems, (3) the processors that provide decision-making capability and data manipulation, (4) the use of artificial intelligence and other decision-making software, and (5) a computer model for the training of artificial intelligence software. [See the illustrative sensor-fusion sketch after this entry.]
- Published: 2003
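As a hedged illustration of "improving the believability of the data" from a redundant sensor array, the sketch below median-votes the channels and flags outliers. The readings, threshold, and function name are hypothetical, not drawn from the report.

```python
# Illustrative only: fusing a redundant sensor array into one "believable" value,
# a toy version of the data-qualification goal the abstract describes.
from statistics import median

def fuse(readings: list[float], max_dev: float = 0.5) -> tuple[float, list[int]]:
    """Median-vote the array; report channels deviating too far as suspect."""
    center = median(readings)
    suspects = [i for i, r in enumerate(readings) if abs(r - center) > max_dev]
    # Average the trusted channels; fall back to the median if all are flagged.
    trusted = [r for i, r in enumerate(readings) if i not in suspects] or [center]
    return sum(trusted) / len(trusted), suspects

value, suspects = fuse([20.1, 19.9, 20.2, 27.4])   # one failed channel
print(f"fused value {value:.2f}, suspect channels {suspects}")
```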