71 results for "Dan S. Wallach"
Search Results
2. VAULT-Style Risk-Limiting Audits and the Inyo County Pilot
- Author
-
Dan S. Wallach, Josh Benaloh, Philip B. Stark, Vanessa Teague, and Kammi Foote
- Subjects
Computer Networks and Communications, Computer science, Audit, Transparency (behavior), General election, Electrical and Electronic Engineering, Law, Risk management - Abstract
In 2020, Inyo County, California partnered with nonprofit VotingWorks to pilot the use of the Verifiable Audits Using Limited Transparency technique (called VAULT) to conduct an efficient, privacy-preserving, publicly verifiable risk-limiting audit of seven contests in the November general election. We describe VAULT, the pilot, and the software implementation that made this pilot possible.
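For intuition about how a risk-limiting audit certifies an outcome, here is a minimal two-candidate ballot-polling (BRAVO-style) sketch. It is illustrative only: VAULT's contribution is the privacy-preserving, publicly verifiable audit layered on top of this idea, and every name and parameter below is ours, not VotingWorks' code.

```python
def ballot_polling_audit(winner_share, sampled_votes, risk_limit=0.05):
    """Minimal BRAVO-style ballot-polling RLA sketch (illustrative).

    winner_share: reported vote share of the winner (must exceed 0.5).
    sampled_votes: iterable of booleans, True = vote for reported winner.
    Returns True if the reported outcome is confirmed at the risk limit.
    """
    assert winner_share > 0.5
    t = 1.0  # sequential likelihood ratio: reported share vs. a tie
    for for_winner in sampled_votes:
        t *= (winner_share / 0.5) if for_winner else ((1 - winner_share) / 0.5)
        if t >= 1.0 / risk_limit:
            return True   # strong evidence the reported winner really won
    return False          # sample exhausted: escalate (e.g., full hand count)
```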
- Published
- 2021
3. How Human Factors Can Help Preserve Democracy in the Age of Pandemics
- Author
-
Claudia Ziegler Acemyan, Dan S. Wallach, Robert M. Stein, Elizabeth Vann, and Philip Kortum
- Subjects
Safety Management, 2019-20 coronavirus outbreak, Systems Analysis, Coronavirus disease 2019 (COVID-19), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Pneumonia (viral), Human Factors and Ergonomics, Behavioral Neuroscience, Politics, Voting, Political science, Pandemics, Humans, Applied Psychology, User-centered design, Democracy, United States, Political economy, Ergonomics, Coronavirus Infections - Abstract
Objective: To describe user-centered voting systems that would support the safe conduct of voting in a pandemic environment. Background: The COVID-19 pandemic has complicated our democratic processes. Voters and poll workers feel threatened by the potential dangers of voting in business-as-usual polling stations. Indeed, significant problems were encountered in the recent 2020 primary elections in Wisconsin, where the National Guard had to be mobilized because so few poll workers reported to work, and more than 90% of polling places had to remain closed. Method: We describe a number of possible user-centered solutions that would help protect voters and poll workers in times of pandemic, and also report the results of a survey that asked voters and poll workers what kinds of systems might make them willing to vote. Results: Political as well as safety considerations will need to be weighed as these safer voting solutions are designed since, surprisingly, the kinds of solutions preferred depend on the political affiliation of the voters. Conclusion: Human factors professionals have a large role to play in realizing the safe, successful implementation of these user-centered systems. Good human factors analysis can help minimize the risk to voters and poll workers. Moreover, human factors methods can help safeguard democracy by creating safe and well-engineered environments that are conducive to voting in the age of pandemics. Application: Creating safe and effective voting solutions that protect voters and poll workers during pandemic outbreaks is crucial to the preservation of democracy.
- Published
- 2020
4. Summative Usability Assessments of STAR-Vote: A Cryptographically Secure e2e Voting System That Has Been Empirically Proven to Be Easy to Use
- Author
-
Claudia Ziegler Acemyan, Michael D. Byrne, Philip Kortum, and Dan S. Wallach
- Subjects
Computer science, Human Factors and Ergonomics, Intention, Computer security, Usability, User-Computer Interface, Behavioral Neuroscience, Voting, Humans, Applied Psychology, Politics, Summative assessment - Abstract
Background: From the project’s inception, STAR-Vote was intended to be one of the first usable, end-to-end (e2e) voting systems with sophisticated security. To realize STAR-Vote, computer security experts, statistical auditors, human factors (HF)/human-computer interaction (HCI) researchers, and election officials collaborated throughout the project and relied upon a user-centered, iterative design and development process, which included human factors research and usability testing, to make certain the system would be both usable and secure. Objective: While best practices in HF/HCI methods for design were used and all apparent usability problems were identified and fixed, summative system usability assessments were conducted toward the end of the user-centered design process to determine whether STAR-Vote is in fact easy to use. Method and Results: After collecting efficiency, effectiveness, and satisfaction measurements per ISO 9241-11’s system usability criteria, an analysis of the data revealed that there is evidence for STAR-Vote being the most usable, cryptographically secure voting system to date when compared with the previously tested e2e systems: Helios, Prêt à Voter, and Scantegrity. Conclusion and Application: STAR-Vote being one of the first e2e voting systems that is both highly usable and secure is a significant accomplishment, because tamper-resistant voting systems can be used in U.S. elections to ensure the integrity of the electoral process, while still ensuring that voter intent is accurately reflected in the cast ballots. Moreover, this research empirically shows that a complex, secure system can still be usable—meaning that implemented security is not an excuse for poor usability.
- Published
- 2018
5. 2FA Might Be Secure, But It’s Not Usable: A Summative Usability Assessment of Google’s Two-factor Authentication (2FA) Methods
- Author
-
Dan S. Wallach, Jeffrey Xiong, Claudia Ziegler Acemyan, and Philip Kortum
- Subjects
Password, Authentication, Computer science, Multi-factor authentication, Computer security, Usability assessment, Summative assessment, Hacker - Abstract
Computer security experts recommend that people use two-factor authentication (2FA) on password-protected systems to help keep hackers out. Providing two pieces of information to verify a person’s identity adds extra security to an account. However, it is not clear whether the added security and procedures impact system usability. This paper aims to answer this question by assessing the usability of Google’s optional 2FA methods per ISO 9241-11’s suggested measurements. We found few differences across four different 2FA methods when comparing efficiency, effectiveness and satisfaction measures—illustrating that one method is not necessarily more or less usable than another. Overall, the measures indicated that the systems’ usability needed to be improved, especially with regard to the initial setup of 2FA. In conclusion, developers need to focus more attention on making 2FA easier and faster to use, especially since it is often optional for password users, yet makes accounts significantly more secure.
- Published
- 2018
6. Picking up the trash: Exploiting generational GC for memory analysis
- Author
-
Adam Pridgen, Dan S. Wallach, and Simson Garfinkel
- Subjects
Java, Exploit, Computer science, Timeline, Memory forensics, Computer Science Applications, Software, Operating system, Malware analysis, Law, Garbage collection, Heap (data structure) - Abstract
Memory analysis is slowly moving up the software stack. Early analysis efforts focused on core OS structures and services. As this field evolves, more information becomes accessible because analysis tools can build on foundational frameworks like Volatility and Rekall. This paper demonstrates and establishes memory analysis techniques for managed runtimes, namely the HotSpot Java Virtual Machine (JVM). We exploit the fact that residual artifacts remain in the JVM's heap to create basic timelines, reconstruct objects, and extract contextual information. These artifacts exist because the JVM copies objects from one place to another during garbage collection and fails to overwrite old data in a timely manner. This work focuses on the HotSpot JVM, but the techniques generalize to other managed runtimes such as Microsoft .NET or Google's V8 JavaScript engine.
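As a concrete illustration of why copying collectors leave forensic artifacts, here is a minimal sketch that scans a raw heap dump for residual copies of a known byte string. The file name and pattern are hypothetical, and real tools (e.g., Volatility plugins) do far more reconstruction than this.

```python
import re

def residual_offsets(dump_path: str, needle: bytes):
    """Return every offset where `needle` appears in a raw memory dump.

    A copying GC moves live objects without scrubbing the originals, so
    one secret can surface at several offsets; comparing offsets across
    successive dumps gives a crude allocation/collection timeline.
    """
    with open(dump_path, "rb") as f:
        data = f.read()
    return [m.start() for m in re.finditer(re.escape(needle), data)]

# Hypothetical usage: more than one hit suggests stale GC copies.
# offsets = residual_offsets("jvm_heap.bin", b"password=hunter2")
```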
- Published
- 2017
- Full Text
- View/download PDF
7. Total Recall: Persistence of Passwords in Android
- Author
-
Jaeho Lee, Ang Chen, and Dan S. Wallach
- Subjects
Password, Recall, Computer science, Internet privacy, Android (operating system) - Published
- 2019
8. An Historical Analysis of the SEAndroid Policy Evolution
- Author
-
Ang Chen, Dan S. Wallach, and Bum-Jin Im
- Subjects
Computer Science - Cryptography and Security (cs.CR), Computer science, Computer security, Security policy, Mandatory access control, Android (operating system) - Abstract
Android adopted SELinux's mandatory access control (MAC) mechanisms in 2013. Since then, billions of Android devices have benefited from mandatory access control security policies. These policies are expressed in a variety of rules, maintained by Google and extended by Android OEMs. Over the years, the rules have grown to be quite complex, making it challenging to properly understand or configure these policies. In this paper, we perform a measurement study on the SEAndroid repository to understand the evolution of these policies. We propose a new metric to measure the complexity of the policy by expanding policy rules, with their abstraction features such as macros and groups, into primitive "boxes", which we then use to show that the complexity of the SEAndroid policies has been growing exponentially over time. By analyzing the Git commits, snapshot by snapshot, we are also able to analyze the "age" of policy rules, the trend of changes, and the contributor composition. We also look at hallmark events in Android's history, such as the "Stagefright" vulnerability in Android's media facilities, pointing out how these events led to changes in the MAC policies. The growing complexity of Android's mandatory policies suggests that we will eventually hit the limits of our ability to understand these policies, requiring new tools and techniques.
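A minimal sketch of the "box" expansion idea as we read it from the abstract: after macros and attribute groups are expanded into sets, one allow rule contributes one primitive box per (source, target, class, permission) combination. Names and the example rule are illustrative.

```python
def rule_boxes(sources, targets, classes, perms):
    """Count primitive 'boxes' for one expanded SELinux allow rule.

    Each argument is the set left after expanding macros/attributes;
    the rule's complexity contribution is the size of the cross
    product (a sketch of the metric described in the abstract).
    """
    return len(sources) * len(targets) * len(classes) * len(perms)

# allow {untrusted_app platform_app} app_data_file:file {read write};
print(rule_boxes({"untrusted_app", "platform_app"},
                 {"app_data_file"}, {"file"}, {"read", "write"}))  # 4
```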
- Published
- 2018
- Full Text
- View/download PDF
9. Removing Secrets from Android's TLS
- Author
-
Dan S. Wallach and Jaeho Lee
- Subjects
Computer science, Operating system, Android (operating system) - Published
- 2018
10. The Mason Test: A Defense Against Sybil Attacks in Wireless Networks Without Trusted Authorities
- Author
-
Robert P. Dick, Yue Liu, David R. Bild, Z. Morley Mao, and Dan S. Wallach
- Subjects
Computer Science - Cryptography and Security (cs.CR), Computer Networks and Communications, Computer science, Wireless ad hoc network, Mobile computing, Computer security, Sybil attack, Wireless network, Mobile ad hoc network, Electrical and Electronic Engineering, Software, Computer network - Abstract
Wireless networks are vulnerable to Sybil attacks, in which a malicious node poses as many identities in order to gain disproportionate influence. Many defenses based on spatial variability of wireless channels exist, but they depend either on detailed, multi-tap channel estimation (something not exposed on commodity 802.11 devices) or on valid RSSI observations from multiple trusted sources, e.g., corporate access points (something not directly available in ad hoc and delay-tolerant networks with potentially malicious neighbors). We extend these techniques to be practical for wireless ad hoc networks of commodity 802.11 devices. Specifically, we propose two efficient methods for separating the valid RSSI observations of behaving nodes from those falsified by malicious participants. Further, we note that prior signalprint methods are easily defeated by mobile attackers and develop an appropriate challenge-response defense. Finally, we present the Mason test, the first implementation of these techniques for ad hoc and delay-tolerant networks of commodity 802.11 devices. We illustrate its performance in several real-world scenarios.
- Published
- 2015
11. Accountable wiretapping – or – I know they can hear you now
- Author
-
Micah Sherr, Clay Shields, Patrick Traynor, Adam Bates, Kevin R. B. Butler, and Dan S. Wallach
- Subjects
Computer Networks and Communications, Hardware and Architecture, Computer science, Accountability, Internet privacy, Computer security, Safety, Risk, Reliability and Quality, Software - Published
- 2015
12. Present but Unreachable: Reducing Persistent Latent Secrets in HotSpot JVM
- Author
-
Adam Pridgen, Dan S. Wallach, and Simson L. Garfinkel
- Subjects
Computer science, Operating system, Garbage collection - Published
- 2017
13. Verification of STAR-Vote and Evaluation of FDR and ProVerif
- Author
-
Dan S. Wallach and Murat Moran
- Subjects
Theoretical computer science, Computer science, Privacy analysis, Cryptography, Cryptographic protocol, Formal methods, Voting, Systems design - Abstract
We present the first automated privacy analysis of STAR-Vote, a real world voting system design with sophisticated “end-to-end” cryptography, using FDR and ProVerif. We also evaluate the effectiveness of these tools. Despite the complexity of the voting system, we were able to verify that our abstracted formal model of STAR-Vote provides ballot-secrecy using both formal approaches. Notably, ProVerif is radically faster than FDR, making it more suitable for rapid iteration and refinement of the formal model.
- Published
- 2017
14. Clash Attacks and the STAR-Vote System
- Author
-
Dan S. Wallach and Olivier Pereira
- Subjects
Computer science, Cryptography, Plaintext, Collision, Computer security, Encryption, Bulletin board, Ballot, Voting, Cryptographic hash function - Abstract
STAR-Vote is an end-to-end cryptographic voting system that produces both plaintext paper ballots and encrypted electronic records of each ballot. We describe how clash attacks against STAR-Vote could weaken its security guarantees: corrupt voting terminals could identify voters with identical ballot preferences and print identical receipts for them, while generating electronic ballot ciphertexts for other candidates. Each voter would then be able to “verify” their ballot on the public bulletin board, but the electronic tally would include alternative ciphertexts corresponding to the duplicate voters. We describe how this threat can be exploited and mitigated with existing STAR-Vote mechanisms, including STAR-Vote’s use of Benaloh challenges and a cryptographic hash chain. We also describe how this threat can be mitigated through statistical sampling of the printed paper ballots as an extension to the risk-limiting audits that STAR-Vote already requires.
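For context on the hash-chain mitigation, here is a minimal sketch of checking a chained ballot log: because each entry commits to the entire prefix, a corrupt terminal cannot quietly substitute ciphertexts after printing receipts. The record encoding is hypothetical, not STAR-Vote's actual format.

```python
import hashlib

def chain_digest(records, seed=b"\x00" * 32):
    """Fold ballot records into a running hash chain and return the
    final digest, which auditors compare against the value published
    on the bulletin board (record layout is illustrative)."""
    h = seed
    for record in records:
        h = hashlib.sha256(h + record).digest()
    return h

# Auditors recompute chain_digest(...) over the posted ciphertexts and
# check it against the closing hash committed at the polling place.
```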
- Published
- 2017
15. Public Evidence from Secret Ballots
- Author
-
Peter Y. A. Ryan, Poorvi L. Vora, Philip B. Stark, J. Alex Halderman, Ronald L. Rivest, Vanessa Teague, Dan S. Wallach, Josh Benaloh, and Matthew Bernhard
- Subjects
Computer science, Cryptography, Computer security, Adversarial system, Ballot, Voting, Secrecy, Key (cryptography) - Abstract
Elections seem simple—aren’t they just about counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also have practical constraints: time is of the essence, and voting systems need to be affordable and maintainable, as well as usable by voters, election officials, and pollworkers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope.
- Published
- 2017
16. Robust and Reverse-Engineering Resilient PUF Authentication and Key-Exchange by Substring Matching
- Author
-
Srinivas Devadas, Farinaz Koushanfar, Mehrdad Majzoobi, Dan S. Wallach, and Masoud Rostami
- Subjects
Hardware security module, Computer science, Network security, Physical unclonable function, Cryptographic protocol, Substring, Computer Science Applications, Human-Computer Interaction, Robustness (computer science), Key exchange, Information Systems, Computer network - Abstract
This paper proposes novel robust and low-overhead physical unclonable function (PUF) authentication and key exchange protocols that are resilient against reverse-engineering attacks. The protocols are executed between a party with access to a physical PUF (prover) and a trusted party who has access to the PUF compact model (verifier). The proposed protocols do not follow the classic paradigm of exposing the full PUF responses or a transformation of them. Instead, random subsets of the PUF response strings are sent to the verifier so the exact position of the subset is obfuscated for the third-party channel observers. Authentication of the responses at the verifier side is done by matching the substring to the available full response string; the index of the matching point is the actual obfuscated secret (or key) and not the response substring itself. We perform a thorough analysis of resiliency of the protocols against various adversarial acts, including machine learning and statistical attacks. The attack analysis guides us in tuning the parameters of the protocol for an efficient and secure implementation. The low overhead and practicality of the protocols are evaluated and confirmed by hardware implementation.
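A minimal sketch of the substring-matching idea from the abstract, ignoring PUF noise and the error tolerance a real implementation needs; all names are illustrative.

```python
import secrets

def prover_reveal(full_response: str, sub_len: int) -> str:
    """Reveal a random substring of the PUF response string; the start
    index, not the substring itself, is the effective secret."""
    start = secrets.randbelow(len(full_response) - sub_len + 1)
    return full_response[start:start + sub_len]

def verifier_accept(model_response: str, substring: str) -> bool:
    """Authenticate by locating the substring in the response predicted
    by the verifier's compact PUF model (exact match here; real PUFs
    require approximate matching because responses are noisy)."""
    return substring in model_response
```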
- Published
- 2014
17. Rebooting the CS publication process
- Author
-
Dan S. Wallach
- Subjects
World Wide Web, Metadata, General Computer Science, Computer science, Learning to rank, Publication process, Reboot - Abstract
Many computer science academics have been grousing about failures in our publication process. This paper catalogs many of the specific complaints that are raised and proposes some radical new solutions based on the assumption that, by eliminating physical paper entirely and going with a centralized system to manage papers, we can rethink the entire process: paper submission, revision and publication. Furthermore, having all of the metadata standardized and easily available, ranking algorithms can be easily conceived to aid in tenure cases and departmental rankings.
- Published
- 2011
18. Authenticated Dictionaries
- Author
-
Dan S. Wallach and Scott A. Crosby
- Subjects
Theoretical computer science, General Computer Science, Computer science, Hash function, Cryptography, Merkle tree, Data structure, Server, Cache, Persistent data structure, Tuple, Safety, Risk, Reliability and Quality - Abstract
Authenticated dictionaries are a widely discussed paradigm to enable verifiable integrity for data storage on untrusted servers, such as today’s widely used “cloud computing” resources, allowing a server to provide a “proof,” typically in the form of a slice through a cryptographic data structure, that the results of any given query are the correct answer, including that the absence of a query result is correct. Persistent authenticated dictionaries (PADs) further allow queries against older versions of the structure. This research presents implementations of a variety of different PAD algorithms, some based on Merkle tree-style data structures and others based on individually signed “tuple” statements (with and without RSA accumulators). We present system throughput benchmarks, indicating costs in terms of time, storage, and bandwidth as well as considering how much money would be required given standard cloud computing costs. We conclude that Merkle tree PADs are preferable in cases with frequent updates, while tuple-based PADs are preferable with higher query rates. For Merkle tree PADs, red-black trees outperform treaps and skiplists. Applying Sarnak-Tarjan’s versioned node strategy, with a cache of old hashes at every node, to red-black trees yields the fastest Merkle tree PAD implementation, notably using half the memory of the more commonly used mutation-free path copying strategy. For tuple PADs, although we designed and implemented an algorithm using RSA accumulators that offers constant update size, constant storage per update, constant proof size, and sublinear computation per update, we found that RSA accumulators are so expensive that they are never worthwhile. We find that other optimizations in the literature for tuple PADs are more cost-effective.
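A minimal sketch of the Merkle-tree style of "proof slice" mentioned in the abstract: the server returns the sibling hashes on the path from a leaf to the signed root, and the client recomputes the root. The conventions here (SHA-256, duplicate-last-node padding) are our assumptions, not the paper's exact construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """All levels of the tree, leaf hashes first; odd-length levels
    pad by duplicating the last node (one common convention)."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1][:]
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        levels.append([h(lvl[i] + lvl[i + 1])
                       for i in range(0, len(lvl), 2)])
    return levels

def prove(leaves, idx):
    """Sibling hashes from leaf idx to the root: the server's 'proof'."""
    path = []
    for lvl in merkle_levels(leaves)[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append(lvl[idx ^ 1])
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root

# root = merkle_levels(leaves)[-1][0]
# verify(leaves[i], i, prove(leaves, i), root)  -> True
```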
- Published
- 2011
19. Opportunities and Limits of Remote Timing Attacks
- Author
-
Rudolf H. Riedi, Dan S. Wallach, and Scott A. Crosby
- Subjects
Web server, General Computer Science, Computer science, Real-time computing, Local area network, Public-key cryptography, Timing attack, Server, Information leakage, The Internet, Safety, Risk, Reliability and Quality, Computer network, Jitter - Abstract
Many algorithms can take a variable amount of time to complete depending on the data being processed. These timing differences can sometimes disclose confidential information. Indeed, researchers have been able to reconstruct an RSA private key purely by querying an SSL Web server and timing the results. Our work analyzes the limits of attacks based on accurately measuring network response times and jitter over a local network and across the Internet. We present the design of filters to significantly reduce the effects of jitter, allowing an attacker to measure events with 15–100 μs accuracy across the Internet, and as good as 100 ns over a local network. Notably, security-related algorithms on Web servers and other network servers need to be carefully engineered to avoid timing channel leaks at the accuracy demonstrated in this article.
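A minimal sketch of the filtering intuition: network jitter is mostly additive and positive, so aggregating many repeated measurements with a low quantile (rather than the mean) recovers fine-grained timing differences. This is a simplified stand-in for the paper's filters, with illustrative names.

```python
def low_quantile_estimate(rtt_samples, fraction=0.05):
    """Estimate the true processing time from jittery round-trip
    samples by averaging only the fastest `fraction` of them; queuing
    delay inflates the slow tail, so the fast tail is most informative.
    """
    ordered = sorted(rtt_samples)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / k

# Comparing low_quantile_estimate() for two candidate inputs can expose
# timing differences that the raw mean would bury in network jitter.
```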
- Published
- 2009
20. Known Unknowns
- Author
-
Dan S. Wallach, Devika Subramanian, Tanmay Thakur, Zhouhan Chen, and Rima S. Tanash
- Subjects
Government, Computer science, Turkish, Internet privacy, Censorship, Transparency (behavior), Politics, Turkish government, Web site - Abstract
Twitter, widely used around the world, has a standard interface for government agencies to request that individual tweets or even whole accounts be censored. Twitter, in turn, discloses country-by-country statistics about this censorship in its transparency reports, as well as reporting specific incidents of censorship to the Chilling Effects web site. Twitter identifies Turkey as the country issuing the largest number of censorship requests, so we focused our attention there. Collecting over 20 million Turkish tweets from late 2014 to early 2015, we discovered over a quarter million censored tweets, two orders of magnitude more than what Twitter itself reports. We applied standard machine learning and clustering techniques, and found the vast bulk of censored tweets contained political content, often critical of the Turkish government. Our work establishes that Twitter radically under-reports censored tweets in Turkey, raising the possibility that similar trends hold for censored tweets from other countries as well. We also discuss the relative ease of working around Twitter's censorship mechanisms, although we cannot easily measure how many users take such steps.
- Published
- 2015
21. Exploiting military OpSec through open-source vulnerabilities
- Author
-
Christopher Bronk, Dan S. Wallach, and Judson Dressler
- Subjects
Internet privacy, Vulnerability, Usability, Adversary, Computer security, Content analysis, Social media, Sociology, Operations security - Abstract
With the ease of use and connectivity provided by social media, a growing tension has emerged between military users' personal needs and military operational security. Like everyone, military members post seemingly trivial information and pictures on sites such as Facebook, Twitter, or LinkedIn, all of which can be used, augmented, and aggregated by an adversary to determine the utility and feasibility of possible intelligence targets. In this work, we investigate the current state of DoD policy regarding social media. We designed an automated approach to determine the amount of openly available information provided by U.S. military members through social media, analyzed it through content analysis, and then applied machine learning techniques to learn as much from the provided data as possible. We ranked the vulnerability of each individual with a simple scoring system, classifying them as least vulnerable, vulnerable, or highly vulnerable, as sketched below. In all, we discovered more than 1,100 potential intelligence targets, hypothesized about potential scenarios in which this publicly available information could have negative consequences, and recommended actions that could mitigate this vulnerability.
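To illustrate what a "simple scoring system" might look like, here is a toy sketch; the feature names, weights, and thresholds are entirely hypothetical, not the study's actual rubric.

```python
def classify_exposure(profile: dict) -> str:
    """Bucket an individual's OpSec exposure from boolean features
    extracted from public social-media posts (all values illustrative).
    """
    weights = {
        "unit_identified": 3,
        "deployment_dates_posted": 4,
        "geotagged_photos": 3,
        "family_members_named": 2,
    }
    score = sum(w for key, w in weights.items() if profile.get(key))
    if score >= 8:
        return "highly vulnerable"
    return "vulnerable" if score >= 4 else "least vulnerable"
```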
- Published
- 2015
22. Performance and Energy Consumption Analysis of a Delay-Tolerant Network for Censorship-Resistant Communication
- Author
-
Z. Morley Mao, Dan S. Wallach, David Adrian, Robert P. Dick, David R. Bild, Gulshan Singh, and Yue Liu
- Subjects
Battery energy, Delay-tolerant networking, Microblogging, Computer science, Censorship, Energy consumption, Computer security, Flooding (computer networking), Social media, Mobile device, Computer network - Abstract
Delay-Tolerant Networks (DTNs) composed of commodity mobile devices have the potential to support communication applications resistant to blocking and censorship, as well as certain types of surveillance. We analyze the performance and energy consumption of such a network, and consider the impact of random and targeted denial-of-service and censorship attacks. To gather wireless connectivity traces for a DTN composed of human-carried commodity smartphones, we implemented and deployed a prototype DTN-based micro-blogging application, called 1am, in a college town. We analyzed the system during a time period with 111 users. Although the study provided detailed enough connectivity traces to enable analysis, message posting was too infrequent to draw strong conclusions based on user-initiated messages alone. We therefore simulated more frequent message initiations and used measured connectivity traces to analyze message propagation. Using a flooding protocol, we found that with an adoption rate of 0.2% of a college town's student and faculty population, the median one-week delivery rate is 85% and the median delivery delay is 13 hours. We also found that the network delivery rate and delay are robust to denial-of-service and censorship attacks that eliminate more than half of the participants. Using a measurement-based energy model, we also found that the DTN system would use less than 10.0% of a typical smartphone's battery energy per day in a network of 2,500 users.
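A minimal sketch of the trace-driven evaluation style described above: replay time-ordered contact events and flood a message epidemically, recording per-node delivery delay. The event format is an assumption, not the 1am system's actual data model.

```python
def flood(contacts, source, t_post):
    """Simulate epidemic flooding over a contact trace.

    contacts: iterable of (time, node_a, node_b), sorted by time.
    Returns {node: delay} for a message posted by `source` at t_post.
    """
    delivered = {source: 0.0}
    for t, a, b in contacts:
        if t < t_post:
            continue  # encounters before the post carry nothing
        if a in delivered and b not in delivered:
            delivered[b] = t - t_post
        elif b in delivered and a not in delivered:
            delivered[a] = t - t_post
    return delivered

# One-week delivery rate = fraction of nodes in `delivered` with delay
# <= 7 * 24 * 3600; the median delay comes from the same dictionary.
```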
- Published
- 2015
23. Users’ Mental Models for Three End-to-End Voting Systems: Helios, Prêt à Voter, and Scantegrity II
- Author
-
Claudia Ziegler Acemyan, Philip Kortum, Dan S. Wallach, and Michael D. Byrne
- Subjects
Ballot, End-to-end principle, Computer science, Voting, Helios, Computer security - Abstract
This study sought to understand voters' mental models of three end-to-end (e2e) voting systems: Helios, Prêt à Voter, and Scantegrity II. To study these mental models, 16 Houston-area voters participated in mock elections that required them to vote first with a paper ballot and then with the three e2e systems. After using each system, subjects were asked to draw their mental model of how the system works, describe it to the experimenter, and finally complete an interview. We found that most participants think about the systems first and foremost in terms of how-to-vote procedures, rather than detailed conceptual models that describe all aspects of a system, including how it works. The findings from this study can be used by system developers to ensure that voters find e2e systems easy to use and that the designs align with voters' pre-existing mental models for voting.
- Published
- 2015
24. Performance analysis of TLS Web servers
- Author
-
Dan S. Wallach, Peter Druschel, and Cristian Coarfa
- Subjects
Web server, General Computer Science, Computer science, Cryptography, Client–server model, Operating system, Overhead (computing), The Internet, Central processing unit, Session (computer science), Communications protocol, Computer network - Abstract
TLS is the protocol of choice for securing today's e-commerce and online transactions but adding TLS to a Web server imposes a significant overhead relative to an insecure Web server on the same platform. We perform a comprehensive study of the performance costs of TLS. Our methodology is to profile TLS Web servers with trace-driven workloads, replace individual components inside TLS with no-ops, and measure the observed increase in server throughput. We estimate the relative costs of each TLS processing stage, identifying the areas for which future optimizations would be worthwhile. Our results show that while the RSA operations represent the largest performance cost in TLS Web servers, they do not solely account for TLS overhead. RSA accelerators are effective for e-commerce site workloads since they experience low TLS session reuse. Accelerators appear to be less effective for sites where all the requests are handled by a TLS server because they have a higher session reuse rate. In this case, investing in a faster CPU might provide a greater boost in performance. Our experiments show that having a second CPU is at least as useful as an RSA accelerator. Our results seem to suggest that, as CPUs become faster, the cryptographic costs of TLS will become dwarfed by the CPU costs of the nonsecurity aspects of a Web server. Optimizations aimed at general purpose Web servers should continue to be a focus of research and would benefit secure Web servers as well.
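The no-op methodology reduces to a simple formula: if replacing a stage with a no-op raises throughput from T to T', that stage consumed a 1 - T/T' share of server time. A one-line sketch (the example numbers are made up, not the paper's measurements):

```python
def stage_cost_share(baseline_tput: float, noop_tput: float) -> float:
    """Fraction of server time attributable to one TLS stage, from the
    throughput observed with the stage present vs. replaced by a no-op.
    Derivation: time/request falls from 1/T to 1/T', so the stage's
    share is (1/T - 1/T') / (1/T) = 1 - T/T'."""
    return 1.0 - baseline_tput / noop_tput

# e.g., 850 req/s with RSA vs. 1750 req/s with RSA no-op'd:
print(stage_cost_share(850, 1750))  # ~0.51: RSA ~ half the server time
```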
- Published
- 2006
25. Iterative adaptation for mobile clients using existing APIs
- Author
-
E. de Lara, Rajnish Kumar, N. Vaghela, Dan S. Wallach, Y. Chopra, and Willy Zwaenepoel
- Subjects
Source code, Ubiquitous computing, Computer science, Application adaptation, Distributed computing, Mobile computing, Middleware, Microsoft Office, Low-bandwidth operation, Computational Theory and Mathematics, Hardware and Architecture, Pervasive computing, Signal Processing, Wireless, The Internet - Abstract
Iterative Adaptation is a novel approach to adaptation for resource-limited mobile and wireless environments that supports powerful application-specific adaptations without requiring modifications to the application's source code. Common productivity applications, such as browsers, word processors, and presentation tools, export APIs that allow external applications to control their operation. The novel premise in iterative adaptation is that these APIs are sufficient to support a wide range of adaptation policies for applications running on resource-limited devices. In addition to allowing adaptation without having to change the application's source code, this approach has a unique combination of advantages. First, it supports centralized management of resources across multiple applications. Second, it makes it possible to modify application behavior after the application has been deployed. This paper evaluates the extent to which existing APIs can be used for the purposes of adapting document-based applications to run on bandwidth-limited devices. In particular, we implement a large number of bandwidth adaptations for applications from the Microsoft Office and the OpenOffice productivity suites and for Internet Explorer. Although we find limitations in their APIs, we are able to implement many adaptation policies without much complexity and with good performance. Moreover, iterative adaptation achieves performance similar to an approach that implements adaptation by modifying the application, while requiring only a fraction of the coding effort.
- Published
- 2005
26. Robotics-Based Location Sensing Using Wireless Ethernet
- Author
-
Lydia E. Kavraki, Algis Rudys, Kostas E. Bekris, Dan S. Wallach, and Andrew M. Ladd
- Subjects
Ethernet, Computer Networks and Communications, Network packet, Wireless network, Computer science, Noise (signal processing), Real-time computing, Sensor fusion, Base station, Embedded system, Wireless, Electrical and Electronic Engineering, Information Systems - Abstract
A key subproblem in the construction of location-aware systems is the determination of the position of a mobile device. This article describes the design, implementation and analysis of a system for determining position inside a building from measured RF signal strengths of packets on an IEEE 802.11b wireless Ethernet network. Previous approaches to location-awareness with RF signals have been severely hampered by non-Gaussian signals, noise, and complex correlations due to multi-path effects, interference and absorption. The design of our system begins with the observation that determining position from complex, noisy and non-Gaussian signals is a well-studied problem in the field of robotics. Using only off-the-shelf hardware, we achieve robust position estimation to within a meter in our experimental context and after adequate training of our system. We can also coarsely determine our orientation and can track our position as we move. Our results show that we can localize a stationary device to within 1.5 meters over 80% of the time and track a moving device to within 1 meter over 50% of the time. Both localization and tracking run in real-time. By applying recent advances in probabilistic inference of position and sensor fusion from noisy signals, we show that the RF emissions from base stations as measured by off-the-shelf wireless Ethernet cards are sufficiently rich in information to permit a mobile device to reliably track its location.
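A minimal sketch of the probabilistic-inference step under a grid model: multiply the prior over positions by a per-access-point likelihood of the observed RSSI and renormalize. The Gaussian sensor model and all names are assumptions for illustration; the paper itself stresses that real 802.11 signals are decidedly non-Gaussian.

```python
import numpy as np

def bayes_step(prior, observations, signal_maps, sigma=4.0):
    """One update of grid-based Bayesian localization.

    prior: (H, W) probabilities over grid cells.
    observations: {ap: measured_rssi_dbm}.
    signal_maps: {ap: (H, W) expected RSSI per cell, from calibration}.
    Uses a Gaussian noise model (sigma in dB) as a simplification.
    """
    posterior = prior.copy()
    for ap, rssi in observations.items():
        err = rssi - signal_maps[ap]
        posterior *= np.exp(-0.5 * (err / sigma) ** 2)
    return posterior / posterior.sum()
```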
- Published
- 2005
27. Managing the Performance Impact of Web Security
- Author
-
Dan S. Wallach, Aviel D. Rubin, and Adam Stubblefield
- Subjects
Web server, Web development, Computer science, Computer security, Web application security, Internet security, Human-Computer Interaction, Security service, Web service - Abstract
Security and performance are usually at odds with each other. Current implementations of security on the web have been adopted at the extreme end of the spectrum, where strong cryptographic protocols are employed at the expense of performance. The SSL protocol is not only computationally intensive, but it makes web caching impossible, thus missing out on potential performance gains. In this paper we discuss the requirements for web security and present a solution that takes into account performance impact and backwards compatibility.
- Published
- 2005
28. On the feasibility of using wireless ethernet for indoor localization
- Author
-
Algis Rudys, Dan S. Wallach, Lydia E. Kavraki, Kostas E. Bekris, and Andrew M. Ladd
- Subjects
Ethernet, Engineering, Network packet, Real-time computing, Mobile computing, Mobile robot, Control and Systems Engineering, Embedded system, Wireless, Electrical and Electronic Engineering, Wireless sensor network - Abstract
IEEE 802.11b wireless Ethernet is becoming the standard for indoor wireless communication. This paper proposes the use of measured signal strength of Ethernet packets as a sensor for a localization system. We demonstrate that off-the-shelf hardware can accurately be used for location sensing and real-time tracking by applying a Bayesian localization framework.
- Published
- 2004
29. Hack-a-vote: security issues with electronic voting systems
- Author
-
D.W. Price, Dan S. Wallach, Algis Rudys, J. Singer, and Jonathan Bannet
- Subjects
Computer Networks and Communications, Electronic voting, Computer science, Internet privacy, Audit, Computer security, Voting, Electrical and Electronic Engineering, Law, Legitimacy, Hacker - Abstract
In a quest for election legitimacy, officials are increasingly deploying direct recording electronic (DRE) voting systems. A project to assess their trustworthiness revealed both the ease of introducing bugs into such systems and the difficulty of detecting them during audits.
- Published
- 2004
30. Run-time support for distributed sharing in safe languages
- Author
-
Alan L. Cox, Willy Zwaenepoel, Dan S. Wallach, Y. Charlie Hu, and Weimin Yu
- Subjects
Distributed shared memory, General Computer Science, Computer science, Address space, Distributed computing, False sharing, TreadMarks, Distributed object, Shared resource, Programming paradigm, Operating system, Garbage collection - Abstract
We present a new run-time system that supports object sharing in a distributed system. The key insight in this system is that a handle-based implementation of such a system enables efficient and transparent sharing of data with both fine- and coarse-grained access patterns. In addition, it supports efficient execution of garbage-collected programs. In contrast, conventional distributed shared memory (DSM) systems are limited to providing only one granularity with good performance, and have experienced difficulty in efficiently supporting garbage collection. A safe language, in which no pointer arithmetic is allowed, can transparently be compiled into a handle-based system and constitutes its preferred mode of use. A programmer can also directly use a handle-based programming model that avoids pointer arithmetic on the handles, and achieve the same performance but without the programming benefits of a safe programming language. This new run-time system, DOSA (Distributed Object Sharing Architecture), provides a shared object space abstraction rather than a shared address space abstraction. The key to its efficiency is the observation that a handle-based distributed implementation permits VM-based access and modification detection without suffering false sharing for fine-grained access patterns. We compare DOSA to TreadMarks, a conventional DSM system that is efficient at handling coarse-grained sharing. The performance of fine-grained applications and garbage-collected applications is considerably better than in TreadMarks, and the performance of coarse-grained applications is nearly as good as in TreadMarks. Inasmuch as the performance of such applications is already good in TreadMarks, we consider this an acceptable performance penalty.
- Published
- 2003
31. Secure routing for structured peer-to-peer overlay networks
- Author
-
Ayalvadi Ganesh, Peter Druschel, Antony Rowstron, Dan S. Wallach, and Miguel Castro
- Subjects
Computer science, Peer-to-peer overlay networks, Overlay network, Group communication, Distributed data store, Routing, Computer network - Abstract
Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without preexisting trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes.
- Published
- 2002
32. Termination in language-based systems
- Author
-
Algis Rudys and Dan S. Wallach
- Subjects
General Computer Science, Java, Computer science, Programming language, Java bytecode, Extensibility, Infinite loop, Operating system, The Internet, Rewriting, Safety, Risk, Reliability and Quality - Abstract
Language run-time systems are increasingly being embedded in systems to support run-time extensibility via mobile code. Such systems raise a number of concerns when the code running in such systems is potentially buggy or untrusted. Although sophisticated access controls have been designed for mobile code and are shipping as part of commercial systems such as Java, there is no support for terminating mobile code short of terminating the entire language run-time. This article presents a concept called "soft termination" that can be applied to virtually any mobile code system. Soft termination allows mobile code threads to be safely terminated while preserving the stability of the language run-time. In addition, function bodies can be permanently disabled, thwarting attacks predicated on system threads eventually calling untrusted functions. Soft termination guarantees termination by breaking any potential infinite loops in mobile code. We present a formal design for soft termination and an implementation of it for Java, built using Java bytecode rewriting, which demonstrates reasonable performance (3% to 25% slowdowns on benchmarks).
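The core trick is mechanical: rewrite the code so every loop back-edge polls a termination flag, guaranteeing that even an infinite loop reaches a check. The paper does this with Java bytecode rewriting; this Python sketch merely shows the shape of the transformed code, with hypothetical names.

```python
import threading

class SoftTerminated(Exception):
    """Raised inside mobile code when the host asks it to stop."""

terminate_flag = threading.Event()  # set by the host runtime

def untrusted_task():
    while True:                      # original, possibly infinite loop
        if terminate_flag.is_set():  # check injected at the back-edge
            raise SoftTerminated()
        do_work_step()               # original loop body (hypothetical)

def do_work_step():
    pass  # stand-in for the mobile code's real work
```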
- Published
- 2002
33. Hardening Persona – Improving Federated Web Login
- Author
-
Michael Dietz and Dan S. Wallach
- Subjects
World Wide Web, Password, Computer science, Automated proof checking, Persona, Security token, Login, Computer security - Abstract
Federated login protocols for the Web are intended to increase user security by reducing the proliferation of passwords that users are expected to remember and use on a day-to-day basis; however, these protocols are vulnerable to recent attacks against TLS that allow attackers to extract session cookies and other such authentication tokens from within TLS sessions. A recent technique, TLS-OBC (origin-bound certificates), allows these tokens to be hardened against extraction. This paper describes the design and engineering of OBC-based extensions to federated login protocols. We present two OBC-based variants on the popular Persona federated login protocol, formalizing them with BAN logic and using the automated proof checker from the related Nexus Authentication Logic. We also present a proof-of-concept implementation, exploring the necessary browser extensions and server support.
- Published
- 2014
34. Aggregate Characterization of User Behavior in Twitter and Analysis of the Retweet Graph
- Author
-
Robert P. Dick, Yue Liu, Z. Morley Mao, David R. Bild, and Dan S. Wallach
- Subjects
Physics and Society (physics.soc-ph), Social and Information Networks (cs.SI), Theoretical computer science, Computer Networks and Communications, Microblogging, Computer science, Power law, Social media, Cluster analysis, Weibull distribution, Social graph, Social network, Log-normal distribution, Graph (abstract data type), Information cascade - Abstract
Most previous analysis of Twitter user behavior has focused on individual information cascades and the social followers graph. We instead study aggregate user behavior and the retweet graph, with a focus on quantitative descriptions. We find that the lifetime tweet distribution is a type-II discrete Weibull stemming from a power-law hazard function; the tweet rate distribution, although asymptotically power law, exhibits a lognormal cutoff over finite sample intervals; and the inter-tweet interval distribution is power law with exponential cutoff. The retweet graph is small-world and scale-free, like the social graph, but is less disassortative and has much stronger clustering. These differences are consistent with it better capturing the real-world social relationships of and trust between users. Beyond understanding and modeling human communication patterns and social networks, applications for alternative, decentralized microblogging systems, both predicting real-world performance and detecting spam, are discussed.
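For reference, one standard parameterization of the type-II discrete Weibull, defined directly by its power-law hazard (our notation, not necessarily the paper's), is:

\[
h(k) \;=\; \Pr[X = k \mid X \ge k] \;=\; c\,k^{\beta - 1},
\qquad
\Pr[X > k] \;=\; \prod_{j=1}^{k}\bigl(1 - c\,j^{\beta - 1}\bigr),
\]

with probability mass \(\Pr[X = k] = h(k)\,\Pr[X > k - 1]\). The lifetime-tweet-count claim in the abstract is exactly this: a power-law hazard (the longer a user has been tweeting, the less likely they are to stop at any given step) induces a type-II discrete Weibull lifetime distribution.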
- Published
- 2014
- Full Text
- View/download PDF
35. SAFKASI
- Author
-
Edward W. Felten, Dan S. Wallach, and Andrew W. Appel
- Subjects
Java, Computer science, Call stack, Programming language, Tail call, Bytecode, Software engineering, Java applet, Software - Abstract
In order to run untrusted code in the same process as trusted code, there must be a mechanism to allow dangerous calls to determine if their caller is authorized to exercise the privilege of using the dangerous routine. Java systems have adopted a technique called stack inspection to address this concern. But its original definition, in terms of searching stack frames, had an unclear relationship to the actual achievement of security, overconstrained the implementation of a Java system, limited many desirable optimizations such as method inlining and tail recursion, and generally interfered with interprocedural optimization. We present a new semantics for stack inspection based on a belief logic and its implementation using the calculus of security-passing style, which addresses the concerns of traditional stack inspection. With security-passing style, we can efficiently represent the security context for any method activation, and we can build a new implementation strictly by rewriting the Java bytecodes before they are loaded by the system. No changes to the JVM or bytecode semantics are necessary. With a combination of static analysis and runtime optimizations, our prototype implementation shows reasonable performance (although traditional stack inspection is still faster), and is easier to consider for languages beyond Java. We call our system SAFKASI (the Security Architecture Formerly Known as Stack Inspection).
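A minimal sketch of the security-passing idea (names illustrative, in Python rather than Java bytecode): instead of walking stack frames at check time, each call threads the current security context along as an explicit argument.

```python
def check_permission(ctx: frozenset, perm: str) -> None:
    """The 'dangerous' routine consults the passed-in context rather
    than inspecting its caller's stack frames."""
    if perm not in ctx:
        raise PermissionError(perm)

def read_file(ctx: frozenset, path: str) -> str:
    check_permission(ctx, "file.read")
    with open(path) as f:
        return f.read()

def run_untrusted(ctx: frozenset, path: str) -> str:
    # Untrusted code runs with an attenuated context; whatever it
    # calls sees only the privileges explicitly passed down.
    return read_file(ctx - {"file.write"}, path)
```

Because the context is an ordinary value, inlining and tail calls no longer disturb the security semantics, which is the point of the transformation.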
- Published
- 2000
36. Extensible security architectures for Java
- Author
-
Drew Dean, Edward W. Felten, Dan S. Wallach, and Dirk Balfanz
- Subjects
Java, Computer science, Principal (computer security), Computer security model, Security policy, Software security assurance, Operating system - Abstract
Mobile code technologies such as Java, JavaScript, and ActiveX generally limit all programs to a single restrictive security policy. However, software-based protection can allow for more extensible security models, with potentially significant performance improvements over traditional hardware-based solutions. An extensible security system should be able to protect subsystems and implement policies that are created after the initial system is shipped. We describe and analyze three implementation strategies for interposing such security policies in software-based security systems. Implementations exist for all three strategies: several vendors have adapted capabilities to Java, Netscape and Microsoft have extensions to Java's stack introspection, and we built a name space management system as an add-on to Microsoft Internet Explorer. Theoretically, all these systems are equivalently secure, but many practical issues and implementation details favor some aspects of each system.
- Published
- 1997
37. A Case of Collusion: A Study of the Interface Between Ad Libraries and their Apps
- Author
-
Dan S. Wallach and Theodore Book
- Subjects
Computer Science - Cryptography and Security (cs.CR), Computer science, Internet privacy, Popularity, Advertising revenue, World Wide Web, Collusion, Android (operating system), Personally identifiable information - Abstract
A growing concern with advertisement libraries on Android is their ability to exfiltrate personal information from their host applications. While previous work has looked at the libraries' abilities to measure private information on their own, advertising libraries also include APIs through which a host application can deliberately leak private information about the user. This study considers a corpus of 114,000 apps. We reconstruct the APIs for 103 ad libraries used in the corpus, and study how the privacy leaking APIs from the top 20 ad libraries are used by the applications. Notably, we have found that app popularity correlates with privacy leakage; the marginal increase in advertising revenue, multiplied over a larger user base, seems to incentivize these app vendors to violate their users' privacy.
- Published
- 2013
- Full Text
- View/download PDF
38. Inferring Atmospheric Particulate Matter Concentrations from Chinese Social Media Data
- Author
-
Aynne Kokas, Zhu Tao, Daniel S. Cohan, Dan S. Wallach, and Rui Zhang
- Subjects
Atmospheric Science, Air pollution, Social Sciences, Sociology, Beijing, Data Mining, Air Pollutants, Multidisciplinary, Applied Mathematics, Particulates, Pollution, Social Networks, Regression Analysis, Algorithms, Environmental Monitoring, China, Environmental Engineering, Microblogging, Twitter, Air Quality, Environmental health, Humans, Social media, Statistical Methods, Air quality index, Megacity, Atmospheric Chemistry, Environmental science, Particulate Matter - Abstract
Although studies have increasingly linked air pollution to specific health outcomes, less well understood is how public perceptions of air quality respond to changing pollutant levels. The growing availability of air pollution measurements and the proliferation of social media provide an opportunity to gauge public discussion of air quality conditions. In this paper, we consider particulate matter (PM) measurements from four Chinese megacities (Beijing, Shanghai, Guangzhou, and Chengdu) together with 112 million posts on Weibo (a popular Chinese microblogging system) from corresponding days in 2011-2013 to identify terms whose frequency was most correlated with PM levels. These correlations are used to construct an Air Discussion Index (ADI) for estimating daily PM based on the content of Weibo posts. In Beijing, the Chinese city with the most PM as measured by U.S. Embassy monitor stations, we found a strong correlation (R = 0.88) between the ADI and measured PM. In other Chinese cities with lower pollution levels, the correlation was weaker. Nonetheless, our results show that social media may be a useful proxy measurement for pollution, particularly when traditional measurement stations are unavailable, censored or misreported.
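A minimal sketch of constructing such an index: rank terms by the correlation of their daily frequency with measured PM, then fit a linear model on the best ones. This NumPy sketch is illustrative of the abstract's description, not the paper's actual pipeline; all names are ours.

```python
import numpy as np

def fit_adi(term_freqs, pm, top_k=50):
    """term_freqs: (days, terms) matrix of daily term frequencies.
    pm: (days,) measured particulate-matter levels.
    Returns (indices of the top_k best-correlated terms, least-squares
    weights); X @ weights for a new day is its Air Discussion Index."""
    corrs = np.array([np.corrcoef(term_freqs[:, j], pm)[0, 1]
                      for j in range(term_freqs.shape[1])])
    corrs = np.nan_to_num(np.abs(corrs))  # constant-frequency terms -> 0
    chosen = np.argsort(corrs)[-top_k:]
    X = np.column_stack([term_freqs[:, chosen], np.ones(len(pm))])
    weights, *_ = np.linalg.lstsq(X, pm, rcond=None)
    return chosen, weights
```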
- Published
- 2016
39. Strengthening user authentication through opportunistic cryptographic identity assertions
- Author
-
Dan S. Wallach, Michael Dietz, Tadayoshi Kohno, Dirk Balfanz, and Alexei Czeskis
- Subjects
Password ,Web server ,Password policy ,Authentication ,Cognitive password ,Computer science ,business.industry ,Cryptography ,Multi-factor authentication ,computer.software_genre ,Computer security ,Login ,One-time password ,Phishing ,Chip Authentication Program ,S/KEY ,World Wide Web ,Generic Bootstrapping Architecture ,Authentication protocol ,Lightweight Extensible Authentication Protocol ,Challenge–response authentication ,business ,computer - Abstract
User authentication systems are at an impasse. The most ubiquitous method -- the password -- has numerous problems, including susceptibility to unintentional exposure via phishing and cross-site password reuse. Second-factor authentication schemes have the potential to increase security but face usability and deployability challenges. For example, conventional second-factor schemes change the user authentication experience. Furthermore, while more secure than passwords, second-factor schemes still fail to provide sufficient protection against (single-use) phishing attacks. We present PhoneAuth, a system intended to provide security assurances comparable to or greater than that of conventional two-factor authentication systems while offering the same authentication experience as traditional passwords alone. Our work leverages the following key insights. First, a user's personal device (e.g., a phone) can communicate directly with the user's computer (and hence the remote web server) without any interaction with the user. Second, it is possible to provide a layered approach to security, whereby a web server can enact different policies depending on whether or not the user's personal device is present. We describe and evaluate our server-side, Chromium web browser, and Android phone implementations of PhoneAuth.
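A toy sketch of the layered-policy idea, with the caveat that PhoneAuth itself uses public-key cryptographic assertions delivered over a browser-to-phone channel; here a shared-key HMAC stands in for the device assertion.

```python
import hashlib
import hmac
import os
import secrets

DEVICE_KEY = secrets.token_bytes(32)   # key provisioned to the user's phone

def phone_assertion(challenge: bytes) -> bytes:
    """What the phone would return, opportunistically, for a login challenge."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def login(password_ok: bool, challenge: bytes, assertion: bytes | None) -> str:
    if not password_ok:
        return "deny"
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    if assertion is not None and hmac.compare_digest(assertion, expected):
        return "full-access"       # password correct and device present
    return "restricted-access"     # layered policy: device absent or unreachable

challenge = os.urandom(16)
print(login(True, challenge, phone_assertion(challenge)))  # full-access
print(login(True, challenge, None))                        # restricted-access
```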
- Published
- 2012
40. Slender PUF Protocol: A Lightweight, Robust, and Secure Authentication by Substring Matching
- Author
-
Dan S. Wallach, Farinaz Koushanfar, Mehrdad Majzoobi, Masoud Rostami, and Srinivas Devadas
- Subjects
ComputingMilieux_MANAGEMENTOFCOMPUTINGANDINFORMATIONSYSTEMS ,Authentication ,Theoretical computer science ,Computer science ,business.industry ,Embedded system ,Authentication protocol ,Physical unclonable function ,Hash function ,Message authentication code ,Cryptographic protocol ,business ,Protocol (object-oriented programming) - Abstract
We introduce the Slender PUF protocol, an efficient and secure method to authenticate the responses generated from a Strong Physical Unclonable Function (PUF). The new method is lightweight, and suitable for energy-constrained platforms such as ultra-low-power embedded systems for use in identification and authentication applications. The proposed protocol does not follow the classic paradigm of exposing the full PUF responses (or a transformation of the full string of responses) on the communication channel. Instead, random subsets of the responses are revealed and sent for authentication. The response patterns are used for authenticating the prover device with a very high probability. We perform a thorough analysis of the method's resiliency to various attacks, which guides the adjustment of our protocol parameters for an efficient and secure implementation. We demonstrate that the Slender PUF protocol, if carefully designed, will be resilient against all known machine learning attacks. In addition, it has the great advantage of an inbuilt PUF error tolerance. Thus, the Slender PUF protocol is lightweight and does not require the costly additional error-correction, fuzzy-extractor, and hash modules suggested in most previously known PUF-based robust authentication techniques. The low overhead and practicality of the protocol are confirmed by a set of hardware implementations and evaluations.
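The substring-matching step might look roughly like the following sketch, where a keyed hash stands in for real PUF hardware and the error-tolerance constant is invented for illustration.

```python
import hashlib
import random

def puf_response(secret: bytes, challenge: bytes, nbits: int = 256) -> str:
    """Toy stand-in for a strong PUF: deterministic bits from a device secret."""
    h = hashlib.sha256(secret + challenge).digest()
    return ''.join(f'{b:08b}' for b in h)[:nbits]

def prover_reveal(response: str, sub_len: int = 64) -> str:
    """Reveal a random substring; the index is never sent, so eavesdroppers
    cannot reassemble the full response string."""
    i = random.randrange(len(response) - sub_len)
    return response[i:i + sub_len]

def verifier_check(expected: str, revealed: str, max_mismatch: int = 4) -> bool:
    """Slide the revealed substring over the modeled response; accept if some
    alignment matches within the error tolerance (PUF responses are noisy)."""
    n = len(revealed)
    for i in range(len(expected) - n + 1):
        if sum(a != b for a, b in zip(expected[i:i + n], revealed)) <= max_mismatch:
            return True
    return False

secret, challenge = b"device-secret", b"nonce-123"
resp = puf_response(secret, challenge)
print(verifier_check(resp, prover_reveal(resp)))  # True
```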
- Published
- 2012
41. Using predictable mobility patterns to support scalable and secure MANETs of handheld devices
- Author
-
Yue Liu, Z. Morley Mao, Robert P. Dick, Dan S. Wallach, and David R. Bild
- Subjects
Optimized Link State Routing Protocol ,business.industry ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Scalability ,Wireless ,Geographic routing ,Mobile ad hoc network ,business ,Location tracking ,Mobile device ,Anonymity ,Computer network - Abstract
Mobile ad-hoc networks of wireless devices (MANETs) hold the promise of providing network services without traditional infrastructures that could fall victim to manipulation and censorship. Unfortunately, current MANET systems suffer significant scalability problems, effectively precluding their use for general-purpose networking. We suggest tailoring MANETs to particular classes of application and leveraging application-specific properties to increase scalability. This paper describes the design of a scalable and secure MANET for text-based personal communication. Our design is based on geographic routing and uses human motion and communication patterns to facilitate location tracking and distribution, thereby increasing scalability above that of traditional geographic routing. We provide location privacy by transplanting mix-net techniques into MANETs.
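For context, the greedy forwarding step underlying geographic routing can be sketched as follows; the paper's contributions (location tracking and distribution, plus mix-net location privacy) sit on top of this primitive and are omitted.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(my_pos, neighbors, dest_pos):
    """neighbors: {node_id: (x, y)}. Forward to the neighbor geographically
    closest to the destination, but only if it makes progress toward it;
    otherwise the packet is stuck in a local minimum (real protocols add
    recovery modes for that case)."""
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest_pos), default=None)
    if best is not None and dist(neighbors[best], dest_pos) < dist(my_pos, dest_pos):
        return best
    return None  # no neighbor makes progress

print(next_hop((0, 0), {"a": (1, 1), "b": (3, 0)}, (5, 0)))  # 'b'
```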
- Published
- 2011
42. Building Incentives into Tor
- Author
-
Dan S. Wallach, Roger Dingledine, and Tsuen-Wan 'Johnny' Ngan
- Subjects
Service (systems architecture) ,Incentive ,Computer science ,Relay ,law ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,ComputingMilieux_COMPUTERSANDSOCIETY ,Directory ,Computer security ,computer.software_genre ,computer ,Telecommunications network ,law.invention - Abstract
Distributed anonymous communication networks like Tor depend on volunteers to donate their resources. However, the efforts of Tor volunteers have not grown as fast as the demands on the Tor network. We explore techniques to incentivize Tor users to relay Tor traffic too; if users contribute resources to the Tor overlay, they should receive faster service in return. In our design, the central Tor directory authorities measure performance and publish a list of Tor relays that should be given higher priority when establishing circuits. Simulations of our proposed design show that conforming users receive significant improvements in performance, in some cases experiencing twice the network throughput of selfish users who do not relay traffic for the Tor network.
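The effect of prioritization can be illustrated with a toy weighted scheduler (not Tor's implementation): circuits from measured, relaying users receive a larger share of bandwidth than circuits from selfish users.

```python
import random

def schedule(circuits, rounds=10000, priority_weight=2.0):
    """circuits: {circuit_id: is_priority}. Simulate weighted round selection
    and return each circuit's share of service."""
    weights = {c: (priority_weight if prio else 1.0) for c, prio in circuits.items()}
    total = sum(weights.values())
    served = {c: 0 for c in circuits}
    for _ in range(rounds):
        r, acc = random.uniform(0, total), 0.0
        for c, w in weights.items():
            acc += w
            if r <= acc:
                served[c] += 1
                break
    return {c: served[c] / rounds for c in circuits}

# A relaying user gets roughly twice the service of a selfish one.
print(schedule({"relayer": True, "selfish": False}))  # ~{relayer: 0.67, selfish: 0.33}
```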
- Published
- 2010
43. Super-Efficient Aggregating History-Independent Persistent Authenticated Dictionaries
- Author
-
Dan S. Wallach and Scott A. Crosby
- Subjects
Authentication ,Information retrieval ,Constant (computer programming) ,Database ,Computer science ,Feature (machine learning) ,computer.software_genre ,Query language ,Data structure ,Merkle tree ,computer ,Search tree ,Variety (cybernetics) - Abstract
Authenticated dictionaries allow users to send lookup requests to an untrusted server and get authenticated answers. Persistent authenticated dictionaries (PADs) add queries against historical versions. We consider a variety of different trust models for PADs and we present several extensions, including support for aggregation and a rich query language, as well as hiding information about the order in which PADs were constructed. We consider variations on treelike data structures as well as a design that improves efficiency by speculative future predictions. We improve on prior constructions and feature two designs that can authenticate historical queries with constant storage per update and several designs that can return constant-sized authentication results.
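The underlying Merkle-tree machinery common to such designs can be sketched as follows: a lookup returns a value plus a hash path that the client checks against a published root. The PAD-specific persistence and aggregation layers are omitted.

```python
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def build(leaves):
    """Build all Merkle levels bottom-up; an odd node is paired with a copy
    of itself so every node has a sibling."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """Authentication path for the leaf at idx: (leaf-is-right?, sibling hash)."""
    path = []
    for lvl in levels[:-1]:
        path.append((idx % 2 == 1, lvl[idx ^ 1]))
        idx //= 2
    return path

def verify(leaf, path, root):
    h = leaf
    for leaf_is_right, sibling in path:
        h = H(sibling + h) if leaf_is_right else H(h + sibling)
    return h == root

entries = [H(f"key{i}=value{i}".encode()) for i in range(5)]
levels = build(entries)
assert verify(entries[3], prove(levels, 3), levels[-1][0])  # proof checks out
```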
- Published
- 2009
44. Finding the Evidence in Tamper-Evident Logs
- Author
-
Kyle Derr, Daniel Robert Sandler, Scott A. Crosby, and Dan S. Wallach
- Subjects
Set (abstract data type) ,Predicate logic ,Correctness ,Database ,Computer science ,Order (business) ,Hash chain ,Data mining ,Mathematical proof ,computer.software_genre ,Performance results ,computer ,Counterexample - Abstract
Secure logs are powerful tools for building systems that must resist forgery, prove temporal relationships, and stand up to forensic scrutiny. The proofs of order and integrity encoded in these tamper-evident chronological records, typically built using hash chaining, may be used by applications to enforce operating constraints or sound alarms at suspicious activity. However, existing research stops short of discussing how one might go about automatically determining whether a given secure log satisfies a given set of constraints on its records. In this paper, we discuss our work on Querifier, a tool that accomplishes this. It can be used offline as an analyzer for static logs, or online during the runtime of a logging application. Querifier rules are written in a flexible pattern-matching language that adapts to arbitrary log structures; given a set of rules and available log data, Querifier presents evidence of correctness and offers counterexamples if desired. We describe Querifier's implementation and offer early performance results.
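A minimal example of the hash-chained, tamper-evident structure that Querifier reasons over (its rule language itself is not shown): each entry commits to its predecessor, so any edit or reordering breaks verification.

```python
import hashlib
import json

def append(log, record):
    """Append a record whose chain value commits to the previous entry."""
    prev = log[-1]["chain"] if log else "0" * 64
    payload = prev + json.dumps(record, sort_keys=True)
    log.append({"record": record,
                "chain": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """Recompute the chain; any tampered or reordered entry fails the check."""
    prev = "0" * 64
    for entry in log:
        payload = prev + json.dumps(entry["record"], sort_keys=True)
        if entry["chain"] != hashlib.sha256(payload.encode()).hexdigest():
            return False   # a counterexample: tampering detected here
        prev = entry["chain"]
    return True

log = []
append(log, {"op": "open", "t": 1})
append(log, {"op": "write", "t": 2})
assert verify(log)
log[0]["record"]["t"] = 99   # tamper with history...
assert not verify(log)       # ...and the chain no longer checks out
```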
- Published
- 2008
45. Electronic voting machines versus traditional methods
- Author
-
Kristen K. Greene, Michael D. Byrne, Daniel Robert Sandler, Sarah P. Everett, Kyle Derr, Dan S. Wallach, and Ted Torous
- Subjects
Electronic voting ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Usability ,Computer security ,computer.software_genre ,USable ,Preference ,Human–computer interaction ,Voting ,business ,computer ,media_common - Abstract
In the 2006 U.S. election, it was estimated that over 66 million people would be voting on direct recording electronic (DRE) systems in 34% of the nation's counties [8]. Although these computer-based voting systems have been widely adopted, they have not been empirically proven to be more usable than their predecessors. The series of studies reported here compares usability data from a DRE with those from more traditional voting technologies (paper ballots, punch cards, and lever machines). Results indicate that there was little difference between the DRE and these older methods in efficiency or effectiveness. However, in terms of user satisfaction, the DRE was significantly better than the older methods. Paper ballots also performed well, but participants were much more satisfied with their experiences voting on the DRE. The disconnect between subjective and objective usability has potential policy ramifications.
- Published
- 2008
46. Eclipse Attacks on Overlay Networks: Threats and Defenses
- Author
-
T.-W. Ngan, Peter Druschel, Dan S. Wallach, and A. Singh
- Subjects
Computer science ,business.industry ,Overlay network ,Computer security ,computer.software_genre ,business ,computer ,Computer network - Abstract
Overlay networks are widely used to deploy functionality at edge nodes without changing network routers. Each node in an overlay network maintains connections with a number of peers, forming a graph upon which a distributed application or service is implemented. In an “Eclipse” attack, a set of malicious, colluding overlay nodes arranges for a correct node to peer only with members of the coalition. If successful, the attacker can mediate most or all communication to and from the victim. Furthermore, by supplying biased neighbor information during normal overlay maintenance, a modest number of malicious nodes can eclipse a large number of correct victim nodes. This paper studies the impact of Eclipse attacks on structured overlays and shows the limitations of known defenses. We then present the design, implementation, and evaluation of a new defense, in which nodes anonymously audit each other’s connectivity. The key observation is that a node that mounts an Eclipse attack must have a higher than average node degree. We show that enforcing a node degree limit by auditing is an effective defense against Eclipse attacks. Furthermore, unlike most existing defenses, our defense leaves flexibility in the selection of neighboring nodes, thus permitting important overlay optimizations like proximity neighbor selection (PNS).
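The degree-bounding check at the heart of the defense can be sketched as follows; the anonymous auditing machinery that prevents a target from answering audits selectively is omitted, and the limit constant is invented for illustration.

```python
# Invented bound for illustration; in practice it would track the expected
# node degree of the overlay.
DEGREE_LIMIT = 16

def audit_degree(claimed_neighbors: set) -> bool:
    """Accept a prospective peer only if its (anonymously audited) neighbor
    set respects the degree bound. A node mounting an Eclipse attack must
    maintain a higher-than-average degree, so it fails this check."""
    return len(claimed_neighbors) <= DEGREE_LIMIT

honest = {"a", "b", "c"}
attacker = {f"victim{i}" for i in range(40)}   # peering with many victims
print(audit_degree(honest))     # True: within the bound
print(audit_degree(attacker))   # False: degree 40 exceeds the limit
```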
- Published
- 2006
47. Architectures for adaptation systems
- Author
-
Dan S. Wallach, Willy Zwaenepoel, and E. de Lara
- Subjects
File system ,Computer science ,business.industry ,Interface (computing) ,Distributed computing ,Bandwidth (signal processing) ,computer.software_genre ,Upgrade ,Resource (project management) ,Resource allocation (computer) ,The Internet ,business ,Adaptation (computer science) ,computer - Abstract
Modern systems need support for adaptation, typically responding to changes in system resources such as available network bandwidth. If an adaptation system is implemented strictly at the system layer, data adaptations can be added within the network or file system. This makes the adaptation system portable across applications, but sacrifices opportunities to change an application's behavior. It's not possible, for example, to first return a low-quality version of an image and later upgrade it should excess network capacity be available. On the flip side, the adaptation logic could be built into each and every application, with the system providing information to the applications in order to help them adapt their behavior. This becomes impractical because many applications will never be written to perform adaptation, and an application writer may not be able to foresee all possible adaptations that may be desirable. We argue that adaptation systems should be centralized, where they can make global observations about system usage and resource availability. We further argue that applications should not be written to perform adaptation. Instead, applications should support an interface where the adaptation system can dynamically modify an application's behavior as it runs.
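One way to picture the proposed split, with invented names: applications register simple hooks, and a centralized adaptation system with a global view of resource availability decides when to fire them.

```python
class AdaptationSystem:
    """Centralized: observes resources globally and drives application hooks."""
    def __init__(self):
        self.hooks = []   # (predicate over resources, behavior-changing action)

    def register(self, predicate, action):
        self.hooks.append((predicate, action))

    def on_resources_changed(self, resources: dict):
        # Every registered application hook is evaluated centrally, so policy
        # lives in one place rather than in each application.
        for predicate, action in self.hooks:
            if predicate(resources):
                action(resources)

def degrade_images(res):
    print(f"serving low-quality images (bandwidth={res['bandwidth_kbps']} kbps)")

adapt = AdaptationSystem()
adapt.register(lambda r: r["bandwidth_kbps"] < 256, degrade_images)
adapt.on_resources_changed({"bandwidth_kbps": 128})   # triggers degradation
```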
- Published
- 2005
48. A Taxonomy of Rational Attacks
- Author
-
Seth James Nielson, Dan S. Wallach, and Scott A. Crosby
- Subjects
Consumption (economics) ,Computer science ,Taxonomy (general) ,Node (networking) ,Information system ,Fraction (mathematics) ,Peer-to-peer ,computer.software_genre ,Computer security ,computer - Abstract
For peer-to-peer services to be effective, participating nodes must cooperate, but in most scenarios a node represents a self-interested party and cooperation can neither be expected nor enforced. A reasonable assumption is that a large fraction of p2p nodes are rational and will attempt to maximize their consumption of system resources while minimizing the use of their own. If such behavior violates system policy, then it constitutes an attack. In this paper, we identify rational attacks, organize them into a taxonomy, and identify corresponding solutions where they exist. The most effective solutions directly incentivize cooperative behavior, but when this is not feasible, the common alternative is to incentivize evidence of cooperation instead.
- Published
- 2005
49. Scrivener: Providing Incentives in Cooperative Content Distribution Systems
- Author
-
Tsuen-Wan 'Johnny' Ngan, Animesh Nandi, Atul Singh, Peter Druschel, and Dan S. Wallach
- Subjects
Service (business) ,Service quality ,Computer science ,Quality of service ,Overlay network ,Peer-to-peer ,Computer security ,computer.software_genre ,Common good ,Incentive ,Middleware ,Scrivener ,Overhead (computing) ,computer - Abstract
Cooperative peer-to-peer (p2p) applications are designed to share the resources of participating computers for the common good of all users. However, users do not necessarily have an incentive to donate resources to the system if they can use the system's services for free. In this paper, we describe Scrivener, a fully decentralized system that ensures fair sharing of bandwidth in cooperative content distribution networks. We show how participating nodes, tracking only first-hand observed behavior of their peers, can detect when their peers are behaving selfishly and refuse to provide them service. Simulation results show that our mechanisms effectively limit the quality of service received by a user to a level that is proportional to the amount of resources contributed by that user, while incurring modest overhead.
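A toy version of the first-hand accounting idea: each node tracks a per-peer credit balance from direct observation and refuses service once a peer's debt exceeds a bound. Constants and structure are invented for illustration.

```python
DEBT_LIMIT = 5   # invented bound on how far into debt a peer may go

class Node:
    def __init__(self):
        self.credit = {}   # peer -> net blocks (what they gave us minus what we gave them)

    def record_received(self, peer, blocks=1):
        """First-hand observation: the peer served us some blocks."""
        self.credit[peer] = self.credit.get(peer, 0) + blocks

    def serve(self, peer, blocks=1) -> bool:
        """Serve the peer unless doing so would push them past the debt limit."""
        balance = self.credit.get(peer, 0) - blocks
        if balance < -DEBT_LIMIT:
            return False   # selfish peer: refuse service
        self.credit[peer] = balance
        return True

n = Node()
print([n.serve("freeloader") for _ in range(7)])  # the last requests are refused
```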
- Published
- 2005
50. Practical robust localization over large-scale 802.11 wireless networks
- Author
-
Andreas Haeberlen, Lydia E. Kavraki, Eliot John Flannery, Dan S. Wallach, Algis Rudys, and Andrew M. Ladd
- Subjects
Base station ,business.product_category ,Wireless network ,Computer science ,Robustness (computer science) ,Laptop ,Real-time computing ,Bayesian probability ,Probabilistic logic ,Signal intensity ,Frame rate ,business - Abstract
We demonstrate a system built using probabilistic techniques that allows for remarkably accurate localization across our entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. While prior systems have required significant investments of human labor to build a detailed signal map, we can train our system by spending less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of our building's unmodified base stations. We actually collected over two minutes of data per office or region, about 28 man-hours of effort. Using less than half of this data to train the localizer, we can localize a user to the precise, correct location in over 95% of our attempts, across the entire building. Even in the most pathological cases, we almost never localize a user any more distant than to the neighboring office. A user can obtain this level of accuracy with only two or three signal intensity measurements, allowing for a high frame rate of localization results. Furthermore, with a brief calibration period, our system can be adapted to work with previously unknown user hardware. We present results demonstrating the robustness of our system against a variety of untrained time-varying phenomena, including the presence or absence of people in the building across the day. Our system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures.
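A stripped-down version of the probabilistic approach: model each location's signal strength per base station as a Gaussian learned during training, then pick the maximum-likelihood location for a new scan. The numbers below are invented, and the paper's models are considerably richer.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# room -> {base_station: (mean dBm, std dev)}, learned from a brief walk-through.
signal_map = {
    "office_101": {"ap1": (-40.0, 4.0), "ap2": (-70.0, 6.0)},
    "office_102": {"ap1": (-65.0, 5.0), "ap2": (-45.0, 4.0)},
}

def localize(scan: dict) -> str:
    """scan: {base_station: observed dBm}. Return the maximum-likelihood room,
    treating base stations as independent (naive Bayes, uniform prior)."""
    def loglik(room):
        return sum(gaussian_logpdf(rssi, *signal_map[room][ap])
                   for ap, rssi in scan.items() if ap in signal_map[room])
    return max(signal_map, key=loglik)

print(localize({"ap1": -42.0, "ap2": -68.0}))  # office_101
```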
- Published
- 2004