1,172 results
Search Results
2. Are We Cobblers without Shoes? Making Computer Science Data FAIR: In search of more efficient data sharing.
- Author
- Noy, Natasha and Goble, Carole
- Subjects
COMPUTER science ,DATA ,INFORMATION sharing - Abstract
The article discusses the lack of efficiency in how research data within the Computer Science discipline are shared. The authors use the acronym FAIR -- findable, accessible, interoperable, and reusable -- to capture how data should be made available at conferences and in journals.
- Published
- 2023
- Full Text
- View/download PDF
3. Historical Reflections: Conjoined Twins: Artificial Intelligence and the Invention of Computer Science.
- Author
- Haigh, Thomas
- Subjects
ARTIFICIAL intelligence ,COMPUTER science ,HISTORY of technology ,CYBERNETICS ,ARTIFICIAL neural networks - Abstract
The article discusses the relationship between artificial intelligence and computer science and the history and development of each discipline between 1950 and 1970. Information is presented on the research behind artificial intelligence and the beginning of graduate education programs in the United States in both artificial intelligence and computer science.
- Published
- 2023
- Full Text
- View/download PDF
4. Almost-Linear-Time Algorithms for Maximum Flow and Minimum-Cost Flow.
- Author
- Li Chen, Kyng, Rasmus, Liu, Yang P., Peng, Richard, Gutenberg, Maximilian Probst, and Sachdeva, Sushant
- Subjects
ALGORITHMS ,COMPUTER science ,INTERIOR-point methods ,LINEAR time invariant systems ,MATHEMATICS theorems - Abstract
The article discusses an algorithm designed to compute maximum flow and minimum-cost flows. It builds on the works of prior researchers to propose new theorems and provides formulas and graphs to illustrate these theorems and schemes.
- Published
- 2023
- Full Text
- View/download PDF
5. There Was No 'First AI Winter': Despite challenges and failures, the artificial intelligence community grew steadily during the 1970s.
- Author
- Haigh, Thomas
- Subjects
ARTIFICIAL intelligence research ,SCIENTIFIC community ,RESEARCH funding ,NINETEEN seventies ,COMPUTER science - Abstract
The article discusses the community that developed during the 1970s and fostered the growth of research into artificial intelligence. It details the slow progress of AI researchers in developing the field. It mentions the importance of military backing in aid of AI research, including funding acquired through the Advanced Research Projects Agency (ARPA) for institutions such as MIT and Stanford University. It outlines the continued struggle for broader funding. It describes the growth of AI organizations, increased conference attendance, class enrollment in universities, and the growing number of published articles throughout the 1970s as evidence of the growing support for this field in the academic and computer science communities.
- Published
- 2023
- Full Text
- View/download PDF
6. Data Analytics Anywhere and Everywhere: Mobile, ubiquitous, and immersive computing appear poised to transform visualization, data science, and data-driven decision making.
- Author
- ELMQVIST, NIKLAS
- Subjects
UBIQUITOUS computing ,MOBILE computing ,DATA analysis ,DATA science ,DATA visualization ,COMPUTER science ,DATA ,ELECTRONIC records - Abstract
The article examines the rise of ubiquitous computing. It describes the proliferation of smartphones, mobile computing devices, and immersive technologies that produce "anywhere and everywhere data," and how such devices are becoming increasingly entwined. It describes the local, temporal, and contextual nature of generated data. It applies the post-cognitive framework from social science to information science in order to describe how information is transformed through interactions between media sources. It discusses how the various devices should complement each other as ubiquitous analytics works to leverage the shared data.
- Published
- 2023
- Full Text
- View/download PDF
7. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
- Kempa, Dominik and Kociumaka, Tomasz
- Subjects
COMPUTER programming ,COMPUTERS in lexicography ,ALGORITHMS ,DATA structures ,COMPUTER science - Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index [7]), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of the Lempel--Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain the functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of a text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets [11]) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
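As a concrete aside, the transform the abstract describes can be sketched in a few lines. This is the naive rotation-sorting construction, not the paper's algorithm, and the "$" sentinel is a standard convention assumed here:

```python
def bwt(text: str) -> str:
    """Naive Burrows-Wheeler Transform: sort all rotations of text + '$'
    and read off the last column. The sentinel makes the transform invertible."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def run_count(s: str) -> int:
    """Number r of equal-letter runs, the quantity bounded by r = O(z log² n)."""
    return 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)
```

For "banana" this yields "annb$aa", whose 5 equal-letter runs illustrate the quantity r that the paper's bound controls; production indexes compute the BWT via suffix arrays rather than explicit rotations.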
8. On Being a Computer Science Communicator: Facilitating more effective public engagement with a computer science perspective.
- Author
- Jacobson, Sheldon H.
- Subjects
COMPUTER science ,COMMUNICATION ,PUBLIC opinion ,GOVERNMENT policy ,ATTITUDES toward technology - Abstract
The author writes about the importance of effectively communicating the broad and impactful aspects of computer science, moving beyond technical jargon to engage both educated non-experts and the general public. This involves creating concise "elevator speeches" and utilizing opportunities to bridge the gap in understanding terms like "artificial intelligence" (AI) and "machine learning." He feels that public engagement, such as interacting with journalists, giving interviews, and writing opinion articles, is crucial to demonstrate how computer science can address complex societal issues as well as secure support for the field, and that active participation in public engagement enhances the field's visibility, attracts diverse students, and safeguards its future, particularly in addressing concerns about AI ethics and workforce effects.
- Published
- 2023
- Full Text
- View/download PDF
9. Historic Algorithms Help Unlock Shortest-Path Problem Breakthrough: By revisiting key algorithms from computing, a team unlocked hidden efficiency in a long-standing computer science problem.
- Author
- Edwards, Chris
- Subjects
ALGORITHMS ,COMPUTER algorithms ,COMPUTER programming ,COMPUTER science - Abstract
The article focuses on how revisiting key algorithms from computing history can unlock the hidden efficiency in computer science problems. The author discusses the shortest-path problem that has been long-standing, explores the use of the Bellman-Ford algorithm, and examines the work of Aaron Bernstein, Danupon Nanongkai, and Christian Wulff-Nilsen.
- Published
- 2023
- Full Text
- View/download PDF
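For readers who want the baseline, a minimal sketch of the Bellman-Ford algorithm the article mentions follows. This is the textbook version, not the breakthrough result, and the edge list below is an invented example:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths tolerating negative edge weights:
    relax every edge n - 1 times. edges is a list of (u, v, weight)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical 4-node graph with one negative edge (no negative cycles).
edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3)]
```

On this sample, the distances from source 0 come out as [0, -1, 1, 2]; the negative edge (2, 1, -2) shortens the path to node 1, which is exactly the case Dijkstra's algorithm cannot handle and the recent work speeds up.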
10. Community Is Critical Too.
- Author
- Clegg, Tamara L.
- Subjects
STEM education ,BLACK women ,COMPUTER science ,COMMUNITIES ,COMMUNITY support - Abstract
This article reflects on the author’s experiences as a Black woman in the field of computing science. Topics include her educational experience in computing science, her personal communities’ support of her throughout her education, and how her experiences led to her Science Everywhere project, which encourages learners to think about STEM in all aspects of their daily lives.
- Published
- 2024
- Full Text
- View/download PDF
11. Why Don't Developers Detect Improper Input Validation? '; DROP TABLE Papers; --.
- Author
- Braz, Larissa, Fregnan, Enrico, Çalikli, Gül, and Bacchelli, Alberto
- Subjects
SOFTWARE engineering ,COMPUTER science ,COMPUTER software development ,ARTIFICIAL intelligence ,CODE review (Computer science) - Abstract
Improper Input Validation (IIV) is a software vulnerability that occurs when a system does not safely handle input data. Even though IIV is easy to detect and fix, it still commonly happens in practice. In this paper, we study to what extent developers can detect IIV and investigate underlying reasons. This knowledge is essential to better understand how to support developers in creating secure software systems. We conduct an online experiment with 146 participants, of which 105 report at least three years of professional software development experience. Our results show that the existence of a visible attack scenario facilitates the detection of IIV vulnerabilities and that a significant portion of developers who did not find the vulnerability initially could identify it when warned about its existence. Yet, a total of 60 participants could not detect the vulnerability even after the warning. Other factors, such as the frequency with which the participants perform code reviews, influence the detection of IIV. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Methodological Standards in Accessibility Research on Motor Impairments: A Survey.
- Author
- SARSENBAYEVA, ZHANNA, VAN BERKEL, NIELS, VELLOSO, EDUARDO, GONCALVES, JORGE, and KOSTAKOS, VASSILIS
- Subjects
COMPUTER science - Abstract
The design and evaluation of accessibility technology is a core component of the computer science landscape, aiming to ensure that digital innovations are accessible to all. One of the most prominent and long-lasting areas of accessibility research focuses on motor impairments--deficiencies that affect the ability to move, manipulate objects, and interact with the physical world. In this survey article, we present an extensive overview of the past two decades of research into accessibility for people with motor impairments. Following a structured selection process, we analyzed the study details as reported in 177 relevant papers. Based on this analysis, we critically assess user representation, measurement instruments, and barriers that exist in accessibility research. Finally, we discuss future directions for accessibility research within the computer science domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Informatics Higher Education in Europe: A Data Portal and Case Study.
- Author
- DI NITTO, ELISABETTA, GARCÍA-VAREA, ISMAEL, JAZAYERI, MEHDI, TAMBURRI, DAMIAN A., and TIKHONENKO, SVETLANA
- Subjects
COMPUTER science ,HIGHER education ,DATABASES ,WEB portals ,APPLIED sciences ,RESEARCH universities & colleges - Abstract
This article investigates the current state of informatics higher education in Europe by utilizing the Informatics Europe Higher Education (IEHE) data portal. First, the article provides a detailed look at the IEHE data portal. Then the article presents a case study comparing informatics higher education at applied sciences universities with that at traditional research universities in five selected European countries, using data provided in the portal.
- Published
- 2023
- Full Text
- View/download PDF
14. Sampling Near Neighbors in Search for Fairness.
- Author
- Aumüller, Martin, Har-Peled, Sariel, Mahabadi, Sepideh, Pagh, Rasmus, and Silvestri, Francesco
- Subjects
DATA ,FAIRNESS ,DATABASE searching ,SEARCH algorithms ,COMPUTER algorithms ,COMPUTER science ,COMPUTER programming - Abstract
Similarity search is a fundamental algorithmic primitive, widely used in many computer science disciplines. Given a set of points S and a radius parameter r > 0, the r-near neighbor (r-NN) problem asks for a data structure that, given any query point q, returns a point p within distance at most r from q. In this paper, we study the r-NN problem in the light of individual fairness and providing equal opportunities: all points that are within distance r from the query should have the same probability to be returned. The problem is of special interest in high dimensions, where Locality Sensitive Hashing (LSH), the theoretically leading approach to similarity search, does not provide any fairness guarantee. In this work, we show that LSH-based algorithms can be made fair, without a significant loss in efficiency. We propose several efficient data structures for the exact and approximate variants of the fair NN problem. Our approach works more generally for sampling uniformly from a subcollection of sets of a given collection and can be used in a few other applications. We also carried out an experimental evaluation that highlights the inherent unfairness of existing NN data structures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
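The fairness guarantee the abstract states can be illustrated with a deliberately brute-force sketch, not the paper's LSH-based construction: collect every point within distance r of the query and pick one uniformly at random, so each near neighbor is returned with equal probability.

```python
import random

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def fair_r_near_neighbor(points, q, r):
    """Return a uniformly random point within distance r of q, or None.
    Trivially fair, but linear-time; the paper gets this guarantee
    in high dimensions without abandoning LSH-like efficiency."""
    near = [p for p in points if dist(p, q) <= r]
    return random.choice(near) if near else None
```

The point of the paper is precisely that standard LSH does not give this uniformity for free: some near neighbors collide with the query in more hash buckets than others and are over-sampled.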
15. Computer Vision-based Analysis of Buildings and Built Environments: A Systematic Review of Current Approaches.
- Author
- STARZYŃSKA-GRZEŚ, MAŁGORZATA B., ROUSSEL, ROBIN, JACOBY, SAM, and ASADIPOUR, ALI
- Subjects
BUILT environment ,COMPUTER architecture ,EVIDENCE gaps ,COMPUTER science ,ARCHITECTURAL design ,COMPUTER vision - Abstract
Analysing 88 sources published from 2011 to 2021, this article presents a first systematic review of the computer vision-based analysis of buildings and the built environment. Its aim is to assess the potential of this research for architectural studies and the implications of a shift to a cross-disciplinary approach between architecture and computer science for research problems, aims, processes, and applications. To this end, the types of algorithms and data sources used in the reviewed studies are discussed in respect to architectural applications such as building classification, detail classification, qualitative environmental analysis, building condition survey, and building value estimation. Based on this, current research gaps and trends are identified, with two main research aims emerging. First, studies that use or optimise computer vision methods to automate time-consuming, labour-intensive, or complex tasks when analysing architectural image data. Second, work that explores the methodological benefits of machine learning approaches to overcome limitations of conventional analysis to investigate new questions about the built environment by finding patterns and relationships among visual, statistical, and qualitative data. The growing body of research offers new methods to architectural and design studies, with the article identifying future challenges and directions of research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Conference Paper Selectivity and Impact.
- Author
- JILIN CHEN and KONSTAN, JOSEPH A.
- Subjects
IMPACT factor (Citation analysis) ,CITATION analysis ,COMPUTER science ,SCIENTIFIC literature ,RESEARCH evaluation ,CONFERENCES & conventions - Abstract
The article presents the results of a study which investigated the correlation between the acceptance rate and impact rating of conference papers in the field of computer science. The papers with the highest impact ratings were found to be associated with highly selective conferences, defined as those which rejected between 70 and 85 percent of papers submitted. Such papers, on average, had higher impact ratings than papers which were published in journals without being presented at conferences. A rejection rate of 85 percent or more tended to suppress submission levels and reduce impact factors, while an acceptance rate over 30 percent was associated with less prestigious conferences.
- Published
- 2010
- Full Text
- View/download PDF
17. The Silent (R)evolution of SAT.
- Author
- FICHTE, JOHANNES K., LE BERRE, DANIEL, HECHER, MARKUS, and SZEIDER, STEFAN
- Subjects
SATISFIABILITY (Computer science) ,MATHEMATICAL logic ,COMPUTER science ,BOOLEAN functions ,MACHINE learning - Abstract
The article explores Propositional Satisfiability (SAT), described by the authors as a cornerstone of computational complexity theory. Particular focus is given to how it has become a central target problem for solving hard computational problems in practice. Other topics discussed include how initial progress was made in SAT solving in the early 1990s and details on other eras of practical SAT solving.
- Published
- 2023
- Full Text
- View/download PDF
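As context for the eras the article surveys, a bare-bones DPLL procedure, the backtracking core from which the early-1990s solvers grew, can be sketched as follows (modern CDCL solvers add clause learning, watched literals, and restarts on top of this skeleton). Clauses are lists of nonzero ints in DIMACS style: 3 means variable 3 true, -3 means false.

```python
def dpll(clauses, assignment=()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    # Drop clauses already satisfied by the current assignment.
    clauses = [c for c in clauses if not any(lit in assignment for lit in c)]
    # Remove falsified literals from the remaining clauses.
    clauses = [[l for l in c if -l not in assignment] for c in clauses]
    if not clauses:
        return set(assignment)          # every clause satisfied
    if any(not c for c in clauses):
        return None                     # empty clause: conflict, backtrack
    var = abs(clauses[0][0])            # branch on some unassigned variable
    for lit in (var, -var):
        result = dpll(clauses, assignment + (lit,))
        if result is not None:
            return result
    return None
```

On the formula (x1 or x2) and (not x1) and (not x2 or x3), the procedure is forced into the assignment {not x1, x2, x3}.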
18. Introducing Research for Practice.
- Author
- BAILIS, PETER, PETER, SIMON, and SHERRY, JUSTINE
- Subjects
COMPUTER science research ,SERVER farms (Computer network management) ,HISTORY of technology ,COMPUTER science ,TWENTY-first century - Abstract
In this article, various experts on computer science (CS) offer guides relating to the best of CS research. Particular focus is given to how to determine what papers are worth taking the time to read. Additional topics discussed include the ACM (Association for Computing Machinery), how datacenters are changing the way Web servers are designed and the notion of Network Functions Virtualization (NFV).
- Published
- 2016
- Full Text
- View/download PDF
19. Research for Practice: Prediction-Serving Systems.
- Author
- CRANKSHAW, DAN and GONZALEZ, JOSEPH
- Subjects
MACHINE learning ,COMPUTER science ,INTERNET advertising ,RELATIONAL databases ,ARTIFICIAL neural networks - Abstract
The article discusses the deployment of machine learning technology in production environments. The authors describe the algorithms which constitute and shape machine learning protocols with a focus on papers on subjects including the MauveDB project to integrate machine learning technology into relational databases, scalable response prediction for online advertising, and optimization of neural network queries for object detection in video streams.
- Published
- 2018
- Full Text
- View/download PDF
20. An Interview with Dana Scott: ACM Fellow and A.M. Turing Award recipient Dana Scott reflects on his career.
- Author
- Shustek, Len
- Subjects
ACM A.M. Turing Award ,COMPUTER science ,MACHINE theory ,PROGRAMMING languages ,MATHEMATICAL logic - Abstract
An interview with Association for Computing Machinery (ACM) fellow Dana Stewart Scott who discusses various topics including his career involving the fields of computing science, automata theory, and the theory of programming languages. Scott's receipt of the ACM's A.M. Turing Award with colleague Michael Rabin in 1976 is mentioned, along with Scott's education, his involvement with mathematical logic, and his work at the University of Chicago and the University of California, Berkeley.
- Published
- 2022
- Full Text
- View/download PDF
21. Introduction to the Special Issue on BioFoundries and Cloud Laboratories.
- Author
- DENSMORE, DOUGLAS, HILLSON, NATHAN J., KLAVINS, ERIC, MYERS, CHRIS, PECCOUD, JEAN, and STRACQUADANIO, GIOVANNI
- Subjects
BIOENGINEERING ,SYNTHETIC biology ,LIFE sciences ,TECHNOLOGICAL innovations ,CHEMICAL engineering ,COMPUTER science - Abstract
An introduction is presented in which the editor discusses articles in the special issue on BioFoundries and Cloud Laboratories, highlighting their potential to harness biology to address global challenges through synthetic biology and the development of Biofoundries.
- Published
- 2023
- Full Text
- View/download PDF
22. Viewpoint Rebooting the CS Publication Process.
- Author
- Wallach, Dan S.
- Subjects
SCHOLARLY publishing ,OPEN access publishing ,COMPUTER science ,TECHNICAL publishing - Abstract
The article discusses the publication process for academic papers in computer science (CS) and provides a proposal for CSPub (clean-slate or computer science publication), an open-access publication system. The author identifies problems which he believes could be addressed by CSPub including low acceptance rates, the resubmission of rejected papers, and short incremental work which is published due to the demands of promotion and tenure systems.
- Published
- 2011
- Full Text
- View/download PDF
23. Relative Status of Journal and Conference Publications in Computer Science.
- Author
- FREYNE, JILL, COYLE, LORCAN, SMYTH, BARRY, and CUNNINGHAM, PADRAIG
- Subjects
PUBLISHING ,COMPUTER science ,COMPUTER logic ,CONFERENCE papers ,ACADEMIC discourse ,INFORMATION technology - Abstract
The article discusses the status of research papers published by computer science (CS) conferences, as compared with those published in CS journals. Debate has occurred in relation to the proper way in which to qualify the research presented in a paper. Problems exist in determining the quality of research in various journals due to the wide array of publication opportunities available. A scale that has been created to measure the quality of conference papers in a variety of ways, such as citations and rejection rates, is discussed.
- Published
- 2010
- Full Text
- View/download PDF
24. Collusion Rings Threaten the Integrity of Computer Science Research: Experiences discovering attempts to subvert the peer-review process.
- Author
- Littman, Michael L.
- Subjects
COMPUTER science ,SCHOLARLY peer review ,ETHICS - Abstract
The author discusses collusion rings in computer science research, and his experiences with and knowledge of attempts to undermine peer-review of computer science conference publications. According to the author, collusion rings go beyond the computer architecture field. The author describes the peer-review process, how a collusion ring works, and outlines how the computer research field should respond.
- Published
- 2021
- Full Text
- View/download PDF
25. SELF-PLAGIARISM IN COMPUTER SCIENCE.
- Author
- Collberg, Christian and Kobourov, Stephen
- Subjects
COMPUTER training ,PLAGIARISM ,COMPUTER science ,INTERNET in education ,REPORT writing ,COPYRIGHT infringement - Abstract
The article presents information on self-plagiarism in computer science. Students submit assignments inherited from their friends, online paper-mills provide term papers on popular topics, and occasionally researchers are found falsifying data or publishing the work of others as their own. Self-plagiarism occurs when authors reuse portions of their previous writings in subsequent research papers. Occasionally, the derived paper is simply a retitled and reformatted version of the original one, but more frequently it is assembled from bits and pieces of previous work. Borderline cases include incorporating texts or ideas from previously published work while unaware of the existence of that work, or when writing to a community different from that in which the original work was published. Missing from the ACM and IEEE policy documents is any discussion of what the consequences of ignoring the rules and guidelines might be and whose responsibility it is to prevent plagiarized and self-plagiarized papers from being published. In contrast, most university course syllabi address the definition of plagiarism and who will look for it, as well as its potential consequences.
- Published
- 2005
- Full Text
- View/download PDF
26. The Ring: Worst-case Optimal Joins in Graph Databases using (Almost) No Extra Space.
- Author
- Arroyuelo, Diego, Gómez-Brandón, Adrián, Hogan, Aidan, Navarro, Gonzalo, Reutter, Juan, Rojas-Ledesma, Javiel, and Soto, Adrián
- Subjects
DIRECTED graphs ,GRAPH algorithms ,RELATIONAL databases ,TIME complexity ,COMPUTER science ,DATA structures ,DATA security failures - Published
- 2024
- Full Text
- View/download PDF
27. Picking Publication Targets.
- Author
- Baquero, Carlos
- Subjects
SCHOLARLY publishing ,COMPUTER science ,ACADEMIC conferences ,ACADEMIC discourse ,SCHOLARLY periodicals - Abstract
The article presents tips on academic publishing for computer scientists. Specific guides to publication for the NeurIPS and VLDB conference series are offered and "maximalist" and "perfectionist" strategies for publication are compared with respect to top-tier journals and the most prestigious conferences.
- Published
- 2022
- Full Text
- View/download PDF
28. "Intelligent Heuristics Are the Future of Computing".
- Author
- Teng, Shang-Hua
- Subjects
HEURISTIC ,SIMPLEX algorithm ,COMPUTER software ,EVOLUTIONARY algorithms ,COMPUTER science ,UBIQUITOUS computing - Abstract
Back in 1988, the partial game trees explored by computer chess programs were among the largest search structures in real-world computing. Because the game tree is too large to be fully evaluated, chess programs must make heuristic strategic decisions based on partial information, making it an illustrative subject for teaching AI search. In one of his lectures that year on AI search for games and puzzles, Professor Hans Berliner—a pioneer of computer chess programs [1]—stated: "Intelligent heuristics are the future of computing." As a student in the field of the theory of computation, I was naturally perplexed but fascinated by this perspective. I had been trained to believe that "Algorithms and computational complexity theory are the foundation of computer science." However, as it happens, my attempts to understand heuristics in computing have subsequently played a significant role in my career as a theoretical computer scientist. I have come to realize that Berliner's postulation is a far-reaching worldview, particularly in the age of big, rich, complex, and multifaceted data and models, when computing has ubiquitous interactions with science, engineering, humanity, and society. In this article [2], I will share some of my experiences on the subject of heuristics in computing, presenting examples of theoretical attempts to understand the behavior of heuristics on real data, as well as efforts to design practical heuristics with desirable theoretical characterizations. My hope is that these theoretical insights from past heuristics—such as spectral partitioning, multilevel methods, evolutionary algorithms, and simplex methods—can shed light on and further inspire a deeper understanding of the current and future techniques in AI and data mining. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Extracting the Essential Simplicity of the Internet: Looking past inessential complexities to explain the Internet's simple yet daring design.
- Author
- MCCAULEY, JAMES, SHENKER, SCOTT, and VARGHESE, GEORGE
- Subjects
INTERNET ,INTERNET software ,INTERNET governance ,INTERNET programming ,COMPUTER programming ,WEB development ,WEB design ,COMPUTER science - Abstract
The article focuses on the Internet's essential simplicity. The authors discuss the fundamental Internet design choices that have been in place since the 1980s, examines the rationale behind these design choices in an effort to understand the "why, not how" of the Internet, and offers insights for all computer scientists. A "Brief Internet Timeline" is also provided.
- Published
- 2023
- Full Text
- View/download PDF
30. Maximum Flow through a Network: A Storied Problem and a Groundbreaking Solution.
- Author
- Shang-Hua Teng
- Subjects
ALGORITHMS ,COMPUTER networks ,COMPUTER science ,DATA flow computing ,BIG data ,INFORMATION science ,INFORMATION storage & retrieval systems - Abstract
The article provides a perspective on the use of algorithms in maintaining maximum flow through computing networks. It offers a critique of various max-flow theorems and algorithms, discusses prior research into max-flow applications, and mentions the hope for future breakthroughs in developing scalable max-flow algorithms.
- Published
- 2023
- Full Text
- View/download PDF
31. Shaping Ethical Computing Cultures: Lessons from the recent past.
- Author
- Shilton, Katie, Finn, Megan, and DuPont, Quinn
- Subjects
COMPUTER science ,COMPUTER ethics ,TECHNOLOGICAL innovations ,DATA science - Abstract
The authors present their thoughts about the significance of computer ethics and the construction and support of ethical computing cultures in an era of rapid technological innovations. The article revisits the "Menlo Report" from 2012 and discusses calls for codes of data science ethics and for research governance.
- Published
- 2021
- Full Text
- View/download PDF
32. Let’s Not Dumb Down the History of Computer Science.
- Author
- Knuth, Donald E.
- Subjects
COMPUTER science ,HISTORY - Abstract
An excerpt from a speech delivered by computer scientist and computer science professor Donald Knuth at Stanford University on May 7, 2014, about computer science history is presented.
- Published
- 2021
- Full Text
- View/download PDF
33. Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems.
- Author
- Mace, Jonathan, Roelke, Ryan, and Fonseca, Rodrigo
- Subjects
COMPUTER systems ,DEBUGGING ,COMPUTER science ,ELECTRONIC data processing ,QUERY languages (Computer science) - Abstract
Monitoring and troubleshooting distributed systems are notoriously difficult; potential problems are complex, varied, and unpredictable. The monitoring and diagnosis tools commonly used today—logs, counters, and metrics—have two important limitations: what gets recorded is defined a priori, and the information is recorded in a component- or machine-centric way, making it extremely hard to correlate events that cross these boundaries. This paper presents Pivot Tracing, a monitoring framework for distributed systems that addresses both limitations by combining dynamic instrumentation with a novel relational operator: the happened-before join. Pivot Tracing gives users, at runtime, the ability to define arbitrary metrics at one point of the system, while being able to select, filter, and group by events meaningful at other parts of the system, even when crossing component or machine boundaries. Pivot Tracing does not correlate cross-component events using expensive global aggregations, nor does it perform offline analysis. Instead, Pivot Tracing directly correlates events as they happen by piggybacking metadata alongside requests as they execute. This gives Pivot Tracing low runtime overhead—less than 1% for many cross-component monitoring queries. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
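The piggybacking idea in the abstract can be caricatured in a few lines: metadata ("baggage") rides along with each request, so an event recorded in one component can be grouped by a label set earlier in another, with no offline log correlation. The component names and the per-tenant read counter are illustrative inventions, not Pivot Tracing's API:

```python
records = []  # (tenant, count) events recorded at the storage layer

def frontend(request_id, tenant):
    baggage = {"tenant": tenant}       # metric label set at one point...
    storage_read(request_id, baggage)

def storage_read(request_id, baggage):
    # ...carried across the component boundary with the request itself,
    # so the storage layer can record an event grouped by a frontend label.
    records.append((baggage["tenant"], 1))

def reads_by_tenant():
    """Group-by over the recorded events: reads per tenant."""
    totals = {}
    for tenant, n in records:
        totals[tenant] = totals.get(tenant, 0) + n
    return totals
```

In the real system the baggage propagation, dynamic instrumentation, and the happened-before join are what make this work across machines at under 1% overhead.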
34. Making the Field of Computing More Inclusive.
- Author
- LAZAR, JONATHAN, CHURCHILL, ELIZABETH F., GROSSMAN, TOVI, VAN DER VEER, GERRIT, PALANQUE, PHILIPPE, MORRIS, JOHN “SCOOTER”, and MANKOFF, JENNIFER
- Subjects
COMPUTER science ,TELEPRESENCE ,SERVICES for people with disabilities ,EQUIPMENT & supplies ,SOCIETIES - Abstract
The article offers advice on how to make the field of computing more inclusive. Particular focus is given to how this relates to the work of the Association for Computing Machinery (ACM). Additional topics discussed include making technology accessible for people with disabilities, SIGCHI, the ACM Special Interest Group on Computer-Human Interaction, and telepresence robots.
- Published
- 2017
- Full Text
- View/download PDF
35. Repeatability in Computer Systems Research.
- Author
-
COLLBERG, CHRISTIAN and PROEBSTING, TODD A.
- Subjects
COMPUTER systems ,RESEARCH ,COMPUTER science ,DATA ,COMPUTER file sharing - Abstract
The article discusses the factors that affect the sharing of research artifacts relating to computer systems, according to the authors. Topics covered include the importance of sharing research artifacts for repeatability and benefaction, research studies that evaluated the willingness of computer science researchers to share code and data, and details of the three measures of weak repeatability defined by the authors.
- Published
- 2016
- Full Text
- View/download PDF
36. Workshop YOUR study design! Participatory Critique and Refinement of Participants' Studies.
- Author
-
Fraune, Marlena R., Karatas, Nihan, and Leite, Iolanda
- Subjects
EXPERIMENTAL design ,PARTICIPATORY design ,QUESTIONNAIRES ,COMPUTER science ,WORK design ,PSYCHOLOGICAL feedback ,INTERDISCIPLINARY approach to knowledge - Abstract
The purpose of this workshop is to help researchers develop methodological skills, especially in areas that are relatively new to them. With HRI researchers coming from diverse backgrounds in computer science, engineering, informatics, philosophy, psychology, and other disciplines, we can't be experts in everything. In this workshop, participants will be grouped with a mentor to enhance their study design and interdisciplinary work. Participants will submit 4-page papers with a small introduction and detailed method section for a project currently in the design process. In small groups led by a mentor in the area, they will discuss their method and obtain feedback. The workshop will include time to edit and improve the study. Workshop mentors include Drs. Cindy Bethel, Hung Hsuan Huang, Selma Šabanović, Brian Scassellati, Megan Strait, Komatsu Takanori, Leila Takayama, and Ewart de Visser, with expertise in areas of real-world study, empirical lab study, questionnaire design, interview, participatory design, and statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. When Satisfiability Solving Meets Symbolic Computation.
- Author
-
BRIGHT, CURTIS, KOTSIREAS, ILIAS, and GANESH, VIJAY
- Subjects
COMPUTATIONAL intelligence ,ALGEBRA software ,COMPUTER algorithms ,COMPUTER science ,ARTIFICIAL intelligence - Abstract
The article discusses hybrid computation systems involving satisfiability (SAT) solving and computer algebra systems (CAS) that combine mathematical algorithms and efficient search and learning processes.
- Published
- 2022
- Full Text
- View/download PDF
38. Inside Risks: Toward Total-System Trustworthiness; Considering how to achieve the long-term goal to systemically reduce risks.
- Author
-
Neumann, Peter G.
- Subjects
SYSTEM analysis ,TRUST ,COMPUTER science ,ARTIFICIAL intelligence ,CRYPTOGRAPHY ,COMPUTER security - Abstract
The author discusses the significance of total-system analysis in computer science, particularly trust in total-system analysis and the difficulty in achieving such trust and assurance. It examines artificial intelligence, cryptography, and multilevel security. The article also examines hierarchically layered designs and methods.
- Published
- 2022
- Full Text
- View/download PDF
39. Editorial: JACM at the Start of a New Decade.
- Author
-
VIANU, VICTOR
- Subjects
EDITORIAL policies ,COMPUTER science - Abstract
The article discusses the editorial policy of the journal and attempts by the editorial board to attract research on a wide variety of computer science topics.
- Published
- 2010
- Full Text
- View/download PDF
40. App's Auto-Login Function Security Testing via Android OS-Level Virtualization.
- Author
-
Wenna Song, Jiang Ming, Lin Jiang, Han Yan, Yi Xiang, Yuan Chen, Jianming Fu, and Guojun Peng
- Subjects
MOBILE apps ,COMPUTER operating systems ,ARTIFICIAL intelligence ,COMPUTER science ,SOFTWARE engineering - Abstract
Limited by the small keyboard, most mobile apps support the automatic login feature for a better user experience. Therefore, users avoid the inconvenience of retyping their ID and password when an app runs in the foreground again. However, this auto-login function can be exploited to launch the so-called "data-clone attack": once the locally stored, auto-login dependent data are cloned by attackers and placed into their own smartphones, attackers can break through the login-device number limit and log in to the victim's account stealthily. A natural countermeasure is to check the consistency of device-specific attributes. As long as the new device shows different device fingerprints from the previous one, the app will disable the auto-login function and thus prevent data-clone attacks. In this paper, we develop VPDroid, a transparent Android OS-level virtualization platform tailored for security testing. With VPDroid, security analysts can customize different device artifacts, such as CPU model, Android ID, and phone number, in a virtual phone without user-level API hooking. VPDroid's isolation mechanism ensures that user-mode apps in the virtual phone cannot detect device-specific discrepancies. To assess Android apps' susceptibility to the data-clone attack, we use VPDroid to simulate data-clone attacks against 234 of the most-downloaded apps. Our experiments on five different virtual phone environments show that VPDroid's device attribute customization can deceive all tested apps that perform device-consistency checks, such as Twitter, WeChat, and PayPal. Nineteen vendors have confirmed our report as a zero-day vulnerability. Our findings paint a cautionary tale: only enforcing a device-consistency check at the client side is still vulnerable to an advanced data-clone attack. [ABSTRACT FROM AUTHOR]
- Published
- 2021
41. "Ignorance and Prejudice" in Software Fairness.
- Author
-
Zhang, Jie M. and Harman, Mark
- Subjects
MACHINE learning ,SOFTWARE engineers ,SOFTWARE engineering ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
Machine learning software can be unfair when making human-related decisions, having prejudices over certain groups of people. Existing work primarily focuses on proposing fairness metrics and presenting fairness improvement approaches. It remains unclear how key aspects of any machine learning system, such as the feature set and training data, affect fairness. This paper presents results from a comprehensive study that addresses this problem. We find that enlarging the feature set plays a significant role in fairness (with an average effect rate of 38%). Importantly, and contrary to widely held beliefs that greater fairness often corresponds to lower accuracy, our findings reveal that an enlarged feature set yields both higher accuracy and fairness. Perhaps also surprisingly, we find that larger training data does not help to improve fairness. Our results suggest a larger training data set has more unfairness than a smaller one when feature sets are insufficient; an important cautionary finding for practising software engineers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. On the Naming of Methods: A Survey of Professional Developers.
- Author
-
Alsuhaibani, Reem S., Newman, Christian D., Decker, Michael J., Collard, Michael L., and Maletic, Jonathan I.
- Subjects
SOFTWARE engineering ,ARTIFICIAL intelligence ,COMPUTER software development ,PROGRAMMING languages ,COMPUTER science - Abstract
This paper describes the results of a large (+1100 responses) survey of professional software developers concerning standards for naming source code methods. The various standards for source code method names are derived from and supported in the software engineering literature. The goal of the survey is to determine if there is a general consensus among developers that the standards are accepted and used in practice. Additionally, the paper examines factors such as years of experience and programming language knowledge in the context of survey responses. The survey results show that participants very much agree about the importance of the various standards and how they apply to names, and that years of experience and programming language have almost no effect on their responses. The results imply that the given standards are both valid and, to a large degree, complete. The work provides a foundation for automated method name assessment during development and code reviews. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Viewpoint: Time for Computer Science to Grow Up.
- Author
-
Fortnow, Lance
- Subjects
COMPUTER science ,COMPUTER science periodicals ,SCIENCE publishing ,CONFERENCES & conventions - Abstract
The author presents an argument that the field of computer science should move away from its use of conferences as the primary venue for publishing research papers and adopt an approach more in line with that of other scholarly disciplines. This would involve the use of peer-reviewed journals as the primary venue for publication of original research, with conferences serving as a forum for professional discussions and networking.
- Published
- 2009
- Full Text
- View/download PDF
44. A Perspective on Theoretical Computer Science in Latin America.
- Author
-
KIWI, MARCOS, KOHAYAKAWA, YOSHIHARU, RAJSBAUM, SERGIO, RODRÍGUEZ-HENRÍQUEZ, FRANCISCO, SZWARCFITER, JAYME LUIZ, and VIOLA, ALFREDO
- Subjects
COMPUTER science ,COMPUTER research ,ALGORITHMS ,COMPUTERS in graph theory ,PATTERN matching ,INFORMATION storage & retrieval systems - Abstract
The article describes several achievements in the field of theoretical computer science (TCS) and computer research in Latin America. It discusses several areas of research that were developed by regional conferences including LATIN, LAGOS, and Latincrypt. The article discusses research including graph theory, pattern matching and information retrieval, and algorithms.
- Published
- 2020
- Full Text
- View/download PDF
45. Trends in Computer Science Research within European Countries.
- Author
-
KAGAN, DIMA, FIRE, MICHAEL, and ALPERT, GALIT FUHRMANN
- Subjects
COMPUTER science ,SCHOLARLY publishing ,PUBLISHING of learned institutions & societies ,COOPERATIVE research - Abstract
The article examines trends in European computer science research. The relative rates of publication and citation of computer science research articles across Europe, North America, and Asia are compared, along with relative rates of collaborative and independent research; the asymmetry of benefits derived from collaborative research is also described.
- Published
- 2022
- Full Text
- View/download PDF
46. Always Measure One Level Deeper.
- Author
-
OUSTERHOUT, JOHN
- Subjects
PERFORMANCE evaluation ,SCIENTIFIC method ,COMPUTER software developers ,INTUITION ,COMPUTER science ,BENCHMARK testing (Engineering) ,ATTITUDE (Psychology) - Abstract
The article discusses the importance of performance measurements in improving the quality of systems being measured and the developers' intuition. Topics include the lack of performance evaluation education and widespread agreement on how to measure performance within the computer science industry, the importance of understanding the system's performance as well as why it performs that way, and the application of the scientific method to improve intuition about systems.
- Published
- 2018
- Full Text
- View/download PDF
47. The Go Programming Language and Environment.
- Author
-
COX, RUSS, GRIESEMER, ROBERT, PIKE, ROB, TAYLOR, IAN LANCE, and THOMPSON, KEN
- Subjects
GO (Computer program language) ,OPEN source software ,DECISION making ,COMPUTER science ,TECHNOLOGICAL innovations - Abstract
The article discusses the computer programming language known as Go, which was created at the high technology firm Google in 2007 and released as open source in 2009. It examines the decision-making processes concerning the design and success of Go, how Go treats language features, and the March 2022 release of Go 1.18.
- Published
- 2022
- Full Text
- View/download PDF
48. The Health of Research Conferences and the Dearth of Big Idea Papers.
- Author
-
Patterson, David A.
- Subjects
COMPUTER science ,COMPUTER scientists ,RESEARCH - Abstract
This article reports that research conferences are often the most desirable venues for presenting research results. For academic computer scientists and engineers, preferring conferences over journals is so common that they even lobby administrators to ensure that conference papers can be viewed in the same light as journal papers in other fields. Hence, the health of conferences is vital to the computer science research mission. One conventional indication of health is the number of submissions and the acceptance rate at the conference. Calls for papers often include encouraging words for big idea or new direction papers. The problem is that reviewers see so many regular papers it is just too difficult to switch gears and be more understanding when evaluating bolder papers with holes in arguments or missing measurements. Program committees typically start with a ranked list of papers based on the average of numerical ratings in order to cope with the large number of submissions. Big idea papers are sure to get some poor evaluations, which cause them to drop down the list.
- Published
- 2004
- Full Text
- View/download PDF
49. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations.
- Author
-
KLEYKO, DENIS, RACHKOVSKIJ, DMITRI A., and OSIPOV, EVGENY
- Subjects
ARTIFICIAL intelligence ,BINARY codes ,COGNITIVE science ,TENSOR products ,COMPUTER science ,COGNITIVE computing - Abstract
This two-part comprehensive survey is devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Notable models in the HDC/VSA family are Tensor Product Representations, Holographic Reduced Representations, Multiply-Add-Permute, Binary Spatter Codes, and Sparse Binary Distributed Representations but there are other models too. HDC/VSA is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science. This fact makes it challenging to create a thorough overview of the field. However, due to a surge of new researchers joining the field in recent years, the necessity for a comprehensive survey of the field has become extremely important. Therefore, amongst other aspects of the field, this Part I surveys important aspects such as: known computational models of HDC/VSA and transformations of various input data types to high-dimensional distributed representations. Part II of this survey [84] is devoted to applications, cognitive computing and architectures, as well as directions for future work. The survey is written to be useful for both newcomers and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Mitigating Bias in Algorithmic Systems—A Fish-eye View.
- Author
-
ORPHANOU, KALIA, OTTERBACHER, JAHNA, KLEANTHOUS, STYLIANI, BATSUREN, KHUYAGBAATAR, GIUNCHIGLIA, FAUSTO, BOGINA, VERONIKA, TAL, AVITAL SHULNER, HARTMAN, ALAN, and KUFLIK, TSVI
- Subjects
COMMUNITIES ,COMPUTER science ,SCIENTIFIC computing ,FAIRNESS ,EYE - Abstract
Mitigating bias in algorithmic systems is a critical issue drawing attention across communities within the information and computer sciences. Given the complexity of the problem and the involvement of multiple stakeholders—including developers, end users, and third-parties—there is a need to understand the landscape of the sources of bias, and the solutions being proposed to address them, from a broad, cross-domain perspective. This survey provides a “fish-eye view,” examining approaches across four areas of research. The literature describes three steps toward a comprehensive treatment—bias detection, fairness management, and explainability management—and underscores the need to work from within the system as well as from the perspective of stakeholders in the broader context. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF