7 results on '"Jonathan Z Bakdash"'
Search Results
2. MultiModal Deception Detection: Accuracy, Applicability and Generalizability
- Author
-
Yan Zhou, Jonathan Z. Bakdash, Jelena Rakic, Linda Nguyen, Bhavani M. Thuraisingham, Daniel C. Krawczyk, Vibha Belavadi, and Murat Kantarcioglu
- Subjects
Facial expression, Human–computer interaction, Computer science, Inference, Generalizability theory, Video processing, Deception, Lying, Facial recognition system
- Abstract
The increasing use of Artificial Intelligence (AI) systems for face recognition and video processing raises the stakes for their application in daily life. Critical decisions are increasingly made with these AI systems in domains such as employment, finance, and crime prevention. These applications rely on abstract concepts such as emotions, trait evaluations (e.g., trustworthiness), and behavior (e.g., deception), which the AI system learns for inference from verbal and non-verbal cues in human subject stimuli (e.g., facial expressions, movements, audio, text). Because AI systems are often used in high-stakes scenarios, it is of utmost importance that an AI system participating in decision-making be reliable and credible. In this paper, we specifically consider the feasibility of using such an AI system for deception detection. We examine whether deception can be detected from multimodal cues such as facial expressions, movements, audio, and video. We experiment with three different datasets covering varying degrees of deception, and we study state-of-the-art deception detection systems to investigate whether their algorithms extend to new datasets. We conclude that there is a lack of reasonable evidence that AI-based deception detection generalizes over different scenarios of lying (lying deliberately, lying under duress, and lying through half-truths), and that additional factors will need to be considered before such a claim can be made. (A minimal fusion sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
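A minimal late-fusion sketch of the multimodal detection idea above, assuming hypothetical per-modality feature arrays and a held-out split standing in for a different lying scenario; this is an illustration, not the authors' implementation.

# Late-fusion sketch for multimodal deception detection.
# All data here are random stand-ins (1 = deceptive, 0 = truthful).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_train, n_test = 200, 80
modalities = {
    "face":  (rng.normal(size=(n_train, 32)), rng.normal(size=(n_test, 32))),
    "audio": (rng.normal(size=(n_train, 16)), rng.normal(size=(n_test, 16))),
    "text":  (rng.normal(size=(n_train, 24)), rng.normal(size=(n_test, 24))),
}
y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)  # stand-in for a *different* lying scenario

# Late fusion: one classifier per modality, then average the probabilities.
probs = np.zeros(n_test)
for X_train, X_test in modalities.values():
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs += clf.predict_proba(X_test)[:, 1]
probs /= len(modalities)

print("cross-scenario accuracy:", accuracy_score(y_test, (probs > 0.5).astype(int)))
# With random stand-in features, accuracy stays near chance (~0.5),
# mirroring the paper's concern that detectors may not generalize.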
3. Attacklets: Modeling High Dimensionality in Real World Cyberattacks
- Author
-
Cuneyt Gurcan Akcora, Bhavani Thuraisingham, Yulia R. Gel, Laura R. Marusich, Jonathan Z. Bakdash, and Murat Kantarcioglu
- Subjects
Strategic, defence & security studies, Exploit, Computer science, Networking & telecommunications, Data breach, High dimensionality, Visualization, Data modeling, Server, Data mining
- Abstract
We introduce attacklets, a novel approach for modeling the high-dimensional interactions in cyberattacks. Attacklets are implemented using a real-world dataset of cyberattacks from the Verizon Data Breach Investigations Report. Whereas commonly used attack graphs model the action sequences of attackers for specific exploits, attacklets model the general attributes and states of each attack separately. Attacklets may inform the number and types of attributes across a wide range of cyberattacks, and these structural properties can then be used in machine learning models to classify and predict future cyberattacks. (A representation sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
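A minimal sketch of the attacklet-style representation described above: each attack becomes a vector of general attributes that a classifier can consume. The attribute names, incidents, and labels here are illustrative stand-ins, not the paper's schema or data.

# Sketch: encode incidents as attacklet-style attribute vectors, then classify.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

incidents = [
    {"actor": "external", "action": "hacking", "asset": "server", "motive": "financial"},
    {"actor": "internal", "action": "misuse",  "asset": "database", "motive": "grudge"},
    {"actor": "external", "action": "malware", "asset": "user_device", "motive": "financial"},
    {"actor": "partner",  "action": "error",   "asset": "documents", "motive": "none"},
]
labels = ["breach", "breach", "breach", "incident"]  # hypothetical outcomes

vec = DictVectorizer(sparse=False)   # one-hot encodes the categorical attributes
X = vec.fit_transform(incidents)     # one high-dimensional vector per attack
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

new_attack = {"actor": "external", "action": "hacking", "asset": "database", "motive": "financial"}
print(clf.predict(vec.transform([new_attack])))  # predicted class for a new attack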
4. Learning and Reasoning in Complex Coalition Information Environments: A Critical Analysis
- Author
-
Ramya Raghavendra, Mani Srivastava, Moustafa Alzantot, Supriyo Chakraborty, Jonathan Z. Bakdash, Murat Sensoy, Tianwei Xing, Alun Preece, Lance M. Kaplan, Angelika Kimmig, Daniel Harborne, Federico Cerutti, and Dave Braines
- Subjects
QA75, Artificial intelligence for situational understanding, Collective situational understanding, Critical analysis of artificial intelligence techniques, Computer science, Data science, Artificial intelligence & image processing
- Abstract
In this paper we provide a critical analysis, with metrics, that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight (i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data) and collective foresight (i.e., the ability to predict what will happen in the future). In complex scenarios the need for distributed CSU naturally emerges, because a single monolithic approach is not only infeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific tasks for CSU, in order to derive guidelines for designing distributed systems for CSU.
- Published
- 2018
- Full Text
- View/download PDF
5. Human understanding of information represented in natural versus artificial language (Poster)
- Author
-
Erin Zaroukian and Jonathan Z. Bakdash
- Subjects
Decision support system, Computer science, Diagram, Visualization, Constructed language, Controlled natural language, Empirical research, Artificial intelligence, Natural language processing, Natural language
- Abstract
In this paper we compare human understanding of information represented in a natural language (NL) to a type of artificial language called a Controlled Natural Language (CNL). Potential applications for CNLs include decision support and conversational agents, but there is currently limited empirical research on the understandability of CNLs for untrained humans. We investigate a particular type of CNL, called Controlled English (CE), which was designed as a simplified, artificial subset of natural language that is both human readable and unambiguous for fast and accurate machine processing. We quantify and compare human understanding of NL and CE using accuracy and speed for language statements. The statements described entities (people and objects) and relations (actions) among entities, with ground truth represented using visual diagrams. Participants responded whether each statement matched the diagram (yes/no). In Experiment I, accuracy for NL and CE was comparable, although understanding CE was slower. To further examine the role of speed, we induced time pressure in Experiment II; there, both accuracy and speed for CE were lower than for NL. These results indicate that, given sufficient time, understanding of CE can be equivalent to NL, but with limited time both accuracy and speed of understanding are better for NL than for CE. Our findings indicate that both the accuracy and the speed of CNLs should be evaluated, and that under time pressure there can be meaningful differences between different ways of representing information. Understanding of how machine information is represented has potential implications for situation understanding and management in human-machine interaction and collaboration. (A toy accuracy/speed summary follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
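A toy version of the accuracy-and-speed comparison described above, using fabricated illustrative trials (not the study's data) to show how per-condition accuracy and response time would be summarized.

# Toy per-condition accuracy and response-time summary for a yes/no
# statement-verification task. Trial data are illustrative only.
import pandas as pd

trials = pd.DataFrame({
    "condition": ["NL", "NL", "NL", "CE", "CE", "CE"],
    "correct":   [1,    1,    0,    1,    0,    1],    # 1 = correct response
    "rt_sec":    [2.1,  1.8,  2.4,  2.9,  3.3,  2.7],  # response times (seconds)
})

summary = trials.groupby("condition").agg(
    accuracy=("correct", "mean"),
    mean_rt=("rt_sec", "mean"),
)
print(summary)
# The study's pattern: comparable accuracy but slower responses for CE
# without time pressure; lower CE accuracy *and* speed under time pressure.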
6. The Future of Deception: Machine-Generated and Manipulated Images, Video, and Audio?
- Author
-
Jonathan Z. Bakdash, Sue E. Kase, Erin Zaroukian, Jennifer Holmes, Monica Rankin, Char Sample, Murat Kantarcioglu, and Boleslaw K. Szymanski
- Subjects
Echo chambers, Filter bubbles, Computer science, Deception, Visualization, Social groups, Adversarial systems, Social media, Human–computer interaction
- Abstract
Social sensing techniques were designed for analyzing unreliable data [1], but they were not explicitly built for adversarially generated and manipulated data. The adversarial use of social media to spread deceptive or misleading information poses a social, economic, and political threat [2]. Deceptive information spreads quickly and inexpensively online relative to traditional methods of dissemination (e.g., print, radio, and television). For example, bots (i.e., dedicated software for sharing text information [3]) can distribute information faster than humans. Such deceptive information is commonly referred to as fake (fabricated) news, which can be a form of propaganda (i.e., manipulation to advance a particular view or agenda). Information spread is particularly effective if the content resonates with the preconceptions and biases of social groups or communities, because the spread is reinforced by implied trust in information coming from other members (echo chambers and filter bubbles) [4]. We conjecture that the future of online deception, including fake news, will extend beyond text to high-quality, mass-produced machine-generated and manipulated images, video, and audio [5].
- Published
- 2018
- Full Text
- View/download PDF
7. Automation bias with a conversational interface: User confirmation of misparsed information
- Author
-
Jonathan Z. Bakdash, William Webberley, Alun Preece, and Erin Zaroukian
- Subjects
QA75, Situation awareness, Computer science, Usability, Knowledge-based systems, Human–computer interaction, Dialog system, Natural language
- Abstract
We investigate automation bias for confirming erroneous information with a conversational interface. Participants in our studies used a conversational interface to report information in a simulated intelligence, surveillance, and reconnaissance (ISR) task. For flexibility and ease of use, participants reported information to the conversational agent in natural language. The conversational agent then interpreted each report in a human- and machine-readable language, and participants could accept or reject the agent's interpretation. Misparses occur when the agent incorrectly interprets a report and the user erroneously accepts the interpretation. We hypothesize that these misparses naturally occurred in the experiment due to automation bias and complacency, because the agent's interpretations were generally correct (92%). Such errors indicate that some users were unable to maintain situation awareness using the conversational interface. Our results illustrate concerns for implementing a flexible conversational interface in safety-critical environments (e.g., military, emergency operations). (A log-tallying sketch follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
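A small sketch of how misparse rates like those described above could be tallied from interaction logs. The log records and field names are hypothetical; the point is that a misparse is a joint event, a wrong parse that the user nevertheless accepts.

# Sketch: tally automation-bias errors (misparses) from interaction logs.
# A misparse = the agent's parse was wrong AND the user accepted it anyway.
logs = [
    {"parse_correct": True,  "user_accepted": True},
    {"parse_correct": True,  "user_accepted": True},
    {"parse_correct": False, "user_accepted": True},   # misparse: bad parse accepted
    {"parse_correct": False, "user_accepted": False},  # error caught by the user
    {"parse_correct": True,  "user_accepted": True},
]

n = len(logs)
agent_accuracy = sum(r["parse_correct"] for r in logs) / n
wrong_parses = [r for r in logs if not r["parse_correct"]]
misparse_rate = sum(r["user_accepted"] for r in wrong_parses) / len(wrong_parses)

print(f"agent parse accuracy: {agent_accuracy:.0%}")       # paper reports ~92%
print(f"wrong parses accepted (automation bias): {misparse_rate:.0%}")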