
Developing HATIT: A Platform and Experimental Paradigm for Evaluating AI in Intelligence Analysis.

Authors:
Paletz, Susannah B. F.
Kane, Aimee
Diep, Madeline
Vahlkamp, Sarah
Porter, Adam
Nelson, Tammie
Source:
Academy of Management Annual Meeting Proceedings; 2024, Vol. 2024, Issue 1
Publication Year:
2024

Abstract

Based on interviews with intelligence professionals, we created the Human-Agent Teaming for Intelligence Tasks (HATIT) experimental paradigm for evaluating artificial intelligence (AI) interventions in the context of intelligence shiftwork (i.e., asynchronous teamwork). Shift handovers and collaborative intelligence analysis suffer from team cognition and information challenges (e.g., volume, velocity), which AIs may be able to address. HATIT includes a web-based software platform, a shift handover task set in a fictional world with hundreds of pages and 59 documents, and multifaceted behavioral and perceptual measures. To test the feasibility of HATIT, we designed a simple AI agent called "Illuminate" (branded with a sun icon) that summarizes documents conversationally and provides social media topic models. Before the release of ChatGPT, we conducted a two-phase (training/screening, main task), between-subjects (AI vs. no-AI) collaborative analysis shift handover experiment. We found that transactive memory systems were more accurate in the AI condition, but that perceived workload, specifically frustration and temporal demand, was higher in the AI condition. These findings strongly suggest that HATIT can effectively test different AI interventions. [ABSTRACT FROM AUTHOR]

Details

Language:
English
ISSN:
21516561
Volume:
2024
Issue:
1
Database:
Complementary Index
Journal:
Academy of Management Annual Meeting Proceedings
Publication Type:
Conference
Accession Number:
178798567
Full Text:
https://doi.org/10.5465/AMPROC.2024.14402abstract