1. Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust.
- Author
- Chen, Cheng and Sundar, S. Shyam
- Subjects
- *RACISM; *ALGORITHMIC bias; *TRUST; *INTERACTIVE multimedia; *ARTIFICIAL intelligence
- Abstract
Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhances users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N = 597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability. [ABSTRACT FROM AUTHOR]
- Published
- 2024