1. Disclosing Personal Health Information to Emotional Human Doctors or Unemotional AI Doctors? Experimental Evidence Based on Privacy Calculus Theory.
- Author: Li, Shuoshuo; Mou, Yi; Xu, Jian
- Subjects: Medical records; Artificial intelligence; Trust; Mental health; Physicians
- Abstract:
The commercialization of artificial intelligence (AI) in healthcare is accelerating, yet academic research on its users remains scarce. To what extent are users willing to disclose personal health information to AI doctors compared with traditional human doctors, and what factors shape these decisions? The lack of user research has left these questions unanswered. Drawing on privacy calculus theory, this article reports a multi-factorial between-subjects online experiment (N = 582) with a 2 (medical provider: AI vs. human) × 2 (emotional support: low vs. high) × 2 (information sensitivity: low vs. high) design. The results indicated that AI doctors led participants to perceive both lower health benefits and lower privacy risks. Emotional support is not always beneficial: on one hand, high emotional support provides patients with greater health benefits; on the other, it also raises perceived privacy risks. Additionally, high emotional support responses from AI doctors enhanced patients' perceived health benefits, trust, and willingness to disclose health information, whereas the opposite was observed for human doctors.
- Published: 2024