1. Unveiling and Mitigating Bias in Mental Health Analysis with Large Language Models
- Authors: Yuqing Wang, Yun Zhao, Sara Alessandra Keller, Anne de Hond, Marieke M. van Buchem, Malvika Pillai, and Tina Hernandez-Boussard
- Subjects: Computer Science - Computation and Language
- Abstract: The advancement of large language models (LLMs) has demonstrated strong capabilities across various applications, including mental health analysis. However, existing studies have focused on predictive performance, leaving the critical issue of fairness underexplored and posing significant risks to vulnerable populations. Despite acknowledging potential biases, previous works have lacked thorough investigations into these biases and their impacts. To address this gap, we systematically evaluate biases across seven social factors (e.g., gender, age, religion) using ten LLMs with different prompting methods on eight diverse mental health datasets. Our results show that GPT-4 achieves the best overall balance of performance and fairness among the LLMs, although it still lags behind domain-specific models such as MentalRoBERTa in some cases. Additionally, our tailored fairness-aware prompts can effectively mitigate bias in mental health predictions, highlighting the strong potential for fair analysis in this field.
- Comment: In submission; data and code are available at: https://github.com/EternityYW/BiasEval-LLM-MentalHealth
- Published: 2024
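
As a rough illustration of the fairness-aware prompting idea mentioned in the abstract, the sketch below wraps a mental health classification query in an instruction asking the model to disregard demographic attributes. This is a hypothetical minimal example, not the authors' actual prompt: the prompt wording, the classify_post helper, the label set, and the choice of GPT-4 are all illustrative assumptions; the real prompts and data are in the linked repository.

```python
# Hypothetical sketch of a fairness-aware prompt wrapper (not the paper's actual prompt).
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

FAIRNESS_INSTRUCTION = (
    "Base your judgment only on the clinical and emotional content of the post. "
    "Do not let demographic attributes such as gender, age, race, or religion "
    "influence your prediction."
)

def classify_post(post: str, labels=("depression", "no depression")) -> str:
    """Request a single-label mental health prediction using a fairness-aware prompt."""
    prompt = (
        f"{FAIRNESS_INSTRUCTION}\n\n"
        f"Post: {post}\n"
        f"Answer with exactly one of: {', '.join(labels)}."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the abstract reports GPT-4 as the best-balanced LLM in this study
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()
```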