
Exploring the Efficacy of Large Language Models in Summarizing Mental Health Counseling Sessions: Benchmark Study

Authors :
Prottay Kumar Adhikary
Aseem Srivastava
Shivani Kumar
Salam Michael Singh
Puneet Manuja
Jini K Gopinath
Vijay Krishnan
Swati Kedia Gupta
Koushik Sinha Deb
Tanmoy Chakraborty
Source :
JMIR Mental Health, Vol 11, p e57306 (2024)
Publication Year :
2024
Publisher :
JMIR Publications, 2024.

Abstract

Background: Comprehensive session summaries enable effective continuity in mental health counseling and facilitate informed therapy planning. However, manual summarization presents a significant challenge, diverting experts' attention from the core counseling process. Leveraging advances in automatic summarization addresses this issue by giving mental health professionals access to concise summaries of lengthy therapy sessions, thereby increasing their efficiency. Existing approaches, however, often overlook the nuanced intricacies inherent in counseling interactions.

Objective: This study evaluates the effectiveness of state-of-the-art large language models (LLMs) in selectively summarizing various components of therapy sessions through aspect-based summarization, aiming to benchmark their performance.

Methods: We first created Mental Health Counseling-Component–Guided Dialogue Summaries, a benchmarking data set consisting of 191 counseling sessions with summaries focused on 3 distinct counseling components (also known as counseling aspects). Next, we assessed the capabilities of 11 state-of-the-art LLMs on the task of counseling-component–guided summarization. The generated summaries were evaluated quantitatively using standard summarization metrics and verified qualitatively by mental health professionals.

Results: Our findings demonstrated the superior performance of task-specific LLMs such as MentalLlama, Mistral, and MentalBART, evaluated using standard quantitative metrics such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1, ROUGE-2, ROUGE-L, and Bidirectional Encoder Representations from Transformers (BERT) Score, across all counseling components. Furthermore, expert evaluation revealed that Mistral surpassed both MentalLlama and MentalBART across 6 parameters: affective attitude, burden, ethicality, coherence, opportunity costs, and perceived effectiveness. However, these models share a common weakness: considerable room for improvement on the opportunity costs and perceived effectiveness metrics.

Conclususions aside, the models remain limited. Conclusions: While LLMs fine-tuned specifically on mental health domain data show better performance on automatic evaluation scores, expert assessments indicate that these models are not yet reliable for clinical application. Further refinement and validation are necessary before they can be implemented in practice.

Subjects

Subjects :
Psychology
BF1-990

Details

Language :
English
ISSN :
2368-7959
Volume :
11
Database :
Directory of Open Access Journals
Journal :
JMIR Mental Health
Publication Type :
Academic Journal
Accession number :
edsdoj.70991efb6b43da9393a9b05c5bfccb
Document Type :
article
Full Text :
https://doi.org/10.2196/57306