A Pilot Study of Medical Student Opinions on Large Language Models.
- Authors
Xu AY, Piranio VS, Speakman S, Rosen CD, Lu S, Lamprecht C, Medina RE, Corrielus M, Griffin IT, Chatham CE, Abchee NJ, Stribling D, Huynh PB, Harrell H, Shickel B, and Brennan M
- Abstract
Introduction: Artificial intelligence (AI) has long garnered significant interest in the medical field. Large language models (LLMs) have popularized the use of AI among the public through chatbots such as ChatGPT and have become an easily accessible and recognizable resource for medical students. Here, we investigate how medical students are currently utilizing LLM-based tools throughout medical education and examine medical student perceptions of these tools.

Methods: A cross-sectional survey was administered to current medical students at the University of Florida College of Medicine (UFCOM) in January 2024, covering the utilization of AI and LLM tools and perspectives on the current and future role of AI in medicine.

Results: All 102 respondents reported having heard of LLM-based chatbots such as ChatGPT, Bard, Bing Chat, and Claude. Sixty-nine percent (70/102) of respondents reported having used them for medical-related purposes at least once a month. Of those users, 77.1% (54/70) reported the information provided to be very accurate or somewhat accurate, and 80% (55/70) reported that they were likely to continue using them in their future medical practice. Those with some baseline understanding of and exposure to AI were 3.26 (p=0.020) and 4.30 (p=0.002) times more likely to have used an LLM-based chatbot, respectively, and 5.06 (p=0.021) and 3.38 (p=0.039) times more likely to cross-check information obtained from them, respectively, compared to those with little to no baseline understanding or exposure. Furthermore, those with some exposure to AI in medical school were 2.70 (p=0.039) and 4.61 (p=0.0004) times more likely to trust AI with clinical decision-making currently and in the next 5 years, respectively, than those with little to no exposure. Those who had used an LLM-based chatbot were 4.31 (p=0.019) times more likely to trust AI with clinical decision-making currently compared to those who had not used one.
Conclusion: LLM-based chatbots, such as ChatGPT, are not only making their way into medical students' repertoire of study resources but are also being utilized in the setting of patient care and research. Medical students who participated in the survey generally had a positive perception of LLM-based chatbots and reported that they were likely to continue using them in the future. Previous AI knowledge and exposure correlated with more conscientious use of these tools, such as cross-checking information. Combined with our finding that all respondents believed AI should be taught in the medical curriculum, our study highlights a key opportunity in medical education to acclimate medical students to AI now.
- Competing Interests
Human subjects: Consent for treatment and open access publication was obtained or waived by all participants in this study. The University of Florida (UF) IRB issued approval Exemption ET00022207 (dated 01/18/2024): "Based on the information you submitted below, your research meets the criteria for exempt research, and you are authorized to conduct your research as it is described." Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. (Copyright © 2024, Xu et al.)
- Published
- 2024