2,140 results for "CHATGPT"
Search Results
2. Who Determines What Is Relevant? Humans or AI? Why Not Both?
- Author
-
Faggioli, Guglielmo, Dietz, Laura, Clarke, Charles L. A., Demartini, Gianluca, Hagen, Matthias, Hauff, Claudia, Kando, Noriko, Kanoulas, Evangelos, Potthast, Martin, Stein, Benno, and Wachsmuth, Henning
- Subjects
- *
HUMAN-artificial intelligence interaction , *ARTIFICIAL intelligence , *HUMAN-computer interaction , *ARTIFICIAL intelligence & society , *LANGUAGE models , *CHATGPT , *CHATBOTS - Abstract
The article offers an opinion regarding how a human-artificial intelligence (AI) collaboration can assess relevance. Topics include the improvements to large language models (LLMs) such as OpenAI's chatbot ChatGPT, the reasons why relying on LLMs alone can be problematic, and how human judgement can improve fairness, efficiency, and effectiveness.
- Published
- 2024
- Full Text
- View/download PDF
3. The Science of Detecting LLM-Generated Text.
- Author
-
Tang, Ruixiang, Chuang, Yu-Neng, and Hu, Xia
- Subjects
- *
LANGUAGE models , *NATURAL language processing , *COMPUTATIONAL linguistics , *CHATGPT , *CHATBOTS , *ARTIFICIAL intelligence , *SEMANTIC computing - Abstract
This research article focuses on the science of detecting large language model (LLM) generated text. The authors discuss the advancement of natural language generation (NLG) technology, including OpenAI's ChatGPT, and explain how the two detection methods, black-box detection and white-box detection, work to mitigate the potential misuse of LLMs.
- Published
- 2024
- Full Text
- View/download PDF
4. Generative Artificial Intelligence: 8 Critical Questions for Libraries.
- Author
-
Bridges, Laurie M., McElroy, Kelly, and Welhouse, Zach
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *INTELLECTUAL freedom - Abstract
In this article, we provide a brief overview of generative artificial intelligence (GenAI) and large language models (LLMs). We then propose eight critical questions that libraries should ask when exploring this technology and its implications for their communities. We argue that libraries have a unique role in facilitating informed and responsible use of GenAI, as well as safeguarding and promoting the values of access, privacy, and intellectual freedom. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Can ChatGPT Learn Chinese or Swahili?
- Author
-
Savage, Neil
- Subjects
- *
CHATGPT , *LANGUAGE models , *LANGUAGE & languages , *JAPANESE language , *SYNTAX (Grammar) - Abstract
The article discusses the performance of large language models (LLMs) like ChatGPT in languages other than English, highlighting challenges and potential solutions. Researchers found that LLMs struggle to mimic humans in languages like Japanese due to differences in syntax and writing styles, leading to concerns about their effectiveness in non-English languages. Efforts to improve LLM performance in other languages include innovative tokenization methods and leveraging search data to supplement training data, aiming to address the scarcity of language-specific training resources.
- Published
- 2024
- Full Text
- View/download PDF
6. Thus Spake ChatGPT: On the reliability of AI-based chatbots for science communication.
- Author
-
Dutta, Subhabrata and Chakraborty, Tanmoy
- Subjects
- *
CHATGPT , *STATISTICAL reliability , *SCIENTIFIC communication , *LANGUAGE models , *SCIENTIFIC knowledge - Abstract
The article provides the author's perspective on the reliability of ChatGPT, an artificially intelligent (AI) chatbot derived from a large language model (LLM), in the communication of science. Particular focus is given to the errors that ChatGPT makes when communicating scientific knowledge to a general audience.
- Published
- 2023
- Full Text
- View/download PDF
7. Generative AI as a New Innovation Platform: Considering the stability and longevity of a potential new foundational technology.
- Author
-
Cusumano, Michael A.
- Subjects
- *
GENERATIVE artificial intelligence , *CHATGPT , *ARTIFICIAL neural networks , *LANGUAGE models , *GOVERNMENT regulation , *ACCURACY of information - Abstract
This article presents generative artificial intelligence (AI) as a potential new innovation platform. First, it provides the history of generative AI, with a discussion of Microsoft, OpenAI, and the use of neural networks and large language models. Next, it considers whether generative AI could meet the criteria of an innovation platform. Lastly, it discusses regulation and governance.
- Published
- 2023
- Full Text
- View/download PDF
8. Popping the chatbot hype balloon.
- Author
-
Goudarzi, Sara
- Subjects
- *
CHATBOTS , *CHATGPT , *ARTIFICIAL intelligence , *LANGUAGE models , *PERSONALLY identifiable information , *SCIENCE fiction - Abstract
Since ChatGPT's release in November 2022, artificial intelligence has come into the spotlight. Inspiring both fascination and fear, chatbots have stirred debates among researchers, developers, and policy makers. The concerns range from concrete and tangible ones—which include replication of existing biases and discrimination at scale, harvesting personal data, and spreading misinformation—to more existential fears that their development will lead to machines with human-like cognitive abilities. Understanding how chatbots work and the human labor and data involved can better help evaluate the validity of concerns surrounding these systems, which although innovative, are hardly the stuff of science fiction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Generative AI Degrades Online Communities.
- Author
-
Burtch, Gordon, Lee, Dokyun, and Chen, Zhichen
- Subjects
- *
VIRTUAL communities , *LANGUAGE models , *CHATGPT , *CHATBOTS , *INTERNET forums , *INTERNET users , *ONLINE comments , *VIRTUAL culture - Abstract
The article focuses on how large language models (LLMs) are influencing online communities. The authors offer their opinions, stating that generative artificial intelligence (AI) technologies, such as ChatGPT, are causing a decrease in user participation of knowledge communities and degrading the quality of answers the community provides.
- Published
- 2024
- Full Text
- View/download PDF
10. ChatGPT's performance evaluation for annotating multi-label text in Indonesian language.
- Author
-
Hakim, M. Faris Al and Prasetiyo, Budi
- Subjects
- *
CHATGPT , *INDONESIAN language , *LANGUAGE models , *SENTIMENT analysis , *ARTIFICIAL intelligence - Abstract
The high demand for artificial intelligence applications across all fields raises the need for appropriate datasets to build good models. Labeling datasets is one of the main tasks required before training, especially in sentiment analysis. Aspect-based sentiment analysis involves more labels in its process than other tasks. Large volumes of data also raise the cost of processing, including the labeling process. Nevertheless, all these problems still need to be solved, including for multi-label data in Indonesian; it is a demanding task that requires assigning several labels to each instance. ChatGPT, as one of the Large Language Models (LLMs), has high potential to carry out the labeling process. In this study, ChatGPT-3.5 was examined as a labeler for aspect-based sentiment analysis in the Indonesian language. The CASA dataset, containing 1080 rows, was used to evaluate the performance of the model. The results show that comprehensive exploration must be applied to produce optimal ChatGPT performance for classifying multi-label data in Indonesian. The study results will impact the efficiency of the labeling process in multi-label cases, which require more effort to complete. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Criticize my Code, not me: Using AI-Generated Feedback in Computer Science Teaching.
- Author
-
Kunz, Sibylle and Steffen, Adrienne
- Subjects
- *
COMPUTER science education , *CHATGPT , *LANGUAGE models , *WOMEN in higher education , *GENDER stereotypes , *TEACHER training - Abstract
Large Language Models (LLMs) like ChatGPT can help teachers to tailor learning tasks for their students, combining learning objectives and storytelling to raise interest in the subject. AI-based learning task design can help to support competency-based learning, especially for girls in STEM courses like computer science, where otherwise the "Leaky STEM pipeline" (Speer 2023) leads to a constant loss of female students over school time. LLMs support many steps of the creation cycle of learning tasks. One important step is the feedback process between teachers and students during and after solving the tasks. Students need person-related as well as process-related feedback to make progress. Sometimes problems occur when teachers give feedback in a way that embarrasses or hurts the students. Especially female students often need more confirmation to make them aware of their progress, but studies show that boys demand and get more attention from teachers in this situation. This is one of the many reasons why girls lose motivation and interest in STEM courses over time. Since male and female teachers differ in expressing feedback without being aware of it, it is necessary to raise their consciousness. LLMs like ChatGPT can be used in two scenarios here. The first scenario is helping teachers to formulate objective feedback in a way that is adequate and understandable for the target group - e.g., young girls or boys - in a specific situation. The second scenario is training the teacher in a Socratic way, where the LLM simulates a student receiving the feedback and reacting to it according to established communication models like the Four-Ears model by Schulz von Thun (Schulz von Thun 1981) or Berne's Transactional Analysis (Berne, 1964).
This case study provides examples and prompting schemes for both scenarios and discusses the fragile balance between avoiding gender stereotypes in LLMs and giving more helpful and sustainable feedback for female students to foster self-esteem and competency-awareness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Using generative artificial intelligence/ChatGPT for academic communication: Students' perspectives.
- Author
-
Liu, Yanhua, Park, Jaeuk, and McMinn, Sean
- Abstract
Generative artificial intelligence (GenAI) tools such as ChatGPT with their human‐like intelligence and language processing capabilities are significantly impacting the way we live, work, and communicate with each other. While scholars have increasingly focused on the use of GenAI in higher education since its inception, little is known about how key higher education stakeholders, particularly students, perceive its impact on teaching and learning within the context of academic communication, an area central to students' development of transferable skills and literacy competencies yet heavily influenced by the technology. This empirical study addresses the gap by investigating students' experiences and attitudes toward GenAI tools for English academic communication, focusing on their overall perceptions, perceived benefits, limitations, and challenges. Drawing on data from a questionnaire survey with 475 students and interviews with 12 at two universities in China, our findings indicate that students generally view GenAI positively, considering them useful for learning academic communication skills, particularly in writing, grammar, vocabulary, and reading. However, limitations are recognized in terms of giving feedback on critical thinking, creativity, and speaking skills. In addition, information reliability, ethical issues, and impact on assessment and academic integrity also emerged as important concerns. Our study argues that universities should embrace and capitalize on the affordances of GenAI and address its challenges to better support students' learning of critical academic literacy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. The aid of ChatGPT to dance education: a theoretical exploration based on TPACK.
- Author
-
Dou, Xinyu
- Abstract
This in-depth exploration delves into the transformative possibilities of integrating ChatGPT into the realm of dance education. Examining its roles as a learning ally, creative mentor, knowledge transmitter, and perceptual guide within the unique context of dance instruction, the study navigates through the Technological Pedagogical Content Knowledge (TPACK) framework. It showcases how ChatGPT can elevate teaching efficiency, provide real-time feedback on dance movements, kindle artistic curiosity, and extend emotional support to students. While emphasizing its positive impact, the study also acknowledges potential risks, underscoring the irreplaceable role of educators in guiding students on effective ChatGPT utilization, ensuring it complements the nuanced aspects of dance education. The supporting framework presented highlights the dynamic synergy between ChatGPT, educators, and students, promising a more engaging and enriched dance education experience. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. ChatGPT, can you take my job interview? Examining artificial intelligence cheating in the asynchronous video interview.
- Author
-
Canagasuriam, Damian and Lukacik, Eden‐Raye
- Subjects
- *
CHATGPT , *ARTIFICIAL intelligence , *EMPLOYMENT interviewing , *CHATBOTS - Abstract
Artificial intelligence (AI) chatbots, such as Chat Generative Pre‐trained Transformer (ChatGPT), may threaten the validity of selection processes. This study provides the first examination of how AI cheating in the asynchronous video interview (AVI) may impact interview performance and applicant reactions. In a preregistered experiment, Prolific respondents (N = 245) completed an AVI after being randomly assigned to a non‐ChatGPT, ChatGPT‐Verbatim (read AI‐generated responses word‐for‐word), or ChatGPT‐Personalized condition (provided their résumé/contextual instructions to ChatGPT and modified the AI‐generated responses). The ChatGPT conditions received considerably higher scores on overall performance and content than the non‐ChatGPT condition. However, response delivery ratings did not differ between conditions, and the ChatGPT conditions received lower honesty ratings. Both ChatGPT conditions rated the AVI as lower on procedural justice than the non‐ChatGPT condition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. I Cannot Miss It for the World: The Relationship Between Fear of Missing out (FOMO) and Acceptance of ChatGPT.
- Author
-
Li, Heng
- Subjects
- *
CHATGPT , *TECHNOLOGICAL innovations , *TECHNOLOGY Acceptance Model - Abstract
According to the technology acceptance model, people’s acceptance of new technology is influenced by their perception of its usefulness and ease of use. We expand the research agenda by identifying the fear of missing out (FOMO) as a potential factor shaping people’s attitudes toward ChatGPT. We predicted that FOMO experiences, characterized by strong desires to stay connected with other people’s lives, would be linked to more AI-related supportive attitudes. In Study 1 (N = 209), university students with more frequent experiences of FOMO showed more favorable attitudes toward ChatGPT. Study 2 (N = 126) found that participants with a greater FOMO were more likely to choose an invitation letter purportedly authored by ChatGPT, despite it being composed by a human, which provided a behavioral choice confirmation of the observed relationship. Study 3 (N = 186) replicated the findings in a tourism context by employing non-student populations. Study 4 (N = 150) explored the propensity for participants to engage with ChatGPT in a personal music selection process and obtained the same effect. Together, these findings suggest that the FOMO bias may exert an additional influence on the endorsement of new technologies, which highlights the important role of emotion in technology acceptance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Les adjectifs évaluatifs dans le discours juridictionnel : Analyse de corpus.
- Author
-
Dolata-Zaród, Anna
- Subjects
- *
CHATGPT , *CORPORA , *LEGAL judgments , *JUDGES , *FRENCH language - Abstract
This article aims to analyse the use of evaluative adjectives in quantitative and qualitative research based on a corpus of French jurisdictional discourse. In this analysis, we adopt the Appraisal approach (Martin & White 2005), which focuses on linguistic marks that reveal the subjective presence of the speaker in the text. Our corpus comprises 100 judgments of the Court of Cassation from 2018 to 2022 inclusive. We use the AnaText platform for quantitative analysis and ChatGPT for qualitative analysis. This research allows us to identify the evaluative adjectives most often used by judges and to indicate the targets and polarity of judicial argumentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Quality of science journalism in the age of Artificial Intelligence explored with a mixed methodology.
- Author
-
Dijkstra, Anne M., de Jong, Anouk, and Boscolo, Marco
- Subjects
- *
SCIENCE journalism , *ARTIFICIAL intelligence , *CHATGPT , *MEMBERSHIP in associations, institutions, etc. , *THEMATIC analysis - Abstract
Science journalists, traditionally, play a key role in delivering science information to a wider audience. However, changes in the media ecosystem and the science-media relationship are posing challenges to reliable news production. Additionally, recent developments such as ChatGPT and Artificial Intelligence (AI) more generally may have further consequences for the work of (science) journalists. Through a mixed methodology, the quality of news reporting was studied within the context of AI. A content analysis of media output about AI (news articles published within the time frame 1 September 2022–28 February 2023) explored the adherence to quality indicators, while interviews shed light on journalism practices regarding quality reporting on and with AI. Perspectives from understudied areas in four European countries (Belgium, Italy, Portugal, and Spain) were included and compared. The findings show that AI received continuous media attention in the four countries. Furthermore, despite four different media landscapes, the reporting in the news articles adhered to the same quality criteria, such as applying rigour, including sources of information, accessibility, and relevance. Thematic analysis of the interview findings revealed that the impact of AI and ChatGPT on the journalism profession is still in its infancy. Expected benefits of AI related to helping with repetitive tasks (e.g. translations) and positively influencing the journalistic principles of accessibility, engagement, and impact, while concerns showed fear of lower adherence to principles of rigour, integrity, and transparency of sources of information. More generally, the interviewees expressed concerns about the state of science journalism, including a lack of funding influencing the quality of reporting. Journalists who were employed as staff, as well as those who worked as freelancers, put effort into ensuring quality output, for example, via editorial oversight, discussions, or memberships of associations.
Further research into the science-media relationship is recommended. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. The Roles of Social Perception and AI Anxiety in Individuals’ Attitudes Toward ChatGPT in Education.
- Author
-
Wang, Chengcheng, Li, Xing, Liang, Zheng, Sheng, Yingying, Zhao, Qingbai, and Chen, Shi
- Abstract
Public attitudes are essential for technology promotion and policy formulation. The present study investigated the Chinese public’s knowledge of ChatGPT and examined the roles played by Big-Five personality traits, social perception, and AI anxiety in shaping the public’s attitudes toward ChatGPT using the questionnaire method. Results showed that: (1) Nearly 1/3 of teachers surveyed did not know ChatGPT at all, and all of them were primary and secondary school teachers. (2) The level of knowledge about ChatGPT was significantly related to gender, educational level, teaching stage (in teacher samples), and major (in student samples). (3) The public’s positive attitude was significantly higher than the negative attitude. (4) Social perception positively predicted positive attitudes and negatively predicted negative attitudes; moreover, social perception demonstrated notably higher predictive power for positive attitudes than for negative attitudes. Competence perception and warmth perception predicted attitudes equally, with no domain effect observed. (5) AI anxiety positively predicted only negative attitudes and did not impact positive attitudes. In explaining negative attitudes, AI anxiety exhibited higher explanatory power than the Big-Five personality traits, primarily correlating with neuroticism. The findings indicate that it is inappropriate to consider attitude evaluation toward AI as a single dimension; positive and negative attitudes toward AI are relatively independent components. The roles played by other predictive variables in attitudes are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Using GPT-4 to write a scientific review article: a pilot evaluation study.
- Author
-
Wang, Zhiping Paul, Bhandary, Priyanka, Wang, Yizhou, and Moore, Jason H.
- Subjects
- *
GENERATIVE pre-trained transformers , *LANGUAGE models , *CHATGPT , *TECHNICAL writing , *PILOT projects - Abstract
GPT-4, as the most advanced version of OpenAI's large language models, has attracted widespread attention, rapidly becoming an indispensable AI tool across various areas. This includes its exploration by scientists for diverse applications. Our study focused on assessing GPT-4's capabilities in generating text, tables, and diagrams for biomedical review papers. We also assessed the consistency in text generation by GPT-4, along with potential plagiarism issues when employing this model for the composition of scientific review papers. Based on the results, we suggest the development of enhanced functionalities in ChatGPT, aiming to meet the needs of the scientific community more effectively. This includes enhancements in uploaded document processing for reference materials, a deeper grasp of intricate biomedical concepts, more precise and efficient information distillation for table generation, and a further refined model specifically tailored for scientific diagram creation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Can ChatGPT generate practice question explanations for medical students, a new faculty teaching tool?
- Author
-
Tong, Lilin, Wang, Jennifer, Rapaka, Srikar, and Garg, Priya S.
- Subjects
- *
CHATGPT , *MEDICAL students , *STUDENT financial aid , *MEDICAL school curriculum - Abstract
Multiple-choice questions (MCQs) are frequently used for formative assessment in medical school but often lack sufficient answer explanations given the time constraints of faculty. Chat Generated Pre-trained Transformer (ChatGPT) has emerged as a potential student learning aid and faculty teaching tool. This study aims to evaluate ChatGPT’s performance in answering and providing explanations for MCQs. Ninety-four faculty-generated MCQs were collected from the pre-clerkship curriculum at a US medical school. ChatGPT’s accuracy in answering MCQs was tracked on first attempt without an answer prompt (Pass 1) and after being given a prompt for the correct answer (Pass 2). Explanations provided by ChatGPT were compared with faculty-generated explanations, and a 3-point evaluation scale was used to assess accuracy and thoroughness compared to faculty-generated answers. On first attempt, ChatGPT demonstrated 75% accuracy in correctly answering faculty-generated MCQs. Among correctly answered questions, 66.4% of ChatGPT's explanations matched faculty explanations, and 89.1% captured some key aspects without providing inaccurate information. The proportion of inaccurately generated explanations increased significantly if the question was not answered correctly on the first pass (2.7% if correct on first pass vs. 34.6% if incorrect on first pass, p < 0.001). ChatGPT shows promise in assisting faculty and students with explanations for practice MCQs but should be used with caution. Faculty should review explanations and supplement them to ensure coverage of learning objectives. Students can benefit from ChatGPT for immediate feedback through explanations if ChatGPT answers the question correctly on the first try. If the question is answered incorrectly, students should remain cautious of the explanation and seek clarification from instructors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Opportunities and risks involved in using ChatGPT to create first grade science lesson plans.
- Author
-
Powell, Wardell and Courchesne, Steven
- Subjects
- *
CHATGPT , *LESSON planning , *GENERATIVE artificial intelligence , *STUDENT teachers , *CURRICULUM frameworks - Abstract
Generative AI can potentially support teachers in lesson planning by making the process of generating an outline more efficient. This qualitative study employed an exploratory case study design to examine a specific lesson design activity involving a series of prompts and responses from ChatGPT. The desired science lesson on heredity was aimed at first grade students. We analyzed the process's efficiency, finding that within 30 minutes we could generate and substantially refine a lesson plan that accurately aligned with the desired curriculum framework and the 5E model of instruction. However, the iterations of the lesson plan included questionable components, missing details, and a fake resource. We discussed the implications of these findings for faculty looking to train pre-service teachers to appropriately use generative AI in lesson planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Are Preprints a Threat to the Credibility and Quality of Artificial Intelligence Literature in the ChatGPT Era? A Scoping Review and Qualitative Study.
- Author
-
Adarkwah, Michael Agyemang, Islam, A. Y. M. Atiquil, Schneider, Käthe, Luckin, Rose, Thomas, Michael, and Spector, Jonathan Michael
- Abstract
ChatGPT, as the pioneer of advanced generative AI tools, has triggered scholarly discussions about the potential use of such AI technologies in interdisciplinary fields. With a focus on the surge of AI-related preprints since the introduction of ChatGPT by OpenAI, the study investigated what the surge implies for AI literature, particularly in terms of credibility and quality. A scoping review was initially conducted to study the characteristics of the AI-related preprints in the Web of Science (WoS) database and also in five (5) preprint platforms (ArXiv, MedRxiv, SocArxiv, SSRN, and Research Square). The publication date range was set at January 01, 2023 to September 08, 2023. This was followed up by an interpretive phenomenological analysis (IPA) of the perceptions of experts in the AI field about the preprints. Employing a scoping review of AI-related preprints across six databases and a qualitative analysis of 15 AI experts’ opinions, our study reveals concerns about the research accuracy, quality, and credibility of preprints, and advocates for a robust evaluation and high-quality assurance process to promote open science objectives during their dissemination. Specifically, 45,918 AI-related preprints were found in the six preprint databases or repositories across different fields. The nine themes from the IPA showed that preprints can be of value. However, experts advocated for the safe and responsible use of AI-related preprints, involving such tenets as maintaining ethical integrity and high-quality work on the part of authors and establishing sound AI-content guidelines from publishers and editors. Future studies are recommended to investigate the impact of preprints on decision-making processes in educational research and practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Multi role ChatGPT framework for transforming medical data analysis.
- Author
-
Chen, Haoran, Zhang, Shengxiao, Zhang, Lizhong, Geng, Jie, Lu, Jinqi, Hou, Chuandong, He, Peifeng, and Lu, Xuechun
- Subjects
- *
CHATGPT , *DATA analysis , *DRUG repositioning , *DRUG analysis , *DATA science , *QUALITY control - Abstract
The application of ChatGPT in the medical field has sparked debate regarding its accuracy. To address this issue, we present a Multi-Role ChatGPT Framework (MRCF), designed to improve ChatGPT's performance in medical data analysis by optimizing prompt words, integrating real-world data, and implementing quality control protocols. Compared to the singular ChatGPT model, MRCF significantly outperforms traditional manual analysis in interpreting medical data, exhibiting fewer random errors, higher accuracy, and better identification of incorrect information. Notably, MRCF is over 600 times more time-efficient than conventional manual annotation methods and costs only one-tenth as much. Leveraging MRCF, we have established two user-friendly databases for efficient and straightforward drug repositioning analysis. This research not only enhances the accuracy and efficiency of ChatGPT in medical data science applications but also offers valuable insights for data analysis models across various professional domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Assessment of the impacts of artificial intelligence (AI) on intercultural communication among postgraduate students in a multicultural university environment.
- Author
-
Sarwari, Abdul Qahar, Javed, Muhammad Naeem, Mohd Adnan, Hamedi, and Abdul Wahab, Mohammad Nubli
- Subjects
- *
COLLEGE environment , *GRADUATE students , *CROSS-cultural communication , *COLLEGE students , *COMMUNICATION barriers - Abstract
Artificial intelligence (AI) broadly influences different aspects of human life, especially human communication. One of the main concerns about the broad use of AI in daily interactions among different people is whether it helps them interact easily or complicates their interactions. To answer this question, this study assessed the impacts of AI on intercultural communication among postgraduate students in a multicultural university environment. A newly developed survey instrument was used to conduct this study. The participants were 115 postgraduate students from nine different countries. The descriptive statistics, reliability analysis, and bivariate correlation tests of the 29th version of IBM-SPSS software were used to analyze the quantitative data, and inductive coding and conceptual content analysis were used to code and analyze the qualitative data. Based on descriptive results, the vast majority (93%) of the participants had already used and experienced AI in their daily lives, and the majority of them believed that AI and AI technologies connect different cultures, reduce language and cultural barriers, and help people of different cultures to interact and be connected. Based on the results from the correlation test, there were strong positive correlations between AI attitudes and AI benefits, and also between AI regulation and AI benefits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Text-to-video generative artificial intelligence: sora in neurosurgery.
- Author
-
Mohamed, Ali A. and Lucke-Wold, Brandon
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *NATURAL language processing , *COMPUTER vision , *ARTIFICIAL intelligence - Abstract
Artificial intelligence (AI) has increased in popularity in neurosurgery, with recent interest in generative AI algorithms such as the Large Language Model (LLM) ChatGPT. Sora, an innovation in generative AI, leverages natural language processing, deep learning, and computer vision to generate impressive videos from text prompts. This new tool has many potential applications in neurosurgery. These include patient education, public health, surgical training and planning, and research dissemination. However, there are considerable limitations to the current model such as physically implausible motion generation, spontaneous generation of subjects, unnatural object morphing, inaccurate physical interactions, and abnormal behavior presentation when many subjects are generated. Other typical concerns are with respect to patient privacy, bias, and ethics. Further, appropriate investigation is required to determine how effective generative videos are compared to their non-generated counterparts, irrespective of any limitations. Despite these challenges, Sora and other iterations of its text-to-video generative application may have many benefits to the neurosurgical community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Evaluating language models for mathematics through interactions.
- Author
-
Collins, Katherine M., Jiang, Albert Q., Frieder, Simon, Wong, Lionel, Zilka, Miri, Bhatt, Umang, Lukasiewicz, Thomas, Yuhuai Wu, Tenenbaum, Joshua B., Hart, William, Gowers, Timothy, Wenda Li, Weller, Adrian, and Jamnik, Mateja
- Subjects
- *
LANGUAGE models , *GENERATIVE pre-trained transformers , *CHATGPT , *MATHEMATICS , *MATHEMATICS students - Abstract
There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs; this is insufficient for making an informed decision about which LLMs are best to use in an interactive setting, and how that varies by setting. Static assessment therefore limits how we understand language model capabilities. We introduce CheckMate, an adaptable prototype platform for humans to interact with and evaluate LLMs. We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics, with a mixed cohort of participants from undergraduate students to professors of mathematics. We release the resulting interaction and rating dataset, MathConverse. By analyzing MathConverse, we derive a taxonomy of human query behaviors and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness in LLM generations, among other findings. Further, we garner a more granular understanding of GPT-4 mathematical problem-solving through a series of case studies, contributed by experienced mathematicians. We conclude with actionable takeaways for ML practitioners and mathematicians: models that communicate uncertainty, respond well to user corrections, and can provide a concise rationale for their recommendations, may constitute better assistants. Humans should inspect LLM output carefully given their current shortcomings and potential for surprising fallibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Google, ChatGPT, questions of omniscience and wisdom.
- Author
-
Hoffman, Frank J. and Iso, Klairung
- Abstract
The article explores how platforms like Google and ChatGPT, which claim omniscience and wisdom-like attributes, prompt philosophical questions. It revisits religious perspectives on omniscience and their influence on the pursuit of wisdom. The article suggests that while Google may offer compartmentalized omniscience based on user preferences, ChatGPT’s factual accuracy challenges its characterization as omniscient. Nonetheless, ChatGPT can still help humans progress toward wisdom, by integrating the co-creation of knowledge between humans and the unfolding of divine knowledge from Process Thought and Buddhist epistemology insights. Notably, instead of offering definitive answers, the paper is written with a sense of deep humility to encourage ongoing inquiry and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. ChatGPT for good? Taking ‘beneficence’ seriously in the regulation of generative artificial intelligence.
- Author
-
Singh Chauhan, Krishna Deo
- Abstract
Generative AI platforms such as ChatGPT have found prominence in recent times with their ability to generate texts, images, etc. Several questions pertaining to ethical and legal issues surrounding ChatGPT have arisen. In this paper, I discuss the nature and background of generative AI, situating its development in the historical context of AI. I then discuss my primary research questions: is sufficient attention paid in the literature on ethics of AI to the principle of beneficence, and is there theoretical clarity on its meaning? I highlight that while there is a great deal of discussion on what harms can arise from generative AI and how to stop them, there is very little discussion on what amounts to AI-for-good, particularly in the literature of AI ethics and regulation. To the extent that such discussion exists, it pushes ahead with suggesting specific solutions, without fully addressing the underlying question of what makes their prescriptions beneficent. I demonstrate that the principle of beneficence as understood in biomedical ethics and human rights frameworks has limited utility for the ethics of generative AI. These gaps in AI regulation are prominent, as they can derail the long-term progress of generative AI and the realization of its full potential. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Exploring ChatGPT as a writing assessment tool.
- Author
-
Bucol, Junifer Leal and Sangkawong, Napattanissa
- Abstract
This research paper employs an exploratory framework to evaluate the potential of ChatGPT as an Automated Writing Evaluation (AWE) tool in teaching English as a Foreign Language (EFL) in Thailand. The main objective is to investigate how well ChatGPT can assess students’ writing using prompts and pre-defined rubrics compared to human raters. Moreover, the study examines its strengths and weaknesses as an assessment tool by analysing the teachers’ reflections during the assessment process. Quantitative analyses revealed significant relationships between trial accounts in comparison with the human ratings. Qualitative analysis unearths patterns in the feedback, shedding light on ChatGPT’s strengths and its limitations as an AWE tool. ChatGPT displays substantial promise as an AWE tool, offering distinct features such as human-like interface, consistency, efficiency, and scalability. Nonetheless, educators must be cognisant of its limitations. This study recognises that the strategic use of ChatGPT could enhance the evaluation process among teachers and foster the development of EFL students’ written communication skills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Use of large language models to optimize poison center charting.
- Author
-
Matsler, Nikolaus, Pepin, Lesley, Banerji, Shireen, Hoyte, Christopher, and Heard, Kennon
- Abstract
Introduction: Efficient and complete medical charting is essential for patient care and research purposes. In this study, we sought to determine if Chat Generative Pre-Trained Transformer could generate cogent, suitable charts from recorded, real-world poison center calls and abstract and tabulate data. Methods: De-identified transcripts of real-world hospital-initiated poison center consults were summarized by Chat Generative Pre-Trained Transformer 4.0. Additionally, Chat Generative Pre-Trained Transformer organized tables for data points, including vital signs, test results, therapies, and recommendations. Seven trained reviewers, including certified specialists in poison information and board-certified medical toxicologists, graded summaries using a 1 to 5 scale to determine appropriateness for entry into the medical record. Intra-rater reliability was calculated. Tabulated data were quantitatively evaluated for accuracy. Finally, reviewers selected their preferred documentation: original or Chat Generative Pre-Trained Transformer organized. Results: Eighty percent of summaries had a median score high enough to be deemed appropriate for entry into the medical record. In three duplicate cases, reviewers did change scores, leading to moderate intra-rater reliability (kappa = 0.6). Among all cases, 91 percent of data points were correctly abstracted into table format. Discussion: By utilizing a large language model with a unified prompt, charts can be generated directly from conversations in seconds without the need for additional training. Charts generated by Chat Generative Pre-Trained Transformer were preferred over extant charts, even when they were deemed unacceptable for entry into the medical record prior to the correction of errors. However, there were several limitations to our study, including poor intra-rater reliability and a limited number of cases examined. Conclusions: In this study, we demonstrate that large language models can generate coherent summaries of real-world poison center calls that are often acceptable for entry to the medical record as is. When errors were present, these were often fixed with the addition or deletion of a word or phrase, presenting an enormous opportunity for efficiency gains. Our future work will focus on implementing this process in a prospective fashion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Protecting older consumers in the digital age: a commentary on ChatGPT, helplines and the way to prevent accessible fraud.
- Author
-
Segal, Michal
- Abstract
Older people are often targeted by fraudsters due to their unique characteristics and vulnerabilities. Being a victim of exploitation can lead to negative emotional and financial consequences. The purpose of this commentary is to present ChatGPT’s potential to provide accessible information and support, helping older consumers protect themselves when confronted with exploitation, address the limitations of ChatGPT and propose solutions to overcome these limitations. Integrating tailored human and technological solutions, such as helplines, AI chatbots, and involving older adults in development, is crucial. By providing adequate training and support, the goal of ensuring accessibility for all users can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Predictors of higher education students’ behavioural intention and usage of ChatGPT: the moderating roles of age, gender and experience.
- Author
-
Arthur, Francis, Salifu, Iddrisu, and Abam Nortey, Sharon
- Abstract
The adoption and usage of ChatGPT among students are influenced by various factors, including individual characteristics such as age, gender, and experience with technology use. However, studies on the moderating roles of gender, age, and experience in predicting students’ behavioural intention and usage of ChatGPT are limited. This study employed the Unified Theory of Acceptance and Use of Technology (UTAUT2) model to examine the predictors of Higher Education (HE) students’ behavioural intention and usage of ChatGPT. The study employed a descriptive cross-sectional survey design with an adapted instrument to collect data from 486 students. Using the Partial Least Squares Structural Equation Modelling approach, the results showed that hedonic motivation, performance expectancy, effort expectancy, and social influence were significant predictors of students’ behavioural intention, whereas behavioural intention and facilitating conditions had significant influence on students’ actual use of ChatGPT. Age and gender were found to moderate the relationship between facilitating conditions and the use of ChatGPT. Lastly, experience moderated the relationship between habit and the use of ChatGPT, and the relationship between hedonic motivation and behavioural intention. These findings have implications for the design and implementation of ChatGPT in higher education towards enhancing students’ engagement and learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. ChatGPT-facilitated professional development: evidence from professional trainers’ learning achievements, self-worth, and self-confidence.
- Author
-
Chang, Chun-Chun and Hwang, Gwo-Jen
- Abstract
Professional trainers play an important role in helping new recruits adapt to the workplace. In general hospitals, training courses for clinical teachers still adopt the lecture method. Such a teacher training approach focuses on the way of delivering knowledge and skills, while the training for their teaching of case handling as well as their self-worth and self-confidence could be insufficient. In order to cope with this problem, the present study proposed a ChatGPT-based training mode (ChatGPT-TM) for professional development. To verify its effects, we conducted an experiment in a “Using ChatGPT in Case Teaching” course for clinical teachers in hospitals, and explored their learning achievement, self-worth, self-confidence, and learning perceptions using the ChatGPT training mode (ChatGPT-TM) and the conventional training mode (C-TM). The results showed that the ChatGPT-TM could effectively enhance clinical teachers’ learning achievement in case teaching, self-worth, and self-confidence in comparison with the C-TM. The main contribution of this study is that it revealed that ChatGPT could allow clinical teachers to carry out reflection, verify references, and integrate theory and practice, which improved their learning achievement, made them realize their self-worth, and increased their self-confidence in performing professional training tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Co-journeying with ChatGPT in tertiary education: identity transformation of EMI teachers in Taiwan.
- Author
-
Tsou, Wenli, Lin, Angel M. Y., and Chen, Fay
- Abstract
This paper responds to the prevalent discourse on the ‘lack of English proficiency’ among EMI teachers in contexts where English is used as an additional language. It explores how AI-powered tools, particularly ChatGPT, enable EMI teachers in Taiwan to leverage their expertise and teach in English with more confidence. This study first describes the training of an EMI PD programme. Then it reports on the challenges and strategies, and the extended and innovative use by EMI teachers after the training. By reporting on the experiences of three EMI teachers, this study shows how collaborating with ChatGPT contributes to developing an empowered EMI teacher identity. This study fills a research gap, shedding light on the transformative potential of collaborating with ChatGPT to transform content teachers’ identities into an EMI teacher identity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Determinants of ChatGPT Use and its Impact on Learning Performance: An Integrated Model of BRT and TPB.
- Author
-
Al-Qaysi, Noor, Al-Emran, Mostafa, Al-Sharafi, Mohammed A., Iranmanesh, Mohammad, Ahmad, Azhana, and Mahmoud, Moamin A.
- Abstract
AbstractThe rapid emergence of Generative Artificial Intelligence (GAI) heralds a significant shift, opening new frontiers in how education is delivered. This groundbreaking wave of technological advancement is poised to redefine traditional learning, promising to enhance the educational landscape with unprecedented levels of personalized learning and accessibility. Despite GAI’s progressive infiltration into various educational strata, limited empirical research exists on its impact on students’ learning performance. Drawing on the Theory of Planned Behavior (TPB) and Behavioral Reasoning Theory (BRT), this study investigates the determinants affecting students’ use of ChatGPT and its influence on learning performance. The data were collected from 357 university students and were analyzed using the PLS-SEM technique. The results supported the role of ChatGPT in positively affecting students’ learning performance. In addition, the results showed that reasons for and against adoption are pivotal in shaping students’ attitudes. ChatGPT use is found to be significantly affected by attitudes, subjective norms, and perceived behavioral control. Besides the theoretical contributions, the findings offer various implications for stakeholders and underscore the necessity for educational institutions to foster a conducive environment for GAI adoption, addressing ethical and technical concerns to optimize learning experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Human–computer pragmatics trialled: some (im)polite interactions with ChatGPT 4.0 and the ensuing implications.
- Author
-
Quan, Zhi and Chen, Zhiwei
- Abstract
Having evolved rapidly, ChatGPT can now generate content that is linguistically accurate and logically sound, while sidestepping ethical, social and legal concerns. This research seeks to investigate whether ChatGPT will employ different pragmatic strategies in its responses to (im)polite questions. In our experiment, this AI-powered tool was instructed to answer 200 self-made questions over four (im)politeness levels, and the 200 responses were collected to go through linguistic and sentiment analysis. Triangulated data, together with typical examples, show that ChatGPT tends to give shorter and less positive answers to less polite questions, appearing to be less responsive when confronted with more blunt and offensive inquiries. This, to some extent, resembles how human beings react when treated impolitely. A tentative explanation may be that, given its nature as a large language model, ChatGPT mirrors human interaction in various scenarios, and draws on prevalent human communication tendencies. Thus, interacting with ChatGPT is more of a human-society interaction than human-machine communication in the real sense. Our research sheds light on the coined “human-machine pragmatics”, i.e. how humans can best communicate with computers for the best informative and affective outcomes. The implications for language education are also discussed at the end. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. ChatGPT: A Conceptual Review of Applications and Utility in the Field of Medicine.
- Author
-
Rao, Shiavax J., Isath, Ameesh, Krishnan, Parvathy, Tangsrivimol, Jonathan A., Virk, Hafeez Ul Hassan, Wang, Zhen, Glicksberg, Benjamin S., and Krittanawong, Chayakrit
- Subjects
- *
ELDER care , *WEIGHT loss , *MEDICAL education , *MENTAL health , *ARTIFICIAL intelligence , *DECISION making in clinical medicine , *PATIENT care , *MEDICAL students , *PARADIGMS (Social sciences) , *MEDICAL research , *PHYSICAL fitness , *MEDICATION therapy management , *CONCEPTUAL structures , *MEDICINE , *INDIVIDUALIZED medicine , *HUMAN error , *NUTRITION , *PHYSICAL activity - Abstract
Artificial Intelligence, specifically advanced language models such as ChatGPT, have the potential to revolutionize various aspects of healthcare, medical education, and research. In this narrative review, we evaluate the myriad applications of ChatGPT in diverse healthcare domains. We discuss its potential role in clinical decision-making, exploring how it can assist physicians by providing rapid, data-driven insights for diagnosis and treatment. We review the benefits of ChatGPT in personalized patient care, particularly in geriatric care, medication management, weight loss and nutrition, and physical activity guidance. We further delve into its potential to enhance medical research, through the analysis of large datasets, and the development of novel methodologies. In the realm of medical education, we investigate the utility of ChatGPT as an information retrieval tool and personalized learning resource for medical students and professionals. There are numerous promising applications of ChatGPT that will likely induce paradigm shifts in healthcare practice, education, and research. The use of ChatGPT may come with several benefits in areas such as clinical decision making, geriatric care, medication management, weight loss and nutrition, physical fitness, scientific research, and medical education. Nevertheless, it is important to note that issues surrounding ethics, data privacy, transparency, inaccuracy, and inadequacy persist. Prior to widespread use in medicine, it is imperative to objectively evaluate the impact of ChatGPT in a real-world setting using a risk-based approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Clinical and Surgical Applications of Large Language Models: A Systematic Review.
- Author
-
Pressman, Sophia M., Borna, Sahar, Gomez-Cabello, Cesar A., Haider, Syed Ali, Haider, Clifton R., and Forte, Antonio Jorge
- Subjects
- *
LANGUAGE models , *MEDICAL care , *MEDICAL personnel , *CLINICAL medicine , *ARTIFICIAL intelligence , *PUBLICATION bias , *BIBLIOGRAPHIC databases - Abstract
Background: Large language models (LLMs) represent a recent advancement in artificial intelligence with medical applications across various healthcare domains. The objective of this review is to highlight how LLMs can be utilized by clinicians and surgeons in their everyday practice. Methods: A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Six databases were searched to identify relevant articles. Eligibility criteria emphasized articles focused primarily on clinical and surgical applications of LLMs. Results: The literature search yielded 333 results, with 34 meeting eligibility criteria. All articles were from 2023. There were 14 original research articles, four letters, one interview, and 15 review articles. These articles covered a wide variety of medical specialties, including various surgical subspecialties. Conclusions: LLMs have the potential to enhance healthcare delivery. In clinical settings, LLMs can assist in diagnosis, treatment guidance, patient triage, physician knowledge augmentation, and administrative tasks. In surgical settings, LLMs can assist surgeons with documentation, surgical planning, and intraoperative guidance. However, addressing their limitations and concerns, particularly those related to accuracy and biases, is crucial. LLMs should be viewed as tools to complement, not replace, the expertise of healthcare professionals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Accuracy of Online Artificial Intelligence Models in Primary Care Settings.
- Author
-
Kassab, Joseph, Hadi El Hajjar, Abdel, Wardrop III, Richard M., and Brateanu, Andrei
- Subjects
- *
ARTIFICIAL intelligence , *GEMINI (Chatbot) , *PRIMARY care , *CHATGPT , *PREVENTIVE medicine - Abstract
The importance of preventive medicine and primary care in the sphere of public health is expanding, yet a gap exists in the utilization of recommended medical services. As patients increasingly turn to online resources for supplementary advice, the role of artificial intelligence (AI) in providing accurate and reliable information has emerged. The present study aimed to assess ChatGPT-4's and Google Bard's capacity to deliver accurate recommendations in preventive medicine and primary care. Fifty-six questions were formulated and presented to ChatGPT-4 in June 2023 and Google Bard in October 2023, and the responses were independently reviewed by two physicians, with each answer being classified as "accurate," "inaccurate," or "accurate with missing information." Disagreements were resolved by a third physician. Initial inter-reviewer agreement on grading was substantial (Cohen's Kappa was 0.76, 95%CI [0.61–0.90] for ChatGPT-4 and 0.89, 95%CI [0.79–0.99] for Bard). After reaching a consensus, 28.6% of ChatGPT-4-generated answers were deemed accurate, 28.6% inaccurate, and 42.8% accurate with missing information. In comparison, 53.6% of Bard-generated answers were deemed accurate, 17.8% inaccurate, and 28.6% accurate with missing information. Responses to CDC and immunization-related questions showed notable inaccuracies (80%) in both models. ChatGPT-4 and Bard demonstrated potential in offering accurate information in preventive care. It also brought to light the critical need for regular updates, particularly in the rapidly evolving areas of medicine. A significant proportion of the AI models' responses were deemed "accurate with missing information," emphasizing the importance of viewing AI tools as complementary resources when seeking medical information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Ethical considerations for artificial intelligence in dermatology: a scoping review.
- Author
-
Gordon, Emily R, Trager, Megan H, Kontos, Despina, Weng, Chunhua, Geskin, Larisa J, Dugdale, Lydia S, and Samie, Faramarz H
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *ATTITUDES toward technology , *DERMATOLOGY , *CHATGPT - Abstract
The field of dermatology is experiencing the rapid deployment of artificial intelligence (AI), from mobile applications (apps) for skin cancer detection to large language models like ChatGPT that can answer generalist or specialist questions about skin diagnoses. With these new applications, ethical concerns have emerged. In this scoping review, we aimed to identify the applications of AI to the field of dermatology and to understand their ethical implications. We used a multifaceted search approach, searching PubMed, MEDLINE, Cochrane Library and Google Scholar for primary literature, following the PRISMA Extension for Scoping Reviews guidance. Our advanced query included terms related to dermatology, AI and ethical considerations. Our search yielded 202 papers. After initial screening, 68 studies were included. Thirty-two were related to clinical image analysis and raised ethical concerns for misdiagnosis, data security, privacy violations and replacement of dermatologist jobs. Seventeen discussed limited skin of colour representation in datasets leading to potential misdiagnosis in the general population. Nine articles about teledermatology raised ethical concerns, including the exacerbation of health disparities, lack of standardized regulations, informed consent for AI use and privacy challenges. Seven addressed inaccuracies in the responses of large language models. Seven examined attitudes toward and trust in AI, with most patients requesting supplemental assessment by a physician to ensure reliability and accountability. Benefits of AI integration into clinical practice include increased patient access, improved clinical decision-making, efficiency and many others. However, safeguards must be put in place to ensure the ethical application of AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. It takes one to know one—Machine learning for identifying OBGYN abstracts written by ChatGPT.
- Author
-
Levin, Gabriel, Meyer, Raanan, Guigue, Paul‐Adrien, and Brezinov, Yoav
- Subjects
- *
CHATGPT , *MACHINE learning , *DATABASES - Abstract
Objectives: To use machine learning to optimize the detection of obstetrics and gynecology (OBGYN) Chat Generative Pre-trained Transformer (ChatGPT)-written abstracts across all OBGYN journals. Methods: We used Web of Science to identify all original articles published in all OBGYN journals in 2022. Seventy-five original articles were randomly selected. For each, we prompted ChatGPT to write an abstract based on the title and results of the original abstract. Each abstract was tested by Grammarly software, and the reports were inserted into a database. Machine-learning models were trained and examined on the database created. Results: Overall, 75 abstracts from 12 different OBGYN journals were randomly selected. There were seven (58%) Q1 journals, one (8%) Q2 journal, two (17%) Q3 journals, and two (17%) Q4 journals. Use of mixed dialects of English, absence of comma misuse, absence of incorrect verb forms, and improper formatting were important prediction variables for ChatGPT-written abstracts. The deep-learning model had the highest predictive performance of all examined models, achieving the following performance: accuracy 0.90, precision 0.92, recall 0.85, area under the curve 0.95. Conclusions: Machine-learning-based tools reach high accuracy in identifying ChatGPT-written OBGYN abstracts. Synopsis: Machine-learning-based tools reach high accuracy in identifying ChatGPT-written obstetrics and gynecology abstracts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Awareness of Artificial Intelligence as an Essential Digital Literacy: ChatGPT and Gen-AI in the Classroom.
- Author
-
Bender, Stuart Marshall
- Subjects
- *
ARTIFICIAL intelligence , *DIGITAL literacy , *CHATGPT , *CLASSROOMS , *COMPUTER software - Abstract
This discussion article examines the potential integration of Generative Artificial Intelligence (Gen-AI), including advanced Large-Language Models like the popular platform ChatGPT into subject English education. Following the significant public and academic attention in response to these technologies through 2023, this paper considers the transformative potential and challenges posed by Gen-AI in educational settings. Central to the discussion is the exploration of how English teachers can leverage Gen-AI to enrich student learning beyond the obvious domain of writing skills. Instead, the article foregrounds the necessity for students' understanding of Gen-AI as an essential component of digital literacy. While acknowledging ethical concerns such as plagiarism, equity, and access, the paper presents an argument for the productive use of Gen-AI in the classroom to augment reading, viewing, and interpretation lessons. Avoiding an evangelical or dystopian view of AI, this discussion piece explores the time-critical and urgent issue of how, when, and why English can engage with the technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Artificial Intelligence Augmented Qualitative Analysis: The Way of the Future?
- Author
-
Hitch, Danielle
- Subjects
- *
LANGUAGE & languages , *DOCUMENTATION , *QUALITATIVE research , *DATA analysis , *ARTIFICIAL intelligence , *POST-acute COVID-19 syndrome , *NATURAL language processing , *THEMATIC analysis , *MACHINE learning , *RESEARCH ethics - Abstract
The artificial intelligence (AI) revolution is here and gathering momentum, thanks to new models of natural language processing (NLP) and rapidly increasing adoption by the public. NLP technology uses statistical analysis of language structures to analyse and generate human language, using text or speech as its source material. It can also be applied to visual mediums like images and videos. A few qualitative research early adopters are beginning to adopt this technology into their work, but our understanding of its potential remains in its infancy. This article will define and describe NLP-based AI and discuss its benefits and limitations for reflexive thematic analysis in health research. While there are many platforms available, ChatGPT is the most well-known and accessible. A worked example using ChatGPT to augment reflexive thematic analysis is provided to illustrate potential application in practice. This article is intended to inspire further conversation around the role of AI in qualitative research and offer practical guidance for researchers seeking to adopt this technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. A Case for Caution: Patient Use of Artificial Intelligence.
- Author
-
Stewart, Lisa, Patterson, Wesley G., Farrell, Christopher L., and Withycombe, Janice S.
- Subjects
- *
NURSES , *ADENOCARCINOMA , *CONTINUING education units , *OCCUPATIONAL roles , *ARTIFICIAL intelligence , *PROTEIN-tyrosine kinase inhibitors , *PATIENT advocacy , *INFORMATION resources , *PATIENT decision making , *LUNG cancer , *CANCER patient psychology , *GENETIC testing , *INFORMATION-seeking behavior - Abstract
Artificial intelligence use is increasing exponentially, including by patients in medical decision-making. Because of the limitations of chatbots and the possibility of receiving erroneous or incomplete information, patient education is a necessity. Nurses can advocate for patients by emphasizing the importance of conferring with oncology professionals before making decisions based solely on self-investigation using artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Transforming Assessment: The Impacts and Implications of Large Language Models and Generative AI.
- Author
-
Hao, Jiangang, von Davier, Alina A., Yaneva, Victoria, Lottridge, Susan, von Davier, Matthias, and Harris, Deborah J.
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *ARTIFICIAL intelligence , *CHATGPT , *FAIRNESS - Abstract
The remarkable strides in artificial intelligence (AI), exemplified by ChatGPT, have unveiled a wealth of opportunities and challenges in assessment. Applying cutting‐edge large language models (LLMs) and generative AI to assessment holds great promise in boosting efficiency, mitigating bias, and facilitating customized evaluations. Conversely, these innovations raise significant concerns regarding validity, reliability, transparency, fairness, equity, and test security, necessitating careful thinking when applying them in assessments. In this article, we discuss the impacts and implications of LLMs and generative AI on critical dimensions of assessment with example use cases and call for a community effort to equip assessment professionals with the needed AI literacy to harness the potential effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Evaluating ChatGPT as a viable research tool for typological investigations of cultural heritage artefacts—Roman clay oil lamps.
- Author
-
Lapp, Eric C. and Lapp, Louis W. P.
- Subjects
- *
CHATGPT , *CULTURAL property , *CLAY , *PETROLEUM , *CHATBOTS , *LAMPS - Abstract
This study evaluates the current viability of ChatGPT as a research tool in lychnology, a discipline of archaeology focusing on the study of light use and lamps in antiquity. Prompts applicable to a common cultural heritage artifact group—the Roman clay oil lamp—were entered in ChatGPT to test its capabilities in compiling, categorizing, describing, and identifying lamp types, and to assess how accurate, detailed, and knowledgeable its responses would be. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. LLMs and the Amazing Shrinking University.
- Author
-
Evron, Nir
- Subjects
- *
LANGUAGE models , *YOUNG adults , *GESTALT therapy , *UNIVERSITY & college admission - Abstract
The article discusses the potential impact of large language models (LLMs) on higher education, particularly in the humanities. The author suggests that LLMs have the potential to revolutionize teaching and learning by providing personalized and adaptive instruction. However, they also raise concerns about the future of universities and the humanities, as LLMs may lead to a contraction of the higher education system and a reevaluation of the value of a college degree. The author speculates that universities may become more specialized and focused on producing high-quality intellectual work, while the role of professors as inspiring teachers will remain important. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
48. ChatGPT and the Territory of Contemporary Narratology; or, A Rhetorical River Runs through It.
- Author
-
Phelan, James
- Subjects
- *
CHATGPT , *NARRATOLOGY , *LANGUAGE models , *GAZE - Abstract
This article examines the use of artificial intelligence (AI), specifically ChatGPT, in generating narrative texts. It discusses the difference between structuralist narratology and rhetorical narratology, emphasizing the impact of the latter on users' perception of AI-generated narratives. While ChatGPT can produce narratives with recognizable elements, it lacks the agency, purpose, and audience typically associated with human-authored narratives. Users often attribute these components to the AI-generated texts due to their own prompts. The article concludes that the rhetorical model captures an important aspect of narrative engagement. Additionally, the article explores the limitations of ChatGPT in generating unreliable narration, arguing that its text-centric approach fails to recognize the relationship between the author, narrator, and audience. The author contrasts ChatGPT's analysis of a passage from Sandra Cisneros's "Barbie-Q" with their own rhetorical analysis, highlighting the significance of shared knowledge between author and audience in conveying unreliability. Ultimately, the author suggests that understanding the communication of texts requires considering the broader context of author, audience, occasion, and purpose. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
49. Facsimile Machines.
- Author
-
Kirschenbaum, Matthew
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models - Abstract
The article discusses the impact of generative artificial intelligence, specifically ChatGPT, on the field of writing. The author explores the historical context of word processing and the anxieties surrounding new technologies. They argue that ChatGPT, with its ability to generate whole documents and genres of writing, represents a qualitative difference in writing technology. The author also raises concerns about the use of AI in writing, including issues of data mining, surveillance capitalism, environmental harm, and exploitative labor practices. They suggest that these technologies may lead to a post-alphabetic future where text loses its purpose as a format for human communication. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
50. On Artificial and Post-artificial Texts: Machine Learning and the Reader's Expectations of Literary and Non-literary Writing.
- Author
-
Bajohr, Hannes
- Subjects
- *
LANGUAGE models , *MACHINE learning , *TURING test , *CHATGPT - Abstract
With the advent of ChatGPT and other large language models, the number of artificial texts we encounter on a daily basis is about to increase substantially. This essay asks how this new textual situation may influence what one can call the "standard expectation of unknown texts," which has always included the assumption that any text is the work of a human being. As more and more artificial writing begins to circulate, the essay argues, this standard expectation will shift—first, from the immediate assumption of human authorship to, second, a creeping doubt: did a machine write this? In the wake of what Matthew Kirschenbaum has called the "textpocalypse," however, this state cannot be permanent. The author suggests that after this second transitional period, one may suspend the question of origins and, third, take on a post-artificial stance. One would then focus only on what a text says, not on who wrote it; post-artificial writing would be read with an agnostic attitude about its origins. This essay explores the implications of such post-artificiality by looking back to the early days of text synthesis, considering the limitations of aesthetic Turing tests, and indulging in reasoned speculation about the future of literary and nonliterary text generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF