2,243 results for "CHATGPT"
Search Results
2. Who Determines What Is Relevant? Humans or AI? Why Not Both?
- Author
- Faggioli, Guglielmo, Dietz, Laura, Clarke, Charles L. A., Demartini, Gianluca, Hagen, Matthias, Hauff, Claudia, Kando, Noriko, Kanoulas, Evangelos, Potthast, Martin, Stein, Benno, and Wachsmuth, Henning
- Subjects
- *HUMAN-artificial intelligence interaction, *ARTIFICIAL intelligence, *HUMAN-computer interaction, *ARTIFICIAL intelligence & society, *LANGUAGE models, *CHATGPT, *CHATBOTS
- Abstract
The article offers an opinion on how a human-artificial intelligence (AI) collaboration can assess relevance. Topics include improvements to large language models (LLMs) such as OpenAI's chatbot ChatGPT, the reasons why relying only on LLMs can be problematic, and how human judgement can improve fairness, efficiency, and effectiveness.
- Published
- 2024
3. The Science of Detecting LLM-Generated Text.
- Author
- Tang, Ruixiang, Chuang, Yu-Neng, and Hu, Xia
- Subjects
- *LANGUAGE models, *NATURAL language processing, *COMPUTATIONAL linguistics, *CHATGPT, *CHATBOTS, *ARTIFICIAL intelligence, *SEMANTIC computing
- Abstract
This research article focuses on the science of detecting large language model (LLM) generated text. The authors discuss the advancement of natural language generation (NLG) technology, including OpenAI's ChatGPT, and explain how the two detection methods, black-box detection and white-box detection, work to mitigate the potential misuse of LLMs.
- Published
- 2024
4. Generative Artificial Intelligence: 8 Critical Questions for Libraries.
- Author
- Bridges, Laurie M., McElroy, Kelly, and Welhouse, Zach
- Subjects
- *GENERATIVE artificial intelligence, *LANGUAGE models, *INTELLECTUAL freedom
- Abstract
In this article, we provide a brief overview of generative artificial intelligence (GenAI) and large language models (LLMs). We then propose eight critical questions that libraries should ask when exploring this technology and its implications for their communities. We argue that libraries have a unique role in facilitating informed and responsible use of GenAI, as well as safeguarding and promoting the values of access, privacy, and intellectual freedom. [ABSTRACT FROM AUTHOR]
- Published
- 2024
5. Can ChatGPT Learn Chinese or Swahili?
- Author
- Savage, Neil
- Subjects
- *CHATGPT, *LANGUAGE models, *LANGUAGE & languages, *JAPANESE language, *SYNTAX (Grammar)
- Abstract
The article discusses the performance of large language models (LLMs) like ChatGPT in languages other than English, highlighting challenges and potential solutions. Researchers found that LLMs struggle to mimic humans in languages like Japanese due to differences in syntax and writing styles, leading to concerns about their effectiveness in non-English languages. Efforts to improve LLM performance in other languages include innovative tokenization methods and leveraging search data to supplement training data, aiming to address the scarcity of language-specific training resources.
- Published
- 2024
6. Thus Spake ChatGPT: On the reliability of AI-based chatbots for science communication.
- Author
- Dutta, Subhabrata and Chakraborty, Tanmoy
- Subjects
- *CHATGPT, *STATISTICAL reliability, *SCIENTIFIC communication, *LANGUAGE models, *SCIENTIFIC knowledge
- Abstract
The article provides the author's perspective on the reliability of ChatGPT, an artificial intelligence (AI) chatbot built on a large language model (LLM), for the communication of science. Particular focus is given to the errors ChatGPT makes when communicating scientific knowledge to a general audience.
- Published
- 2023
7. Generative AI as a New Innovation Platform: Considering the stability and longevity of a potential new foundational technology.
- Author
- Cusumano, Michael A.
- Subjects
- *GENERATIVE artificial intelligence, *CHATGPT, *ARTIFICIAL neural networks, *LANGUAGE models, *GOVERNMENT regulation, *ACCURACY of information
- Abstract
This article presents generative artificial intelligence (AI) as a potential new innovation platform. It first recounts the history of generative AI, discussing Microsoft, OpenAI, and the use of neural networks and large language models. It then examines whether generative AI could meet the criteria of an innovation platform, and closes with a discussion of regulation and governance.
- Published
- 2023
8. Popping the chatbot hype balloon.
- Author
- Goudarzi, Sara
- Subjects
- *CHATBOTS, *CHATGPT, *ARTIFICIAL intelligence, *LANGUAGE models, *PERSONALLY identifiable information, *SCIENCE fiction
- Abstract
Since ChatGPT's release in November 2022, artificial intelligence has come into the spotlight. Inspiring both fascination and fear, chatbots have stirred debates among researchers, developers, and policy makers. The concerns range from concrete and tangible ones—which include replication of existing biases and discrimination at scale, harvesting personal data, and spreading misinformation—to more existential fears that their development will lead to machines with human-like cognitive abilities. Understanding how chatbots work and the human labor and data involved can better help evaluate the validity of concerns surrounding these systems, which although innovative, are hardly the stuff of science fiction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
9. Generative AI Degrades Online Communities.
- Author
- Burtch, Gordon, Lee, Dokyun, and Chen, Zhichen
- Subjects
- *VIRTUAL communities, *LANGUAGE models, *CHATGPT, *CHATBOTS, *INTERNET forums, *INTERNET users, *ONLINE comments, *VIRTUAL culture
- Abstract
The article focuses on how large language models (LLMs) are influencing online communities. The authors offer their opinion that generative artificial intelligence (AI) technologies such as ChatGPT are reducing user participation in knowledge communities and degrading the quality of the answers those communities provide.
- Published
- 2024
10. ChatGPT's performance evaluation for annotating multi-label text in Indonesian language.
- Author
- Hakim, M. Faris Al and Prasetiyo, Budi
- Subjects
- *CHATGPT, *INDONESIAN language, *LANGUAGE models, *SENTIMENT analysis, *ARTIFICIAL intelligence
- Abstract
The high demand for artificial intelligence applications in all fields makes appropriate datasets essential for building good models. Labeling datasets is one of the main tasks required before training, especially in sentiment analysis, and aspect-based sentiment analysis involves more labels than other tasks. Large volumes of data also raise processing costs, including the cost of the labeling process. These problems remain to be solved for multi-label annotation in Indonesian, a demanding task in which several labels must be assigned to each instance. ChatGPT, as a Large Language Model (LLM), has high potential to carry out the labeling process. In this study, ChatGPT-3.5 was examined as a labeler for aspect-based sentiment analysis in the Indonesian language. The CASA dataset, containing 1,080 rows, was used to evaluate the model's performance. The results show that comprehensive exploration is required to achieve optimal ChatGPT performance on multi-label classification in Indonesian. The findings have implications for the efficiency of the labeling process in multi-label cases, which otherwise require substantial effort to complete. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. Criticize my Code, not me: Using AI-Generated Feedback in Computer Science Teaching.
- Author
- Kunz, Sibylle and Steffen, Adrienne
- Subjects
- *COMPUTER science education, *CHATGPT, *LANGUAGE models, *WOMEN in higher education, *GENDER stereotypes, *TEACHER training
- Abstract
Large Language Models (LLMs) like ChatGPT can help teachers tailor learning tasks for their students, combining learning objectives and storytelling to raise interest in the subject. AI-based learning task design can support competency-based learning, especially for girls in STEM courses like computer science, where otherwise the "leaky STEM pipeline" (Speer 2023) leads to a constant loss of female students over their school years. LLMs support many steps of the learning-task creation cycle. One important step is the feedback process between teachers and students during and after solving the tasks. Students need person-related as well as process-related feedback to make progress. Problems sometimes occur when teachers give feedback in a way that embarrasses or hurts students. Female students in particular often need more confirmation to become aware of their progress, but studies show that boys demand and receive more attention from teachers in this situation. This is one of the many reasons why girls lose motivation and interest in STEM courses over time. Since male and female teachers differ in how they express feedback without being aware of it, it is necessary to raise their consciousness. LLMs like ChatGPT can be used in two scenarios here. The first is helping teachers formulate objective feedback in a way that is adequate and understandable for the target group (e.g., young girls or boys) in a specific situation. The second is training the teacher in a Socratic way, where the LLM simulates a student receiving the feedback and reacting to it according to established communication models such as the Four-Ears model by Schulz von Thun (Schulz von Thun 1981) or Berne's Transactional Analysis (Berne 1964).
This case study provides examples and prompting schemes for both scenarios and discusses the fragile balance between avoiding gender stereotypes in LLMs and giving more helpful and sustainable feedback for female students to foster self-esteem and competency-awareness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Assessment of the impacts of artificial intelligence (AI) on intercultural communication among postgraduate students in a multicultural university environment.
- Author
- Sarwari, Abdul Qahar, Javed, Muhammad Naeem, Mohd Adnan, Hamedi, and Abdul Wahab, Mohammad Nubli
- Abstract
Artificial intelligence (AI) broadly influences different aspects of human life, especially human communication. One of the main concerns about the broad use of AI in daily interactions among different people is whether it helps them interact easily or complicates their interactions. To answer this question, this study assessed the impacts of AI on intercultural communication among postgraduate students in a multicultural university environment. A newly developed survey instrument was used to conduct the study. The participants were 115 postgraduate students from nine different countries. Descriptive statistics, reliability analysis, and bivariate correlation tests in IBM SPSS (version 29) were used to analyze the quantitative data, and inductive coding and conceptual content analysis were used to code and analyze the qualitative data. Based on the descriptive results, the vast majority (93%) of the participants had already used and experienced AI in their daily lives, and most believed that AI and AI technologies connect different cultures, reduce language and cultural barriers, and help people of different cultures interact and stay connected. The correlation test showed strong positive correlations between AI attitudes and AI benefits, and also between AI regulation and AI benefits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Text-to-video generative artificial intelligence: sora in neurosurgery.
- Author
- Mohamed, Ali A. and Lucke-Wold, Brandon
- Subjects
- *GENERATIVE artificial intelligence, *LANGUAGE models, *NATURAL language processing, *COMPUTER vision, *ARTIFICIAL intelligence
- Abstract
Artificial intelligence (AI) has increased in popularity in neurosurgery, with recent interest in generative AI algorithms such as the Large Language Model (LLM) ChatGPT. Sora, an innovation in generative AI, leverages natural language processing, deep learning, and computer vision to generate impressive videos from text prompts. This new tool has many potential applications in neurosurgery, including patient education, public health, surgical training and planning, and research dissemination. However, the current model has considerable limitations, such as physically implausible motion generation, spontaneous generation of subjects, unnatural object morphing, inaccurate physical interactions, and abnormal behavior when many subjects are generated. Other typical concerns relate to patient privacy, bias, and ethics. Further investigation is also required to determine how effective generated videos are compared to their non-generated counterparts, irrespective of these limitations. Despite these challenges, Sora and other iterations of its text-to-video generative application may offer many benefits to the neurosurgical community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. ChatGPT for good? Taking ‘beneficence’ seriously in the regulation of generative artificial intelligence.
- Author
- Singh Chauhan, Krishna Deo
- Abstract
Generative AI platforms such as ChatGPT have found prominence in recent times with their ability to generate texts, images, and more. Several ethical and legal questions surrounding ChatGPT have arisen. In this paper, I discuss the nature and background of generative AI, situating its development in the historical context of AI. I then discuss my primary research questions: is sufficient attention paid in the literature on the ethics of AI to the principle of beneficence, and is there theoretical clarity on its meaning? I highlight that while there is a great deal of discussion of what harms can arise from generative AI and how to stop them, there is very little discussion of what amounts to AI-for-good, particularly in the literature on AI ethics and regulation. To the extent that such discussion exists, it pushes ahead with specific solutions without fully addressing the underlying question of what makes those prescriptions beneficent. I demonstrate that the principle of beneficence as understood in biomedical ethics and human rights frameworks has limited utility for the ethics of generative AI. These gaps in AI regulation are significant, as they can derail the long-term progress of generative AI and the realization of its full potential. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. Google, ChatGPT, questions of omniscience and wisdom.
- Author
- Hoffman, Frank J. and Iso, Klairung
- Abstract
The article explores how platforms like Google and ChatGPT, which claim omniscience and wisdom-like attributes, prompt philosophical questions. It revisits religious perspectives on omniscience and their influence on the pursuit of wisdom. The article suggests that while Google may offer compartmentalized omniscience based on user preferences, ChatGPT's imperfect factual accuracy challenges its characterization as omniscient. Nonetheless, ChatGPT can still help humans progress toward wisdom by integrating the co-creation of knowledge between humans and the unfolding of divine knowledge, drawing on insights from Process Thought and Buddhist epistemology. Notably, instead of offering definitive answers, the paper is written with a sense of deep humility to encourage ongoing inquiry and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Predictors of higher education students’ behavioural intention and usage of ChatGPT: the moderating roles of age, gender and experience.
- Author
- Arthur, Francis, Salifu, Iddrisu, and Abam Nortey, Sharon
- Abstract
The adoption and usage of ChatGPT among students are influenced by various factors, including individual characteristics such as age, gender, and experience with technology use. However, studies on the moderating roles of gender, age, and experience in predicting students’ behavioural intention and usage of ChatGPT are limited. This study employed the Unified Theory of Acceptance and Use of Technology (UTAUT2) model to examine the predictors of Higher Education (HE) students’ behavioural intention and usage of ChatGPT. The study employed a descriptive cross-sectional survey design with an adapted instrument to collect data from 486 students. Using the Partial Least Squares Structural Equation Modelling approach, the results showed that hedonic motivation, performance expectancy, effort expectancy, and social influence were significant predictors of students’ behavioural intention, whereas behavioural intention and facilitating conditions had significant influence on students’ actual use of ChatGPT. Age and gender were found to moderate the relationship between facilitating conditions and the use of ChatGPT. Lastly, experience moderated the relationship between habit and the use of ChatGPT, and the relationship between hedonic motivation and behavioural intention. These findings have implications for the design and implementation of ChatGPT in higher education towards enhancing students’ engagement and learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Use of large language models to optimize poison center charting.
- Author
- Matsler, Nikolaus, Pepin, Lesley, Banerji, Shireen, Hoyte, Christopher, and Heard, Kennon
- Abstract
Introduction: Efficient and complete medical charting is essential for patient care and research purposes. In this study, we sought to determine whether Chat Generative Pre-Trained Transformer could generate cogent, suitable charts from recorded, real-world poison center calls and abstract and tabulate data. Methods: De-identified transcripts of real-world hospital-initiated poison center consults were summarized by Chat Generative Pre-Trained Transformer 4.0. Additionally, Chat Generative Pre-Trained Transformer organized tables for data points, including vital signs, test results, therapies, and recommendations. Seven trained reviewers, including certified specialists in poison information and board-certified medical toxicologists, graded summaries on a 1-to-5 scale to determine appropriateness for entry into the medical record. Intra-rater reliability was calculated, and tabulated data were quantitatively evaluated for accuracy. Finally, reviewers selected their preferred documentation: the original or the Chat Generative Pre-Trained Transformer version. Results: Eighty percent of summaries had a median score high enough to be deemed appropriate for entry into the medical record. In three duplicate cases, reviewers changed scores, yielding moderate intra-rater reliability (kappa = 0.6). Among all cases, 91 percent of data points were correctly abstracted into table format. Discussion: By utilizing a large language model with a unified prompt, charts can be generated directly from conversations in seconds without the need for additional training. Charts generated by Chat Generative Pre-Trained Transformer were preferred over extant charts, even when they were deemed unacceptable for entry into the medical record prior to the correction of errors. However, there were several limitations to our study, including poor intra-rater reliability and a limited number of cases examined. Conclusions: In this study, we demonstrate that large language models can generate coherent summaries of real-world poison center calls that are often acceptable for entry into the medical record as is. When errors were present, they were often fixed with the addition or deletion of a word or phrase, presenting an enormous opportunity for efficiency gains. Our future work will focus on implementing this process in a prospective fashion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Exploring ChatGPT as a writing assessment tool.
- Author
- Bucol, Junifer Leal and Sangkawong, Napattanissa
- Abstract
This research paper employs an exploratory framework to evaluate the potential of ChatGPT as an Automated Writing Evaluation (AWE) tool in teaching English as a Foreign Language (EFL) in Thailand. The main objective is to investigate how well ChatGPT can assess students' writing using prompts and pre-defined rubrics compared to human raters. Moreover, the study examines its strengths and weaknesses as an assessment tool by analysing the teachers' reflections during the assessment process. Quantitative analyses revealed significant relationships between the ratings from trial accounts and the human ratings. Qualitative analysis unearthed patterns in the feedback, shedding light on ChatGPT's strengths and its limitations as an AWE tool. ChatGPT displays substantial promise as an AWE tool, offering distinct features such as a human-like interface, consistency, efficiency, and scalability. Nonetheless, educators must be cognisant of its limitations. This study recognises that the strategic use of ChatGPT could enhance the evaluation process among teachers and foster the development of EFL students' written communication skills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. ChatGPT-facilitated professional development: evidence from professional trainers’ learning achievements, self-worth, and self-confidence.
- Author
- Chang, Chun-Chun and Hwang, Gwo-Jen
- Abstract
Professional trainers play an important role in helping new recruits adapt to the workplace. In general hospitals, training courses for clinical teachers still adopt the lecture method. Such a teacher training approach focuses on delivering knowledge and skills, while training in case teaching, as well as support for teachers' self-worth and self-confidence, can be insufficient. To address this problem, the present study proposed a ChatGPT-based training mode (ChatGPT-TM) for professional development. To verify its effects, we conducted an experiment in a "Using ChatGPT in Case Teaching" course for clinical teachers in hospitals, and explored their learning achievement, self-worth, self-confidence, and learning perceptions under the ChatGPT training mode (ChatGPT-TM) and the conventional training mode (C-TM). The results showed that the ChatGPT-TM effectively enhanced clinical teachers' learning achievement in case teaching, self-worth, and self-confidence in comparison with the C-TM. The main contribution of this study is revealing that ChatGPT allowed clinical teachers to carry out reflection, verify references, and integrate theory and practice, which improved their learning achievement, made them realize their self-worth, and increased their self-confidence in performing professional training tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. Protecting older consumers in the digital age: a commentary on ChatGPT, helplines and the way to prevent accessible fraud.
- Author
- Segal, Michal
- Abstract
Older people are often targeted by fraudsters due to their unique characteristics and vulnerabilities. Being a victim of exploitation can lead to negative emotional and financial consequences. The purpose of this commentary is to present ChatGPT's potential to provide accessible information and support that helps older consumers protect themselves when confronted with exploitation, to address the limitations of ChatGPT, and to propose solutions for overcoming these limitations. Integrating tailored human and technological solutions, such as helplines and AI chatbots, and involving older adults in development is crucial. By providing adequate training and support, the goal of ensuring accessibility for all users can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. ChatGPT: A Conceptual Review of Applications and Utility in the Field of Medicine.
- Author
- Rao, Shiavax J., Isath, Ameesh, Krishnan, Parvathy, Tangsrivimol, Jonathan A., Virk, Hafeez Ul Hassan, Wang, Zhen, Glicksberg, Benjamin S., and Krittanawong, Chayakrit
- Subjects
- *ELDER care, *WEIGHT loss, *MEDICAL education, *MENTAL health, *ARTIFICIAL intelligence, *DECISION making in clinical medicine, *PATIENT care, *MEDICAL students, *PARADIGMS (Social sciences), *MEDICAL research, *PHYSICAL fitness, *MEDICATION therapy management, *CONCEPTUAL structures, *MEDICINE, *INDIVIDUALIZED medicine, *HUMAN error, *NUTRITION, *PHYSICAL activity
- Abstract
Artificial Intelligence, specifically advanced language models such as ChatGPT, have the potential to revolutionize various aspects of healthcare, medical education, and research. In this narrative review, we evaluate the myriad applications of ChatGPT in diverse healthcare domains. We discuss its potential role in clinical decision-making, exploring how it can assist physicians by providing rapid, data-driven insights for diagnosis and treatment. We review the benefits of ChatGPT in personalized patient care, particularly in geriatric care, medication management, weight loss and nutrition, and physical activity guidance. We further delve into its potential to enhance medical research, through the analysis of large datasets, and the development of novel methodologies. In the realm of medical education, we investigate the utility of ChatGPT as an information retrieval tool and personalized learning resource for medical students and professionals. There are numerous promising applications of ChatGPT that will likely induce paradigm shifts in healthcare practice, education, and research. The use of ChatGPT may come with several benefits in areas such as clinical decision making, geriatric care, medication management, weight loss and nutrition, physical fitness, scientific research, and medical education. Nevertheless, it is important to note that issues surrounding ethics, data privacy, transparency, inaccuracy, and inadequacy persist. Prior to widespread use in medicine, it is imperative to objectively evaluate the impact of ChatGPT in a real-world setting using a risk-based approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. Determinants of ChatGPT Use and its Impact on Learning Performance: An Integrated Model of BRT and TPB.
- Author
- Al-Qaysi, Noor, Al-Emran, Mostafa, Al-Sharafi, Mohammed A., Iranmanesh, Mohammad, Ahmad, Azhana, and Mahmoud, Moamin A.
- Abstract
The rapid emergence of Generative Artificial Intelligence (GAI) heralds a significant shift, opening new frontiers in how education is delivered. This groundbreaking wave of technological advancement is poised to redefine traditional learning, promising to enhance the educational landscape with unprecedented levels of personalized learning and accessibility. Despite GAI's progressive infiltration into various educational strata, limited empirical research exists on its impact on students' learning performance. Drawing on the Theory of Planned Behavior (TPB) and Behavioral Reasoning Theory (BRT), this study investigates the determinants affecting students' use of ChatGPT and its influence on learning performance. The data were collected from 357 university students and were analyzed using the PLS-SEM technique. The results supported the role of ChatGPT in positively affecting students' learning performance. In addition, the results showed that reasons for and against adoption are pivotal in shaping students' attitudes. ChatGPT use is found to be significantly affected by attitudes, subjective norms, and perceived behavioral control. Besides the theoretical contributions, the findings offer various implications for stakeholders and underscore the necessity for educational institutions to foster a conducive environment for GAI adoption, addressing ethical and technical concerns to optimize learning experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
23. Co-journeying with ChatGPT in tertiary education: identity transformation of EMI teachers in Taiwan.
- Author
- Tsou, Wenli, Lin, Angel M. Y., and Chen, Fay
- Abstract
This paper responds to the prevalent discourse on the 'lack of English proficiency' among EMI teachers in contexts where English is used as an additional language. It explores how AI-powered tools, particularly ChatGPT, enable EMI teachers in Taiwan to leverage their expertise and teach in English with more confidence. This study first describes the training of an EMI PD programme. It then reports on the challenges and strategies, and the extended and innovative use by EMI teachers after the training. By reporting on the experiences of three EMI teachers, this study shows how collaborating with ChatGPT contributes to developing an empowered EMI teacher identity. This study fills a research gap, shedding light on how collaboration with ChatGPT can transform content teachers' identities into an EMI teacher identity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
24. Human–computer pragmatics trialled: some (im)polite interactions with ChatGPT 4.0 and the ensuing implications.
- Author
- Quan, Zhi and Chen, Zhiwei
- Abstract
Following rapid evolution, ChatGPT can now generate content that is linguistically accurate and logically sound while sidestepping ethical, social and legal concerns. This research investigates whether ChatGPT employs different pragmatic strategies in its responses to (im)polite questions. In our experiment, the AI-powered tool was instructed to answer 200 self-made questions spanning four (im)politeness levels, and the 200 responses were collected for linguistic and sentiment analysis. Triangulated data, together with typical examples, show that ChatGPT tends to give shorter and less positive answers to less polite questions, appearing less responsive when confronted with more blunt and offensive inquiries. This, to some extent, resembles how human beings react when treated impolitely. A tentative explanation may be that, given its nature as a large language model, ChatGPT mirrors human interaction in various scenarios and draws on prevalent human communication tendencies. Thus, interacting with ChatGPT is more of a human-society interaction than human-machine communication in the real sense. Our research sheds light on the coined "human-machine pragmatics", i.e. how humans can best communicate with computers for the best informative and affective outcomes. Implications for language education are discussed at the end. [ABSTRACT FROM AUTHOR]
- Published
- 2024
25. Clinical and Surgical Applications of Large Language Models: A Systematic Review.
- Author
- Pressman, Sophia M., Borna, Sahar, Gomez-Cabello, Cesar A., Haider, Syed Ali, Haider, Clifton R., and Forte, Antonio Jorge
- Abstract
Background: Large language models (LLMs) represent a recent advancement in artificial intelligence with medical applications across various healthcare domains. The objective of this review is to highlight how LLMs can be utilized by clinicians and surgeons in their everyday practice. Methods: A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Six databases were searched to identify relevant articles. Eligibility criteria emphasized articles focused primarily on clinical and surgical applications of LLMs. Results: The literature search yielded 333 results, with 34 meeting eligibility criteria. All articles were from 2023. There were 14 original research articles, four letters, one interview, and 15 review articles. These articles covered a wide variety of medical specialties, including various surgical subspecialties. Conclusions: LLMs have the potential to enhance healthcare delivery. In clinical settings, LLMs can assist in diagnosis, treatment guidance, patient triage, physician knowledge augmentation, and administrative tasks. In surgical settings, LLMs can assist surgeons with documentation, surgical planning, and intraoperative guidance. However, addressing their limitations and concerns, particularly those related to accuracy and biases, is crucial. LLMs should be viewed as tools to complement, not replace, the expertise of healthcare professionals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. New Towers of Babel: Faith and Doubt in the Future of Translation.
- Author
-
Long, Hoyt
- Subjects
- *
MACHINE translating , *TRANSLATING & interpreting , *GENERATIVE artificial intelligence , *LANGUAGE models , *SOCIAL media - Abstract
This article examines the impact of machine translation and generative AI on literary translation. It presents two approaches to engaging with machine translation: coordinated friction and playful experimentation. The article discusses the capabilities and limitations of language models, specifically GPT-4, in translation. It explores the potential of playful experimentation with language models to expand the understanding of translation as a cultural medium, while also emphasizing the need for skepticism and critical examination. The article highlights the importance of understanding the relationship between humans and machines in translation and calls for further investigation and documentation of their real-world effects. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
27. On Artificial and Post-artificial Texts: Machine Learning and the Reader's Expectations of Literary and Non-literary Writing.
- Author
-
Bajohr, Hannes
- Subjects
- *
LANGUAGE models , *MACHINE learning , *TURING test , *CHATGPT - Abstract
With the advent of ChatGPT and other large language models, the number of artificial texts we encounter on a daily basis is about to increase substantially. This essay asks how this new textual situation may influence what one can call the "standard expectation of unknown texts," which has always included the assumption that any text is the work of a human being. As more and more artificial writing begins to circulate, the essay argues, this standard expectation will shift—first, from the immediate assumption of human authorship to, second, a creeping doubt: did a machine write this? In the wake of what Matthew Kirschenbaum has called the "textpocalypse," however, this state cannot be permanent. The author suggests that after this second transitional period, one may suspend the question of origins and, third, take on a post-artificial stance. One would then focus only on what a text says, not on who wrote it; post-artificial writing would be read with an agnostic attitude about its origins. This essay explores the implications of such post-artificiality by looking back to the early days of text synthesis, considering the limitations of aesthetic Turing tests, and indulging in reasoned speculation about the future of literary and nonliterary text generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Borges and AI.
- Author
-
Raley, Rita and Samolsky, Russell
- Subjects
- *
ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *NATURAL language processing , *LANGUAGE models , *BEREAVEMENT , *ZENO'S paradoxes - Abstract
The article "Borges and AI" explores the connection between the writings of Jorge Luis Borges and artificial intelligence (AI). It discusses how Borges's story "Borges and I" foreshadows poststructuralist theories of writing and the rise of large language models (LLMs). The article delves into the themes of fictional capture and the potential for AI to surpass human creativity. It also examines the implications of AI for creative and critical writers, highlighting the challenges and uncertainties it brings. The text considers different perspectives on the relationship between human authors and generative AI models, as well as the impact of AI on education and student writing. The authors stress the importance of human involvement in textual analysis and the preservation of human authorship in the face of advancing AI technology. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
29. LLMs and the Amazing Shrinking University.
- Author
-
Evron, Nir
- Subjects
- *
LANGUAGE models , *YOUNG adults , *GESTALT therapy , *UNIVERSITY & college admission - Abstract
The article discusses the potential impact of large language models (LLMs) on higher education, particularly in the humanities. The author suggests that LLMs have the potential to revolutionize teaching and learning by providing personalized and adaptive instruction. However, they also raise concerns about the future of universities and the humanities, as LLMs may lead to a contraction of the higher education system and a reevaluation of the value of a college degree. The author speculates that universities may become more specialized and focused on producing high-quality intellectual work, while the role of professors as inspiring teachers will remain important. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
30. Facsimile Machines.
- Author
-
Kirschenbaum, Matthew
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models - Abstract
The article discusses the impact of generative artificial intelligence, specifically ChatGPT, on the field of writing. The author explores the historical context of word processing and the anxieties surrounding new technologies. They argue that ChatGPT, with its ability to generate whole documents and genres of writing, represents a qualitative difference in writing technology. The author also raises concerns about the use of AI in writing, including issues of data mining, surveillance capitalism, environmental harm, and exploitative labor practices. They suggest that these technologies may lead to a post-alphabetic future where text loses its purpose as a format for human communication. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
31. AI Comes for the Author.
- Author
-
Elkins, Katherine
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *GENERATIVE artificial intelligence , *GEMINI (Chatbot) , *COMPUTATIONAL intelligence - Abstract
This article explores the impact of artificial intelligence (AI) on text interpretation and the role of authors. It discusses the debates surrounding whether AI language generators can truly understand language or if they simply mimic it. While early models required skillful prompting, newer models have made the process easier. The article examines the capabilities and limitations of large language models (LLMs) like GPT models, highlighting their ability to prioritize meaning over grammar and syntax and their potential to create metaphors and similes. It also acknowledges the ethical concerns and dangers associated with AI development but emphasizes the exciting possibilities it presents. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
32. "Don't Ban AI from Your Writing Classroom; Require It!".
- Author
-
Hayles, N. Katherine
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *NATURAL language processing , *STUDENT cheating , *COLLEGE students , *CITATION networks - Abstract
The article discusses the use of OpenAI's ChatGPT, a large language model, in college and university writing classrooms. While some educators are concerned about students using AI to pass off their work as their own, the author argues that instead of banning AI, institutions should embrace it as a tool to accelerate student learning. The author suggests designing assignments that encourage students to develop critical relationships with algorithmic cultures and to transparently show their contributions versus what the AI contributed. The article emphasizes the importance of process-oriented assignments, collaboration, and intellectual honesty in evaluating student learning. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
33. Phantoms of Citation: AI and the Death of the Author-Function.
- Author
-
Slater, Avery
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *GENERATIVE artificial intelligence , *HONESTY , *GENERATIVE pre-trained transformers , *MIND-wandering , *CHATGPT - Abstract
This article examines the problem of fabricated citations in AI-generated writing, focusing on large language models like ChatGPT. The author argues that these fake citations reveal issues with both the processing of natural language training data and writing in general. The implications of these false citations for the future of writing and the ethics of accreditation are explored, as well as the limitations and potential dangers of ChatGPT. The article discusses concerns in various fields, such as finance and medicine, and addresses the debate surrounding AI-generated text in scholarly manuscripts and the legal responsibilities of authors. It concludes by discussing the nature of language models and their relationship to human language, suggesting that they are fulfilling poststructuralist predictions about the future of literature. The article also explores the use of LLMs specifically in generating text and the issues that arise from it. It introduces the concept of "hallucination" to describe the inaccuracies and fabricated citations produced by AI models, while also discussing the controversy surrounding this term and proposing alternative designations. The challenges posed by AI-generated text and their implications for authorship, plagiarism, and privacy are highlighted, emphasizing the need for ethical considerations and a deeper understanding of the role of AI in textual technologies. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
34. ChatGPT and the Territory of Contemporary Narratology; or, A Rhetorical River Runs through It.
- Author
-
Phelan, James
- Subjects
- *
CHATGPT , *NARRATOLOGY , *LANGUAGE models , *GAZE - Abstract
This article examines the use of artificial intelligence (AI), specifically ChatGPT, in generating narrative texts. It discusses the difference between structuralist narratology and rhetorical narratology, emphasizing the impact of the latter on users' perception of AI-generated narratives. While ChatGPT can produce narratives with recognizable elements, it lacks the agency, purpose, and audience typically associated with human-authored narratives. Users often attribute these components to the AI-generated texts due to their own prompts. The article concludes that the rhetorical model captures an important aspect of narrative engagement. Additionally, the article explores the limitations of ChatGPT in generating unreliable narration, arguing that its text-centric approach fails to recognize the relationship between the author, narrator, and audience. The author contrasts ChatGPT's analysis of a passage from Sandra Cisneros's "Barbie-Q" with their own rhetorical analysis, highlighting the significance of shared knowledge between author and audience in conveying unreliability. Ultimately, the author suggests that understanding the communication of texts requires considering the broader context of author, audience, occasion, and purpose. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
35. ChatGPT and the Writing of Philosophical Essays: An Experimental Study with Prospective Teachers on How the Turing Test Inverted.
- Author
-
Bohlmann, Markus and Berger, Annika M.
- Subjects
- *
CHATGPT , *WRITING instruction , *ARTIFICIAL intelligence , *SEMANTICS , *MIXED methods research - Abstract
Text-generative AI systems have become important semantic agents with ChatGPT. We conducted a series of experiments to learn what teachers' conceptions of text-generative AI are in relation to philosophical texts. In our main experiment, using mixed methods, we had twenty-four high school students write philosophical essays, which we then randomly interleaved with essays generated by ChatGPT from the same prompt. We had ten prospective teachers assess these essays. They were able to tell whether an essay was written by the AI or a student with 78.7 percent accuracy, which is better than the OpenAI classifier. Interestingly, however, they used criteria like argumentative and logical flawlessness and neutrality. We concluded from this that they were applying an inverted Turing test: no longer looking for rationality in machines but for irrationality in humans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Writing with ChatGPT.
- Author
-
Mouser, Ricky
- Subjects
- *
CHATGPT , *LANGUAGE models , *STUDENT assignments , *PLAGIARISM , *WRITING instruction - Abstract
Many instructors see the use of LLMs like ChatGPT on course assignments as a straightforward case of cheating, and try hard to prevent their students from doing so by including new warnings of consequences on their syllabi, turning to iffy plagiarism detectors, or scheduling exams to occur in-class. And the use of LLMs probably is cheating, given the sorts of assignments we are used to giving and the sorts of skills we take ourselves to be instilling in our students. But despite legitimate ethical and pedagogical concerns, the case that LLMs should never be used in academic contexts is quite difficult to see. Many primary and secondary schools are cutting back their writing instruction in an effort to teach to the test; at the same time, many high-end knowledge workers are already quietly expected to leverage their productivity with LLMs. To prepare students for an ever-changing world, we probably do have to teach them at least a little bit about writing with ChatGPT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Student Voices on GPT-3, Writing Assignments, and the Future College Classroom.
- Author
-
Kim, Bada, Robins, Sarah, and Huang, Jihui
- Subjects
- *
CHATGPT , *LANGUAGE models , *WRITING instruction , *STUDENT assignments , *HIGHER education - Abstract
This paper presents a summary and discussion of an assignment that asked students about the impact of Large Language Models on their college education. Our analysis summarizes students' perception of GPT-3, categorizes their proposals for modifying college courses, and identifies their stated values about their college education. Furthermore, this analysis provides a baseline for tracking students' attitudes toward LLMs and contributes to the conversation on student perceptions of the relationship between writing and philosophy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Don't Believe the Hype: Why ChatGPT May Breathe New Life into College Writing Instruction.
- Author
-
Mitchell-Yellin, Benjamin
- Subjects
- *
CHATGPT , *WRITING instruction , *HIGHER education , *LANGUAGE models , *TEACHING - Abstract
This paper argues that the threat Large Language Models (LLMs), such as ChatGPT, pose to writing instruction is both not entirely new and a welcome disruption to the way writing instruction is typically delivered. This new technology seems to be prompting many instructors to question whether essay responses to paper prompts reflect students' own thinking and learning. This uneasiness is long overdue, and the hope is it leads instructors to explore evidence-based best practices familiar from the scholarship of teaching and learning. We've known for some time how to better teach our students to think and write. Perhaps the arrival of LLMs will get us to put these lessons into widespread practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Exploring the competence of ChatGPT for customer and patient service management.
- Author
-
Haleem, Abid, Javaid, Mohd, and Singh, Ravi Pratap
- Subjects
- *
CHATGPT , *CUSTOMER services , *ARTIFICIAL intelligence , *MEDICAL care , *TECHNOLOGICAL innovations - Abstract
The modern language generation model ChatGPT, created by OpenAI, is recognised for its capacity to comprehend context and produce pertinent content. The model is built on the transformer architecture, which enables it to process massive volumes of data and produce text that is both cohesive and illuminating. Service is a crucial component everywhere, as it provides the basis for establishing client rapport and offering aid and support. In healthcare, the application of ChatGPT for patient service support has been one of the most significant advances in recent years. ChatGPT can help overcome language obstacles and improve patient satisfaction by facilitating communication with healthcare personnel and understanding of care. It can enhance the entire patient experience by offering personalised information and support and by making it more straightforward for patients to communicate with healthcare professionals. It can also expedite and streamline service by responding to customers promptly and accurately. Businesses of all sizes increasingly use ChatGPT, since it allows them to provide 24/7 customer support without requiring human contact. This paper briefly discusses ChatGPT and the need for better services, presents various perspectives on improving customer and patient services through ChatGPT, and examines the major key enablers of ChatGPT for refining customer and patient assistance. Further, the paper identifies and discusses the critical application areas of ChatGPT for customer and patient service. With its ability to handle several requests simultaneously, respond quickly and accurately to client questions, and learn from every interaction, ChatGPT is revolutionising customer and patient service. Its accessibility and compatibility with various communication channels make it a desirable solution for businesses looking to improve support. As technology advances, ChatGPT is positioned to become an essential tool for businesses wishing to provide speedy and customised service. Although ChatGPT may give convincing answers, the risk of providing inaccurate or outdated information poses a problem for its use in service jobs that require accurate and up-to-date information. In future, various services will become better and more efficient thanks to ChatGPT and AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. How to optimize the systematic review process using AI tools.
- Author
-
Fabiano, Nicholas, Gupta, Arnav, Bhambra, Nishaant, Luu, Brandon, Wong, Stanley, Maaz, Muhammad, Fiedorowicz, Jess G., Smith, Andrew L., and Solmi, Marco
- Subjects
- *
ARTIFICIAL intelligence , *ACADEMIC discourse , *CHATGPT - Abstract
Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow for gaps in the literature to be identified and provide direction for future research. However, due to the ever‐increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time‐consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion or exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that, to ensure replicability, authors must report in their methods section every AI tool used at each stage. [ABSTRACT FROM AUTHOR]
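The title-and-abstract screening stage mentioned above can be made concrete with a toy sketch. The rule-based filter below illustrates the task that AI tools automate; it is not any specific tool's method, and the records and criteria are invented examples.

```python
# Toy title/abstract screening against keyword criteria. This is an
# illustration of the screening task, not any published tool's algorithm;
# the records and criteria below are invented.

def screen(record, include, exclude):
    """Keep a record if it matches an inclusion term and no exclusion term."""
    text = (record["title"] + " " + record["abstract"]).lower()
    if any(term in text for term in exclude):
        return False
    return any(term in text for term in include)

records = [
    {"title": "ChatGPT in systematic reviews",
     "abstract": "We screen abstracts with an LLM."},
    {"title": "A case report",
     "abstract": "Single-patient case report on rhinosinusitis."},
]
include = ["systematic review", "llm", "screen"]
exclude = ["case report"]

kept = [r for r in records if screen(r, include, exclude)]
```

In practice an LLM-based screener replaces the keyword test with a model judgment, but the surrounding workflow (apply explicit criteria, log every decision for the PRISMA flow diagram) is the same.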
- Published
- 2024
- Full Text
- View/download PDF
41. "You act as Human, and I will act as AI": Technological Rehearsals at the Interface.
- Author
-
Fang, Kathy
- Subjects
- *
NATURAL language processing , *ARTIFICIAL intelligence , *CHATGPT , *CHATBOTS , *REHEARSALS - Abstract
Chatbots and natural language processing tools have emerged as a ubiquitous yet exceptional development of algorithmic performativity. The release of ChatGPT on 30 November 2022 signaled a sea change in language-learning technological-performative relations. ChatGPT programs human knowledge as a stylized, computational performance and rehearses the human as technological. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Twelve tips on creating and using custom GPTs to enhance health professions education.
- Author
-
Masters, Ken, Benjamin, Jennifer, Agrawal, Anoop, MacNeill, Heather, Pillow, M. Tyson, and Mehta, Neil
- Subjects
- *
SCHOOL environment , *MEDICAL personnel , *ELECTRONIC security systems , *DATABASE management , *MEDICAL education , *COMPUTER software , *MACHINE learning , *CLINICAL education , *PROFESSIONAL competence - Abstract
The custom GPT is the latest powerful feature added to ChatGPT. Non-programmers can create and share their own GPTs ("chat bots"), allowing health professions educators to apply the capabilities of ChatGPT to create administrative assistants, online tutors, virtual patients, and more, to support their clinical and non-clinical teaching environments. Doing this correctly, however, requires some skills, which this twelve-tips paper provides: we explain how to construct data sources, build relevant GPTs, and apply basic security measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. ChatGPT‐4 performance in rhinology: A clinical case series.
- Author
-
Radulesco, Thomas, Saibene, Alberto Maria, Michel, Justin, Vaira, Luigi Angelo, and Lechien, Jérôme R.
- Subjects
- *
CHATGPT , *GENERATIVE pre-trained transformers , *NOSE , *CHATBOTS - Abstract
Key points: Chatbot Generative Pre‐trained Transformer (ChatGPT)‐4 indicated more than twice as many additional examinations as practitioners in the management of clinical cases in rhinology. The consistency between ChatGPT‐4 and practitioners in the indication of additional examinations may vary significantly from one examination to another. ChatGPT‐4 proposed a plausible and correct primary diagnosis in 62.5% of cases, while pertinent and necessary additional examinations and therapeutic regimens were indicated in 7.5%–30.0% and 7.5%–32.5% of cases, respectively. The stability of ChatGPT‐4 responses is moderate‐to‐high. The performance of ChatGPT‐4 was not influenced by the human‐reported level of difficulty of clinical cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Utility of a LangChain and OpenAI GPT‐powered chatbot based on the international consensus statement on allergy and rhinology: Rhinosinusitis.
- Author
-
Workman, Alan D., Rathi, Vinay K., Lerner, David K., Palmer, James N., Adappa, Nithin D., and Cohen, Noam A.
- Subjects
- *
CHATBOTS , *NOSE , *SINUSITIS , *LANGUAGE models , *ALLERGIES - Abstract
Key points: We created a LangChain/OpenAI API‐powered chatbot based solely on the International Consensus Statement of Allergy and Rhinology: Rhinosinusitis (ICAR‐RS). The ICAR‐RS chatbot is able to provide direct and actionable recommendations. Utilization of consensus statements provides an opportunity for AI applications in healthcare. [ABSTRACT FROM AUTHOR]
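The grounding idea behind such a chatbot can be sketched without the LangChain/OpenAI stack: retrieve the most relevant passage from the source document, then constrain the model's answer to that excerpt. The sketch below is a minimal illustration of this retrieval pattern, not the authors' implementation, and the guideline snippets are invented placeholders rather than ICAR-RS text.

```python
import re

def tokenize(text):
    """Lowercased word set; a stand-in for real embeddings."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, passages):
    """Return the passage with the largest word overlap with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

def build_prompt(question, passages):
    """Constrain a (hypothetical) model call to the retrieved excerpt only."""
    context = retrieve(question, passages)
    return (f"Answer using only this guideline excerpt:\n{context}\n\n"
            f"Question: {question}")

# Invented placeholder snippets, NOT text from ICAR-RS.
passages = [
    "Saline irrigation is recommended for chronic rhinosinusitis.",
    "Oral antihistamines are an option for allergic rhinitis.",
]

prompt = build_prompt(
    "Is saline irrigation recommended for chronic rhinosinusitis?", passages)
```

A production pipeline such as the authors' would use embedding-based retrieval over chunked consensus text and send the assembled prompt to an LLM; the keyword overlap here only stands in for that similarity search.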
- Published
- 2024
- Full Text
- View/download PDF
45. Comparing the performance of ChatGPT GPT‐4, Bard, and Llama‐2 in the Taiwan Psychiatric Licensing Examination and in differential diagnosis with multi‐center psychiatrists.
- Author
-
Li, Dian‐Jeng, Kao, Yu‐Chen, Tsai, Shih‐Jen, Bai, Ya‐Mei, Yeh, Ta‐Chuan, Chu, Che‐Sheng, Hsu, Chih‐Wei, Cheng, Szu‐Wei, Hsu, Tien‐Wei, Liang, Chih‐Sung, and Su, Kuan‐Pin
- Subjects
- *
GENERATIVE pre-trained transformers , *CHATGPT , *LANGUAGE models , *DIFFERENTIAL diagnosis , *PSYCHIATRISTS , *PROFESSIONAL licensure examinations - Abstract
Aim: Large language models (LLMs) have been suggested to play a role in medical education and medical practice. However, the potential of their application in the psychiatric domain has not been well studied. Method: In the first step, we compared the performance of ChatGPT GPT‐4, Bard, and Llama‐2 in the 2022 Taiwan Psychiatric Licensing Examination, conducted in traditional Mandarin. In the second step, we compared the scores of these three LLMs with those of 24 experienced psychiatrists on 10 advanced clinical scenario questions designed for psychiatric differential diagnosis. Result: Only GPT‐4 passed the 2022 Taiwan Psychiatric Licensing Examination (scoring 69, with ≥ 60 considered a passing grade), while Bard scored 36 and Llama‐2 scored 25. GPT‐4 outperformed Bard and Llama‐2, especially in the areas of 'Pathophysiology & Epidemiology' (χ2 = 22.4, P < 0.001) and 'Psychopharmacology & Other therapies' (χ2 = 15.8, P < 0.001). In the differential diagnosis, the mean score of the 24 experienced psychiatrists (mean 6.1, standard deviation 1.9) was higher than that of GPT‐4 (5), Bard (3), and Llama‐2 (1). Conclusion: Compared to Bard and Llama‐2, GPT‐4 demonstrated superior abilities in identifying psychiatric symptoms and making clinical judgments. Moreover, GPT‐4's ability in differential diagnosis closely approached that of the experienced psychiatrists. Among the three LLMs, GPT‐4 showed promising potential as a valuable tool in psychiatric practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A Case for Caution: Patient Use of Artificial Intelligence.
- Author
-
Stewart, Lisa, Patterson, Wesley G., Farrell, Christopher L., and Withycombe, Janice S.
- Subjects
- *
NURSES , *ADENOCARCINOMA , *CONTINUING education units , *OCCUPATIONAL roles , *ARTIFICIAL intelligence , *PROTEIN-tyrosine kinase inhibitors , *PATIENT advocacy , *INFORMATION resources , *PATIENT decision making , *LUNG cancer , *CANCER patient psychology , *GENETIC testing , *INFORMATION-seeking behavior - Abstract
Artificial intelligence use is increasing exponentially, including by patients in medical decision-making. Because of the limitations of chatbots and the possibility of receiving erroneous or incomplete information, patient education is a necessity. Nurses can advocate for patients by emphasizing the importance of conferring with oncology professionals before making decisions based solely on self-investigation using artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Artificial Intelligence Augmented Qualitative Analysis: The Way of the Future?
- Author
-
Hitch, Danielle
- Subjects
- *
LANGUAGE & languages , *DOCUMENTATION , *QUALITATIVE research , *DATA analysis , *ARTIFICIAL intelligence , *POST-acute COVID-19 syndrome , *NATURAL language processing , *THEMATIC analysis , *MACHINE learning , *RESEARCH ethics - Abstract
The artificial intelligence (AI) revolution is here and gathering momentum, thanks to new models of natural language processing (NLP) and rapidly increasing adoption by the public. NLP technology uses statistical analysis of language structures to analyse and generate human language, using text or speech as its source material. It can also be applied to visual mediums like images and videos. A few early adopters in qualitative research are beginning to incorporate this technology into their work, but our understanding of its potential remains in its infancy. This article will define and describe NLP-based AI and discuss its benefits and limitations for reflexive thematic analysis in health research. While there are many platforms available, ChatGPT is the most well-known and accessible. A worked example using ChatGPT to augment reflexive thematic analysis is provided to illustrate potential application in practice. This article is intended to inspire further conversation around the role of AI in qualitative research and offer practical guidance for researchers seeking to adopt this technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. SELL ME THIS ARTIFICIAL PEN: USING CHATGPT TO ENHANCE SALES ROLE PLAYS.
- Author
-
Milovic, Alex, Das Gyomlai, Moumita, Spaid, Brian, and Dingus, Rebecca
- Subjects
- *
CHATGPT , *CHATBOTS , *ARTIFICIAL intelligence , *ROLE playing - Abstract
The recent popularity of ChatGPT and artificial intelligence chatbots presents both challenges and opportunities for incorporating this modern technology in the classroom. This paper introduces an activity that uses ChatGPT to help students practice their sales role-play skills. The benefits of using this AI chatbot for role-play training include allowing students to practice whenever convenient and to react to a chatbot taking on the personas of different buyer types. Survey results demonstrate the effectiveness of this training method both for role-play practice and for building general familiarity with ChatGPT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Becoming-with-AI: Rethinking Professional Knowledge Through Generative Knowing.
- Author
-
Lim, Ahreum and Nicolaides, Aliki
- Subjects
- *
PROFESSIONAL practice , *INTERPROFESSIONAL relations , *DIFFUSION of innovations , *ARTIFICIAL intelligence , *KNOWLEDGE management , *REFLECTION (Philosophy) , *PROFESSIONS , *ABILITY , *PROFESSIONAL employee training , *THOUGHT & thinking , *LABOR supply , *TRAINING - Abstract
ChatGPT presents a significant disruption in ways of knowing that highlight the value of control and mastery of knowledge as a quest for certainty. ChatGPT, as a step toward Artificial General Intelligence (AGI), lets us surface Schön's prescient insight on the crisis of confidence in professionals. Instead of hastily asking how to cope with the advances in artificial intelligence from a gaze of deficit and improvement, however, we turn to nonrepresentational epistemologies that highlight reflection as seeing-as. How might we reconceptualize a professional knowledge model that encourages ethical commitment to enacting practices that deploy relational forces within high-pressure, time-constrained professional environments? We attempt to answer this question by proposing a theory that cultivates conditions for experimenting with experiential encounters with others and creatively activating potentials, which we call generative knowing. In doing so, we suggest how a compassionately disruptive condition mobilizes professional learners to engage the machinic creativity that collapses semiotic chains and effectuates diverse imaginations of the future of becoming-with ChatGPT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. From jargon to clarity: Improving the readability of foot and ankle radiology reports with an artificial intelligence large language model.
- Author
-
Butler, James J., Harrington, Michael C., Tong, Yixuan, Rosenbaum, Andrew J., Samsonov, Alan P., Walls, Raymond J., and Kennedy, John G.
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *CHATGPT , *X-rays , *COMPUTED tomography - Abstract
The purpose of this study was to evaluate the efficacy of an Artificial Intelligence Large Language Model (AI-LLM) at improving the readability of foot and ankle orthopedic radiology reports. The radiology reports from 100 foot or ankle X-Rays, 100 computed tomography (CT) scans and 100 magnetic resonance imaging (MRI) scans were randomly sampled from the institution's database. The following prompt command was inserted into the AI-LLM: "Explain this radiology report to a patient in layman's terms in the second person: [Report Text]". The mean report length, Flesch reading ease score (FRES) and Flesch-Kincaid reading level (FKRL) were evaluated for both the original radiology report and the AI-LLM generated report. The accuracy of the information contained within the AI-LLM report was assessed via a 5-point Likert scale. Additionally, any "hallucinations" generated by the AI-LLM report were recorded. There was a statistically significant improvement in mean FRES scores in the AI-LLM generated X-Ray report (33.8 ± 6.8 to 72.7 ± 5.4), CT report (27.8 ± 4.6 to 67.5 ± 4.9) and MRI report (20.3 ± 7.2 to 66.9 ± 3.9), all p < 0.001. There was also a statistically significant improvement in mean FKRL scores in the AI-LLM generated X-Ray report (12.2 ± 1.1 to 8.5 ± 0.4), CT report (15.4 ± 2.0 to 8.4 ± 0.6) and MRI report (14.1 ± 1.6 to 8.5 ± 0.5), all p < 0.001. Superior FRES scores were observed in the AI-LLM generated X-Ray report compared to the AI-LLM generated CT report and MRI report, p < 0.001. The mean Likert score for the AI-LLM generated X-Ray report, CT report and MRI report was 4.0 ± 0.3, 3.9 ± 0.4, and 3.9 ± 0.4, respectively. The rate of hallucinations in the AI-LLM generated X-Ray report, CT report and MRI report was 4%, 7% and 6%, respectively. AI-LLM was an efficacious tool for improving the readability of foot and ankle radiological reports across multiple imaging modalities.
Superior FRES scores together with superior Likert scores were observed in the X-Ray AI-LLM reports compared to the CT and MRI AI-LLM reports. This study demonstrates the potential use of AI-LLMs as a new patient-centric approach for enhancing patient understanding of their foot and ankle radiology reports. Jel Classifications: IV [ABSTRACT FROM AUTHOR]
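The FRES and FKRL metrics reported above have standard closed-form definitions over words per sentence and syllables per word. A minimal sketch follows; the syllable counter is a rough vowel-group heuristic of our own, so scores are approximate rather than the exact values a dedicated readability tool would produce.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, discounting a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    """Flesch Reading Ease (FRES) and Flesch-Kincaid grade (FKRL) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkrl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkrl
```

Shorter sentences and fewer syllables per word raise FRES and lower FKRL, which is exactly the direction of the improvements the study reports for the AI-rewritten reports.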
- Published
- 2024
- Full Text
- View/download PDF