2,224 results for "CHATGPT"
Search Results
2. Who Determines What Is Relevant? Humans or AI? Why Not Both?
- Author
-
Faggioli, Guglielmo, Dietz, Laura, Clarke, Charles L. A., Demartini, Gianluca, Hagen, Matthias, Hauff, Claudia, Kando, Noriko, Kanoulas, Evangelos, Potthast, Martin, Stein, Benno, and Wachsmuth, Henning
- Subjects
- *
HUMAN-artificial intelligence interaction , *ARTIFICIAL intelligence , *HUMAN-computer interaction , *ARTIFICIAL intelligence & society , *LANGUAGE models , *CHATGPT , *CHATBOTS - Abstract
The article offers an opinion regarding how a human-artificial intelligence (AI) collaboration can assess relevance. Topics include improvements to large language models (LLMs) such as OpenAI's chatbot ChatGPT, the reasons why relying only on LLMs can be problematic, and how human judgement can improve fairness, efficiency, and effectiveness.
- Published
- 2024
- Full Text
- View/download PDF
3. The Science of Detecting LLM-Generated Text.
- Author
-
Tang, Ruixiang, Chuang, Yu-Neng, and Hu, Xia
- Subjects
- *
LANGUAGE models , *NATURAL language processing , *COMPUTATIONAL linguistics , *CHATGPT , *CHATBOTS , *ARTIFICIAL intelligence , *SEMANTIC computing - Abstract
This research article focuses on the science of detecting large language model (LLM) generated text. The authors discuss the advancement of natural language generation (NLG) technology, including OpenAI's ChatGPT, and explain how the two detection methods, black-box detection and white-box detection, work to mitigate the potential misuse of LLMs.
- Published
- 2024
- Full Text
- View/download PDF
4. Generative Artificial Intelligence: 8 Critical Questions for Libraries.
- Author
-
Bridges, Laurie M., McElroy, Kelly, and Welhouse, Zach
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *INTELLECTUAL freedom - Abstract
In this article, we provide a brief overview of generative artificial intelligence (GenAI) and large language models (LLMs). We then propose eight critical questions that libraries should ask when exploring this technology and its implications for their communities. We argue that libraries have a unique role in facilitating informed and responsible use of GenAI, as well as safeguarding and promoting the values of access, privacy, and intellectual freedom. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Can ChatGPT Learn Chinese or Swahili?
- Author
-
Savage, Neil
- Subjects
- *
CHATGPT , *LANGUAGE models , *LANGUAGE & languages , *JAPANESE language , *SYNTAX (Grammar) - Abstract
The article discusses the performance of large language models (LLMs) like ChatGPT in languages other than English, highlighting challenges and potential solutions. Researchers found that LLMs struggle to mimic humans in languages like Japanese due to differences in syntax and writing styles, leading to concerns about their effectiveness in non-English languages. Efforts to improve LLM performance in other languages include innovative tokenization methods and leveraging search data to supplement training data, aiming to address the scarcity of language-specific training resources.
- Published
- 2024
- Full Text
- View/download PDF
6. Thus Spake ChatGPT: On the reliability of AI-based chatbots for science communication.
- Author
-
Dutta, Subhabrata and Chakraborty, Tanmoy
- Subjects
- *
CHATGPT , *STATISTICAL reliability , *SCIENTIFIC communication , *LANGUAGE models , *SCIENTIFIC knowledge - Abstract
The article provides the authors' perspective on the reliability of ChatGPT, an artificial intelligence (AI) chatbot built on a large language model (LLM), in the communication of science. Particular focus is given to the errors that ChatGPT makes when communicating scientific knowledge to a general audience.
- Published
- 2023
- Full Text
- View/download PDF
7. Generative AI as a New Innovation Platform: Considering the stability and longevity of a potential new foundational technology.
- Author
-
Cusumano, Michael A.
- Subjects
- *
GENERATIVE artificial intelligence , *CHATGPT , *ARTIFICIAL neural networks , *LANGUAGE models , *GOVERNMENT regulation , *ACCURACY of information - Abstract
This article presents generative artificial intelligence (AI) as a potential new innovation platform. It first recounts the history of generative AI, with a discussion of Microsoft, OpenAI, and the use of neural networks and large language models. It then considers whether generative AI could meet the criteria of an innovation platform, and concludes with a discussion of regulation and governance.
- Published
- 2023
- Full Text
- View/download PDF
8. Popping the chatbot hype balloon.
- Author
-
Goudarzi, Sara
- Subjects
- *
CHATBOTS , *CHATGPT , *ARTIFICIAL intelligence , *LANGUAGE models , *PERSONALLY identifiable information , *SCIENCE fiction - Abstract
Since ChatGPT's release in November 2022, artificial intelligence has come into the spotlight. Inspiring both fascination and fear, chatbots have stirred debates among researchers, developers, and policy makers. The concerns range from concrete and tangible ones—which include replication of existing biases and discrimination at scale, harvesting personal data, and spreading misinformation—to more existential fears that their development will lead to machines with human-like cognitive abilities. Understanding how chatbots work and the human labor and data involved can better help evaluate the validity of concerns surrounding these systems, which although innovative, are hardly the stuff of science fiction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Generative AI Degrades Online Communities.
- Author
-
Burtch, Gordon, Lee, Dokyun, and Chen, Zhichen
- Subjects
- *
VIRTUAL communities , *LANGUAGE models , *CHATGPT , *CHATBOTS , *INTERNET forums , *INTERNET users , *ONLINE comments , *VIRTUAL culture - Abstract
The article focuses on how large language models (LLMs) are influencing online communities. The authors offer their opinions, stating that generative artificial intelligence (AI) technologies, such as ChatGPT, are causing a decrease in user participation in knowledge communities and degrading the quality of the answers those communities provide.
- Published
- 2024
- Full Text
- View/download PDF
10. ChatGPT's performance evaluation for annotating multi-label text in Indonesian language.
- Author
-
Hakim, M. Faris Al and Prasetiyo, Budi
- Subjects
- *
CHATGPT , *INDONESIAN language , *LANGUAGE models , *SENTIMENT analysis , *ARTIFICIAL intelligence - Abstract
The growing demand for artificial intelligence applications across fields increases the need for appropriate datasets for building good models. Labeling datasets is one of the main tasks required before training, especially in sentiment analysis. Aspect-based sentiment analysis involves more labels than other tasks, and large volumes of data raise the cost of processing, including the labeling process. These problems remain to be solved for multi-label annotation in Indonesian, a task that requires assigning several labels to each instance. ChatGPT, as a Large Language Model (LLM), has high potential to carry out the labeling process. In this study, ChatGPT-3.5 was examined as a labeler for aspect-based sentiment analysis in the Indonesian language. The CASA dataset, containing 1080 rows, was used to evaluate the model's performance. The results show that comprehensive exploration must be applied to produce optimal ChatGPT performance for multi-label classification in Indonesian. The study's results bear on the efficiency of the labeling process in multi-label cases, which otherwise require considerable effort to complete. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Criticize my Code, not me: Using AI-Generated Feedback in Computer Science Teaching.
- Author
-
Kunz, Sibylle and Steffen, Adrienne
- Subjects
- *
COMPUTER science education , *CHATGPT , *LANGUAGE models , *WOMEN in higher education , *GENDER stereotypes , *TEACHER training - Abstract
Large Language Models (LLMs) like ChatGPT can help teachers to tailor learning tasks for their students, combining learning objectives and storytelling to raise interest in the subject. AI-based learning task design can help to support competency-based learning, especially for girls in STEM courses like computer science, where otherwise the "Leaky STEM pipeline" (Speer 2023) leads to a constant loss of female students over school time. LLMs support many steps of the creation cycle of learning tasks. One important step is the feedback process between teachers and students during and after solving the tasks. Students need person-related as well as process-related feedback to make progress. Sometimes problems occur when teachers give feedback in a way that embarrasses or hurts the students. Female students in particular often need more confirmation to make them aware of their progress, but studies show that boys demand and get more attention from teachers in this situation. This is one of the many reasons why girls lose motivation and interest in STEM courses over time. Since male and female teachers differ in how they express feedback without being aware of it, it is necessary to raise their consciousness. LLMs like ChatGPT can be used in two scenarios here. The first scenario is helping teachers to formulate objective feedback in a way that is adequate and understandable for the target group - e.g., young girls or boys - in a specific situation. The second scenario is training the teacher in a Socratic way, where the LLM simulates a student receiving the feedback and reacting to it according to established communication models like the Four Ears model by Schulz von Thun (Schulz von Thun 1981) or Berne's Transactional Analysis (Berne, 1964).
This case study provides examples and prompting schemes for both scenarios and discusses the fragile balance between avoiding gender stereotypes in LLMs and giving more helpful and sustainable feedback for female students to foster self-esteem and competency-awareness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. The Roles of Social Perception and AI Anxiety in Individuals’ Attitudes Toward ChatGPT in Education.
- Author
-
Wang, Chengcheng, Li, Xing, Liang, Zheng, Sheng, Yingying, Zhao, Qingbai, and Chen, Shi
- Abstract
Public attitudes are essential for technology promotion and policy formulation. The present study investigated the Chinese public's knowledge of ChatGPT and examined the roles played by Big-Five personality traits, social perception, and AI anxiety in shaping the public's attitudes toward ChatGPT, using the questionnaire method. Results showed that: (1) Nearly 1/3 of the teachers surveyed did not know ChatGPT at all, and all of them were primary and secondary school teachers. (2) The level of knowledge about ChatGPT was significantly related to gender, educational level, teaching stage (in teacher samples), and major (in student samples). (3) The public's positive attitude was significantly higher than the negative attitude. (4) Social perception positively predicted positive attitudes and negatively predicted negative attitudes; moreover, it demonstrated notably higher predictive power for positive attitudes than for negative attitudes. Competence perception and warmth perception predicted attitudes equally, with no domain effect observed. (5) AI anxiety positively predicted only negative attitudes and did not impact positive attitudes. In explaining negative attitudes, AI anxiety exhibited higher explanatory power than the Big-Five personality traits, correlating primarily with neuroticism. The findings indicate that it is inappropriate to treat attitude evaluation toward AI as a single dimension: positive and negative attitudes toward AI are relatively independent components. The roles played by other predictive variables in attitudes are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Les adjectifs évaluatifs dans le discours juridictionnel : Analyse de corpus.
- Author
-
Dolata-Zaród, Anna
- Abstract
This article aims to analyse the use of evaluative adjectives in quantitative and qualitative research based on a corpus of French jurisdictional discourse. In this analysis, we adopt the Appraisal approach (Martin & White 2005), which focuses on linguistic marks that reveal the subjective presence of the speaker in the text. Our corpus comprises 100 judgments of the Court of Cassation from 2018 to 2022 inclusive. We use the AnaText platform for quantitative analysis and ChatGPT for qualitative analysis. This research allows us to identify the evaluative adjectives most often used by judges and to indicate the targets and polarity of judicial argumentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Using GPT-4 to write a scientific review article: a pilot evaluation study.
- Author
-
Wang, Zhiping Paul, Bhandary, Priyanka, Wang, Yizhou, and Moore, Jason H.
- Subjects
- *
GENERATIVE pre-trained transformers , *LANGUAGE models , *CHATGPT , *TECHNICAL writing , *PILOT projects - Abstract
GPT-4, as the most advanced version of OpenAI's large language models, has attracted widespread attention, rapidly becoming an indispensable AI tool across various areas. This includes its exploration by scientists for diverse applications. Our study focused on assessing GPT-4's capabilities in generating text, tables, and diagrams for biomedical review papers. We also assessed the consistency in text generation by GPT-4, along with potential plagiarism issues when employing this model for the composition of scientific review papers. Based on the results, we suggest the development of enhanced functionalities in ChatGPT, aiming to meet the needs of the scientific community more effectively. This includes enhancements in uploaded document processing for reference materials, a deeper grasp of intricate biomedical concepts, more precise and efficient information distillation for table generation, and a further refined model specifically tailored for scientific diagram creation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Quality of science journalism in the age of Artificial Intelligence explored with a mixed methodology.
- Author
-
Dijkstra, Anne M., de Jong, Anouk, and Boscolo, Marco
- Subjects
- *
SCIENCE journalism , *ARTIFICIAL intelligence , *CHATGPT , *MEMBERSHIP in associations, institutions, etc. , *THEMATIC analysis - Abstract
Science journalists traditionally play a key role in delivering science information to a wider audience. However, changes in the media ecosystem and the science-media relationship are posing challenges to reliable news production. Additionally, recent developments such as ChatGPT and Artificial Intelligence (AI) more generally may have further consequences for the work of (science) journalists. Using a mixed methodology, the quality of news reporting was studied within the context of AI. A content analysis of media output about AI (news articles published within the time frame 1 September 2022–28 February 2023) explored adherence to quality indicators, while interviews shed light on journalism practices regarding quality reporting on and with AI. Perspectives from understudied areas in four European countries (Belgium, Italy, Portugal, and Spain) were included and compared. The findings show that AI received continuous media attention in the four countries. Furthermore, despite four different media landscapes, the reporting in the news articles adhered to the same quality criteria, such as applying rigour, including sources of information, accessibility, and relevance. Thematic analysis of the interview findings revealed that the impact of AI and ChatGPT on the journalism profession is still in its infancy. Expected benefits of AI related to helping with repetitive tasks (e.g. translations) and positively influencing the journalistic principles of accessibility, engagement, and impact, while concerns reflected fear of lower adherence to the principles of rigour, integrity, and transparency of sources of information. More generally, the interviewees expressed concerns about the state of science journalism, including a lack of funding influencing the quality of reporting. Journalists employed as staff, as well as those working as freelancers, put effort into ensuring quality output, for example via editorial oversight, discussions, or memberships of associations.
Further research into the science-media relationship is recommended. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Multi role ChatGPT framework for transforming medical data analysis.
- Author
-
Chen, Haoran, Zhang, Shengxiao, Zhang, Lizhong, Geng, Jie, Lu, Jinqi, Hou, Chuandong, He, Peifeng, and Lu, Xuechun
- Abstract
The application of ChatGPT in the medical field has sparked debate regarding its accuracy. To address this issue, we present a Multi-Role ChatGPT Framework (MRCF), designed to improve ChatGPT's performance in medical data analysis by optimizing prompt words, integrating real-world data, and implementing quality control protocols. Compared to the singular ChatGPT model, MRCF significantly outperforms traditional manual analysis in interpreting medical data, exhibiting fewer random errors, higher accuracy, and better identification of incorrect information. Notably, MRCF is over 600 times more time-efficient than conventional manual annotation methods and costs only one-tenth as much. Leveraging MRCF, we have established two user-friendly databases for efficient and straightforward drug repositioning analysis. This research not only enhances the accuracy and efficiency of ChatGPT in medical data science applications but also offers valuable insights for data analysis models across various professional domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Opportunities and risks involved in using ChatGPT to create first grade science lesson plans.
- Author
-
Powell, Wardell and Courchesne, Steven
- Subjects
- *
CHATGPT , *LESSON planning , *GENERATIVE artificial intelligence , *STUDENT teachers , *CURRICULUM frameworks - Abstract
Generative AI can potentially support teachers in lesson planning by making the process of generating an outline more efficient. This qualitative study employed an exploratory case study design to examine a specific lesson design activity involving a series of prompts and responses from ChatGPT. The desired science lesson on heredity was aimed at first grade students. We analyzed the process's efficiency, finding that within 30 minutes we could generate and substantially refine a lesson plan that accurately aligned with the desired curriculum framework and the 5E model of instruction. However, the iterations of the lesson plan included questionable components, missing details, and a fake resource. We discussed the implications of these findings for faculty looking to train pre-service teachers to appropriately use generative AI in lesson planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Are Preprints a Threat to the Credibility and Quality of Artificial Intelligence Literature in the ChatGPT Era? A Scoping Review and Qualitative Study.
- Author
-
Adarkwah, Michael Agyemang, Islam, A. Y. M. Atiquil, Schneider, Käthe, Luckin, Rose, Thomas, Michael, and Spector, Jonathan Michael
- Abstract
ChatGPT, as the pioneer of advanced generative AI tools, has triggered scholarly discussions about the potential use of such AI technologies in interdisciplinary fields. With a focus on the surge of AI-related preprints since the introduction of ChatGPT by OpenAI, the study investigated what the surge implies for AI literature, particularly in terms of credibility and quality. A scoping review was initially conducted to study the characteristics of the AI-related preprints in the Web of Science (WoS) database and also in five (5) preprint platforms (ArXiv, MedRxiv, SocArxiv, SSRN, and Research Square). The publication date range was set at January 01, 2023 to September 08, 2023. This was followed up by an interpretive phenomenological analysis (IPA) of the perceptions of experts in the AI field about the preprints. Employing a scoping review of AI-related preprints across six databases and a qualitative analysis of 15 AI experts’ opinions, our study reveals concerns about the research accuracy, quality, and credibility of preprints, and advocates for a robust evaluation and high-quality assurance process to promote open science objectives during their dissemination. Specifically, 45,918 AI-related preprints were found in the six preprint databases or repositories across different fields. The nine themes from the IPA showed that preprints can be of value. However, experts advocated for the safe and responsible use of AI-related preprints, involving such tenets as maintaining ethical integrity and high-quality work on the part of authors and establishing sound AI-content guidelines from publishers and editors. Future studies are recommended to investigate the impact of preprints on decision-making processes in educational research and practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Assessment of the impacts of artificial intelligence (AI) on intercultural communication among postgraduate students in a multicultural university environment.
- Author
-
Sarwari, Abdul Qahar, Javed, Muhammad Naeem, Mohd Adnan, Hamedi, and Abdul Wahab, Mohammad Nubli
- Subjects
- *
COLLEGE environment , *GRADUATE students , *CROSS-cultural communication , *COLLEGE students , *COMMUNICATION barriers - Abstract
Artificial intelligence (AI) broadly influences different aspects of human life, especially human communication. One of the main concerns about the broad use of AI in daily interactions among different people is whether it helps them interact easily or complicates their interactions. To answer this question, this study assessed the impacts of AI on intercultural communication among postgraduate students in a multicultural university environment. A newly developed survey instrument was used to conduct this study. The participants were 115 postgraduate students from nine different countries. Descriptive statistics, reliability analysis, and bivariate correlation tests in version 29 of IBM SPSS were used to analyze the quantitative data, and inductive coding and conceptual content analysis were used to code and analyze the qualitative data. Based on the descriptive results, the vast majority (93%) of the participants had already used and experienced AI in their daily lives, and most of them believed that AI and AI technologies connect different cultures, reduce language and cultural barriers, and help people of different cultures to interact and stay connected. Based on the results of the correlation test, there were strong positive correlations between AI attitudes and AI benefits, and also between AI regulation and AI benefits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Text-to-video generative artificial intelligence: sora in neurosurgery.
- Author
-
Mohamed, Ali A. and Lucke-Wold, Brandon
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *NATURAL language processing , *COMPUTER vision , *ARTIFICIAL intelligence - Abstract
Artificial intelligence (AI) has increased in popularity in neurosurgery, with recent interest in generative AI algorithms such as the Large Language Model (LLM) ChatGPT. Sora, an innovation in generative AI, leverages natural language processing, deep learning, and computer vision to generate impressive videos from text prompts. This new tool has many potential applications in neurosurgery. These include patient education, public health, surgical training and planning, and research dissemination. However, there are considerable limitations to the current model such as physically implausible motion generation, spontaneous generation of subjects, unnatural object morphing, inaccurate physical interactions, and abnormal behavior presentation when many subjects are generated. Other typical concerns are with respect to patient privacy, bias, and ethics. Further, appropriate investigation is required to determine how effective generative videos are compared to their non-generated counterparts, irrespective of any limitations. Despite these challenges, Sora and other iterations of its text-to-video generative application may have many benefits to the neurosurgical community. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Evaluating language models for mathematics through interactions.
- Author
-
Collins, Katherine M., Jiang, Albert Q., Frieder, Simon, Wong, Lionel, Zilka, Miri, Bhatt, Umang, Lukasiewicz, Thomas, Yuhuai Wu, Tenenbaum, Joshua B., Hart, William, Gowers, Timothy, Wenda Li, Weller, Adrian, and Jamnik, Mateja
- Subjects
- *
LANGUAGE models , *GENERATIVE pre-trained transformers , *CHATGPT , *MATHEMATICS , *MATHEMATICS students - Abstract
There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs; this is insufficient for making an informed decision about which LLMs are best to use in an interactive setting, and how that varies by setting. Static assessment therefore limits how we understand language model capabilities. We introduce CheckMate, an adaptable prototype platform for humans to interact with and evaluate LLMs. We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics, with a mixed cohort of participants from undergraduate students to professors of mathematics. We release the resulting interaction and rating dataset, MathConverse. By analyzing MathConverse, we derive a taxonomy of human query behaviors and uncover that, despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness in LLM generations, among other findings. Further, we garner a more granular understanding of GPT-4 mathematical problem-solving through a series of case studies contributed by experienced mathematicians. We conclude with actionable takeaways for ML practitioners and mathematicians: models that communicate uncertainty, respond well to user corrections, and can provide a concise rationale for their recommendations may constitute better assistants. Humans should inspect LLM output carefully given their current shortcomings and potential for surprising fallibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. ChatGPT for good? Taking ‘beneficence’ seriously in the regulation of generative artificial intelligence.
- Author
-
Singh Chauhan, Krishna Deo
- Abstract
Generative AI platforms such as ChatGPT have found prominence in recent times with their ability to generate texts, images, etc. Several questions pertaining to ethical and legal issues surrounding ChatGPT have arisen. In this paper, I discuss the nature and background of generative AI, situating its development in the historical context of AI. I then discuss my primary research questions: is sufficient attention paid in the literature on the ethics of AI to the principle of beneficence, and is there theoretical clarity on its meaning? I highlight that while there is a great deal of discussion on what harms can arise from generative AI and how to stop them, there is very little discussion of what amounts to AI-for-good, particularly in the literature of AI ethics and regulation. To the extent that such discussion exists, it pushes ahead with suggesting specific solutions, without fully addressing the underlying question of what makes their prescriptions beneficent. I demonstrate that the principle of beneficence as understood in biomedical ethics and human rights frameworks has limited utility for the ethics of generative AI. These gaps in AI regulation are prominent, as they can derail the long-term progress of generative AI and the realization of its full potential. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Google, ChatGPT, questions of omniscience and wisdom.
- Author
-
Hoffman, Frank J. and Iso, Klairung
- Abstract
The article explores how platforms like Google and ChatGPT, which claim omniscience and wisdom-like attributes, prompt philosophical questions. It revisits religious perspectives on omniscience and their influence on the pursuit of wisdom. The article suggests that while Google may offer compartmentalized omniscience based on user preferences, ChatGPT’s factual accuracy challenges its characterization as omniscient. Nonetheless, ChatGPT can still help humans progress toward wisdom, by integrating the co-creation of knowledge between humans and the unfolding of divine knowledge from Process Thought and Buddhist epistemology insights. Notably, instead of offering definitive answers, the paper is written with a sense of deep humility to encourage ongoing inquiry and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Predictors of higher education students’ behavioural intention and usage of ChatGPT: the moderating roles of age, gender and experience.
- Author
-
Arthur, Francis, Salifu, Iddrisu, and Abam Nortey, Sharon
- Abstract
The adoption and usage of ChatGPT among students are influenced by various factors, including individual characteristics such as age, gender, and experience with technology use. However, studies on the moderating roles of gender, age, and experience in predicting students’ behavioural intention and usage of ChatGPT are limited. This study employed the Unified Theory of Acceptance and Use of Technology (UTAUT2) model to examine the predictors of Higher Education (HE) students’ behavioural intention and usage of ChatGPT. The study employed a descriptive cross-sectional survey design with an adapted instrument to collect data from 486 students. Using the Partial Least Squares Structural Equation Modelling approach, the results showed that hedonic motivation, performance expectancy, effort expectancy, and social influence were significant predictors of students’ behavioural intention, whereas behavioural intention and facilitating conditions had significant influence on students’ actual use of ChatGPT. Age and gender were found to moderate the relationship between facilitating conditions and the use of ChatGPT. Lastly, experience moderated the relationship between habit and the use of ChatGPT, and the relationship between hedonic motivation and behavioural intention. These findings have implications for the design and implementation of ChatGPT in higher education towards enhancing students’ engagement and learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Use of large language models to optimize poison center charting.
- Author
-
Matsler, Nikolaus, Pepin, Lesley, Banerji, Shireen, Hoyte, Christopher, and Heard, Kennon
- Abstract
Efficient and complete medical charting is essential for patient care and research purposes. In this study, we sought to determine if Chat Generative Pre-Trained Transformer could generate cogent, suitable charts from recorded, real-world poison center calls and abstract and tabulate data. De-identified transcripts of real-world hospital-initiated poison center consults were summarized by Chat Generative Pre-Trained Transformer 4.0. Additionally, Chat Generative Pre-Trained Transformer organized tables for data points, including vital signs, test results, therapies, and recommendations. Seven trained reviewers, including certified specialists in poison information and board-certified medical toxicologists, graded summaries using a 1 to 5 scale to determine appropriateness for entry into the medical record. Intra-rater reliability was calculated. Tabulated data were quantitatively evaluated for accuracy. Finally, reviewers selected the preferred documentation: original or Chat Generative Pre-Trained Transformer organized. Eighty percent of summaries had a median score high enough to be deemed appropriate for entry into the medical record. In three duplicate cases, reviewers did change scores, leading to moderate intra-rater reliability (kappa = 0.6). Among all cases, 91 percent of data points were correctly abstracted into table format. By utilizing a large language model with a unified prompt, charts can be generated directly from conversations in seconds without the need for additional training. Charts generated by Chat Generative Pre-Trained Transformer were preferred over extant charts, even when they were deemed unacceptable for entry into the medical record prior to the correction of errors. However, there were several limitations to our study, including poor intra-rater reliability and a limited number of cases examined. In this study, we demonstrate that large language models can generate coherent summaries of real-world poison center calls that are often acceptable for entry into the medical record as is. When errors were present, these were often fixed with the addition or deletion of a word or phrase, presenting an enormous opportunity for efficiency gains. Our future work will focus on implementing this process in a prospective fashion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Exploring ChatGPT as a writing assessment tool.
- Author
-
Bucol, Junifer Leal and Sangkawong, Napattanissa
- Abstract
This research paper employs an exploratory framework to evaluate the potential of ChatGPT as an Automated Writing Evaluation (AWE) tool in teaching English as a Foreign Language (EFL) in Thailand. The main objective is to investigate how well ChatGPT can assess students’ writing using prompts and pre-defined rubrics compared to human raters. Moreover, the study examines its strengths and weaknesses as an assessment tool by analysing the teachers’ reflections during the assessment process. Quantitative analyses revealed significant relationships between the ratings produced by ChatGPT trial accounts and the human ratings. Qualitative analysis unearthed patterns in the feedback, shedding light on ChatGPT’s strengths and its limitations as an AWE tool. ChatGPT displays substantial promise as an AWE tool, offering distinct features such as a human-like interface, consistency, efficiency, and scalability. Nonetheless, educators must be cognisant of its limitations. This study recognises that the strategic use of ChatGPT could enhance the evaluation process among teachers and foster the development of EFL students’ written communication skills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. ChatGPT-facilitated professional development: evidence from professional trainers’ learning achievements, self-worth, and self-confidence.
- Author
-
Chang, Chun-Chun and Hwang, Gwo-Jen
- Abstract
Professional trainers play an important role in helping new recruits adapt to the workplace. In general hospitals, training courses for clinical teachers still adopt the lecture method. Such a teacher training approach focuses on the delivery of knowledge and skills, while the training for their teaching of case handling, as well as their self-worth and self-confidence, can be insufficient. To cope with this problem, the present study proposed a ChatGPT-based training mode (ChatGPT-TM) for professional development. To verify its effects, we conducted an experiment in a “Using ChatGPT in Case Teaching” course for clinical teachers in hospitals, and explored their learning achievement, self-worth, self-confidence, and learning perceptions under the ChatGPT-TM and the conventional training mode (C-TM). The results showed that the ChatGPT-TM could effectively enhance clinical teachers’ learning achievement in case teaching, self-worth, and self-confidence in comparison with the C-TM. The main contribution of this study is that it revealed that ChatGPT could allow clinical teachers to carry out reflection, verify references, and integrate theory and practice, which improved their learning achievement, made them realize their self-worth, and increased their self-confidence in performing professional training tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Protecting older consumers in the digital age: a commentary on ChatGPT, helplines and the way to prevent accessible fraud.
- Author
-
Segal, Michal
- Abstract
Older people are often targeted by fraudsters due to their unique characteristics and vulnerabilities. Being a victim of exploitation can lead to negative emotional and financial consequences. The purpose of this commentary is to present ChatGPT’s potential to provide accessible information and support, helping older consumers protect themselves when confronted with exploitation, address the limitations of ChatGPT and propose solutions to overcome these limitations. Integrating tailored human and technological solutions, such as helplines, AI chatbots, and involving older adults in development, is crucial. By providing adequate training and support, the goal of ensuring accessibility for all users can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. ChatGPT: A Conceptual Review of Applications and Utility in the Field of Medicine.
- Author
-
Rao, Shiavax J., Isath, Ameesh, Krishnan, Parvathy, Tangsrivimol, Jonathan A., Virk, Hafeez Ul Hassan, Wang, Zhen, Glicksberg, Benjamin S., and Krittanawong, Chayakrit
- Subjects
- *
ELDER care , *WEIGHT loss , *MEDICAL education , *MENTAL health , *ARTIFICIAL intelligence , *DECISION making in clinical medicine , *PATIENT care , *MEDICAL students , *PARADIGMS (Social sciences) , *MEDICAL research , *PHYSICAL fitness , *MEDICATION therapy management , *CONCEPTUAL structures , *MEDICINE , *INDIVIDUALIZED medicine , *HUMAN error , *NUTRITION , *PHYSICAL activity - Abstract
Artificial intelligence, specifically advanced language models such as ChatGPT, has the potential to revolutionize various aspects of healthcare, medical education, and research. In this narrative review, we evaluate the myriad applications of ChatGPT in diverse healthcare domains. We discuss its potential role in clinical decision-making, exploring how it can assist physicians by providing rapid, data-driven insights for diagnosis and treatment. We review the benefits of ChatGPT in personalized patient care, particularly in geriatric care, medication management, weight loss and nutrition, and physical activity guidance. We further delve into its potential to enhance medical research, through the analysis of large datasets, and the development of novel methodologies. In the realm of medical education, we investigate the utility of ChatGPT as an information retrieval tool and personalized learning resource for medical students and professionals. There are numerous promising applications of ChatGPT that will likely induce paradigm shifts in healthcare practice, education, and research. The use of ChatGPT may come with several benefits in areas such as clinical decision making, geriatric care, medication management, weight loss and nutrition, physical fitness, scientific research, and medical education. Nevertheless, it is important to note that issues surrounding ethics, data privacy, transparency, inaccuracy, and inadequacy persist. Prior to widespread use in medicine, it is imperative to objectively evaluate the impact of ChatGPT in a real-world setting using a risk-based approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Determinants of ChatGPT Use and its Impact on Learning Performance: An Integrated Model of BRT and TPB.
- Author
-
Al-Qaysi, Noor, Al-Emran, Mostafa, Al-Sharafi, Mohammed A., Iranmanesh, Mohammad, Ahmad, Azhana, and Mahmoud, Moamin A.
- Abstract
The rapid emergence of Generative Artificial Intelligence (GAI) heralds a significant shift, opening new frontiers in how education is delivered. This groundbreaking wave of technological advancement is poised to redefine traditional learning, promising to enhance the educational landscape with unprecedented levels of personalized learning and accessibility. Despite GAI’s progressive infiltration into various educational strata, limited empirical research exists on its impact on students’ learning performance. Drawing on the Theory of Planned Behavior (TPB) and Behavioral Reasoning Theory (BRT), this study investigates the determinants affecting students’ use of ChatGPT and its influence on learning performance. The data were collected from 357 university students and were analyzed using the PLS-SEM technique. The results supported the role of ChatGPT in positively affecting students’ learning performance. In addition, the results showed that reasons for and against adoption are pivotal in shaping students’ attitudes. ChatGPT use is found to be significantly affected by attitudes, subjective norms, and perceived behavioral control. Besides the theoretical contributions, the findings offer various implications for stakeholders and underscore the necessity for educational institutions to foster a conducive environment for GAI adoption, addressing ethical and technical concerns to optimize learning experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Co-journeying with ChatGPT in tertiary education: identity transformation of EMI teachers in Taiwan.
- Author
-
Tsou, Wenli, Lin, Angel M. Y., and Chen, Fay
- Abstract
This paper responds to the prevalent discourse on the ‘lack of English proficiency’ among EMI teachers in contexts where English is used as an additional language. It explores how AI-powered tools, particularly ChatGPT, enable EMI teachers in Taiwan to leverage their expertise and teach in English with more confidence. This study first describes the training of an EMI PD programme. Then it reports on the challenges and strategies, and the extended and innovative use by EMI teachers after the training. By reporting on the experiences of three EMI teachers, this study shows how collaborating with ChatGPT contributes to developing an empowered EMI teacher identity. This study fills a research gap, shedding light on the transformative potential of collaborating with ChatGPT to transform content teachers’ identities into an EMI teacher identity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Human–computer pragmatics trialled: some (im)polite interactions with ChatGPT 4.0 and the ensuing implications.
- Author
-
Quan, Zhi and Chen, Zhiwei
- Abstract
Having evolved rapidly, ChatGPT can now generate content that is linguistically accurate and logically sound while sidestepping ethical, social and legal concerns. This research seeks to investigate whether ChatGPT will employ different pragmatic strategies in its responses to (im)polite questions. In our experiment, this AI-powered tool was instructed to answer 200 self-made questions over four (im)politeness levels, and the 200 responses were collected to go through linguistic and sentiment analysis. Triangulated data, together with typical examples, show that ChatGPT tends to give shorter and less positive answers to less polite questions, appearing to be less responsive when confronted with more blunt and offensive inquiries. This, to some extent, resembles how human beings react when treated impolitely. A tentative explanation may be that, given its nature as a large language model, ChatGPT mirrors human interaction in various scenarios, and draws on prevalent human communication tendencies. Thus, interacting with ChatGPT is more of a human-society interaction than human-machine communication in the real sense. Our research sheds light on the coined “human-machine pragmatics”, i.e. how humans can best communicate with computers for the best informative and affective outcomes. The implications for language education are also discussed at the end. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Digital assemblages with AI for creative interpretation of short stories.
- Author
-
O'Halloran, Kieran
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models , *COMPUTER literacy , *ARTIFICIAL intelligence , *DIGITAL literacy , *CHATGPT - Abstract
I demonstrate an approach fostering inventive interpretation of short stories in Literary Studies and higher education generally. It involves constructing an 'assemblage'—at its simplest, an evolving network of unusual connections for creative outcome. The assemblage of this article combines freshly located research literature, directly and indirectly related to a story's themes, and/or the personality type of protagonists. Importantly, this assemblage also utilizes text analysis software revealing the relatively invisible (e.g. (in)frequent words, parts of speech, and topics) and Large Language Model (LLM) Generative AI to enrich the interpretation. The use of all these elements helps productively exceed initial intuitions about the story, facilitating creativity. I model the approach using Edgar Allan Poe's short story, The Black Cat, whose protagonist is a homicidal psychopath. Specifically, the assemblage here includes relevant software-based research (a corpus analysis of homicidal psychopathic language), non-software-based research (psychoanalytical literary criticism of The Black Cat using the empirically validated concept of transference), text analysis software (WMatrix and Datayze), and the LLM Generative AI, 'ChatGPT' (using the freely available LLM GPT-3.5). One use of this approach is as a pedagogy in Literary Studies employing text analysis software (e.g. on a digital stylistics course). Yet given creative adaptability is a key 21st-century skill, with digital literacy—including the use of Generative AI—an important contemporary competence, and with the short story genre universally known, I highlight too the utility of this approach as a university-wide pedagogy for enhancing creative thinking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Clinical and Surgical Applications of Large Language Models: A Systematic Review.
- Author
-
Pressman, Sophia M., Borna, Sahar, Gomez-Cabello, Cesar A., Haider, Syed Ali, Haider, Clifton R., and Forte, Antonio Jorge
- Subjects
- *
LANGUAGE models , *MEDICAL care , *MEDICAL personnel , *CLINICAL medicine , *ARTIFICIAL intelligence , *PUBLICATION bias , *BIBLIOGRAPHIC databases - Abstract
Background: Large language models (LLMs) represent a recent advancement in artificial intelligence with medical applications across various healthcare domains. The objective of this review is to highlight how LLMs can be utilized by clinicians and surgeons in their everyday practice. Methods: A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Six databases were searched to identify relevant articles. Eligibility criteria emphasized articles focused primarily on clinical and surgical applications of LLMs. Results: The literature search yielded 333 results, with 34 meeting eligibility criteria. All articles were from 2023. There were 14 original research articles, four letters, one interview, and 15 review articles. These articles covered a wide variety of medical specialties, including various surgical subspecialties. Conclusions: LLMs have the potential to enhance healthcare delivery. In clinical settings, LLMs can assist in diagnosis, treatment guidance, patient triage, physician knowledge augmentation, and administrative tasks. In surgical settings, LLMs can assist surgeons with documentation, surgical planning, and intraoperative guidance. However, addressing their limitations and concerns, particularly those related to accuracy and biases, is crucial. LLMs should be viewed as tools to complement, not replace, the expertise of healthcare professionals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. ChatGPT's Efficacy in Queries Regarding Polycystic Ovary Syndrome and Treatment Strategies for Women Experiencing Infertility.
- Author
-
Devranoglu, Belgin, Gurbuz, Tugba, and Gokmen, Oya
- Subjects
- *
POLYCYSTIC ovary syndrome , *INFERTILITY , *CHATGPT , *LANGUAGE models , *ARTIFICIAL intelligence , *MEDICAL personnel - Abstract
This study assesses the efficacy of ChatGPT-4, an advanced artificial intelligence (AI) language model, in delivering precise and comprehensive answers to inquiries regarding managing polycystic ovary syndrome (PCOS)-related infertility. The research team, comprising experienced gynecologists, formulated 460 structured queries encompassing a wide range of common and intricate PCOS scenarios. The queries were: true/false (170), open-ended (165), and multiple-choice (125) and further classified as 'easy', 'moderate', and 'hard'. For true/false questions, ChatGPT-4 achieved a flawless accuracy rate of 100% initially and upon reassessment after 30 days. In the open-ended category, there was a noteworthy enhancement in accuracy, with scores increasing from 5.53 ± 0.89 initially to 5.88 ± 0.43 at the 30-day mark (p < 0.001). Completeness scores for open-ended queries also experienced a significant improvement, rising from 2.35 ± 0.58 to 2.92 ± 0.29 (p < 0.001). In the multiple-choice category, the accuracy score exhibited a minor, non-significant decline from 5.96 ± 0.44 to 5.92 ± 0.63 after 30 days (p > 0.05). Completeness scores for multiple-choice questions remained consistent, with initial and 30-day means of 2.98 ± 0.18 and 2.97 ± 0.25, respectively (p > 0.05). ChatGPT-4 demonstrated exceptional performance in true/false queries and significantly improved handling of open-ended questions during the 30 days. These findings emphasize the potential of AI, particularly ChatGPT-4, in enhancing decision-making support for healthcare professionals managing PCOS-related infertility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. New Towers of Babel: Faith and Doubt in the Future of Translation.
- Author
-
Long, Hoyt
- Subjects
- *
MACHINE translating , *TRANSLATING & interpreting , *GENERATIVE artificial intelligence , *LANGUAGE models , *SOCIAL media - Abstract
This article examines the impact of machine translation and generative AI on literary translation. It presents two approaches to engaging with machine translation: coordinated friction and playful experimentation. The article discusses the capabilities and limitations of language models, specifically GPT-4, in translation. It explores the potential of playful experimentation with language models to expand the understanding of translation as a cultural medium, while also emphasizing the need for skepticism and critical examination. The article highlights the importance of understanding the relationship between humans and machines in translation and calls for further investigation and documentation of their real-world effects. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
37. On Artificial and Post-artificial Texts: Machine Learning and the Reader's Expectations of Literary and Non-literary Writing.
- Author
-
Bajohr, Hannes
- Subjects
- *
LANGUAGE models , *MACHINE learning , *TURING test , *CHATGPT - Abstract
With the advent of ChatGPT and other large language models, the number of artificial texts we encounter on a daily basis is about to increase substantially. This essay asks how this new textual situation may influence what one can call the "standard expectation of unknown texts," which has always included the assumption that any text is the work of a human being. As more and more artificial writing begins to circulate, the essay argues, this standard expectation will shift—first, from the immediate assumption of human authorship to, second, a creeping doubt: did a machine write this? In the wake of what Matthew Kirschenbaum has called the "textpocalypse," however, this state cannot be permanent. The author suggests that after this second transitional period, one may suspend the question of origins and, third, take on a post-artificial stance. One would then focus only on what a text says, not on who wrote it; post-artificial writing would be read with an agnostic attitude about its origins. This essay explores the implications of such post-artificiality by looking back to the early days of text synthesis, considering the limitations of aesthetic Turing tests, and indulging in reasoned speculation about the future of literary and nonliterary text generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Borges and AI.
- Author
-
Raley, Rita and Samolsky, Russell
- Subjects
- *
ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *NATURAL language processing , *LANGUAGE models , *BEREAVEMENT , *ZENO'S paradoxes - Abstract
The article "Borges and AI" explores the connection between the writings of Jorge Luis Borges and artificial intelligence (AI). It discusses how Borges's story "Borges and I" foreshadows poststructuralist theories of writing and the rise of large language models (LLMs). The article delves into the themes of fictional capture and the potential for AI to surpass human creativity. It also examines the implications of AI for creative and critical writers, highlighting the challenges and uncertainties it brings. The text considers different perspectives on the relationship between human authors and generative AI models, as well as the impact of AI on education and student writing. The authors stress the importance of human involvement in textual analysis and the preservation of human authorship in the face of advancing AI technology. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
39. LLMs and the Amazing Shrinking University.
- Author
-
Evron, Nir
- Subjects
- *
LANGUAGE models , *YOUNG adults , *GESTALT therapy , *UNIVERSITY & college admission - Abstract
The article discusses the potential impact of large language models (LLMs) on higher education, particularly in the humanities. The author suggests that LLMs have the potential to revolutionize teaching and learning by providing personalized and adaptive instruction. However, they also raise concerns about the future of universities and the humanities, as LLMs may lead to a contraction of the higher education system and a reevaluation of the value of a college degree. The author speculates that universities may become more specialized and focused on producing high-quality intellectual work, while the role of professors as inspiring teachers will remain important. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
40. Facsimile Machines.
- Author
-
Kirschenbaum, Matthew
- Subjects
- *
GENERATIVE artificial intelligence , *LANGUAGE models - Abstract
The article discusses the impact of generative artificial intelligence, specifically ChatGPT, on the field of writing. The author explores the historical context of word processing and the anxieties surrounding new technologies. They argue that ChatGPT, with its ability to generate whole documents and genres of writing, represents a qualitative difference in writing technology. The author also raises concerns about the use of AI in writing, including issues of data mining, surveillance capitalism, environmental harm, and exploitative labor practices. They suggest that these technologies may lead to a post-alphabetic future where text loses its purpose as a format for human communication. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
41. AI Comes for the Author.
- Author
-
Elkins, Katherine
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *GENERATIVE artificial intelligence , *GEMINI (Chatbot) , *COMPUTATIONAL intelligence - Abstract
This article explores the impact of artificial intelligence (AI) on text interpretation and the role of authors. It discusses the debates surrounding whether AI language generators can truly understand language or if they simply mimic it. While early models required skillful prompting, newer models have made the process easier. The article examines the capabilities and limitations of large language models (LLMs) like GPT models, highlighting their ability to prioritize meaning over grammar and syntax and their potential to create metaphors and similes. It also acknowledges the ethical concerns and dangers associated with AI development but emphasizes the exciting possibilities it presents. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
42. "Don't Ban AI from Your Writing Classroom; Require It!".
- Author
-
Hayles, N. Katherine
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *NATURAL language processing , *STUDENT cheating , *COLLEGE students , *CITATION networks - Abstract
The article discusses the use of OpenAI's ChatGPT, a large language model, in college and university writing classrooms. While some educators are concerned about students using AI to pass off their work as their own, the author argues that instead of banning AI, institutions should embrace it as a tool to accelerate student learning. The author suggests designing assignments that encourage students to develop critical relationships with algorithmic cultures and to transparently show their contributions versus what the AI contributed. The article emphasizes the importance of process-oriented assignments, collaboration, and intellectual honesty in evaluating student learning. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
43. Phantoms of Citation: AI and the Death of the Author-Function.
- Author
-
Slater, Avery
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *GENERATIVE artificial intelligence , *HONESTY , *GENERATIVE pre-trained transformers , *MIND-wandering , *CHATGPT - Abstract
This article examines the problem of fabricated citations in AI-generated writing, focusing on large language models like ChatGPT. The author argues that these fake citations reveal issues with both the processing of natural language training data and writing in general. The implications of these false citations for the future of writing and the ethics of accreditation are explored, as well as the limitations and potential dangers of ChatGPT. The article discusses concerns in various fields, such as finance and medicine, and addresses the debate surrounding AI-generated text in scholarly manuscripts and the legal responsibilities of authors. It concludes by discussing the nature of language models and their relationship to human language, suggesting that they are fulfilling poststructuralist predictions about the future of literature. The article also explores the use of language models, specifically the LLM, in generating text and the issues that arise from it. It introduces the concept of "hallucination" to describe the inaccuracies and fabricated citations produced by AI models, while also discussing the controversy surrounding this term and proposing alternative designations. The challenges posed by AI-generated text and their implications for authorship, plagiarism, and privacy are highlighted, emphasizing the need for ethical considerations and a deeper understanding of the role of AI in textual technologies. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
44. ChatGPT and the Territory of Contemporary Narratology; or, A Rhetorical River Runs through It.
- Author
-
Phelan, James
- Subjects
- *
CHATGPT , *NARRATOLOGY , *LANGUAGE models , *GAZE - Abstract
This article examines the use of artificial intelligence (AI), specifically ChatGPT, in generating narrative texts. It discusses the difference between structuralist narratology and rhetorical narratology, emphasizing the impact of the latter on users' perception of AI-generated narratives. While ChatGPT can produce narratives with recognizable elements, it lacks the agency, purpose, and audience typically associated with human-authored narratives. Users often attribute these components to the AI-generated texts due to their own prompts. The article concludes that the rhetorical model captures an important aspect of narrative engagement. Additionally, the article explores the limitations of ChatGPT in generating unreliable narration, arguing that its text-centric approach fails to recognize the relationship between the author, narrator, and audience. The author contrasts ChatGPT's analysis of a passage from Sandra Cisneros's "Barbie-Q" with their own rhetorical analysis, highlighting the significance of shared knowledge between author and audience in conveying unreliability. Ultimately, the author suggests that understanding the communication of texts requires considering the broader context of author, audience, occasion, and purpose. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
45. ChatGPT and the Writing of Philosophical Essays: An Experimental Study with Prospective Teachers on How the Turing Test Inverted.
- Author
-
Bohlmann, Markus and Berger, Annika M.
- Subjects
- *
CHATGPT , *WRITING instruction , *ARTIFICIAL intelligence , *SEMANTICS , *MIXED methods research - Abstract
Text-generative AI systems have become important semantic agents with ChatGPT. We conducted a series of experiments to learn what teachers' conceptions of text-generative AI are in relation to philosophical texts. In our main experiment, using mixed methods, we had twenty-four high school students write philosophical essays, which we then randomized with essays generated from the same prompt by ChatGPT. We had ten prospective teachers assess these essays. They were able to tell whether an essay was written by an AI or a student with 78.7 percent accuracy, which is better than the OpenAI Classifier. Interestingly, however, they used criteria like argumentative and logical flawlessness and neutrality. We concluded from this that they are using an inverted Turing test and are no longer looking for rationality in machines but for irrationality in humans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Writing with ChatGPT.
- Author
-
Mouser, Ricky
- Subjects
- *
CHATGPT , *LANGUAGE models , *STUDENT assignments , *PLAGIARISM , *WRITING instruction - Abstract
Many instructors see the use of LLMs like ChatGPT on course assignments as a straightforward case of cheating, and try hard to prevent their students from doing so by including new warnings of consequences on their syllabi, turning to iffy plagiarism detectors, or scheduling exams to occur in-class. And the use of LLMs probably is cheating, given the sorts of assignments we are used to giving and the sorts of skills we take ourselves to be instilling in our students. But despite legitimate ethical and pedagogical concerns, the case that LLMs should never be used in academic contexts is quite difficult to see. Many primary and secondary schools are cutting back their writing instruction in an effort to teach to the test; at the same time, many high-end knowledge workers are already quietly expected to leverage their productivity with LLMs. To prepare students for an ever-changing world, we probably do have to teach them at least a little bit about writing with ChatGPT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Student Voices on GPT-3, Writing Assignments, and the Future College Classroom.
- Author
-
Kim, Bada, Robins, Sarah, and Huang, Jihui
- Subjects
- *
CHATGPT , *LANGUAGE models , *WRITING instruction , *STUDENT assignments , *HIGHER education - Abstract
This paper presents a summary and discussion of an assignment that asked students about the impact of Large Language Models on their college education. Our analysis summarizes students' perceptions of GPT-3, categorizes their proposals for modifying college courses, and identifies their stated values about their college education. Furthermore, this analysis provides a baseline for tracking students' attitudes toward LLMs and contributes to the conversation on student perceptions of the relationship between writing and philosophy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Don't Believe the Hype: Why ChatGPT May Breathe New Life into College Writing Instruction.
- Author
-
Mitchell-Yellin, Benjamin
- Subjects
- *
CHATGPT , *WRITING instruction , *HIGHER education , *LANGUAGE models , *TEACHING - Abstract
This paper argues that the threat Large Language Models (LLMs), such as ChatGPT, pose to writing instruction is not entirely new, and is in fact a welcome disruption to the way writing instruction is typically delivered. This new technology seems to be prompting many instructors to question whether essay responses to paper prompts reflect students' own thinking and learning. This uneasiness is long overdue, and the hope is that it leads instructors to explore evidence-based best practices familiar from the scholarship of teaching and learning. We have known for some time how to better teach our students to think and write. Perhaps the arrival of LLMs will get us to put these lessons into widespread practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Exploring the competence of ChatGPT for customer and patient service management.
- Author
-
Haleem, Abid, Javaid, Mohd, and Singh, Ravi Pratap
- Subjects
- *
CHATGPT , *CUSTOMER services , *ARTIFICIAL intelligence , *MEDICAL care , *TECHNOLOGICAL innovations - Abstract
The modern language generation model ChatGPT, created by OpenAI, is recognised for its capacity to comprehend context and produce pertinent content. This model is built on the transformer architecture, which enables it to process massive volumes of data and produce text that is both cohesive and illuminating. Service is a crucial component everywhere, as it provides the basis for establishing client rapport and offering aid and support. In healthcare, the application of ChatGPT for patient service support has been one of the most significant advances in recent years. ChatGPT can help overcome language obstacles and improve patient satisfaction by facilitating communication with healthcare personnel and understanding of care. It can assist in enhancing the entire patient experience by offering personalised information and support to patients and making it more straightforward for them to communicate with healthcare professionals. Its goal can be to expedite and streamline service by responding to customers promptly and accurately. Businesses of all sizes increasingly use ChatGPT, since it allows them to provide 24/7 customer support without requiring human contact. This paper briefly discusses ChatGPT and the need for better services. Various perspectives on improving customer and patient services through ChatGPT are discussed. The article also discusses the major key enablers of ChatGPT for refining customer and patient assistance. Further, the paper identifies and discusses the critical application areas of ChatGPT for customer and patient service. With its ability to handle several requests simultaneously, respond quickly and accurately to client questions, and gain knowledge from every interaction, ChatGPT is revolutionising customer and patient service. Its accessibility and compatibility with various communication channels make it a desirable solution for businesses looking to improve support.
As technology advances, ChatGPT is positioned to become an essential tool for businesses wishing to provide speedy and customised service. Although ChatGPT may give convincing responses, the risk of providing inaccurate or outdated information poses a problem for its use in service jobs that require accurate and up-to-date information. In future, various services will become better and more efficient due to ChatGPT and AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. How to optimize the systematic review process using AI tools.
- Author
-
Fabiano, Nicholas, Gupta, Arnav, Bhambra, Nishaant, Luu, Brandon, Wong, Stanley, Maaz, Muhammad, Fiedorowicz, Jess G., Smith, Andrew L., and Solmi, Marco
- Subjects
- *
ARTIFICIAL intelligence , *ACADEMIC discourse , *CHATGPT - Abstract
Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow for gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion or exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that authors must report, in their methods section, all AI tools used at each stage to ensure replicability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
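The title-and-abstract screening stage described in the abstract above can be sketched as a small pipeline. This is a minimal, hypothetical illustration: the keyword-based `is_relevant` rule stands in for the LLM judgment an AI tool would supply, and all function and field names here are assumptions for illustration, not the API of any tool the article reviews.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str

def is_relevant(record: Record, include_terms: set[str], exclude_terms: set[str]) -> bool:
    """Toy screening rule: keep a record if its title or abstract mentions
    any inclusion term and no exclusion term. A real pipeline would replace
    this with a (human-verified) LLM or ML classifier judgment."""
    text = f"{record.title} {record.abstract}".lower()
    if any(term in text for term in exclude_terms):
        return False
    return any(term in text for term in include_terms)

def screen(records: list[Record], include_terms: set[str], exclude_terms: set[str]) -> list[Record]:
    # First-pass title/abstract screen; retained records still need human review,
    # and (per the article) any AI tool used here must be reported in the methods.
    return [r for r in records if is_relevant(r, include_terms, exclude_terms)]

records = [
    Record("ChatGPT for systematic reviews", "We test LLM screening of abstracts."),
    Record("Bird migration patterns", "A field study of seasonal movement."),
]
kept = screen(records, include_terms={"llm", "chatgpt"}, exclude_terms={"protocol only"})
print([r.title for r in kept])  # → ['ChatGPT for systematic reviews']
```

Keeping the screening criteria as explicit, logged parameters (rather than ad-hoc prompts) is what makes the step replicable, which is the article's central reporting requirement.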