1. ChatGPT als Recherchetool?: Fehlertypologie, technische Ursachenanalyse und hochschuldidaktische Implikationen.
- Author
- Oertner, Monika
- Subjects
- *GENERATIVE artificial intelligence, *CHATBOTS, *CHATGPT, *ARTIFICIAL intelligence, *HIGHER education, *HEALTH literacy
- Abstract
"ChatGPT may produce inaccurate information about people, places, or facts," Open AI warns when using its chatbot. Being able to assess this unreliability is part of AI literacy, which recently became a future skill at university, understood as a precondition for future careers. The article offers an error typology, which is linked to AI's technical functionality. Twenty categories of flaws in AI information are assigned to three fields of causative factors: training data, generation process, and programming. Additionally, a paradoxical mechanism pattern in user psychology is outlined – automation bias vs. Eliza effect –, and Harry Frankfurt's concept of "bullshit" is introduced, which seems tailor-made for AI information. Some error types, especially those caused in the generation process itself, must be regarded as non-recoverable. The use of generative AI as an information and research tool therefore harbors a substantial and lasting risk potential – for competence development in higher education as well as for our knowledge-based society as a whole. [ABSTRACT FROM AUTHOR]
- Published
- 2024