
Strong and weak alignment of large language models with human values

Authors :
Mehdi Khamassi
Marceau Nahon
Raja Chatila
Source :
Scientific Reports, Vol 14, Iss 1, Pp 1-17 (2024)
Publication Year :
2024
Publisher :
Nature Portfolio, 2024.

Abstract

Minimizing the negative impacts of Artificial Intelligence (AI) systems on human societies without human supervision requires them to be able to align with human values. However, most current work addresses this issue only from a technical point of view, e.g., improving current methods relying on reinforcement learning from human feedback, and neglects what alignment means and what is required for it to occur. Here, we propose to distinguish strong and weak value alignment. Strong alignment requires cognitive abilities (either human-like or different from humans’) such as understanding and reasoning about agents’ intentions and about their ability to causally produce desired effects. We argue that this is required for AI systems such as large language models (LLMs) to recognize situations in which human values risk being flouted. To illustrate this distinction, we present a series of prompts showing ChatGPT’s, Gemini’s and Copilot’s failures to recognize some of these situations. We moreover analyze word embeddings to show that the nearest neighbors of some human values in LLMs differ from humans’ semantic representations. We then propose a new thought experiment, which we call “the Chinese room with a word transition dictionary”, extending John Searle’s famous proposal. Finally, we mention promising research directions towards weak alignment, which could produce statistically satisfying answers in a number of common situations, though so far without ensuring any truth value.
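The nearest-neighbor comparison mentioned in the abstract can be illustrated with a minimal sketch. The vectors, vocabulary, and value of k below are placeholder assumptions for illustration, not the authors’ actual embedding model or word list; the idea is simply to rank a value word’s neighbors by cosine similarity in an embedding space and compare the result with human semantic associations.

```python
import numpy as np

# Placeholder 4-d vectors standing in for real LLM word embeddings;
# the paper's actual embedding model and vocabulary are not reproduced here.
emb = {
    "honesty":   np.array([0.9, 0.1, 0.0, 0.2]),
    "truth":     np.array([0.8, 0.2, 0.1, 0.1]),
    "deception": np.array([-0.7, 0.3, 0.2, 0.0]),
    "fairness":  np.array([0.2, 0.9, 0.1, 0.1]),
    "justice":   np.array([0.1, 0.8, 0.2, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def nearest_neighbors(word, k=3):
    """Rank every other vocabulary word by cosine similarity to `word`."""
    sims = [(other, cosine(emb[word], vec))
            for other, vec in emb.items() if other != word]
    return sorted(sims, key=lambda pair: pair[1], reverse=True)[:k]

# Comparing such neighbor lists against human semantic norms is the
# gist of the analysis described in the abstract.
print(nearest_neighbors("honesty"))
```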

Details

Language :
English
ISSN :
2045-2322
Volume :
14
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Scientific Reports
Publication Type :
Academic Journal
Accession number :
edsdoj.47039c97e14d4395a2ad7f168faadf53
Document Type :
Article
Full Text :
https://doi.org/10.1038/s41598-024-70031-3