STELA: a community-centred approach to norm elicitation for AI alignment.
- Source :
- Scientific reports [Sci Rep] 2024 Mar 19; Vol. 14 (1), pp. 6616. Date of Electronic Publication: 2024 Mar 19.
- Publication Year :
- 2024
Abstract
- Value alignment, the process of ensuring that artificial intelligence (AI) systems are aligned with human values and goals, is a critical issue in AI research. Existing scholarship has mainly studied how to encode moral values into agents to guide their behaviour. Less attention has been given to the normative questions of whose values and norms AI systems should be aligned with, and how these choices should be made. To tackle these questions, this paper presents the STELA process (SocioTEchnical Language agent Alignment), a methodology resting on sociotechnical traditions of participatory, inclusive, and community-centred processes. For STELA, we conduct a series of deliberative discussions with four historically underrepresented groups in the United States in order to understand their diverse priorities and concerns when interacting with AI systems. The results of our research suggest that community-centred deliberation on the outputs of large language models is a valuable tool for eliciting latent normative perspectives directly from differently situated groups. In addition to having the potential to engender an inclusive process that is robust to the needs of communities, this methodology can provide rich contextual insights for AI alignment.
(© 2024. The Author(s).)
- Subjects :
- Humans
- Morals
- Rest
- Artificial Intelligence
- Language
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 14
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Scientific reports
- Publication Type :
- Academic Journal
- Accession number :
- 38503818
- Full Text :
- https://doi.org/10.1038/s41598-024-56648-4