Taxonomy of Risks posed by Language Models

Authors :
Kasirzadeh, Atoosa
Publication Year :
2022

Abstract

Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation with the goal of ensuring that language models are developed responsibly.

Details

Database :
OAIster
Notes :
text, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1357548374
Document Type :
Electronic Resource