Use large language models to promote equity

Authors:
Pierson, Emma
Shanmugam, Divya
Movva, Rajiv
Kleinberg, Jon
Agrawal, Monica
Dredze, Mark
Ferryman, Kadija
Gichoya, Judy Wawira
Jurafsky, Dan
Koh, Pang Wei
Levy, Karen
Mullainathan, Sendhil
Obermeyer, Ziad
Suresh, Harini
Vafa, Keyon
Publication Year:
2023

Abstract

Advances in large language models (LLMs) have driven an explosion of interest in their societal impacts. Much of the discourse around how they will impact social equity has been cautionary or negative, focusing on questions like "how might LLMs be biased and how would we mitigate those biases?" This is a vital discussion: the ways in which AI generally, and LLMs specifically, can entrench biases have been well-documented. But equally vital, and much less discussed, is the more opportunity-focused counterpoint: "what promising applications do LLMs enable that could promote equity?" If LLMs are to enable a more equitable world, it is not enough just to play defense against their biases and failure modes. We must also go on offense, applying them positively to equity-enhancing use cases to increase opportunities for underserved groups and reduce societal discrimination. There are many choices that determine the impact of AI, and a fundamental choice very early in the pipeline is the problems we choose to apply it to. If we focus only later in the pipeline -- making LLMs marginally more fair as they facilitate use cases that intrinsically entrench power -- we will miss an important opportunity to guide them toward equitable impacts. Here, we highlight the emerging potential of LLMs to promote equity by presenting four newly possible, promising research directions, while keeping risks and cautionary points in clear view.

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.14804
Document Type:
Working Paper