
Why should we ever automate moral decision making?

Authors :
Conitzer, Vincent
Source :
In: Ethics and Trust in Human-AI Collaboration: Socio-Technical Approaches, 2023
Publication Year :
2024

Abstract

While people generally trust AI to make decisions in various aspects of their lives, concerns arise when AI is involved in decisions with significant moral implications. The absence of a precise mathematical framework for moral reasoning intensifies these concerns, as ethics often defies simplistic mathematical models. Unlike fields such as logical reasoning, reasoning under uncertainty, and strategic decision-making, which have well-defined mathematical frameworks, moral reasoning lacks a broadly accepted framework. This absence raises questions about the confidence we can place in AI's moral decision-making capabilities. The environments in which AI systems are typically trained today seem insufficiently rich for such a system to learn ethics from scratch, and even if we had an appropriate environment, it is unclear how we might bring about such learning. An alternative approach involves AI learning from human moral decisions. This learning process can involve aggregating curated human judgments or demonstrations in specific domains, or leveraging a foundation model fed with a wide range of data. Still, concerns persist, given the imperfections in human moral decision making. Given this, why should we ever automate moral decision making -- is it not better to leave all moral decision making to humans? This paper lays out a number of reasons why we should expect AI systems to engage in decisions with a moral component, with brief discussions of the associated risks.

Details

Database :
arXiv
Journal :
In: Ethics and Trust in Human-AI Collaboration: Socio-Technical Approaches, 2023
Publication Type :
Report
Accession number :
edsarx.2407.07671
Document Type :
Working Paper