
Toward a Taxonomy of Trust for Probabilistic Machine Learning

Authors:
Broderick, Tamara
Gelman, Andrew
Meager, Rachael
Smith, Anna L.
Zheng, Tian
Source:
Grantee Submission. 2022.
Publication Year:
2022

Abstract

Probabilistic machine learning increasingly informs critical decisions in medicine, economics, politics, and beyond. To aid the development of trust in these decisions, we develop a taxonomy delineating where trust in an analysis can break down: (1) in the translation of real-world goals to goals on a particular set of training data, (2) in the translation of abstract goals on the training data to a concrete mathematical problem, (3) in the use of an algorithm to solve the stated mathematical problem, and (4) in the use of a particular code implementation of the chosen algorithm. We detail how trust can fail at each step and illustrate our taxonomy with two case studies. Finally, we describe a wide variety of methods that can be used to increase trust at each step of our taxonomy. The use of our taxonomy highlights steps where existing research work on trust tends to concentrate and also steps where building trust is particularly challenging. [This paper was published in "Science Advances."]

Details

Language:
English
Database:
ERIC
Journal:
Grantee Submission
Publication Type:
Report
Accession Number:
ED634088
Document Type:
Reports - Descriptive
Full Text:
https://doi.org/10.1126/sciadv.abn3999