
Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?

Authors:
Longpre, Shayne
Mahari, Robert
Obeng-Marnu, Naana
Brannon, William
South, Tobin
Gero, Katy
Pentland, Sandy
Kabbara, Jad
Source:
Proceedings of ICML 2024, in PMLR 235:32711-32725. URL: https://proceedings.mlr.press/v235/longpre24b.html
Publication Year:
2024

Abstract

New capabilities in foundation models are owed in large part to massive, widely sourced, and under-documented training data collections. Existing data collection practices have led to challenges in tracing authenticity, verifying consent, preserving privacy, addressing representation and bias, respecting copyright, and, more broadly, developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models' limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.

Comment: ICML 2024 camera-ready version (Spotlight paper). 9 pages, 2 tables

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.12691
Document Type:
Working Paper