Towards Integrating Fairness Transparently in Industrial Applications
- Publication Year :
- 2020
- Publisher :
- arXiv, 2020.
Abstract
- Numerous Machine Learning (ML) bias-related failures in recent years have led to scrutiny of how companies incorporate aspects of transparency and accountability in their ML lifecycles. Companies have a responsibility to monitor ML processes for bias and mitigate any bias detected, ensure business product integrity, preserve customer loyalty, and protect brand image. Challenges specific to industry ML projects can be broadly categorized into principled documentation, human oversight, and the need for mechanisms that enable information reuse and improve cost efficiency. We highlight specific roadblocks and propose conceptual solutions on a per-category basis for ML practitioners and organizational subject matter experts. Our systematic approach tackles these challenges by integrating mechanized and human-in-the-loop components in bias detection, mitigation, and documentation of projects at various stages of the ML lifecycle. To motivate the implementation of our system -- SIFT (System to Integrate Fairness Transparently) -- we present its structural primitives with an example real-world use case on how it can be used to identify potential biases and determine appropriate mitigation strategies in a participatory manner.
- Comment: 14 pages, 4 figures
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....57614a9f926d2a3741dd9302c98db66c
- Full Text :
- https://doi.org/10.48550/arxiv.2006.06082