A Framework for Assurance Audits of Algorithmic Systems
- Source: The 2024 ACM Conference on Fairness, Accountability, and Transparency
- Publication Year: 2024
Abstract
An increasing number of regulations propose AI audits as a mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently lacks agreed-upon practices, procedures, taxonomies, and standards. We propose the criterion audit as an operationalizable external audit framework for compliance and assurance. We model elements of this approach after financial auditing practices, and argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values. We discuss the necessary conditions for the criterion audit and provide a procedural blueprint for performing an audit engagement in practice. We illustrate how this framework can be adapted to current regulations by deriving the criteria on which bias audits can be performed for in-scope hiring algorithms, as required by the recently effective New York City Local Law 144 of 2021. We conclude by offering a critical discussion of the benefits, inherent limitations, and implementation challenges of applying practices from the more mature financial auditing industry to AI auditing, where robust guardrails against quality assurance issues are only starting to emerge. Our discussion, informed by experience performing these audits in practice, highlights the critical role that an audit ecosystem plays in ensuring the effectiveness of audits.
- Subjects: Computer Science - Computers and Society
Details
- Database: arXiv
- Journal: The 2024 ACM Conference on Fairness, Accountability, and Transparency
- Publication Type: Report
- Accession number: edsarx.2401.14908
- Document Type: Working Paper
- Full Text: https://doi.org/10.1145/3630106.3658957