
RECAST: Interactive Auditing of Automatic Toxicity Detection Models

Authors :
Wright, Austin P.
Shaikh, Omar
Park, Haekyu
Epperson, Will
Ahmed, Muhammed
Pinel, Stephane
Yang, Diyi
Chau, Duen Horng
Publication Year :
2020

Abstract

As toxic language becomes increasingly pervasive online, there has been growing interest in leveraging advances in natural language processing (NLP), particularly very large transformer models, to automatically detect and remove toxic comments. Despite fairness concerns, a lack of adversarial robustness, and limited prediction explainability in deep learning systems, there is currently little work on auditing these systems and helping both developers and users understand how they work. We present our ongoing work on RECAST, an interactive tool for examining toxicity detection models by visualizing explanations for predictions and providing alternative wordings for detected toxic speech.
Comment: 8 pages, 3 figures; in Proceedings of the Eighth International Workshop of Chinese CHI.

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2001.01819
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3403676.3403691