
Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks

Authors:
Zhou, Jingyan
Deng, Jiawen
Mi, Fei
Li, Yitong
Wang, Yasheng
Huang, Minlie
Jiang, Xin
Liu, Qun
Meng, Helen
Source:
EMNLP 2022
Publication Year:
2022

Abstract

Research on open-domain dialog systems has advanced greatly thanks to neural models trained on large-scale corpora. However, such corpora often introduce various safety problems (e.g., offensive language, biases, and toxic behaviors) that significantly hinder the deployment of dialog systems in practice. Among these safety issues, social bias is especially complex to address because its negative impact on marginalized populations is usually expressed implicitly, and therefore requires normative reasoning and rigorous analysis. In this paper, we focus our investigation on social bias detection as a dialog safety problem. We first propose a novel Dial-Bias Frame for analyzing social bias in conversations pragmatically; it supports more comprehensive bias-related analyses than simple dichotomous annotations. Based on the proposed framework, we further introduce the CDial-Bias Dataset, which is, to our knowledge, the first well-annotated Chinese social bias dialog dataset. In addition, we establish several dialog bias detection benchmarks at different label granularities and for different input types (utterance-level and context-level). We show that the in-depth analyses and benchmarks provided by the Dial-Bias Frame are essential to bias detection tasks and can benefit the building of safe dialog systems in practice.
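The benchmarks distinguish utterance-level detection (classifying a response on its own) from context-level detection (classifying a response together with its preceding turn). Below is a minimal, hypothetical Python sketch of that distinction using a generic Chinese BERT classifier via Hugging Face Transformers; the model choice, label set, and example dialog are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: utterance-level vs. context-level inputs for a
# dialog bias classifier. Model, labels, and example turns are
# illustrative assumptions, not the paper's actual schema.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=4,  # e.g., irrelevant / anti-bias / neutral / biased (assumed)
)

context = "你觉得程序员怎么样？"        # preceding turn ("What do you think of programmers?")
response = "他们都只会加班，没有生活。"  # turn to classify ("They all just work overtime; no life.")

# Utterance-level input: the response is classified in isolation.
utt_inputs = tokenizer(response, return_tensors="pt", truncation=True)

# Context-level input: the context is paired with the response so the
# model can use pragmatic cues from the preceding turn.
ctx_inputs = tokenizer(context, response, return_tensors="pt", truncation=True)

with torch.no_grad():
    utt_pred = model(**utt_inputs).logits.argmax(-1).item()
    ctx_pred = model(**ctx_inputs).logits.argmax(-1).item()

print(utt_pred, ctx_pred)
```

Pairing the context with the response lets a classifier pick up pragmatic cues (e.g., agreement with a biased prompt) that an utterance-level view can miss, which is the motivation the abstract gives for evaluating both input types.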

Details

Database:
arXiv
Journal:
EMNLP 2022
Publication Type:
Report
Accession number:
edsarx.2202.08011
Document Type:
Working Paper