
Using Explainability to Help Children Understand Gender Bias in AI

Authors :
E. Vidal
Gaspar Isaac Melsión
Iolanda Leite
Ilaria Torre
Universitat Politècnica de Catalunya. Departament d'Enginyeria Electrònica
Universitat Politècnica de Catalunya. BCN SEER - Barcelona Science and Engineering Education Research Group
Source :
IDC, UPCommons. Portal del coneixement obert de la UPC, Universitat Politècnica de Catalunya (UPC)
Publication Year :
2021
Publisher :
ACM, 2021.

Abstract

The final publication is available at ACM via http://dx.doi.org/10.1145/3459990.3460719

Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination that these systems might exert due to unconscious bias present in the data, for example regarding gender and race. Whilst this issue has been proposed as an essential subject to be included in new AI curricula for schools, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raise awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, age 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination, increasing their ability to recognize biased predictions when they interact with the interpretable model, highlighting its suitability for educational programs.

Peer Reviewed

Sustainable Development Goals::4 - Quality Education::4.4 - By 2030, substantially increase the number of youth and adults who have the relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship
Sustainable Development Goals::4 - Quality Education
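
For context on the technique named in the abstract: Grad-CAM produces a heatmap over the input image showing which regions most influenced the classifier's prediction, which is what lets the platform "visually explain" biased decisions to children. The following is a minimal sketch of Grad-CAM in PyTorch for a generic image classifier; the model (resnet18), target layer, and image file are illustrative assumptions, not the actual platform or dataset used in the paper.

# Minimal Grad-CAM sketch (PyTorch). Model, layer, and image path are
# illustrative assumptions, not the paper's actual educational platform.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def grad_cam(model, image_tensor, target_layer, class_idx=None):
    """Return a normalized Grad-CAM heatmap for one image (1 x C x H x W)."""
    activations, gradients = [], []

    # Hooks capture the target layer's feature maps and their gradients.
    fwd = target_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image_tensor)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()

    model.zero_grad()
    logits[0, class_idx].backward()  # gradient of the chosen class score

    fwd.remove()
    bwd.remove()

    # Weight each feature map by its spatially averaged gradient, then ReLU.
    acts, grads = activations[0], gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))

    # Upsample to the input resolution and normalize to [0, 1] for overlay.
    cam = F.interpolate(cam, size=image_tensor.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach(), class_idx

if __name__ == "__main__":
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
    heatmap, pred = grad_cam(model, img, model.layer4[-1])
    print(f"Predicted class {pred}; heatmap shape {tuple(heatmap.shape)}")

In a setting like the one described in the abstract, the heatmap would be overlaid on the input photo so learners can see, for example, whether a gender classifier is attending to faces or to spurious background cues, making a biased prediction visible rather than abstract.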

Details

Database :
OpenAIRE
Journal :
Interaction Design and Children
Accession number :
edsair.doi.dedup.....eff3357f50cfb809eb5deca00da2f932
Full Text :
https://doi.org/10.1145/3459990.3460719