1. Can We Trust AI Agents? An Experimental Study Towards Trustworthy LLM-Based Multi-Agent Systems for AI Ethics
- Authors
de Cerqueira, José Antonio Siqueira; Agbese, Mamia; Rousi, Rebekah; Xi, Nannan; Hamari, Juho; Abrahamsson, Pekka
- Subjects
Computer Science - Computers and Society, Computer Science - Artificial Intelligence, I.2.0, K.6.3
- Abstract
AI-based systems, including Large Language Models (LLMs), impact millions by supporting diverse tasks but face issues like misinformation, bias, and misuse. Ethical AI development is crucial as new technologies and concerns emerge, but objective, practical ethical guidance remains debated. This study examines LLMs in developing ethical AI systems, assessing how trustworthiness-enhancing techniques affect ethical AI output generation. Using the Design Science Research (DSR) method, we identify techniques for LLM trustworthiness: multi-agents, distinct roles, structured communication, and multiple rounds of debate. We design the multi-agent prototype LLM-BMAS, where agents engage in structured discussions on real-world ethical AI issues from the AI Incident Database. The prototype's performance is evaluated through thematic analysis, hierarchical clustering, ablation studies, and source code execution. Our system generates around 2,000 lines per run, compared to only 80 lines in the ablation study. Discussions reveal terms like bias detection, transparency, accountability, user consent, GDPR compliance, fairness evaluation, and EU AI Act compliance, showing LLM-BMAS's ability to generate thorough source code and documentation addressing often-overlooked ethical AI issues. However, practical challenges in source code integration and dependency management may limit smooth system adoption by practitioners. This study aims to shed light on enhancing trustworthiness in LLMs to support practitioners in developing ethical AI-based systems.
- Published
2024
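
The abstract names four trustworthiness-enhancing techniques: multiple agents, distinct roles, structured communication, and multiple rounds of debate. The sketch below illustrates how such a debate loop could be wired up; it is not the authors' LLM-BMAS implementation, and `call_llm` is a hypothetical placeholder to be replaced with a real chat-completion client. The role prompts and incident text are invented for illustration only.

```python
# Minimal sketch of a multi-agent, multi-round structured debate over an AI incident.
# Assumptions: `call_llm` is a stub standing in for any chat-completion API; the
# roles, prompts, and incident are illustrative, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Agent:
    name: str           # distinct role, e.g. "AI Ethicist" or "Software Engineer"
    system_prompt: str  # role instructions sent with every turn


def call_llm(system_prompt: str, messages: list[dict]) -> str:
    """Hypothetical LLM call; swap in a real client (OpenAI, local model, etc.)."""
    return f"[{system_prompt[:30]}...] response to: {messages[-1]['content'][:60]}"


def debate(agents: list[Agent], incident: str, rounds: int = 3) -> list[dict]:
    """Run a structured debate: each agent, in turn, sees the shared transcript."""
    transcript = [{"speaker": "moderator",
                   "content": f"Incident under discussion: {incident}"}]
    for round_no in range(1, rounds + 1):
        for agent in agents:
            # Structured communication: every turn is a (speaker, content) record
            # appended to a shared transcript that all agents can read.
            prompt = [{"role": "user",
                       "content": "\n".join(f"{t['speaker']}: {t['content']}"
                                            for t in transcript)}]
            reply = call_llm(agent.system_prompt, prompt)
            transcript.append({"speaker": f"{agent.name} (round {round_no})",
                               "content": reply})
    return transcript


if __name__ == "__main__":
    agents = [
        Agent("AI Ethicist",
              "Identify ethical risks (bias, consent, GDPR, EU AI Act) and propose mitigations."),
        Agent("Software Engineer",
              "Translate the ethical requirements into concrete code and documentation tasks."),
    ]
    incident = "A hiring model systematically downranks applicants from one demographic group."
    for turn in debate(agents, incident):
        print(f"{turn['speaker']}: {turn['content']}")
```

Under these assumptions, each round lets every role respond to the full shared transcript, which is one plausible way to realize "structured discussions" and "multiple rounds of debate" as described in the abstract.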