Towards Risk Modeling for Collaborative AI
- Source :
- WAIN@ICSE
- Publication Year :
- 2021
Abstract
- Collaborative AI systems aim to work together with humans in a shared space to achieve a common goal. This setting introduces potentially hazardous circumstances, since physical contact could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the utmost importance. The challenges associated with achieving this goal become even more severe when such systems rely on machine learning components rather than on top-down, rule-based AI. In this paper, we introduce a risk modeling approach tailored to Collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods that, in turn, feed the risk model with insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task. (4 pages, 2 figures)
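- The abstract describes a risk model structured around goals, risk events, and domain-specific indicators, kept up to date by run-time evidence. A minimal Python sketch of such a structure is given below; the class and field names (Indicator, RiskEvent, Goal, RiskModel, update_from_evidence) and the example indicator are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a risk model with goals, risk events, and
# domain-specific indicators, updated from run-time evidence.
# All names and thresholds here are illustrative assumptions.

@dataclass
class Indicator:
    name: str                 # e.g. arm speed measured near the operator
    threshold: float          # domain-specific bound beyond which a hazard is exposed
    observed: float = 0.0     # latest value extracted from run-time evidence

    def violated(self) -> bool:
        return self.observed > self.threshold

@dataclass
class RiskEvent:
    description: str
    indicators: List[Indicator] = field(default_factory=list)

    def triggered(self) -> bool:
        # A risk event is considered exposed if any of its indicators is violated.
        return any(ind.violated() for ind in self.indicators)

@dataclass
class Goal:
    description: str
    risk_events: List[RiskEvent] = field(default_factory=list)

@dataclass
class RiskModel:
    goals: List[Goal] = field(default_factory=list)

    def update_from_evidence(self, evidence: Dict[str, float]) -> List[RiskEvent]:
        """Feed run-time evidence (indicator name -> observed value) back into
        the model and return the risk events that are currently triggered."""
        triggered = []
        for goal in self.goals:
            for event in goal.risk_events:
                for ind in event.indicators:
                    if ind.name in evidence:
                        ind.observed = evidence[ind.name]
                if event.triggered():
                    triggered.append(event)
        return triggered

# Example loosely based on the paper's running scenario: a robotic arm with a
# visual perception component collaborating with a human operator.
speed = Indicator(name="arm_speed_near_operator_m_s", threshold=0.25)
contact = RiskEvent(description="Arm contacts the operator", indicators=[speed])
safe_task = Goal(description="Complete the assembly task without harming the operator",
                 risk_events=[contact])
model = RiskModel(goals=[safe_task])
print(model.update_from_evidence({"arm_speed_near_operator_m_s": 0.4}))
```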
- Subjects :
- Shared space
FOS: Computer and information sciences
Visual perception
Computer science
Computer Science - Artificial Intelligence
Domain (software engineering)
Task (project management)
Software Engineering (cs.SE)
Computer Science - Software Engineering
Harm
Artificial Intelligence (cs.AI)
Risk analysis (engineering)
Component (UML)
Task analysis
Robotic arm
Details
- Language :
- English
- Database :
- OpenAIRE
- Journal :
- WAIN@ICSE
- Accession number :
- edsair.doi.dedup.....67583d92fc9c465ebf3dc5b4d46943d6