Guidelines For Agentic AI Safety Volume 1: Agentic AI Safety Experts Focus Group - Sept. 2024
- Publication Year :
- 2024
Abstract
- Welcome to this draft first volume of our Safer Agentic AI Foundations guidelines, a work in progress. Our Working Group of 25 experts (see https://www.linkedin.com/groups/12966081/) is releasing these guidelines under a Creative Commons license, allowing free use and application by all for the benefit of humanity. The Working Group has employed a Weighted Factors Methodology to map the factors that can drive or inhibit safety in agentic systems, grounded in fundamental principles. We have used this same process many times before to produce a range of global standards, certifications, and guidelines for improving the ethical qualities of AI systems. We hope that this overview of the driving and inhibitory factors in agentic AI systems—those capable of independent decision-making and action—will strengthen awareness of the complexities involved. These issues should be accounted for when working with these advanced forms of machine intelligence. We warmly welcome your comments, feedback, and informal peer review; your input will be carefully considered as we develop the final guidelines. Should you want further information on agentic AI and its safety, we will be pleased to accommodate your request. We expect to release the full guidelines by November 2024. You can reach us at the addresses below and stay informed of our developments via our mailing list. Thank you for your interest and engagement.
Details
- Database :
- OAIster
- Notes :
- Text, English
- Publication Type :
- Electronic Resource
- Accession Number :
- edsoai.on1457294143
- Document Type :
- Electronic Resource