Operating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence.
- Authors
- Spayne, Peter; Lacey, Laura; Cahillane, Marie; Saddington, Alistair
- Subjects
- LETHAL autonomous weapons; WEAPONS systems; ARTIFICIAL intelligence; SAFETY regulations; MARITIME law
- Abstract
The Ministry of Defence, specifically the Royal Navy, uses the ‘Duty Holder Structure’ to manage how it complies with deviations from maritime laws and health and safety regulations where military necessity requires it. The output statements ensuring compliance are the ‘safe to operate’ certification for all platforms and equipment, and the ‘operate safely’ declaration for suitably trained people within the organisation. Together these form the Safety Case. Consider a handgun: the weapon has calibration, design and maintenance certification to prove it is safe to operate, and the soldier is trained and qualified as competent to make predictable judgement calls on how and when to pull the trigger (operate safely). Picture those statements as separate circles drawn on a Venn diagram. As levels of autonomy and complexity are dialled up, the two circles converge. Should autonomy increase to the point that the decision to fire is under the control of an Artificial Intelligence within the weapon’s software, the two circles will overlap. This paper details research conclusions within the overlap, and proposes a new methodology able to certify that an AI-based autonomous weapons system is “safe to operate itself safely” when in an autonomous state. [ABSTRACT FROM AUTHOR]
- Published
- 2024