
Operating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence.

Authors :
Spayne, Peter
Lacey, Laura
Cahillane, Marie
Saddington, Alistair
Source :
Defence Studies. Oct 2024, p1-35. 35p. 6 Illustrations.
Publication Year :
2024

Abstract

The Ministry of Defence, and specifically the Royal Navy, uses the ‘Duty Holder Structure’ to manage compliance where military necessity requires deviations from maritime law and health and safety regulations. The outputs of this compliance process are a ‘safe to operate’ certification for all platforms and equipment, and an ‘operate safely’ declaration for suitably trained people within the organisation. Together these form the Safety Case. Consider a handgun: the weapon carries calibration, design and maintenance certification proving it is ‘safe to operate’, and the soldier is trained and qualified as competent to make predictable judgement calls on how and when to pull the trigger (‘operate safely’). Picture those two statements as separate circles on a Venn diagram. As levels of autonomy and complexity are dialled up, the two circles converge. Should autonomy increase to the point that the decision to fire is under the control of an artificial intelligence within the weapon’s software, the two circles will overlap. This paper details research conclusions within that overlap, and proposes a new methodology able to certify that an AI-based autonomous weapons system is “safe to operate itself safely” when in an autonomous state. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
1470-2436
Database :
Academic Search Index
Journal :
Defence Studies
Publication Type :
Academic Journal
Accession number :
180355938
Full Text :
https://doi.org/10.1080/14702436.2024.2415712