" Autonomous weapons and stability "
Scharre, Paul
Stone, John Graham ; Betz, David James
Document Type : Dissertation
Record Number : 1103630
Doc. No : TLets806777
Main Entry : Scharre, Paul
Title & Author : Autonomous weapons and stability / Scharre, Paul; Stone, John Graham; Betz, David James
College : King's College London
Date : 2020
Degree : Ph.D.
Abstract :
Militaries around the globe are developing increasingly autonomous robotic weapons, raising the prospect of future fully autonomous weapons that could search for, select, and engage targets on their own. Autonomous weapons have been used in narrow circumstances to date, such as defending ships and vehicles under human supervision. By examining over 50 case studies of autonomous military and cyber systems in use or in development around the globe, this thesis concludes that widespread use of fully autonomous weapons would be a novel development in war. Autonomous weapons raise important considerations for crisis stability, escalation control, and war termination that have largely been underexplored to date. Fully autonomous weapons could affect stability in a number of ways. They could increase stability by replacing human decision-making with automation, linking political leaders more directly to tactical actions. Accidents with autonomous weapons could have the opposite effect, however, causing unintended lethal engagements that escalate a crisis or make it more difficult to terminate a conflict. Nine mechanisms by which fully autonomous weapons could affect stability are explored, four of which would enhance stability and five of which would undermine it. The significance of these mechanisms is assessed; the most prominent factor is the extent to which autonomous weapons increase or decrease human control. If autonomous weapons were likely to operate in a manner consistent with human intent in realistic military operational environments, then their widespread use would improve crisis stability, escalation control, and war termination by increasing political leaders' control over their military forces and reducing the risk of accidents or unauthorized escalatory actions. If, however, unanticipated action by autonomous weapons is likely in realistic environments, then their use would undermine crisis stability, escalation control, and war termination because of the potential for unintended lethal engagements that could escalate conflicts or make it more difficult to end wars.
The competing frameworks of normal accident theory and high-reliability organizations make different predictions about the likelihood of accidents with complex systems. Over 25 case studies of military and non-military complex systems in real-world environments are used to test these competing theories, including extensive U.S. experience with the Navy Aegis and Army Patriot air and missile defense systems and their associated safety track records. Experience with military and civilian autonomous systems suggests that three key conditions will undermine the reliability of fully autonomous weapons in wartime: (1) the absence of routine day-to-day experience under realistic conditions, since testing and peacetime environments will not perfectly replicate wartime conditions; (2) the presence of adversarial actors; and (3) the lack of human judgment to respond flexibly to novel situations. The thesis concludes that normal accident theory best describes the situation of fully autonomous weapons and that militaries are highly unlikely to achieve high-reliability operations with them. Unanticipated lethal actions are likely to be normal consequences of using fully autonomous weapons, and militaries can reduce, but not entirely eliminate, these risks.
Recent and potential future advances in artificial intelligence and machine learning are likely to exacerbate these risks, even as they enable more capable systems. This stems from the challenge of accurately predicting the behavior of machine learning systems in complex environments, their poor performance in response to novel situations, and their vulnerability to various forms of spoofing and manipulation. The widespread deployment of fully autonomous weapons is therefore likely to undermine stability because of the risk of unintended lethal engagements. Four policy and regulatory options are explored to mitigate this risk. Their likelihood of success is assessed based on analysis of over 40 historical cases of attempts to control weapons and the specific features of autonomous weapons. The thesis concludes with three potentially viable regulatory approaches to mitigate risks from fully autonomous weapons.
Added Entry : Betz, David James
Added Entry : Stone, John Graham
Added Entry : King's College London