
Expert Lectures at NPS on Ethical Behavior in Lethal Autonomous Systems

Naval Postgraduate School (NPS) guest lecturer, Georgia Tech Regents’ Professor Ronald C. Arkin, presents a discussion on “Governing Ethical Behavior in Lethal Autonomous Systems” in Ingersoll Hall on the university campus. The lecture was sponsored by the Consortium for Robotics and Unmanned Systems Education and Research (CRUSER).

Throughout the history of mankind, war has been a constant aspect of the human condition. Equally persistent is the impact of technological advances in weaponry on the outcome of conflicts. From just the right stick to swing at an enemy’s head to the latest supersonic cruise missile, technology has played its part in shaping war.

As technology advances, armed conflict changes as well, and few new technologies have been embraced by the DoD as readily as unmanned systems. Autonomous robots once relegated to science fiction are now a reality, and among the many questions they raise is this: how do these systems change the ethics of war?

Addressing this question, the Naval Postgraduate School (NPS), at the forefront of the development of unmanned systems, hosted a lecture titled “Governing Ethical Behavior in Lethal Autonomous Systems,” presented by Georgia Tech Regents’ Professor Ronald C. Arkin, a leading expert in the field.

The lecture, sponsored by NPS’ Consortium for Robotics and Unmanned Systems Education and Research (CRUSER), directly addressed the question of ethical behavior in unmanned systems with lethal capabilities, and touched on the dilemmas posed by the increasing integration of systems that grow deadlier and more autonomous as they develop.

“That’s the insidious aspect of autonomous systems,” said Arkin. “They are creeping up on us through continuous development and enhancements of unmanned capabilities.”

The idea of lethal autonomous systems conjures apocalyptic, science-fiction images of humans losing control of the machines they built to serve them. For Arkin, the reality of this issue is much more than the plot of a blockbuster movie.

“Lethal autonomy is already here,” said Arkin. “It already exists and it is already used in the military.”

Arkin said that, whether we want them or not, these types of systems are being developed by the United States, our allies, and even our enemies. He argued that these systems have already arrived, and that the issue of ethical behavior must be addressed.

Arkin pointed out that the pace of war has outstripped a human’s ability to react, that combat has grown more lethal, and that the benefits of unmanned systems are becoming more evident.

“Intelligence is being pushed further and further towards the tip of the spear,” said Arkin. “We are fundamentally limited by the speed at which we can process information.”

However, Arkin noted that these realities made him increasingly concerned about the effects such systems would have on the nature of warfare, a concern that sharpened in the first years of the 21st century as he watched his own projects succeed.

“We were seeing that our work was making a difference and it made me think long and hard about the kind of research that I should be doing,” said Arkin. “I wanted to make sure that we as a community became interested in discussing the actual issues of what happens when we succeed, not if we succeed.”

Arkin offered solutions, possible scenarios, and examples of situations in which the use of these systems could be quite beneficial. He pointed out that humans are not inherently designed for warfare; in many instances, when we go to war, we essentially ask our soldiers to go against their survival instincts and risk their lives in battle. Also, Arkin said, humans are filled with emotions, such as fear, hate and revenge, that affect their effectiveness in combat … machines don’t have those characteristics.

“We have a responsibility as scientists to be able to try and find ways in which we can reduce man’s inhumanity to man – and that includes the battlefield,” added Arkin. “It especially should include the battlefield, because that’s where humanity, some would say, is at its best but also at its worst.”

Of course, no system is fail-safe, Arkin conceded. There will be instances in which these systems fail, but if they make fewer mistakes than a human being does, that would be a success.

Touching on the philosophical aspect of the subject, the computer scientist closed with poignant questions. “Should warfighters be robots, or in many respects, don’t we try to re-engineer human beings to comply in ways that are inherently unnatural in the conduct of war?” He continued, “And should robots be soldiers? Could we actually create combat systems that behaved in a more humane manner than human beings do in the battlefield?”

MEDIA CONTACT
Office of University Communications
1 University Circle
Monterey, CA 93943
(831) 656-1068
https://nps.edu/office-of-university-communications
pao@nps.edu