
Dimensions of Autonomous Decision-Making


The Center for Naval Analyses (CNA) recently published Dimensions of Autonomous Decision-making: A First Step in Transforming the Policies and Ethics Principles Regarding Autonomous Systems into Practical System Engineering Requirements. The report is the output of the project that Dr. Michael Stumborg of CNA described at the Policy and Ethics of Intelligent Autonomous Systems Technical Exchange Meeting, sponsored by the Office of the Secretary of Defense earlier this year.

According to Stumborg et al.,

This study identifies the dimensions of autonomous decision-making (DADs)—the categories of potential risk that one should consider before transferring decision-making capabilities to an intelligent autonomous system (IAS). The objective of this study was to provide some of the tools needed to implement existing policies with respect to the legal, ethical, and militarily effective use of IAS. These tools help to identify and either mitigate or accept the risks associated with the use of IAS that might result in a negative outcome.

The 13 DADs identified by this study were developed from a comprehensive list of 565 “risk elements” drawn from hundreds of documents authored by a global cadre of individuals both in favor of, and opposed to, the use of autonomy technology in weapons systems. Additionally, these risk elements go beyond current DOD policies and procedures because we expect those to change and IAS technologies to evolve. We captured each risk element in the form of a question. Each can then be easily modified to become a “shall statement” for use by the acquisition community in developing functional requirements that ensure the legal and ethical use of autonomous systems. This approach can elevate artificial intelligence (AI) ethics from a set of subjectively defined and thus unactionable policies and principles, to a set of measurable and testable contractual obligations.

The risk elements can also be used by military commanders as a (measurable and testable) pre-operational risk assessment “checklist” to ensure that autonomous systems are not used in an unethical manner. In this way, the Department of Defense (DOD) can make fully informed risk assessment decisions before developing or deploying autonomous systems. Because our study results were specifically designed for use within the defense acquisition system and the military planning process, they provide a first step in transforming the policies and ethics principles regarding autonomous systems into practical system engineering requirements.

The 13 DADs we identified are as follows:

  • Standard semantics and concepts: ensures the use of common terminology and concepts throughout the life cycle of an IAS and amongst the different user communities to prevent risks that would occur due to miscommunication.
  • Continuity of legal accountability: ensures that a human is legally accountable for the IAS at all times, with no gaps in accountability during fast-paced and dynamic military operations.
  • Degree of autonomy: ensures that adjustments can be made to the degree of system autonomy to accommodate dynamic operational conditions and match changing risk tolerance levels.
  • Necessity of autonomy: ensures that use of an IAS provides a military advantage (to include reducing the probability of collateral damage) commensurate with any additional risk introduced by its use.
  • Command and control: ensures that all practicable measures are taken to prevent the loss of command and control over an IAS and ensures that the IAS can detect and prevent unintended consequences and deactivate systems that may engage in unintended behaviors.
  • Presence of persons and objects protected from the use of force: ensures that the IAS can identify and not intentionally harm persons or objects in a manner that would violate laws, policies, or the rules of engagement.
  • Pre-operational audit logs: ensures positive control over all aspects of an IAS during its acquisition by documenting the provenance of data, software, hardware, personnel interactions and processes executed from pre-acquisition inception to delivery to the fleet.
  • Operational audit logs: ensures that inputs, actions, interactions, and outcomes are recorded for post-operational analysis, supporting legal accountability, sharing of lessons learned, and making improvements to future tactics, techniques, procedures and technologies.
  • Human-machine teaming: ensures that human judgement is exercised (particularly when the use of force is involved).
  • Test and evaluation adequacy: ensures that the depth, breadth, and complexity of the contemplated operational environment are represented to the greatest extent practicable during test and evaluation.
  • Autonomy training and education: ensures everyone associated with the development and use of the IAS understands its attributes well enough to execute their responsibilities to act to avoid illegal and unethical use.
  • Mission duration and geographic extent: ensures that a mission’s time length and spatial extent do not invalidate pre-mission risk assessments and planning factors.
  • Civil and natural rights: ensures that the IAS, when used in other than a lethal autonomous weapons application, is engineered to safeguard both civil and natural rights and to identify and mitigate the bias sometimes present in autonomous systems.

This study makes six recommendations on how best to use the 13 DADs and their 565 risk elements to aggressively move the DOD AI ethical principles from the articulation phase to the implementation phase:

  • Make the presence of ethical use enablers a mandatory key performance parameter for IAS: turns ethics principles into measurable and testable contractual obligations.
  • Incorporate IAS risk mitigation checklists into doctrine and planning: provides the doctrinal foundation needed to make IAS-related risk assessment a mandatory component of long-term strategic and shorter-term operational planning.
  • Maintain an authoritative and standardized Joint Autonomy Risk Elements List (JAREL): transforms the list of 565 risk elements into the primary tool for implementing IAS-related ethics principles in a repeatable and tailorable way.
  • Make the JAREL publicly available to the greatest extent possible: promotes public trust in DOD use of IAS, improves DOD’s ability to leverage and attract an IAS development workforce, and improves the US’s ability to attract allies and partners.
  • Reimagine the approach to “defining” standard terminology: removes the barrier to implementation created by the use of poorly defined or undefined subjective terminology in ethics-related policy that is prone to misinterpretation or differing interpretations.
  • Create a research and development portfolio: provides the technologies that enable ethically conforming IAS.

Finally, our findings support the DOD’s commitment to the ethical use of IAS by taking a transparent approach to implementing the DOD AI ethical principles. To demonstrate this transparency, the sponsors of this study agreed to make this report publicly available. Doing so can reduce the misinformation, miscommunication, and misinterpretation of statements and intentions made by the many organizations and communities involved in the debate over the development and use of AI in warfare systems. (pp. i-iii)
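As a rough illustration of the mechanism the report describes, the sketch below rewords risk-element questions as “shall statements” and tallies a simple pre-operational checklist. This is a minimal sketch only: the example questions and the naive rewording rule are invented for illustration and are not items from the actual 565-element list, which would require far more careful requirements language.

```python
# Illustrative sketch: the questions below are hypothetical examples,
# not actual JAREL risk elements.

RISK_QUESTIONS = [
    "Can the operator deactivate the system if it behaves unexpectedly?",
    "Does the system log all inputs and actions for post-operational review?",
]

def to_shall_statement(question: str) -> str:
    """Naively reword a 'Can the .../Does the ...' question as a requirement."""
    body = question.rstrip("?")
    if body.startswith("Can the "):
        subject, _, rest = body[len("Can the "):].partition(" ")
        return f"The {subject} shall be able to {rest}."
    if body.startswith("Does the "):
        subject, _, rest = body[len("Does the "):].partition(" ")
        return f"The {subject} shall {rest}."
    return f"{body} (requires manual rewording)."

def unmitigated_risks(answers: dict) -> list:
    """Pre-operational checklist: return the questions answered 'no' (False)."""
    return [question for question, yes in answers.items() if not yes]

if __name__ == "__main__":
    for q in RISK_QUESTIONS:
        print(to_shall_statement(q))
    answers = {RISK_QUESTIONS[0]: True, RISK_QUESTIONS[1]: False}
    print("Unmitigated risks:", unmitigated_risks(answers))
```

The point of the exercise, per the report, is that each question can serve double duty: as a contractual “shall statement” during acquisition and as a yes/no checklist item during pre-operational risk assessment.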

The full report is available for download from the Center for Naval Analyses (see the reference below).


Reference

Stumborg, M. F., Roh, B., & Rosen, M. (2021). Dimensions of Autonomous Decision-making: A First Step in Transforming the Policies and Ethics Principles Regarding Autonomous Systems into Practical System Engineering Requirements. Center for Naval Analyses. https://www.cna.org/CNA_files/PDF/Dimensions-of-Autonomous-Decision-making.pdf
