
T&E of AI and Autonomy: An Assurance Case Framework

The Institute for Defense Analyses (IDA) recently issued two insightful documents on trust, trustworthiness, and assurance of artificial intelligence and autonomy. The executive summary of the first, Trust, Trustworthiness, and Assurance of AI and Autonomy, states:

“[I]t is not useful to speak of trust in artificial intelligence (AI) or AI-enabled autonomous systems (AIAS) as if “trust” were a single thing. [The author argues] further that it is simply wrong to believe that any important kind of trust can be “built in” to AIAS through system design choices and testing. Each of the important kinds of trust associated with AIAS—and there are several—will require additional deliberate action beyond design and testing, and this is why trust cannot be built in to a system. We need to clean up both our language and our thinking about trust in AIAS in order to focus better on the core challenges to successful employment of such systems.

For purposes of argument, [the author defined] a system to be trustworthy to the extent that:

  1. When employed correctly, it will dependably do well what it is intended to do.
  2. When employed correctly, it will dependably not do undesirable things.
  3. When paired with the humans it is intended to work with, it will dependably be employed correctly.

That last criterion is important, because it is quite possible to design and build AIAS that could function as intended, but that humans cannot interact with in the necessary ways. Considering all three criteria, the authorities who regulate the use of an AIAS must know when it is trustworthy, with sufficient justified confidence that they are willing to permit its use. At present, we face a choice between fielding AIAS of unknown trustworthiness or being bounded in what we can do by the limitations of our ability to provide evidence for trustworthiness that is both valid and compelling.”

The second document, T&E of AI and Autonomy: An Assurance Case Framework Version 2.0, is a presentation that will be delivered to the Naval R&D Establishment Unmanned Vehicles and Autonomous Systems (UVAS) Working Group today.

The IDA “is a nonprofit corporation that operates three Federally Funded Research and Development Centers. Its mission is to answer the most challenging U.S. security and science policy questions with objective analysis, leveraging extraordinary scientific, technical, and analytic expertise” (IDA, 2021, p. 2).
