Beyond Human Error: Assurance for Human-Machine Teams with Advanced Autonomy

Joshua A. Kroll

Problem Statement


Enabling human-machine teaming with advanced autonomy requires trust by commanders, who will otherwise eschew even advanced capabilities when they remain responsible for outcomes. Building that trust requires clarifying lines of responsibility for machine and human-machine team behaviors. Current assurance methods (e.g., MIL-STD 882E) do not rise to this challenge.

 

Impact



Assessing the safety of autonomous systems is an open basic research problem

No methods currently link desired system-level invariants to formalizable software or control requirements amenable to verification
‒ Particularly with regard to human-machine teaming
‒ Particularly with regard to use cases

Systematize the current state of the art (ad-hoc analysis)

 

Transition


Improvements to autonomy safety standards and guidance (e.g., MIL-STD 882F and associated guidance; implementation/assessment guidance for DoDD 3000.09 approvals)

Continued support opportunities:
‒ ONR "Science of Autonomy" Program (Code 351)
‒ USDR&E Critical Focus in Trusted Autonomy
‒ AFOSR BAA focus topic "Trust & Influence"
‒ NASA University Leadership Initiative (multi-institutional) for Systems Safety / Advanced Air Mobility / Advanced Capabilities for Emergency Response Operations