Adaptive Target Tracking and Trail with Maritime Autonomous Systems: A Multi-Fidelity and Explainable Reinforcement Learning Approach

Brian Bingham, Michael McCarrin

Problem Statement


PI: Brian Bingham, Mechanical and Aerospace Engineering
Co-PI: Michael McCarrin, Computer Science, Oberlin College

Develop an autonomous system that uses deep reinforcement learning (DRL) to track and trail acoustic targets in complex maritime environments, using a coordinated fleet of unmanned surface vehicles (USVs) and unmanned aerial vehicles (UAVs).

Employ a multi-fidelity simulation framework: low-fidelity models for rapid policy training and testing, and high-fidelity simulations for policy evaluation.
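One way to sketch the multi-fidelity idea is a loop that iterates cheaply in a low-fidelity model and only periodically pays for a high-fidelity evaluation. Everything here is an illustrative stand-in (the simulators, the single "trail gain" parameter, and the random-search update are assumptions for the sketch, not the project's actual models or learning algorithm):

```python
import random

def low_fidelity_rollout(gain):
    # Hypothetical cheap kinematic model: quadratic tracking reward
    # around an assumed optimal gain of 0.5, with noise standing in
    # for unmodeled dynamics.
    return -(gain - 0.5) ** 2 + random.gauss(0, 0.01)

def high_fidelity_rollout(gain):
    # Hypothetical "expensive" simulation: same objective with a small
    # fidelity offset, representing higher-order effects.
    return -(gain - 0.55) ** 2

def train_multi_fidelity(iters=200, eval_every=50, seed=0):
    random.seed(seed)
    gain, best = 0.0, float("-inf")
    for i in range(1, iters + 1):
        # Rapid policy improvement in the low-fidelity model
        # (simple random search, for illustration only).
        candidate = min(1.0, max(0.0, gain + random.gauss(0, 0.1)))
        if low_fidelity_rollout(candidate) > low_fidelity_rollout(gain):
            gain = candidate
        # Periodic, costly check in the high-fidelity simulation.
        if i % eval_every == 0:
            best = max(best, high_fidelity_rollout(gain))
    return gain, best

gain, best = train_multi_fidelity()
```

The design point the sketch captures is the cost split: many low-fidelity rollouts per high-fidelity one, with the high-fidelity score used to judge whether the cheaply trained policy actually transfers.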

Apply explainable reinforcement learning (XRL) techniques to provide interpretable decision-making, allowing operators to understand autonomous actions.
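A minimal flavor of such explanations is perturbation-based attribution: zero out each input to the policy and measure how much the commanded action changes. The toy linear policy, the feature names (bearing, range, SNR), and the weights below are hypothetical, chosen only to make the mechanism concrete:

```python
def policy_heading(obs):
    # Illustrative linear policy: heading command from target bearing,
    # range, and acoustic signal-to-noise ratio (weights are made up).
    w = {"bearing": 0.8, "range": 0.15, "snr": 0.05}
    return sum(w[k] * obs[k] for k in w)

def explain(obs):
    # Attribute the action to each observation feature by zeroing it
    # and measuring the resulting change in the commanded heading.
    base = policy_heading(obs)
    scores = {}
    for k in obs:
        perturbed = dict(obs, **{k: 0.0})
        scores[k] = abs(base - policy_heading(perturbed))
    return scores

obs = {"bearing": 0.6, "range": 0.3, "snr": 0.9}
attribution = explain(obs)
top = max(attribution, key=attribution.get)  # feature that drove the action
```

An operator-facing display could surface `top` alongside the action, e.g. "turning because of target bearing," which is the kind of interpretability the project aims to provide for learned policies.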


Impact 


New methods for Deep Reinforcement Learning (DRL) and Explainable Reinforcement Learning (XRL) for autonomous maritime systems operating in uncertain environments.

Enhanced autonomy of USVs and UAVs to improve operational effectiveness, enable persistent surveillance, and reduce risk to personnel.

Foundational frameworks for developing intelligent, reliable, and trustworthy autonomous capabilities.


Transition


Seed research will generate preliminary results for competitive proposals to ONR Code 35 and the Science of Autonomy program.