Technology Readiness Levels - SLAMR 2.0
Technology Readiness Levels (TRLs), initially developed by the National Aeronautics and Space Administration (NASA), are used in U.S. Department of Defense (DoD) Technology Readiness Assessments. The breakdown of TRL definitions, descriptions, and supporting information below is from the DoD's Technology Readiness Assessment Guidance. The hardware and software descriptions are from NASA's Technology Readiness Level Definitions.
| TRL | Definition | Description | Supporting Information | Hardware Description | Software Description |
|---|---|---|---|---|---|
| 1 | Basic principles observed and reported. | Lowest level of technology readiness. Scientific research begins to be translated into applied research and development (R&D). Examples might include paper studies of a technology’s basic properties. | Published research that identifies the principles that underlie this technology. References to who, where, and when. | Scientific knowledge generated underpinning hardware technology concepts/applications. | Scientific knowledge generated underpinning basic properties of software architecture and mathematical formulation. |
| 2 | Technology concept and/or application formulated. | Invention begins. Once basic principles are observed, practical applications can be invented. Applications are speculative, and there may be no proof or detailed analysis to support the assumptions. Examples are limited to analytic studies. | Publications or other references that outline the application being considered and that provide analysis to support the concept. | Invention begins; practical application is identified but is speculative, and no experimental proof or detailed analysis is available to support the conjecture. | Practical application is identified but is speculative; no experimental proof or detailed analysis is available to support the conjecture. Basic properties of algorithms, representations, and concepts defined. Basic principles coded. Experiments performed with synthetic data. |
| 3 | Analytical and experimental critical function and/or characteristic proof of concept. | Active R&D is initiated. This includes analytical studies and laboratory studies to physically validate the analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. | Results of laboratory tests performed to measure parameters of interest and comparison to analytical predictions for critical subsystems. References to who, where, and when these tests and comparisons were performed. | Analytical studies place the technology in an appropriate context, and laboratory demonstrations, modeling, and simulation validate analytical predictions. | Development of limited functionality to validate critical properties and predictions using non-integrated software components. |
| 4 | Component and/or breadboard validation in a laboratory environment. | Basic technological components are integrated to establish that they will work together. This is relatively “low fidelity” compared with the eventual system. Examples include integration of “ad hoc” hardware in the laboratory. | System concepts that have been considered and results from testing laboratory-scale breadboard(s). References to who did this work and when. Provide an estimate of how breadboard hardware and test results differ from the expected system goals. | A low-fidelity system/component breadboard is built and operated to demonstrate basic functionality and critical test environments, and associated performance predictions are defined relative to the final operating environment. | Key, functionally critical software components are integrated and functionally validated to establish interoperability and begin architecture development. Relevant environments defined and performance in this environment predicted. |
| 5 | Component and/or breadboard validation in a relevant environment. | Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so they can be tested in a simulated environment. Examples include “high-fidelity” laboratory integration of components. | Results from testing a laboratory breadboard system integrated with other supporting elements in a simulated operational environment. How does the “relevant environment” differ from the expected operational environment? How do the test results compare with expectations? What problems, if any, were encountered? Was the breadboard system refined to more nearly match the expected system goals? | A medium-fidelity system/component brassboard is built and operated to demonstrate overall performance in a simulated operational environment with realistic support elements that demonstrates overall performance in critical areas. Performance predictions are made for subsequent development phases. | End-to-end software elements implemented and interfaced with existing systems/simulations conforming to target environment. End-to-end software system tested in relevant environment, meeting predicted performance. Operational environment performance predicted. Prototype implementations developed. |
| 6 | System/subsystem model or prototype demonstration in a relevant environment. | Representative model or prototype system, which is well beyond that of TRL 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory or in a simulated operational environment. | Results from laboratory testing of a prototype system that is near the desired configuration in terms of performance, weight, and volume. How did the test environment differ from the operational environment? Who performed the tests? How did the test compare with expectations? What problems, if any, were encountered? What are/were the plans, options, or actions to resolve problems before moving to the next level? | A high-fidelity system/component prototype that adequately addresses all critical scaling issues is built and operated in a relevant environment to demonstrate operations under critical environmental conditions. | Prototype implementations of the software demonstrated on full-scale realistic problems. Partially integrated with existing hardware/software systems. Limited documentation available. Engineering feasibility fully demonstrated. |
| 7 | System prototype demonstration in an operational environment. | Prototype near or at planned operational system. Represents a major step up from TRL 6 by requiring demonstration of an actual system prototype in an operational environment (e.g., in an aircraft, in a vehicle, or in space). | Results from testing a prototype system in an operational environment. Who performed the tests? How did the test compare with expectations? What problems, if any, were encountered? What are/were the plans, options, or actions to resolve problems before moving to the next level? | A high-fidelity engineering unit that adequately addresses all critical scaling issues is built and operated in a relevant environment to demonstrate performance in the actual operational environment and platform (ground, airborne, or space). | Prototype software exists having all key functionality available for demonstration and test. Well integrated with operational hardware/software systems, demonstrating operational feasibility. Most software bugs removed. Limited documentation available. |
| 8 | Actual system completed and qualified through test and demonstration. | Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation (DT&E) of the system in its intended weapon system to determine if it meets design specifications. | Results of testing the system in its final configuration under the expected range of environmental conditions in which it will be expected to operate. Assessment of whether it will meet its operational requirements. What problems, if any, were encountered? What are/were the plans, options, or actions to resolve problems before finalizing the design? | The final product in its final configuration is successfully demonstrated through test and analysis for its intended operational environment and platform (ground, airborne, or space). | All software has been thoroughly debugged and fully integrated with all operational hardware and software systems. All user documentation, training documentation, and maintenance documentation completed. All functionality successfully demonstrated in simulated operational scenarios. Verification and Validation (V&V) completed. |
| 9 | Actual system proven through successful mission operations. | Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation (OT&E). Examples include using the system under operational mission conditions. | OT&E reports. | The final product is successfully operated in an actual mission. | All software has been thoroughly debugged and fully integrated with all operational hardware/software systems. All documentation has been completed. Sustaining software engineering support is in place. System has been successfully operated in the operational environment. |
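For programs that track maturity in software, the nine-level scale above can be encoded as a simple lookup table. This is a minimal illustrative sketch, not part of any standard library or DoD tool: the definitions are quoted from the DoD guidance, while the `trl_definition` helper name and range check are assumptions for the example.

```python
# Illustrative sketch: the DoD TRL scale as a Python lookup table.
# Definitions are quoted from the DoD TRA Guidance; the helper function
# name below is hypothetical, not part of any standard API.
TRL_DEFINITIONS = {
    1: "Basic principles observed and reported.",
    2: "Technology concept and/or application formulated.",
    3: "Analytical and experimental critical function and/or "
       "characteristic proof of concept.",
    4: "Component and/or breadboard validation in a laboratory environment.",
    5: "Component and/or breadboard validation in a relevant environment.",
    6: "System/subsystem model or prototype demonstration in a "
       "relevant environment.",
    7: "System prototype demonstration in an operational environment.",
    8: "Actual system completed and qualified through test and demonstration.",
    9: "Actual system proven through successful mission operations.",
}

def trl_definition(level: int) -> str:
    """Return the DoD definition for a TRL, validating the 1-9 range."""
    if level not in TRL_DEFINITIONS:
        raise ValueError(f"TRL must be between 1 and 9, got {level}")
    return TRL_DEFINITIONS[level]
```

A lookup keyed by integer keeps the ordinal nature of the scale explicit: comparing two technologies' readiness reduces to comparing their keys.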
- National Aeronautics and Space Administration. (n.d.). Technology Readiness Level Definitions [PDF file]. Retrieved from https://www.nasa.gov/pdf/458490main_TRL_Definitions.pdf
- U.S. Department of Defense. (2011, April). Technology Readiness Assessment (TRA) Guidance [PDF file]. Retrieved from https://apps.dtic.mil/dtic/tr/fulltext/u2/a554900.pdf