Video Scoring - Aerodynamic Decelerator Systems Center
This research addressed the problem of determining the payload's three-dimensional position, and possibly its attitude, from observations obtained by several fixed cameras on the ground. Figure A shows an example of surveyed camera sites for the combined Corral/Mohave drop zone (DZ), while Fig. B shows an example of the stabilized Kineto Tracking Mount (KTM) used to record an airdrop event from aircraft exit to impact. Figure B shows an operator seat (in the center) surrounded by multiple cameras with different focal lengths. During the airdrop the operator manually points the cameras at the test article. (Currently, KTMs are controlled remotely.) All KTMs are equipped with azimuth and elevation encoders, which sample and record Az/El angles, along with Coordinated Universal Time (UTC), for every frame.
Figure A. Two drop zones (pentagons) with multiple KTM sites around them (rhombs).
Figure B. The KTM equipped with multiple cameras.
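Each KTM frame thus carries a UTC-tagged Az/El pair from the encoders. As a hedged illustration of how such an angle pair maps to a line-of-sight direction (the function name and the local East-North-Up convention are assumptions for this sketch, not part of the KTM software):

```python
import math

def azel_to_unit_vector(az_deg, el_deg):
    """Convert encoder azimuth/elevation (degrees) to a unit
    line-of-sight vector in a local East-North-Up (ENU) frame.
    Azimuth is assumed measured clockwise from north; elevation
    is measured up from the local horizon."""
    az = math.radians(az_deg)
    el = math.radians(el_deg)
    east = math.cos(el) * math.sin(az)
    north = math.cos(el) * math.cos(az)
    up = math.sin(el)
    return (east, north, up)
```

For example, an azimuth of 90 deg at zero elevation points due east, and an elevation of 90 deg points straight up, regardless of azimuth.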
Accurate time-space position information (TSPI) of the test article is needed for several reasons:
- First, it allows estimating test article performance (e.g., the descent rate of a parachute system at certain altitudes and at touchdown)
- Second, this information can be used further for model identification and for the development of control algorithms
- Third, parachute- and parafoil-payload systems (including clusters) are multiple-body flexible structures, so knowing the behavior of each component (payload and canopy), as opposed to just the payload center, makes it possible to model and improve their interaction
Nowadays, of course, an inertial measurement unit (IMU) and/or a global positioning system (GPS) receiver can be used to acquire accurate TSPI for any moving object. However, when it comes to massive testing of different test articles, several reasons prevent the use of these modern navigation means. To start with, IMU/GPS units cannot be installed on all test articles (there are too many of them). Second, the harsh operating conditions of some articles would destroy the IMU/GPS units at the end of each test or every other test. Third, some test articles simply cannot accommodate IMU/GPS units, either because of size constraints (for instance, miniature air delivery systems, shells, and bullets) or because of the non-rigid structure of the object (canopies). Fourth, for parachute and parafoil systems the GPS signal is simply not available during the first 30 seconds of ballistic flight after exiting an aircraft. All of these reasons make it very relevant to use the information recorded by multiple KTMs for each test anyway to determine the TSPI of the test articles.
This research addressed the issue in three ways. First, the position estimation problem was properly formulated and solved. Second, the pose (position and attitude) estimation problem was addressed. Third, a graphical user interface was developed to help better understand the pose estimation problem.
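To give a feel for the position estimation step, here is a minimal sketch (a standard least-squares triangulation, not necessarily the project's actual algorithm; all function names are hypothetical): each KTM contributes a bearing line through its surveyed position, and the payload position is taken as the point minimizing the sum of squared perpendicular distances to all of those lines.

```python
import numpy as np

def triangulate(cam_positions, unit_dirs):
    """Least-squares intersection of bearing lines.

    Each line passes through a surveyed camera position c_i with a
    unit line-of-sight direction d_i. We find the point p minimizing
        sum_i || (I - d_i d_i^T)(p - c_i) ||^2,
    which yields the normal equations
        [ sum_i (I - d_i d_i^T) ] p = sum_i (I - d_i d_i^T) c_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(cam_positions, unit_dirs):
        c = np.asarray(c, dtype=float)
        d = np.asarray(d, dtype=float)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

With noise-free bearings from two or more non-collinear sites the solve recovers the target exactly; with real encoder noise it returns the point of closest approach of all the lines, which is why having many KTM sites around the drop zone (as in Fig. A) improves the estimate.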
In addition to the papers on video scoring written by the NPS faculty and listed on the ADSC Publications page, below you will find several more papers, written by others, related to the different pieces of software used or tested within this project:
- Lagarias, J.C., Reeds, J.A., Wright, M.H., and Wright, P.E., “Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions,” SIAM Journal on Optimization, vol.9, no.1, 1998, pp.112-147.
- David, P., DeMenthon, D., Duraiswami, R., Samet, H., “SoftPOSIT: Simultaneous Pose and Correspondence Determination,” International Journal of Computer Vision, vol.59, no.3, 2004, pp.259-284.
- DeMenthon, D., Davis, L.S., “Model-Based Object Pose in 25 Lines of Code,” International Journal of Computer Vision, vol.15, no.1-2, 1995, pp.123-141.
- DeMenthon, D., Davis, L.S., “Recognition and Tracking of 3D Objects by 1D Search,” Proceedings of the DARPA Image Understanding Workshop, Washington, DC, 1993, pp.653-659.
- Mikolajczyk, K., and Schmid, C., “A Performance Evaluation of Local Descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.27, no.10, 2005, pp.1615-1630.
- Lowe, D.G., “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol.60, no.2, 2004, pp.91-110.
- Lowe, D.G., “Local Feature View Clustering for 3D Object Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, December 2001, pp.682-688.
- Lowe, D.G., “Object Recognition from Local Scale-Invariant Features,” Proceedings of the International Conference on Computer Vision, Corfu, Greece, September, 21-22, 1999, pp.1150-1157.
- Lowe, D.G., “Fitting Parameterized Three-Dimensional Models to Images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.13, no.5, 1991, pp.441–450.