- Patent Number:
10,618,673
- Appl. No:
15/488,182
- Application Filed:
April 14, 2017
- Abstract:
Systems and methods described herein incorporate autonomous navigation using a vision-based guidance system. The vision-based guidance system enables autonomous trajectory planning and motion execution by the described systems and methods without feedback or communication with external operators. The systems and methods described herein can autonomously track an object of interest while seeking to obtain a diversity of views of the object of interest to aid in object identification. The systems and methods described herein include a robust reacquisition methodology. By handling navigation and tracking autonomously, systems described herein can react more quickly to non-cooperative moving objects of interest and can operate in situations where communications with external operators are compromised or absent.
- Inventors:
Chan, Michael T. (Bedford, MA, US); Duarte, Ronald (Cambridge, MA, US)
- Assignees:
Massachusetts Institute of Technology (Cambridge, MA, US)
- Claim:
1. An autonomous vehicle system, comprising: a chassis including one or more motors; an imaging system attached to the chassis; a vision-based guidance system including a memory and at least one of a central processing unit (CPU) or a graphics processing unit (GPU) configured to: acquire images of an object of interest using the imaging system; analyze the images of the object to determine an object position and an object size relative to the imaging system; determine a path and trajectory of the autonomous vehicle system relative to the object of interest, the trajectory identifying a sequence of waypoints and the path identifying a route through the sequence of waypoints, each waypoint indicating a position in space relative to the object of interest, at least one waypoint in the sequence of waypoints identifying a distance and specific viewing angle from the autonomous vehicle to the object of interest; and control the one or more motors to move the autonomous vehicle system along the trajectory while keeping the object of interest in view of the imaging system.
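For illustration, the following minimal Python sketch maps the four steps recited in claim 1 (acquire, analyze, plan, actuate) onto hypothetical components. The `Waypoint` fields and the `imaging`, `tracker`, `planner`, and `motors` objects are invented for exposition and are not drawn from the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    distance_m: float    # standoff range to the object of interest
    azimuth_deg: float   # specific viewing angle around the object
    altitude_m: float

def guidance_step(imaging, tracker, planner, motors):
    # 1. Acquire an image of the object of interest.
    frame = imaging.acquire()
    # 2. Analyze it for object position and apparent size.
    position, size = tracker.analyze(frame)
    # 3. Plan: the trajectory is a sequence of Waypoints, the path a
    #    route through them, each fixing a distance and viewing angle.
    waypoints = planner.plan_trajectory(position, size)
    path = planner.plan_path(waypoints)
    # 4. Actuate while keeping the object in the camera's field of view.
    motors.follow(path, keep_in_view=position)
```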
- Claim:
2. The system of claim 1, wherein analyzing the images of the object of interest to determine the object position and the object size relative to the imaging system includes updating an object model based on at least a portion of the acquired images using a discriminative learning-based tracking algorithm.
- Claim:
3. The system of claim 2, wherein the vision-based guidance system is further configured to: compute a tracking confidence score between at least a first image and a second image as determined by the discriminative learning-based tracking algorithm; in response to the tracking confidence score being greater than or equal to a threshold value, update the object model using the second image using the discriminative learning-based tracking algorithm; and in response to the tracking confidence score being less than the threshold value, halt updating the object model.
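A minimal sketch of the confidence gate in claim 3, assuming a hypothetical `tracker` wrapper whose `confidence` and `update_model` methods front the discriminative tracker; the method names and string states are illustrative only.

```python
def gated_update(tracker, first_image, second_image, threshold):
    # Tracking confidence between two images, as produced by the
    # discriminative learning-based tracker (hypothetical wrapper API).
    confidence = tracker.confidence(first_image, second_image)
    if confidence >= threshold:
        # High confidence: keep adapting the object model.
        tracker.update_model(second_image)
        return "TRACKING"
    # Low confidence: halt model updates so drift does not corrupt the
    # model; per claim 6, a search mode then reacquires the object.
    return "SEARCH"
```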
- Claim:
4. The system of claim 3, wherein the tracking confidence score is a rate of change of a score output by the discriminative learning-based tracking algorithm.
- Claim:
5. The system of claim 4, wherein the discriminative learning-based tracking algorithm is a support vector machine.
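Claims 4 and 5 pin the confidence score to the rate of change of the tracker's raw output, e.g. an SVM margin. One way that could be realized, shown purely as an assumption-laden sketch, is a finite difference over successive scores: a steep drop flags likely occlusion or drift.

```python
def confidence_from_margins(margins, dt=1.0):
    """Confidence as the rate of change of the tracker's score (claim 4),
    here a finite difference of successive SVM margins (claim 5)."""
    if len(margins) < 2:
        return 0.0  # not enough history to estimate a rate of change
    return (margins[-1] - margins[-2]) / dt
```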
- Claim:
6. The system of claim 3, wherein the vision-based guidance system is further configured to: automatically enter a search mode to reacquire the object of interest in an image in response to the tracking confidence score being less than the threshold value.
- Claim:
7. The system of claim 2, wherein the vision-based guidance system is further configured to apply a feature-based matching algorithm to the images to compensate for relative motion of the imaging system with respect to the object of interest.
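The sketch below illustrates one feature-based way to estimate camera-induced motion between frames, in the spirit of claim 7. It uses OpenCV's ORB detector for self-containment (the cited art includes SURF); the choice of detector, thresholds, and the function name are assumptions, not the patent's method.

```python
import cv2
import numpy as np

def ego_motion_homography(prev_gray, gray):
    """Estimate a homography mapping the previous frame into the current
    one; applying it to the prior object box removes apparent motion
    caused by the moving camera rather than the object."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    if d1 is None or d2 is None:
        return None  # not enough texture to match features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```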
- Claim:
8. The system of claim 1, further comprising an altimeter.
- Claim:
9. The system of claim 1, wherein the vision-based guidance system is configured to communicate with a remote operator to send or receive information related to the object of interest, including images.
- Claim:
10. The system of claim 1, wherein the imaging system is attached to a gimbal of the chassis.
- Claim:
11. The system of claim 10, wherein the vision-based guidance system is further configured to move the imaging system using the gimbal to keep the object of interest in view of the imaging system.
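As a toy illustration of the gimbal pointing in claims 10 and 11, the pan/tilt needed to center an object can be derived from its position in camera coordinates. The axis convention (x right, y down, z forward) and function name are assumptions for exposition.

```python
import math

def gimbal_angles(obj_cam_xyz):
    """Pan/tilt (degrees) that would center an object located at
    camera-frame coordinates (x right, y down, z forward)."""
    x, y, z = obj_cam_xyz
    pan = math.degrees(math.atan2(x, z))
    tilt = math.degrees(math.atan2(-y, math.hypot(x, z)))
    return pan, tilt
```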
- Claim:
12. The system of claim 1, wherein the vision-based guidance system is implemented entirely onboard the chassis.
- Claim:
13. The system of claim 1, wherein determining a trajectory relative to an object of interest includes: computing a variation of information metric using one or more of the acquired images of the object of interest; determining a waypoint based on a measure of information gain; and planning a path to reach the waypoint.
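One common estimator for a variation of information metric between two views, offered here only as a plausible reading of claim 13, is VI(A;B) = H(A) + H(B) - 2 I(A;B) computed from a joint intensity histogram; a candidate view with higher expected VI against what has already been seen promises more information gain.

```python
import numpy as np

def variation_of_information(img_a, img_b, bins=32):
    """Estimate VI = H(A) + H(B) - 2*I(A;B) from a joint intensity
    histogram of two grayscale views. Binning and the histogram-based
    estimator are illustrative assumptions."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_joint = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_a = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_b = -np.sum(py[py > 0] * np.log(py[py > 0]))
    mi = h_a + h_b - h_joint      # mutual information I(A;B)
    return h_a + h_b - 2.0 * mi   # equals 2*h_joint - h_a - h_b
```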
- Claim:
14. The system of claim 13, wherein determining a waypoint based on a measure of information gain includes: accessing a lookup table specific to a class of the object of interest; and identifying the waypoint corresponding to the estimated next-best view in the lookup table based upon the current location of the autonomous vehicle system in relation to the object of interest.
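A sketch of the per-class next-best-view lookup of claim 14 follows. The table contents are invented placeholders; a real table would be learned or engineered per object class.

```python
# class -> {current azimuth bin (deg) -> estimated next-best view (deg)}
NEXT_BEST_VIEW = {
    "vehicle":  {0: 45, 45: 90, 90: 180, 180: 270, 270: 0},
    "building": {0: 90, 90: 180, 180: 270, 270: 0},
}

def angular_distance(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)

def next_best_view(obj_class, current_azimuth_deg):
    table = NEXT_BEST_VIEW[obj_class]
    # Snap the vehicle's current bearing to the nearest table entry,
    # then read off the estimated next-best viewing angle.
    nearest = min(table, key=lambda k: angular_distance(k, current_azimuth_deg))
    return table[nearest]
```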
- Claim:
15. The system of claim 14, wherein the class of the object of interest is at least one of a building, a vehicle, an individual, or a natural feature.
- Claim:
16. The system of claim 13, wherein determining a waypoint based on a measure of information gain includes: selecting the waypoint from a pre-determined path.
- Claim:
17. A method of autonomously tracking an object of interest, comprising: acquiring images of the object of interest using an imaging system attached to a chassis of an autonomous vehicle system, the chassis including one or more motors; analyzing the images of the object of interest to determine an object position and an object size relative to the imaging system; determining a path and trajectory of the autonomous vehicle system relative to the object of interest, the trajectory identifying a sequence of waypoints and the path identifying a route through the sequence of waypoints, each waypoint indicating a position in space relative to the object of interest, at least one waypoint in the sequence of waypoints identifying a distance and specific viewing angle from the autonomous vehicle to the object of interest; and controlling the one or more motors to move the chassis along the trajectory while keeping the object of interest in view of the imaging system.
- Claim:
18. The method of claim 17, wherein analyzing the images of the object of interest to determine the object position and the object size relative to the imaging system includes updating an object model based on at least a portion of the acquired images using a discriminative learning-based tracking algorithm.
- Claim:
19. The method of claim 18, further comprising computing a tracking confidence score between at least a first image and a second image as determined by the discriminative learning-based tracking algorithm; in response to the tracking confidence score being greater than or equal to a threshold value, updating the object model using the second image using the discriminative learning-based tracking algorithm; and in response to the tracking confidence score being less than the threshold value, halting updating of the object model.
- Claim:
20. The method of claim 19, wherein the tracking confidence score is a rate of change of a score output by the discriminative learning-based tracking algorithm.
- Claim:
21. The method of claim 20, wherein the discriminative learning-based tracking algorithm is a support vector machine.
- Claim:
22. The method of claim 19, further comprising automatically entering a search mode to reacquire the object of interest in an image in response to the tracking confidence score being less than the threshold value.
- Claim:
23. The method of claim 17, further comprising applying a feature-based matching algorithm to the images to compensate for relative motion of the imaging system with respect to the object of interest.
- Claim:
24. The method of claim 17, wherein determining a trajectory relative to an object of interest includes: computing a variation of information metric using one or more of the acquired images of the object of interest; determining a waypoint based on a measure of information gain; and planning a path to reach the waypoint.
- Claim:
25. The method of claim 24, wherein determining a waypoint based on a measure of information gain includes: accessing a lookup table specific to a class of the object of interest; and identifying the waypoint corresponding to the estimated next-best view in the lookup table based upon the current location of the autonomous vehicle system in relation to the object of interest.
- Claim:
26. The method of claim 25, wherein the class of the object of interest is at least one of a building, a vehicle, an individual, or a natural feature.
- Claim:
27. The method of claim 24, wherein determining a waypoint based on a measure of information gain includes: selecting the waypoint from a pre-determined path.
- Claim:
28. The method of claim 17, further comprising moving the imaging system using a gimbal of the chassis to keep the object of interest in view of the imaging system.
- Patent References Cited:
7447593 November 2008 Estkowski
9164506 October 2015 Zang
2005/0083212 April 2005 Chew
2009/0174575 July 2009 Allen et al.
2010/0070343 March 2010 Taira et al.
2010/0195870 August 2010 Ai
2010/0250022 September 2010 Hines
2012/0106800 May 2012 Khan et al.
2012/0143808 June 2012 Karins
2012/0148092 June 2012 Ni et al.
2013/0030870 January 2013 Swinson et al.
2013/0088600 April 2013 Wu et al.
2013/0287248 October 2013 Gao et al.
2014/0037142 February 2014 Bhanu et al.
2014/0200744 July 2014 Molander
2014/0362230 December 2014 Bulan et al.
2016/0155339 June 2016 Saad
2016/0277646 September 2016 Carr
2017/0008521 January 2017 Braunstein
2017/0053167 February 2017 Ren
2017/0177955 June 2017 Yokota
2017/0301109 October 2017 Chan
2018/0109767 April 2018 Li
2018/0120846 May 2018 Falk-Pettersen
2018/0137373 May 2018 Rasmusson, Jr.
2018/0158197 June 2018 Dasgupta
2018/0188043 July 2018 Chen
2018/0189971 July 2018 Hildreth
2018/0246529 August 2018 Hu
2019/0094888 March 2019 Hiroi
2019/0196513 June 2019 Zhou
2019/0223237 July 2019 Hong
2019/0227540 July 2019 Suvitie
2019/0287411 September 2019 Ozawa
- Other References:
A. Qadir et al., "Implementation of an onboard visual tracking system with small Unmanned Aerial Vehicle (UAV)," International Journal of Innovative Technology and Creative Engineering, vol. 1, No. 10, pp. 17-25 (Oct. 2011). cited by applicant
A.E. Ortiz et al., “On Multi-UAV Scheduling for Human Operator Target Identification,” American Control Conference (Jul. 1, 2011). cited by applicant
C. Stauffer et al., "Adaptive Background Mixture Models for Real-Time Tracking," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246-252 (Jun. 1999). cited by applicant
F. Almiaro et al., "Designing Decision and Collaboration Support Technology for Operators in Multi-UAV Operations Teams," prepared for Boeing, Phantom Works, presented by HAL at MIT (Jul. 2007). cited by applicant
F. Chaumette et al., "Visual Servo Control, Part II: Advanced Approaches," IEEE Robotics and Automation Magazine, vol. 14, No. 1, pp. 109-118 (Mar. 2007). cited by applicant
G. Rosman et al., “Coresets for k-Segmentation of Streaming Data,” Advances in Neural Information Processing Systems 27, pp. 559-567 (Dec. 2014). cited by applicant
H. Bay et al., "Speeded Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, No. 3, pp. 346-359 (Sep. 2007). cited by applicant
I. Tsochantaridis et al., “Large Margin Methods for Structured and Interdependent Output Variables,” Journal of Machine Learning Research, vol. 6, pp. 1453-1484 (Sep. 2005). cited by applicant
J. Johnson, “Analysis of image forming systems,” Image Intensifier Symposium, AD 220160 (Warfare Electrical Engineering Department, U.S. Army Research and Development Laboratories, Ft. Belvoir, Va.), pp. 244-273 (Oct. 1958). cited by applicant
J. Pestana et al., “Vision based GPS-denied object tracking and following for UAV,” IEEE Symposium on Safety, Security and Rescue Robotics (Oct. 2013). cited by applicant
L. Geng et al., "UAV Surveillance Mission Planning with Gimbaled Sensors," IEEE Intl. Conf. on Control and Automation (Jun. 2014). cited by applicant
Lockheed Martin, OnPoint Vision Systems. Retrieved online at: . 1 page (2017). cited by applicant
M. Kristan et al., “The Visual Object Tracking VOT2013 Challenge Results,” IEEE Workshop on Visual Object Tracking Challenge, pp. 98-111 (Dec. 2013). cited by applicant
M. Ozuysal et al., "Pose Estimation for Category Specific Multiview Object Localization," in Conference on Computer Vision and Pattern Recognition, Miami, FL (Jan. 2009). cited by applicant
NVIDIA, “NVIDIA Jetson TK1 Development Kit: Bringing GPU-accelerated computing to embedded systems,” Technical Brief (Apr. 2014). cited by applicant
P.C. Niedfeldt et al., “Enhanced UAS Surveillance Using a Video Utility Metric,” Unmanned Systems, vol. 1, pp. 277-296 (Sep. 2013). cited by applicant
P.F. Felzenszwalb et al., "Object Detection with Discriminatively Trained Part-Based Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, No. 9, pp. 1621-1645 (Sep. 2010). cited by applicant
Q. Le, "Active Perception: Interactive Manipulation for Improving Object Detection," Technical Report, Stanford University (2010). cited by applicant
R. Bajcsy, "Active Perception," Proceedings of the IEEE, vol. 76, No. 8, pp. 966-1005 (Mar. 1988). cited by applicant
R. Higgins, "Automatic event recognition for enhanced situational awareness in UAV video," Military Communications Conference, 2005. MILCOM 2005. IEEE, vol. 3, pp. 1706-1711 (Oct. 2005). cited by applicant
S. Hare et al., “STRUCK: Structured Output Tracking with Kernels,” Proceedings of the 2011 IEEE International Conference on Computer Vision, pp. 263-270 (Oct. 2011). cited by applicant
- Primary Examiner:
Grant, II, Jerome
- Attorney, Agent or Firm:
McCarter & English, LLP
- Accession Number:
edspgr.10618673