New Method of Sparse Visual Saliency Feature Extraction and Application
in Unmanned Vehicle Environment Sensing
Abstract:
A new method for extracting local low-level saliency features based on a Hessian-matrix threshold is proposed in this study, following an analysis of the standard local invariant feature extraction algorithm SURF (Speeded Up Robust Features). In this method, the number of saliency feature points varies with the Hessian threshold: the feature points become sparser as the threshold grows. When a certain extreme threshold, defined here as the Hessian Threshold Node, is reached, the retained discriminative feature points exhibit remarkable stability and make up the best sparse saliency feature set. The method is applied in an unmanned vehicle environment sensing system to extract saliency features of the preceding vehicle and to realize object tracking and obstacle detection. Experimental results show that this is a quick and robust way to determine saliency feature points quantitatively and that it is well suited to applications with strong real-time requirements.
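The abstract describes thresholding the determinant-of-Hessian response so that a larger threshold retains a sparser set of feature points. The paper gives no code, so the following is only a minimal NumPy sketch of that idea; simple finite differences stand in for SURF's integral-image box-filter approximations, and the image and threshold values are illustrative.

```python
import numpy as np

def hessian_determinant(img):
    """Approximate det(Hessian) at every pixel with finite differences.
    (SURF itself uses box filters on an integral image; this shows only the idea.)"""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return gxx * gyy - gxy * gyx  # det H = Dxx*Dyy - Dxy*Dyx

def saliency_points(img, threshold):
    """Coordinates of pixels whose Hessian response exceeds the threshold."""
    return np.argwhere(hessian_determinant(img) > threshold)

# A synthetic random image stands in for a real road scene.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Raising the threshold can only shrink the retained point set,
# which is the sparsification behavior the abstract describes.
counts = [len(saliency_points(img, t)) for t in (0.0, 0.05, 0.1, 0.2)]
print(counts)  # monotonically non-increasing
```

Selecting the "Hessian Threshold Node" itself would require the stability criterion developed in the paper; the sketch only demonstrates the monotone threshold-sparsity relationship.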
How to cite this article
Du Ming-Fang, Wang Jun-Zheng, Li Jing and Cao Hai-Qing, 2013. New Method of Sparse Visual Saliency Feature Extraction and Application
in Unmanned Vehicle Environment Sensing. Information Technology Journal, 12: 5914-5921.
REFERENCES
Chen, Z.X., F.L. Chang and C.Y. Liu, 2010. Multi-features fusion license plates locating algorithm based on feature. Control Decision, 25: 1909-1912.
Chen, Z.X., C.Y. Liu and F.L. Chang, 2010. Vehicle type recognition based on biological vision salience. Comput. Sci., 37: 207-208.
Harel, J., C. Koch and P. Perona, 2007. Graph-based visual saliency. Adv. Neural Inform. Process. Syst., 19: 545-552.
Itti, L. and P. Baldi, 2005. Bayesian surprise attracts human attention. Adv. Neural Inform. Process. Syst., 19: 547-554.
Koch, C. and S. Ullman, 1985. Shifts in selective visual attention: Towards the underlying neural circuitry. Hum. Neurobiol., 4: 219-227.
Koch, C. and T. Poggio, 1999. Predicting the visual world: Silence is golden. Nat. Neurosci., 2: 9-10.
Felzenszwalb, P., D. McAllester and D. Ramanan, 2008. A discriminatively trained, multiscale, deformable part model. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, 2008, Anchorage, AK., pp: 1-8.
Achanta, R., S. Hemami, F. Estrada and S. Susstrunk, 2009. Frequency-tuned salient region detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2009, pp: 1597-1604.
Zhai, Y. and M. Shah, 2006. Visual attention detection in video sequences using spatiotemporal cues. Proceedings of the 14th Annual ACM International Conference on Multimedia (MM '06), ACM, New York, USA., pp: 815-824.
Siagian, C. and L. Itti, 2008. Comparison of gist models in rapid scene categorization tasks. J. Vision, Vol. 8.
Zhang, Y., Z.L. Zhang and Z.K. Shen, 2008. A salient feature extraction algorithm fusing the motion characteristic of objects. J. Natl. Univ. Defense Technol., 30: 109-115.
Zhang, Y., Z.L. Zhang and Z.K. Shen, 2008. The images tracking algorithm using particle filter based on dynamic salient features of targets. Acta Electron. Sinica, 36: 2306-2311.
Bay, H., A. Ess, T. Tuytelaars and L. Van Gool, 2006. SURF: Speeded up robust features. Proceedings of the 9th European Conference on Computer Vision, May 7-13, 2006, Graz, Austria, pp: 404-417.
Bay, H., A. Ess, T. Tuytelaars and L. Van Gool, 2008. Speeded-up robust features (SURF). Comput. Vision Image Understand., 110: 346-359.
© Science Alert. All Rights Reserved