IGFTT: towards an efficient alternative to SIFT and SURF
Date issued
2015
Publisher
Václav Skala - UNION Agency
Abstract
Invariant feature detectors are essential components of many computer vision applications, such as
tracking, simultaneous localization and mapping (SLAM), image search, machine vision, object recognition, 3D
reconstruction from multiple images, augmented reality, stereo vision, and others. However, detecting high-quality
features while maintaining a low computational cost is very challenging. The Scale-Invariant Feature Transform
(SIFT) and Speeded-Up Robust Features (SURF) algorithms perform well under a variety of image
transformations; however, both rely on costly keypoint detection. Recently, fast and efficient variants
such as Binary Robust Invariant Scalable Keypoints (BRISK) and Oriented FAST and Rotated BRIEF (ORB) were
developed to offset the computational burden of these traditional detectors.
In this paper, we propose an improved Good Features to Track (GFTT) detector, coined IGFTT. It matches
or even outperforms state-of-the-art detectors with respect to repeatability, distinctiveness, and robustness, yet can
be computed much faster than Maximally Stable Extremal Regions (MSER), SIFT, BRISK, KAZE, Accelerated
KAZE (AKAZE), and SURF. This is achieved by searching for maxima of the minimum eigenvalue of the image
across scale-space, together with a new orientation-extraction method based on eigenvectors.
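The abstract only names the mechanism, so the following sketch (Python with OpenCV) illustrates the two ideas in an assumed form, not the authors' implementation: a Shi-Tomasi-style minimum-eigenvalue response searched over a Gaussian pyramid, with each keypoint's orientation taken from a structure-tensor eigenvector. The function name igftt_like_keypoints and the parameters n_octaves, block_size, and threshold are placeholders chosen for the demo.

import cv2
import numpy as np

# Illustrative sketch only (assumptions, not the paper's implementation):
# a minimum-eigenvalue (Shi-Tomasi) response is searched over a Gaussian
# pyramid, and each keypoint's orientation is read off the dominant
# structure-tensor eigenvector.
def igftt_like_keypoints(gray, n_octaves=4, block_size=3, threshold=0.01):
    img = gray.astype(np.float32) / 255.0
    keypoints = []
    for octave in range(n_octaves):
        # Smaller eigenvalue of the 2x2 structure tensor at every pixel.
        response = cv2.cornerMinEigenVal(img, block_size)
        # Per-pixel (lambda1, lambda2, x1, y1, x2, y2): eigenvalues and
        # the corresponding eigenvectors of the structure tensor.
        eig = cv2.cornerEigenValsAndVecs(img, block_size, 3)
        # Keep local maxima of the response above a threshold.
        is_max = (response == cv2.dilate(response, None)) & (response > threshold)
        scale = 2 ** octave
        for y, x in zip(*np.where(is_max)):
            vx, vy = eig[y, x, 2], eig[y, x, 3]  # eigenvector of lambda1
            angle = float(np.degrees(np.arctan2(vy, vx)) % 360.0)
            keypoints.append(cv2.KeyPoint(float(x * scale), float(y * scale),
                                          float(block_size * scale), angle,
                                          float(response[y, x]), octave))
        img = cv2.pyrDown(img)  # next octave: blur + downsample by 2
    return keypoints

In practice one would add non-maximum suppression across scales and sub-pixel refinement; the sketch omits both for brevity.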
A comprehensive evaluation on standard datasets shows that IGFTT achieves high performance with a
computation time comparable to state-of-the-art real-time features. The proposed method performs exceptionally
well compared to SURF, ORB, GFTT, MSER, Star, SIFT, KAZE, AKAZE, and BRISK.
Subject(s)
IGFTT, feature detectors, keypoint, computer vision, repeatability
Citation
WSCG 2015: Full Papers Proceedings: 23rd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 73-80.