The image-matching algorithm that throws the rest out the window
A newly developed image-matching algorithm outperforms existing methods and works reliably under a far wider range of imaging conditions.
Mapping every point in one image to its corresponding point in another might sound easy at first, but it is a fundamental and surprisingly difficult task in many computer vision and photography applications. Although various efficient algorithms have been developed for it, all of them tend to fail when images of the same object or place are taken under radically different lighting conditions, or when different camera modalities are used (such as near-infrared imaging). Matters get worse if objects are moved or their shapes are altered, or when the camera itself is moved or rotated.
Motivated by this lack of a reliable image-matching algorithm, a team of researchers led by Professor Kwanghoon Sohn from Yonsei University in Seoul, Korea, has developed a fast and efficient algorithm, called DASC, which can match two images taken under very different conditions.
Their method is based on the observation that images have an underlying local internal structure that can be captured by comparing small patches. These patch comparisons yield a measure called “local self-similarity,” which is largely unaffected by distortions caused by different imaging conditions or by geometric changes in the objects within the image. Their approach addresses challenges that conventional methods failed to overcome, such as the degradation of a patch’s center pixel undermining the overall performance of the algorithm.
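To make the idea of local self-similarity concrete, the sketch below compares a central patch with the patches surrounding it using normalized cross-correlation, and collects the scores into a descriptor. This is only a minimal illustration of the general self-similarity principle, not the authors’ exact DASC formulation; the function name, patch size, and sampling grid are all assumptions for the example.

```python
import numpy as np

def local_self_similarity(image, cy, cx, patch=3, radius=6, stride=3):
    """Illustrative local self-similarity descriptor (NOT the exact DASC
    method): correlate the patch centered at (cy, cx) with patches sampled
    on a grid inside a surrounding window."""
    h = patch // 2
    center = image[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(float)
    center = center - center.mean()  # remove local brightness offset
    desc = []
    for dy in range(-radius, radius + 1, stride):
        for dx in range(-radius, radius + 1, stride):
            y, x = cy + dy, cx + dx
            neigh = image[y - h:y + h + 1, x - h:x + h + 1].astype(float)
            neigh = neigh - neigh.mean()
            denom = np.sqrt((center ** 2).sum() * (neigh ** 2).sum())
            # Normalized cross-correlation in [-1, 1]; comparing patches
            # *within* one image makes the descriptor robust to global
            # changes in lighting or camera modality.
            desc.append((center * neigh).sum() / denom if denom > 0 else 0.0)
    return np.array(desc)
```

Because each image is compared only with itself, two photographs taken under different lighting can still produce similar descriptors at corresponding points, which is what makes matching across modalities feasible.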
The researchers also reduced the computational cost of matching by simplifying the calculations and by using an edge-aware filter, which drastically improved execution times while preserving matching accuracy.
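The article does not detail the filtering scheme, but the general trick behind such speedups is to aggregate patch statistics in constant time per pixel. As a simplified stand-in for the edge-aware filter (which additionally weights pixels by similarity to preserve edges), the sketch below shows box filtering via an integral image, where each window sum costs four lookups regardless of window size:

```python
import numpy as np

def box_filter(img, r):
    """Box filter via an integral image: each output pixel is the mean of
    a (2r+1)x(2r+1) window, computed in O(1) lookups per pixel. A simplified
    stand-in for the edge-aware aggregation described in the article."""
    H, W = img.shape
    # Integral image with one row/column of zero padding, so that
    # ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0] == img[y0:y1, x0:x1].sum()
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        y0, y1 = max(0, y - r), min(H, y + r + 1)  # clamp window at borders
        for x in range(W):
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = s / ((y1 - y0) * (x1 - x0))
    return out
```

A true edge-aware filter (e.g. the guided filter) replaces the uniform window average with weights that follow image edges, so that aggregation does not blur descriptor information across object boundaries.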
They went on to thoroughly quantify and analyze the algorithm’s performance by comparing it with several preexisting matching methods. They used publicly available images containing particularly difficult pairs, such as photographs of a newspaper before and after it was heavily wrinkled, for which many previously published algorithms fail to provide an accurate match. The results are very promising. “We found that our method outperforms conventional approaches on various multi-modal and multi-spectral benchmarks,” says Professor Sohn.
The usefulness of this method is evident given the growing number of computer vision applications, which range from automation and robotics to image search. “We believe our method will serve as an essential tool for several applications using multi-modal and multi-spectral images,” adds Sohn. The ubiquity of cameras in modern society provides a setting in which matching algorithms of this kind can find interesting and revolutionary applications virtually everywhere.