Feature descriptor resistant to barrel distortion?
Hi,
I have a set of pictures captured with a wide-angle (fisheye) lens camera, and I want to find correspondences between them. The overlap areas are not very large, so most of my matches lie near the edges of the images, where barrel distortion is strongest.
The solution I'm applying so far is undistorting the images, computing keypoints/descriptors/matches there, and then distorting again, i.e. mapping the inlier correspondences back to their equivalent pixels in the original images.
I would like to know if there is any feature descriptor resistant to this effect, in order to skip the previous step. Thanks.
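For reference, the "map inliers back" step of my current pipeline looks roughly like this. It's a minimal numpy sketch using the standard polynomial radial model; the intrinsics (`fx, fy, cx, cy`) and coefficients (`k1, k2`) here are made-up placeholders, the real ones come from calibration:

```python
import numpy as np

# Hypothetical intrinsics and radial distortion coefficients;
# real values come from your camera calibration.
fx, fy, cx, cy = 400.0, 400.0, 320.0, 240.0
k1, k2 = -0.3, 0.06  # barrel distortion: k1 < 0

def distort_points(pts):
    """Map pixel coordinates from the undistorted image back into the
    original (distorted) image, using the polynomial radial model
    x_d = x_u * (1 + k1*r^2 + k2*r^4) in normalized coordinates."""
    pts = np.asarray(pts, dtype=float)
    x = (pts[:, 0] - cx) / fx            # normalize
    y = (pts[:, 1] - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.column_stack((x * scale * fx + cx,
                            y * scale * fy + cy))

# Inlier matches found in the undistorted pair, mapped back:
inliers_u = np.array([[320.0, 240.0], [600.0, 100.0]])
inliers_d = distort_points(inliers_u)
```

With `k1 < 0`, points far from the principal point get pulled toward the image center, which is exactly the effect that concentrates my matches in the worst-distorted regions.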
Oh, I would guess that most feature descriptors are locally invariant to barrel distortion. For example, with SIFT or SURF you might want to hack the size of the region around an interest point that is used for the descriptor part. As long as the keypoint detector works fine, there should be a descriptor that is resistant to barrel distortion.
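You can sanity-check the "locally invariant" intuition numerically: over a small enough patch, radial distortion is well approximated by an affine map (which descriptors like SIFT tolerate), and the approximation error grows quickly with patch size. A sketch in numpy, with a made-up `k1` and a simple one-coefficient barrel model:

```python
import numpy as np

# Assumed one-coefficient barrel model in normalized coordinates:
# x_d = x_u * (1 + k1 * r^2), with k1 < 0 (hypothetical value).
k1 = -0.3

def distort(p):
    r2 = p[..., 0] ** 2 + p[..., 1] ** 2
    return p * (1.0 + k1 * r2)[..., None]

def affine_residual(center, radius, n=200):
    """Max deviation of the distortion from its best local affine fit
    over a patch of the given radius around `center` (normalized units)."""
    rng = np.random.default_rng(0)
    pts = center + rng.uniform(-radius, radius, size=(n, 2))
    src = np.column_stack((pts, np.ones(n)))       # homogeneous coords
    dst = distort(pts)
    A, *_ = np.linalg.lstsq(src, dst, rcond=None)  # best affine map
    return np.abs(src @ A - dst).max()

c = np.array([0.7, 0.5])          # a point near the image edge
small = affine_residual(c, 0.02)  # shrunken descriptor support
large = affine_residual(c, 0.10)  # larger (default-ish) support
```

The residual for the small patch comes out far below that of the large one, which is why shrinking the descriptor support region should help near the image borders.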
And yes, research on this problem has been done before :) don't reinvent the wheel :)
Yeah, the main problem is that the sets of extracted keypoints differ a lot between images due to the radial distortion. Different keypoints may have similar local descriptors, which is why the number of bad matches is so high in my case. I've tried several keypoint detector/feature descriptor combinations with different parameter setups, without success. I'll have a look at that paper, though. Thanks.