
HeywoodFloyd's profile - activity

2013-08-13 15:21:43 -0600 commented answer SURF and SIFT detect different features depending on how the application is run

Sort of figured out what the problem was. For some reason, sometimes when the program ran, the camera image was flipped. I just put in an option to re-flip the image.

However, this brings up two other questions (which, fortunately, I don't really have to solve):

1) Why does the camera image flip depending on how the program is called? I'm using OpenCV calls to read the camera, so it may be an OpenCV problem, or it might be a camera driver problem.

2) Why aren't similar features found for a flipped image? Shouldn't SURF (or SIFT or FREAK or whatever) find matching features regardless of how the image is oriented? Isn't that one of the uses?
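For reference, a minimal sketch of the re-flip workaround mentioned above, assuming the OpenCV 2.4.x C++ API and a hypothetical `--unflip` command-line switch (the original comment doesn't say how the option is exposed):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>

int main(int argc, char** argv)
{
    // Hypothetical switch for the re-flip option described in the comment.
    bool unflip = (argc > 1 && std::string(argv[1]) == "--unflip");

    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    cap >> frame;

    // flipCode 1 mirrors horizontally, 0 flips vertically, -1 does both.
    if (unflip)
        cv::flip(frame, frame, 1);

    // ... feature detection would continue on 'frame' from here.
    return 0;
}
```

As for the second question, SIFT and SURF descriptors are designed to be rotation-invariant but not mirror-invariant, which would explain why a flipped frame can legitimately yield a different set of matchable features.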

2013-08-13 09:52:48 -0600 commented answer SURF and SIFT detect different features depending on how the application is run

Instead of tracking features over time, I tried averaging together several camera images to reduce the effects of transient noise, and then blurring the result so that only large features would be left. Same result: it computes a valid homography matrix only when the application is run in a certain way from a single account.
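A sketch of that frame-averaging-and-blurring attempt, assuming the OpenCV 2.4.x C++ API; the frame count and Gaussian kernel size are illustrative values, not the ones used in the original post:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Average several frames to suppress transient noise, then blur so that
// only large features survive.
cv::Mat averagedBlurredFrame(cv::VideoCapture& cap, int numFrames = 10)
{
    cv::Mat frame, acc;
    for (int i = 0; i < numFrames; ++i)
    {
        cap >> frame;
        if (frame.empty())
            break;
        if (acc.empty())
            acc = cv::Mat::zeros(frame.size(), CV_32FC3);
        cv::accumulate(frame, acc);      // running sum in floating point
    }
    if (acc.empty())
        return cv::Mat();

    acc /= numFrames;                    // mean frame
    cv::Mat avg8u, blurred;
    acc.convertTo(avg8u, CV_8U);         // back to 8-bit for the detectors
    cv::GaussianBlur(avg8u, blurred, cv::Size(9, 9), 0);
    return blurred;
}
```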

2013-08-12 13:41:42 -0600 received badge Supporter (source)
2013-08-09 21:57:54 -0600 received badge Student (source)
2013-08-09 14:47:26 -0600 asked a question SURF and SIFT detect different features depending on how the application is run

I'm seeing strange behavior from cv::SurfFeatureDetector::detect and cv::SiftFeatureDetector::detect. I've got a camera pointed at a monitor, and I need to know how the camera image coordinates correspond to the screen coordinates. My approach is to put an image on the screen and grab a camera shot. I then detect features in the screen image and the camera image, find matching feature pairs, and use their coordinates to compute a homography matrix.
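For context, a minimal sketch of that pipeline in the OpenCV 2.4.x C++ API (where SURF lives in the nonfree module). The Hessian threshold and RANSAC reprojection threshold are illustrative values, and `screenImg`/`cameraImg` are assumed to be grayscale images already loaded by the caller:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SurfFeatureDetector / SurfDescriptorExtractor in 2.4.x
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

cv::Mat screenToCameraHomography(const cv::Mat& screenImg, const cv::Mat& cameraImg)
{
    cv::SurfFeatureDetector detector(400);      // Hessian threshold (illustrative)
    cv::SurfDescriptorExtractor extractor;

    std::vector<cv::KeyPoint> kpScreen, kpCamera;
    detector.detect(screenImg, kpScreen);
    detector.detect(cameraImg, kpCamera);

    cv::Mat descScreen, descCamera;
    extractor.compute(screenImg, kpScreen, descScreen);
    extractor.compute(cameraImg, kpCamera, descCamera);

    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descScreen, descCamera, matches);

    // Collect matched coordinates; findHomography needs at least 4 pairs,
    // and RANSAC rejects the outlier matches.
    std::vector<cv::Point2f> ptsScreen, ptsCamera;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        ptsScreen.push_back(kpScreen[matches[i].queryIdx].pt);
        ptsCamera.push_back(kpCamera[matches[i].trainIdx].pt);
    }
    return cv::findHomography(ptsScreen, ptsCamera, CV_RANSAC, 3.0);
}
```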

The problem occurs in feature detection. Depending on how the application is run, I get different sets of features for the camera image. Visually, the camera images are indistinguishable, but I get different feature sets if I run the app by double-clicking on it, by running it from the debugger, or by running it from a different account. This is all on the same computer with the same camera and monitor, running the exact same executable, not a copy.

I've tried relaxing and tightening various parameters of SIFT and SURF, but the results always differ depending on how the application is run.
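The kind of parameter tuning referred to above looks roughly like this in the 2.4.x API; the concrete thresholds are illustrative, not the values actually tried:

```cpp
#include <opencv2/nonfree/features2d.hpp>

// Lower thresholds admit more (weaker) keypoints; higher ones keep only strong responses.
cv::SurfFeatureDetector surfLoose(100.0);    // low Hessian threshold: many keypoints
cv::SurfFeatureDetector surfStrict(1000.0);  // high Hessian threshold: only strong blobs

// SIFT(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma)
cv::SiftFeatureDetector siftLoose(0, 3, 0.02, 15.0, 1.6);
cv::SiftFeatureDetector siftStrict(0, 3, 0.08, 5.0, 1.6);
```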

I'm using OpenCV 2.4.5, building with Visual Studio 2010 and running on Windows 7 Pro 64-bit, although I'm building the OpenCV application as 32-bit.

Has anybody else seen this behavior?