
christoph's profile - activity

2016-06-07 04:13:01 -0500 received badge  Notable Question (source)
2014-05-19 23:53:12 -0500 received badge  Popular Question (source)
2013-10-07 05:34:15 -0500 commented answer how do descriptors treat image bounds

I marked this as the right answer because it is the actual source code from https://github.com/Itseez/opencv/tree/master/modules/features2d/src, so this should be proof enough. Thanks to all who participated in answering my question.

2013-08-09 08:50:26 -0500 commented answer how do descriptors treat image bounds

Maybe I was a bit too hasty in suggesting that the question be answered like this. It is a good idea to get this "suspicion" confirmed by someone official, although I'm quite positive that Steven is actually right. I'll leave it unmarked for now and wait for confirmation. The reason I rushed him into answering was that there are so many unanswered questions on this site, which lessens the overall quality a bit...

2013-08-09 05:24:55 -0500 received badge  Nice Question (source)
2013-08-09 04:44:19 -0500 commented question how do descriptors treat image bounds

btw: If any of you care to write an answer, I'll gladly accept it as the right answer.

2013-08-09 04:40:46 -0500 commented question how do descriptors treat image bounds

That's what I thought. I'm trying to implement the LEHF descriptor, which describes not points but lines. See this paper: http://www.bmva.org/bmvc/2012/BMVC/paper083/paper083.pdf They don't mention how they handle this. Discarding the whole line might not be the best option, because there will always be some continuous lines reaching across the whole image, e.g. from left to right. Do you have any thoughts on this?

2013-08-07 11:07:32 -0500 asked a question how do descriptors treat image bounds

Suppose a keypoint is located near an image border (e.g. at Image(2, 2)). What does the descriptor do if it describes a region with a 10 px radius?

Does it compute the values for valid pixels only? Or does the keypoint get rejected completely?
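For illustration, the "valid pixels only" strategy would amount to clamping the description window to the image bounds. This is a hypothetical sketch of that idea, not OpenCV's actual behavior (as the accepted answer's source links suggest, OpenCV instead discards keypoints too close to the border); the function name is made up:

```python
def clamped_patch_bounds(x, y, radius, width, height):
    """Clamp a square description window around (x, y) to the image bounds.

    Returns half-open ranges (x0, x1, y0, y1) covering only valid pixels.
    """
    x0 = max(0, x - radius)
    x1 = min(width, x + radius + 1)
    y0 = max(0, y - radius)
    y1 = min(height, y + radius + 1)
    return x0, x1, y0, y1

# A keypoint at (2, 2) with a 10 px radius in a 100x100 image:
# the window is cut off at the top-left corner.
print(clamped_patch_bounds(2, 2, 10, 100, 100))  # → (0, 13, 0, 13)
```

The trade-off: a clamped patch keeps the keypoint but describes fewer pixels, which changes the descriptor's statistics near borders; rejecting the keypoint keeps descriptors comparable at the cost of losing detections near the edge.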

2013-08-05 07:24:41 -0500 commented question OpenCV 2.4 build Failed error while creating iOS framework in Mac

Just to extend my previous comment: The file 'jmemansi.o' is missing for all three platforms (armv7, armv7s & i386).

2013-08-05 07:13:16 -0500 commented question OpenCV 2.4 build Failed error while creating iOS framework in Mac

Same here. Actually, there is a build failure well before the lines you posted.

/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool: can't open file: /Users/christophkapffer/_privat/ios/build/iPhoneOS-armv7/3rdparty/libjpeg/OpenCV.build/Release-iphoneos/libjpeg.build/Objects-normal/armv7/jmemansi.o (No such file or directory)
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool failed with exit code 1

That is why libtool can't merge the different builds to create a universal framework: there is no install directory for iPhoneOS-armv7, because that build failed in the first place (see above).

2013-05-07 06:28:29 -0500 received badge  Good Question (source)
2013-01-31 03:55:32 -0500 commented question iOS6 + Opencv (Latest Compile) Linker Only for classes in Feature2D module

In your build settings under Apple LLVM compiler have you tried setting the C++ Standard Library to "LLVM C++ standard library with C++11 support"?

2012-08-01 05:20:31 -0500 received badge  Nice Question (source)
2012-08-01 05:15:00 -0500 received badge  Scholar (source)
2012-08-01 05:14:59 -0500 received badge  Supporter (source)
2012-08-01 05:13:38 -0500 commented answer performance of findHomography

Thank you for these great suggestions. So far I have only implemented the first one, and I can confirm that the quality of the matches is indeed a key factor for speed. I started with a simple distance filter to get the n best matches and might have found a bug.

If I use the built-in cross-check in the brute-force matcher (initializing it like this: BFMatcher(NORM_HAMMING, true)), all distances have a value of 2.14748e+09, which looks like some kind of overflow to me. If I use BFMatcher(NORM_HAMMING), I get reasonable values. Features and descriptors are both ORB. I'm using the stable 2.4.2 iOS build on an iPhone 4S.

I am positive that these distance values were responsible for the slowdown in findHomography in the first place.
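A "distance filter to get the n best matches", as described above, could be sketched like this in pure Python. `Match` is a stand-in for cv::DMatch with only the fields needed here; the helper name is made up:

```python
from collections import namedtuple

# Stand-in for cv::DMatch; only the distance field matters for filtering.
Match = namedtuple("Match", ["query_idx", "train_idx", "distance"])

def n_best_matches(matches, n):
    """Return the n matches with the smallest descriptor distance."""
    return sorted(matches, key=lambda m: m.distance)[:n]

matches = [Match(0, 3, 41.0), Match(1, 7, 12.0), Match(2, 5, 77.0)]
print(n_best_matches(matches, 2))  # the two closest matches, best first
```

With the suspicious ~2.1e+09 distances reported above, a filter like this would also make the problem obvious: every match would carry the same implausibly large distance, so sorting by distance becomes meaningless.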

2012-07-31 12:37:07 -0500 commented question performance of findHomography

The object image has about 260 keypoints at a resolution of 327x245 px, and the scene images (camera frames) usually have between 460 and 490 keypoints at 352x288. They usually give me about 260 matches. Another thing I might add: the tracked object is expected to appear about 2 to 3 times smaller in the scene image than in its reference image, so in the scene image there are obviously fewer keypoints on the object itself than on the background clutter. So I guess I have to get rid of some bad matches before calculating the homography?

2012-07-31 11:35:13 -0500 received badge  Student (source)
2012-07-31 10:48:33 -0500 asked a question performance of findHomography

I am trying to detect a planar object in a video stream. Keypoint detection, feature extraction and matching all work fairly well. findHomography, however, takes very long (up to 1.6 s on my mobile). I tried several combinations of detector/extractor/matcher to change the number and quality of keypoints. My current setup is ORB and BFMatcher<Hamming>. My question is: how can I speed up the homography calculation? I'm using RANSAC, but changing the threshold seems to have little to no effect (neither on quality nor on performance). Should I use some other method like getPerspectiveTransform, estimateAffine3D or solvePnP?
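One standard way to cut RANSAC time is to feed findHomography fewer, cleaner correspondences, e.g. via Lowe's ratio test on 2-NN matches (this is a generic technique, not something from the question itself). A pure-Python sketch, where each entry of `knn_matches` is assumed to be a list of up to two `(train_idx, distance)` pairs, best first:

```python
def ratio_test(knn_matches, ratio=0.75):
    """Lowe's ratio test: keep a match only if its best distance is
    clearly smaller than the second-best distance."""
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0][1] < ratio * pair[1][1]:
            good.append(pair[0])
    return good

knn = [[(3, 10.0), (8, 40.0)],   # passes: 10 < 0.75 * 40
       [(5, 30.0), (9, 33.0)]]   # fails:  30 >= 0.75 * 33
print(ratio_test(knn))  # → [(3, 10.0)]
```

Fewer outliers means RANSAC needs far fewer iterations to find a consensus set, which usually matters more for findHomography's runtime than the reprojection threshold does.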

On a side note: I also had a hard time getting a useful matrix out of findHomography in the first place, although I stuck pretty close to OpenCV's samples.