2016-06-07 04:13:01 -0600 | received badge | ● Notable Question (source) |
2014-05-19 23:53:12 -0600 | received badge | ● Popular Question (source) |
2013-10-07 05:34:15 -0600 | commented answer | how do descriptors treat image bounds I marked this as the right answer, because it is the actual source code from https://github.com/Itseez/opencv/tree/master/modules/features2d/src. So this should be proof enough. Thanks to all who participated in answering my question. |
2013-08-09 08:50:26 -0600 | commented answer | how do descriptors treat image bounds Maybe I was a bit too hasty to suggest answering the question like this. It is a good idea to get this "suspicion" confirmed by someone official, although I'm quite positive that Steven is actually right. I'll leave it unmarked for now and wait for confirmation. The reason why I rushed him into answering was that there are so many unanswered questions on this site, which lessens the overall quality a bit... |
2013-08-09 05:24:55 -0600 | received badge | ● Nice Question (source) |
2013-08-09 04:44:19 -0600 | commented question | how do descriptors treat image bounds btw: If anyone of you might care to write an answer, I'll gladly accept it as the right answer. |
2013-08-09 04:40:46 -0600 | commented question | how do descriptors treat image bounds That's what I thought. I'm trying to implement the LEHF descriptor, which describes not points but lines. See this paper: http://www.bmva.org/bmvc/2012/BMVC/paper083/paper083.pdf They don't mention how they handle this. Discarding the whole line might not be the best option, because there will always be some continuous lines reaching over the whole image, e.g. from left to right. Do you have any thoughts on this? |
2013-08-07 11:07:32 -0600 | asked a question | how do descriptors treat image bounds Suppose a keypoint is located near an image edge (e.g. at Image(2, 2)). What does the descriptor do if it describes a region with a 10 px radius? Does it compute the values for valid pixels only, or does the keypoint get rejected completely? |
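(The accepted answer pointed at the features2d sources, where keypoints too close to the border are simply discarded. A minimal plain-Python sketch of that idea follows — it only illustrates the border check, it is not OpenCV's actual code, and all names in it are made up for illustration.)

```python
def filter_border_keypoints(keypoints, image_size, patch_radius):
    """Discard keypoints whose descriptor patch would leave the image.

    keypoints: list of (x, y) tuples; image_size: (width, height).
    A keypoint survives only if a square patch of `patch_radius`
    around it fits entirely inside the image bounds.
    """
    w, h = image_size
    return [
        (x, y) for (x, y) in keypoints
        if patch_radius <= x < w - patch_radius
        and patch_radius <= y < h - patch_radius
    ]

kps = [(2, 2), (50, 50), (318, 120)]
# With a 10 px radius on a 320x240 image, (2, 2) and (318, 120) are
# too close to the border, so only (50, 50) survives.
print(filter_border_keypoints(kps, (320, 240), 10))
```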
2013-08-05 07:24:41 -0600 | commented question | OpenCV 2.4 build Failed error while creating iOS framework in Mac Just to extend my previous comment: The file 'jmemansi.o' is missing for all three platforms (armv7, armv7s & i386). |
2013-08-05 07:13:16 -0600 | commented question | OpenCV 2.4 build Failed error while creating iOS framework in Mac Same here. Actually there is a build failure well before the lines you posted. /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool: can't open file: /Users/christophkapffer/_privat/ios/build/iPhoneOS-armv7/3rdparty/libjpeg/OpenCV.build/Release-iphoneos/libjpeg.build/Objects-normal/armv7/jmemansi.o (No such file or directory) Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool failed with exit code 1 That is why libtool can't merge the different builds to create a universal framework. There is no install directory for iPhoneOS-armv7, because that build failed in the first place (see above). |
2013-05-07 06:28:29 -0600 | received badge | ● Good Question (source) |
2013-01-31 03:55:32 -0600 | commented question | iOS6 + Opencv (Latest Compile) Linker Only for classes in Feature2D module In your build settings under Apple LLVM compiler have you tried setting the C++ Standard Library to "LLVM C++ standard library with C++11 support"? |
2012-08-01 05:20:31 -0600 | received badge | ● Nice Question (source) |
2012-08-01 05:15:00 -0600 | received badge | ● Scholar (source) |
2012-08-01 05:14:59 -0600 | received badge | ● Supporter (source) |
2012-08-01 05:13:38 -0600 | commented answer | performance of findHomography Thank you for these great suggestions. By now I have only implemented the first one, and I can confirm that the quality of the matches is indeed a key factor regarding speed. I started with a simple distance filter to get the n best matches and might have found a bug. If I use the built-in cross-check method in the brute force matcher (initializing it like this: I am positive that these distance values were responsible for the slow-down in |
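(The "simple distance filter to get the n best matches" mentioned above can be sketched in a few lines of plain Python. The tuple layout is an assumed stand-in for OpenCV's DMatch objects, which carry a `distance` field; this is an illustration, not the matcher's actual API.)

```python
def best_n_matches(matches, n):
    """Keep the n matches with the smallest descriptor distance.

    Each match is a (query_idx, train_idx, distance) tuple,
    mimicking the fields of cv::DMatch.
    """
    return sorted(matches, key=lambda m: m[2])[:n]

matches = [(0, 3, 0.9), (1, 7, 0.2), (2, 5, 0.4)]
print(best_n_matches(matches, 2))  # the two lowest-distance matches
```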
2012-07-31 12:37:07 -0600 | commented question | performance of findHomography The object image has about 260 keypoints at a resolution of 327x245 px, and the scene images (camera frames) usually have between 460 and 490 keypoints at 352x288. They usually give me 260 matches. Another thing I might add is the fact that the tracked object is expected to appear about 2 to 3 times smaller in the scene image than in its reference image, so in the scene image there are obviously fewer keypoints on the object itself than on some background clutter. So I guess I have to get rid of some bad matches first, before calculating the homography? |
2012-07-31 11:35:13 -0600 | received badge | ● Student (source) |
2012-07-31 10:48:33 -0600 | asked a question | performance of findHomography I am trying to detect a planar object in a video stream. Keypoint detection and feature extraction both work fairly well, as does matching. On a side note: I also had a hard time getting a useful matrix out of |
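(One way to see why match quality is the key speed factor here: findHomography's RANSAC stage needs more iterations as the outlier fraction grows. The sketch below uses the standard textbook iteration-count estimate — not OpenCV's exact termination criterion.)

```python
import math

def ransac_iterations(inlier_ratio, sample_size=4, confidence=0.99):
    """Iterations needed so that, with probability `confidence`, at
    least one random sample of `sample_size` matches is outlier-free.
    Homography estimation draws 4 point correspondences per sample.
    """
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - inlier_ratio ** sample_size))

print(ransac_iterations(0.8))  # mostly good matches: 9 iterations
print(ransac_iterations(0.3))  # mostly clutter: 567 iterations
```

This is why pre-filtering the roughly 260 raw matches down to a smaller, cleaner set speeds things up: each bad match lowers the inlier ratio, and the required iteration count grows sharply with it.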