2017-08-07 07:55:41 -0600 | commented question | how to make a bounding box around an object? This should help |
2017-08-04 09:31:37 -0600 | asked a question | Feature Matching; Detection of multiple Object Instances Hello, I would like to implement a feature-matching approach for detecting multiple object instances. In related questions [http://answers.opencv.org/question/17...] [http://answers.opencv.org/question/45...] mean-shift clustering of the feature points is recommended. On SO a Python implementation is given. Is there a C++ equivalent for the approach from V. Gai, especially the following MeanShift part? |
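The clustering step asked about above can be sketched without scikit-learn: a minimal flat-kernel mean-shift over 2D keypoint locations in plain C++. This is only an illustrative sketch (struct and function names are made up, not part of OpenCV), not a drop-in replacement for sklearn's MeanShift:

```cpp
#include <vector>
#include <cmath>

struct Pt { double x, y; };

// Flat-kernel mean-shift: move each point toward the mean of its
// neighbours within `bandwidth` until convergence, then merge modes
// that land close together. Returns one cluster label per input point.
std::vector<int> meanShiftCluster(const std::vector<Pt>& pts,
                                  double bandwidth,
                                  int maxIter = 100) {
    std::vector<Pt> modes = pts;
    for (auto& m : modes) {
        for (int it = 0; it < maxIter; ++it) {
            double sx = 0, sy = 0; int n = 0;
            for (const auto& p : pts) {
                double dx = p.x - m.x, dy = p.y - m.y;
                if (dx * dx + dy * dy <= bandwidth * bandwidth) {
                    sx += p.x; sy += p.y; ++n;
                }
            }
            if (n == 0) break;                 // isolated mode, stop
            Pt next{sx / n, sy / n};
            double shift = std::hypot(next.x - m.x, next.y - m.y);
            m = next;
            if (shift < 1e-3) break;           // converged
        }
    }
    // Merge converged modes that lie within half a bandwidth of each other.
    std::vector<Pt> centers;
    std::vector<int> labels(pts.size(), -1);
    for (size_t i = 0; i < modes.size(); ++i) {
        for (size_t c = 0; c < centers.size(); ++c) {
            if (std::hypot(modes[i].x - centers[c].x,
                           modes[i].y - centers[c].y) < bandwidth / 2) {
                labels[i] = (int)c;
                break;
            }
        }
        if (labels[i] < 0) {
            centers.push_back(modes[i]);
            labels[i] = (int)centers.size() - 1;
        }
    }
    return labels;
}
```

With keypoint matches grouped by label like this, each cluster can then be fed separately into a homography estimation to localize one object instance per cluster.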
2017-06-08 00:39:58 -0600 | commented question | Comparing Two Contours: Rotation invariant? So how can this part of the code work if the extractor is invariant? Edit: Because if it's invariant, dist would be independent of the angle - or am I misunderstanding something? |
2017-06-07 13:45:36 -0600 | asked a question | Comparing Two Contours: Rotation invariant? I found one approach for estimating the orientation of two contours here, which rotates one contour and checks the distance to the original. I changed the headers and the main accordingly. It may be a stupid question, but first of all I don't know why the transformation of the contours should improve the result of computeDistance. Is cv::ShapeContextDistanceExtractor not invariant to rotation and translation, because it does an internal fit? If that were the case my results would be coherent, because I always get 0 as the distance (but unfortunately no output image either). Also, the results from another program, where I match rotated contours with cv::ShapeContextDistanceExtractor as well as the Hausdorff metric, do not seem wrong (small distances, but never exactly 0). |
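The rotation experiment described above can be reproduced without OpenCV. The sketch below (plain C++, hypothetical helper names) rotates a point set about its centroid and measures a directed Hausdorff distance; the point is that a raw Hausdorff distance is not rotation invariant, which is why the linked code searches over rotations explicitly:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Pt { double x, y; };

// Rotate a point set by `angleRad` about its centroid.
std::vector<Pt> rotateAboutCentroid(const std::vector<Pt>& pts, double angleRad) {
    double cx = 0, cy = 0;
    for (const auto& p : pts) { cx += p.x; cy += p.y; }
    cx /= pts.size(); cy /= pts.size();
    double c = std::cos(angleRad), s = std::sin(angleRad);
    std::vector<Pt> out;
    out.reserve(pts.size());
    for (const auto& p : pts) {
        double dx = p.x - cx, dy = p.y - cy;
        out.push_back({cx + c * dx - s * dy, cy + s * dx + c * dy});
    }
    return out;
}

// Directed Hausdorff distance max_a min_b |a - b|. This depends on the
// relative orientation of the two sets, i.e. it is NOT rotation invariant.
double directedHausdorff(const std::vector<Pt>& a, const std::vector<Pt>& b) {
    double h = 0;
    for (const auto& pa : a) {
        double dmin = 1e300;
        for (const auto& pb : b)
            dmin = std::min(dmin, std::hypot(pa.x - pb.x, pa.y - pb.y));
        h = std::max(h, dmin);
    }
    return h;
}
```

A square rotated by 90° maps onto itself (distance ~0), while the same square rotated by 45° does not, so the distance jumps - which is consistent with the extractor NOT being fully rotation invariant and an explicit rotation search helping.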
2017-06-07 13:16:35 -0600 | commented question | Problem with estimateRigidTransform: mat dst is empty Thanks for your code! Really impressive performance, but it will take me a while to understand everything in detail! |
2017-06-07 03:26:03 -0600 | commented question | How to estimate transformation after hausdorff / shape context matching I found one approach here, but I'm not sure why the transformation of the contours should improve the result of computeDistance. Is cv::ShapeContextDistanceExtractor not invariant to rotation and translation? Berak mentioned the internal fit. |
2017-06-07 00:10:12 -0600 | received badge | ● Enthusiast |
2017-06-06 13:25:51 -0600 | commented question | Problem with estimateRigidTransform: mat dst is empty Thank you, I will have a look. One last question: is the order of the points relevant for findHomography? |
2017-06-06 07:53:53 -0600 | commented question | Problem with estimateRigidTransform: mat dst is empty Thanks again! After editing your dst Mat as described here, I was able to map the points in your example to the right correspondence, which means I am still not 100% sure whether the order in the vector matters - it seems to come down to instability or luck. Anyhow, if I push more than 5 points into vertex1, dst becomes empty again, even with the bool fullAffine set to true. I think the function is limited to a few points. If there is no other way to estimate a rigid transform in OpenCV with many unsorted points, can I use findHomography instead (perhaps as overkill)? |
2017-06-06 04:22:39 -0600 | commented question | Problem with estimateRigidTransform: mat dst is empty Thank you for your response. If I understand you correctly, the order of the points in the vector is relevant? I noticed that if I changed the line, I get output of the form R:[1, -6.938148514913645e-16, 7.614916766799279e-14; 6.938148514913645e-16, 1, -1.285599231237722e-13] instead of R:[]. Nevertheless, the points are random_shuffled, aren't they? Would it help if I uploaded the pictures used (rotated apple shapes from the MPEG dataset)? |
2017-06-06 03:26:44 -0600 | asked a question | Problem with estimateRigidTransform: mat dst is empty Hello everyone, I'm new to OpenCV, so it could just be a misunderstanding of the estimateRigidTransform function on my part: in the following code I find the contours of two rigidly translated objects in img1 and img2, but estimateRigidTransform does not seem to work the way I thought it would. It would be nice if someone had an idea why the Mat dst stays empty. Thank you! |
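When estimateRigidTransform returns an empty Mat, a least-squares 2D rigid fit (rotation + translation) over ordered correspondences can also be computed in closed form, the 2D Procrustes solution. The sketch below assumes src[i] corresponds to dst[i] (correspondence order matters here exactly as it does for estimateRigidTransform); struct and function names are made up for illustration, and no OpenCV is used:

```cpp
#include <vector>
#include <cmath>

struct Pt { double x, y; };
struct Rigid { double theta, tx, ty; };  // rotation angle + translation

// Closed-form least-squares rigid fit between two point sets with KNOWN
// one-to-one correspondences (src[i] -> dst[i]):
//   1. center both sets on their centroids,
//   2. theta = atan2(sum of cross products, sum of dot products),
//   3. translation maps the rotated source centroid onto the dst centroid.
Rigid fitRigid(const std::vector<Pt>& src, const std::vector<Pt>& dst) {
    size_t n = src.size();
    double sax = 0, say = 0, sbx = 0, sby = 0;
    for (size_t i = 0; i < n; ++i) {
        sax += src[i].x; say += src[i].y;
        sbx += dst[i].x; sby += dst[i].y;
    }
    double cax = sax / n, cay = say / n, cbx = sbx / n, cby = sby / n;
    double dot = 0, cross = 0;  // sums over centered point pairs
    for (size_t i = 0; i < n; ++i) {
        double ax = src[i].x - cax, ay = src[i].y - cay;
        double bx = dst[i].x - cbx, by = dst[i].y - cby;
        dot   += ax * bx + ay * by;
        cross += ax * by - ay * bx;
    }
    double theta = std::atan2(cross, dot);
    double c = std::cos(theta), s = std::sin(theta);
    return { theta, cbx - (c * cax - s * cay), cby - (s * cax + c * cay) };
}
```

Unlike an unordered cloud, this fit works for any number of points, but it only answers the question once correspondences are established; for unsorted points, a matcher (or findHomography with RANSAC) has to provide the pairing first.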
2017-06-04 12:52:22 -0600 | commented question | How to estimate transformation after hausdorff / shape context matching Thank you berak, that was one of the pieces of information I was looking for. Are there any better alternatives to the shape module that do matching and detection without an additional step, and if not, is estimateRigidTransform the right function? |
2017-06-02 11:10:18 -0600 | commented question | How to estimate transformation after hausdorff / shape context matching Thank you for your answer. Does this mean that with Hausdorff you can only do classification but no detection, and is detection possible with the ShapeContextDistanceExtractor? |
2017-06-02 09:07:32 -0600 | asked a question | How to estimate transformation after hausdorff / shape context matching A similar question is asked here, if I understood it correctly: how can you estimate the location and orientation of a rigidly/affinely transformed image after you have extracted the distance and know that the compared images are similar? I tried estimateRigidTransform after casting the vector<Point> to vector<Point2f>, but the resulting Mat stays empty. Thank you for your help; the shape context demo can be found here. |
2017-06-02 07:45:32 -0600 | commented question | Fast template matching Image Pyramids Thank you for your help! |
2017-06-02 07:10:16 -0600 | received badge | ● Editor (source) |
2017-06-02 04:06:05 -0600 | asked a question | Fast template matching Image Pyramids Hello everyone, for fast template matching with varying sizes and orientations I often found references to this link, which is unfortunately broken. Does anyone know if this example still exists? Please forgive me if the question is too specific, and thanks for your help. |
2017-06-02 03:30:17 -0600 | answered a question | how to overlay shapes in shape context/hausdorff matching. Not the newest question, but I also couldn't find an answer to this: how can you estimate the location and orientation of a rigidly transformed image after you have extracted the distance and know that the compared images are similar? I tried estimateRigidTransform, but the resulting Mat stays empty. |