How to estimate the transformation after Hausdorff / shape context matching

asked 2017-06-02 09:07:32 -0600 by JoeBroesel

A similar question is asked here, if I understood it right:

How can you estimate the location and orientation of a rigidly/affinely transformed image after you have computed the distance and know that the compared images are similar? I tried estimateRigidTransform after casting the vector<Point> to vector<Point2f>, but the resulting Mat stays empty.
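
Roughly what I tried, as a minimal sketch (not my exact code; estimateContourTransform is just a wrapper name for illustration, and the contours come from cv::findContours):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // c1: contour from the reference image, c2: contour from the transformed image
    cv::Mat estimateContourTransform(const std::vector<cv::Point>& c1,
                                     const std::vector<cv::Point>& c2)
    {
        // cast vector<Point> to vector<Point2f>
        std::vector<cv::Point2f> p1(c1.begin(), c1.end());
        std::vector<cv::Point2f> p2(c2.begin(), c2.end());

        // false = rigid (rotation, translation, uniform scale), true = full affine.
        // As far as I understand, the function treats p1[i] and p2[i] as a
        // corresponding pair and returns an empty Mat when no transform can be
        // fitted -- maybe that is why my result stays empty, since raw contour
        // points are not real correspondences.
        return cv::estimateRigidTransform(p1, p2, false);
    }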

Thank you for your help. The shape context demo can be found here.


Comments

the Hausdorff distance is not really a "shape context distance": no fitting between the contours is applied; both contours stay "as they are" and are matched as such.
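
for illustration, a minimal sketch of how the extractor is typically used -- the contours go in unmodified:

    #include <opencv2/shape.hpp>
    #include <vector>

    // the two contours are compared exactly as given, no alignment step
    float hausdorff(const std::vector<cv::Point>& contour1,
                    const std::vector<cv::Point>& contour2)
    {
        cv::Ptr<cv::HausdorffDistanceExtractor> hd =
            cv::createHausdorffDistanceExtractor();
        return hd->computeDistance(contour1, contour2);
    }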

berak ( 2017-06-02 09:54:26 -0600 )

Thank you for your answer. Does this mean that with the Hausdorff distance you can only do classification but no detection? And is detection possible with the ShapeContextDistanceExtractor?

JoeBroesel ( 2017-06-02 11:10:18 -0600 )

I might misunderstand you here, but none of the classes in the shape module does detection, only classification.

the shape context distances do an internal "fit" from one shape to another, but you can't easily access the transformed output from the API.
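
if you really need the fitted shape, you could redo the fit with the public ShapeTransformer api -- a rough, untested sketch, assuming you already have index-wise point matches (getting those is the hard part):

    #include <opencv2/shape.hpp>
    #include <vector>

    // pts1[i] is assumed to correspond to pts2[i]
    std::vector<cv::Point2f> fitTPS(const std::vector<cv::Point2f>& pts1,
                                    const std::vector<cv::Point2f>& pts2)
    {
        // the shape module wants 1xN, CV_32FC2 shapes
        cv::Mat s1 = cv::Mat(pts1).reshape(2, 1);
        cv::Mat s2 = cv::Mat(pts2).reshape(2, 1);

        std::vector<cv::DMatch> matches;
        for (int i = 0; i < (int)pts1.size(); i++)
            matches.push_back(cv::DMatch(i, i, 0.0f));

        cv::Ptr<cv::ThinPlateSplineShapeTransformer> tps =
            cv::createThinPlateSplineShapeTransformer();
        tps->estimateTransformation(s1, s2, matches);

        std::vector<cv::Point2f> warped;
        tps->applyTransformation(s1, warped); // pts1 mapped towards pts2
        return warped;
    }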

berak ( 2017-06-03 00:59:39 -0600 )

Thank you berak, that was one of the pieces of information I was looking for. Are there better alternatives to the shape module that do matching and detection without an additional step, and if not, is estimateRigidTransform the right function?

JoeBroesel ( 2017-06-04 12:52:22 -0600 )

I found one approach here, but I'm not sure why transforming the contours should improve the result of computeDistance. Is cv::ShapeContextDistanceExtractor not invariant to rotation and translation? Berak mentioned the internal fit.
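
One thing I noticed in the docs: the extractor has a rotation-invariance switch, which seems to be off by default (if I read the docs right) -- a minimal, untested sketch of what I mean:

    #include <opencv2/shape.hpp>
    #include <vector>

    float scdDistance(const std::vector<cv::Point>& c1,
                      const std::vector<cv::Point>& c2)
    {
        cv::Ptr<cv::ShapeContextDistanceExtractor> scd =
            cv::createShapeContextDistanceExtractor();
        // without this, the descriptor is not rotation invariant
        scd->setRotationInvariant(true);
        return scd->computeDistance(c1, c2);
    }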

JoeBroesel ( 2017-06-07 03:26:03 -0600 )