# How does cv::ShapeTransformer::estimateTransformation work [closed]

It seems like a simple question, but I don't get the right answer from the documentation: what is cv::ShapeTransformer::estimateTransformation for, to be concrete, the estimateTransformation function of an AffineTransformer?

1. Is there really no output of this function?
2. What format do targetShape and transformingShape have? (Is std::vector<cv::Point> right?)
-> vector<Point2f> (or vector<Point>, which will get converted internally; see berak's answer)
3. How can you estimate the std::vector<DMatch> from two contours?

These may sound like stupid questions, but I really can't find any examples, so please be patient with me.


### Closed for the following reason the question is answered, right answer was accepted by berak close date 2017-06-12 03:47:05.651913

1. in the case of an affine transform (I'll spare you the thin-plate-spline ...): from the input contours, a set of matching points is calculated, then estimateRigidTransform() is called on those corresponding points, resulting in an affine transform matrix, which is saved internally. (so, there is an output, but you're not supposed to access it)

2. vector<Point2f> (or vector<Point>, which will get converted internally.)

3. the matches are just a way to transport corresponding point indices here.

all in all, this class is part of the shape-context matching, and probably should not be used "out of context". (it's also an abstract class, so you cannot make an instance of it directly)

here is a usage example:

```cpp
cv::Ptr<cv::ShapeContextDistanceExtractor> mysc = cv::createShapeContextDistanceExtractor();
mysc->setIterations(1);
// specify a distance: chi-square, EMD, norm-based histogram cost, etc.
mysc->setCostExtractor(createChiHistogramCostExtractor(30, 0.15f));
// specify a transformation, tps or affine. *this is your ShapeTransformer in action!*
mysc->setTransformAlgorithm(createThinPlateSplineShapeTransformer());

vector<Point2f> query1, query2;                    // the 2 shapes (fill with contour points)
double d = mysc->computeDistance(query1, query2);  // final result
```


Thank you, @berak. One last question about the "out-of-context" use: I found this similar question and tried the following with the contour points img1Points and img2Points:

```cpp
vector<DMatch> matches;
Ptr<AffineTransformer> transformerHD = createAffineTransformer(false);
matches.push_back(DMatch(0, 0, 0));
matches.push_back(DMatch(1, 1, 0));
matches.push_back(DMatch(2, 2, 0));
matches.push_back(DMatch(3, 3, 0));
transformerHD->estimateTransformation(img1Points, img2Points, matches);
```


Isn't this what you called making an instance of it? And just for my understanding: the method estimateTransformation does not give me a result back?

(2017-06-09 03:42:25 -0500)

yes, that is an instance of the interface. and no, if you wanted the affine transformation matrix, you won't get it from this interface.

and the matches are the output of that function; they will be filled internally (not your job).

so, why use that class at all? wouldn't getAffineTransform or even estimateRigidTransform be an easier way?

(2017-06-09 05:20:25 -0500)

thank you @berak, I was trying to match shapes in a rigidly transformed and rotated image. Is this what you can achieve with getAffineTransform, and if so, how do I have to prepare the point sets?

(2017-06-10 12:15:03 -0500)

i think we should close this question, and that you should ask another one, describing your current situation (code?) and where you're trying to get from there.

(2017-06-10 13:01:11 -0500)

Sounds good, I will try to split my questions into separate topics. Again, thank you for your help, berak. Can someone close the question, please?

(2017-06-12 03:41:25 -0500)
