# How to increase warpPerspective Or warpAffine precision?

I would like to rotate my images by 0.001 degree and translate them by 0.0001 pixel along rows or columns. The problem is that OpenCV cannot handle such tiny transformation steps: the source and transformed images come out identical!

Is there any way to increase the precision so that I get correct, different images when I rotate by 0.001 degree and translate by 0.0001 pixel?

One approach is to change the interpolation table size, which is defined by `INTER_BITS` inside imgproc.hpp, and compile the OpenCV library from source, but that increases the precision only a little.


1000 · cos(0.001°) ≈ 1000 − 0.0000002

For a translation of 0.0001 pixel: if you have a pixel with value 256 and its nearest neighbour is 0, the new pixel will be 256 · (1 − 0.0001) = 255.9744.

Have you tried with type CV_32F?

My image pixel type is CV_64F, not CV_32F. Consider this source image: [93.0, 0.0]. Translating by 0.1 along the columns should give [93.0 - 9.3, 9.3] = [83.7, 9.3], right? But it gives me [84.28125, 8.71875]!

This is the way I create for instance one of my images:

cv::Mat srcImg = (cv::Mat_<double>(3, 3) << 0.0, 0.0, 0.0,
                                            0.0, 256.0, 0.0,
                                            0.0, 0.0, 0.0);

And when I translate by 0.1 along the rows, this is the result:

[0.0, 0.0, 0.0,
0.0, 232.0, 0.0,
0.0, 24.0, 0.0]
Which is wrong! (Exact linear interpolation would give 256 · 0.9 = 230.4 and 256 · 0.1 = 25.6.)


OK, I see your problem. I reproduced it with this program:

cv::Mat srcImg = (cv::Mat_<double>(3, 3) << 0.0, 0.0, 0.0,
                                            0.0, 256.0, 0.0,
                                            0.0, 0.0, 0.0), dstImg;
cout << srcImg << endl;
vector<Point2f> srcTri(3), dstTri(3);
srcTri[0] = Point2f(0, 0);
srcTri[1] = Point2f(1, 0);
srcTri[2] = Point2f(1, 1);
for (int i = 0; i <= 10; i++)
{
    float dx = 0.01f * i;
    dstTri[0] = Point2f(0 + dx, 0);
    dstTri[1] = Point2f(1 + dx, 0);
    dstTri[2] = Point2f(1 + dx, 1);

    cv::Mat affinite = getAffineTransform(srcTri, dstTri);
    //cout << affinite << "\n";
    warpAffine(srcImg, dstImg, affinite, Size(5, 5), INTER_LINEAR);
    cout << dx << " -> " << dstImg.row(1) << endl;
}


The results are:

[0, 0, 0;
0, 256, 0;
0, 0, 0]
0 -> [0, 256, 0, 0, 0]
0.01 -> [0, 256, 0, 0, 0]
0.02 -> [0, 248, 8, 0, 0]
0.03 -> [0, 248, 8, 0, 0]
0.04 -> [0, 248, 8, 0, 0]
0.05 -> [0, 240, 16, 0, 0]
0.06 -> [0, 240, 16, 0, 0]
0.07 -> [0, 240, 16, 0, 0]
0.08 -> [0, 232, 24, 0, 0]
0.09 -> [0, 232, 24, 0, 0]
0.1 -> [0, 232, 24, 0, 0]


You see. So what do you think could be done to solve this kind of problem? Note that these small images are only for testing; my real images are quite large.


My answer: it is not possible. I tried remap, but the problem is the same...

Only a pull request (rewriting the code) can solve this problem. I think the problem is INTER_BITS, but I'm not sure.


It is INTER_BITS, and increasing it to a maximum of 10 (above 10, compilation crashes with an exception) improves the accuracy only a little. OpenCV uses look-up tables for the interpolation where possible, and there's quite a bit of rounding involved.

If you're translating by 1/10,000th of a pixel, you aren't doing image processing, you're doing math, and you should use the math functions. I would write a little function that takes the coordinates you want for each pixel and does linear interpolation using doubles all the way through. It's simple enough. Slow, but it'll work.


The problem is that I need this not only for translation but also for rotation and scaling (with respect to the image centre), and handling all of these transformations would be awkward. Also, my images are large; transforming them by looping pixel by pixel would take a long time.

As long as your scaling isn't too extreme (such that one output pixel would cover more than four source pixels), you can do all of these with the same function I described.

It will be slow no matter what you do, because achieving the precision requires doubles. The only speedups are parallelism (which you can apply easily to the outer loop) and vectorization, which is at best 2:1 for doubles.
