# resize and remap functions utterly wrong

As far as I can tell, both remap and resize functions are implemented incorrectly (at least with default bilinear interpolation). Consider the following code:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main(int argc, char **argv)
{
    int inputSize = 2;
    Mat test(inputSize, inputSize, CV_32FC1);
    test.at<float>(0,0) = 1.0f;
    test.at<float>(0,1) = 2.0f;
    test.at<float>(1,0) = 3.0f;
    test.at<float>(1,1) = 4.0f;

    int size = 4;
    Mat output;
    resize(test, output, Size(size,size), 0.0, 0.0, INTER_LINEAR);
    for (int ridx = 0; ridx < size; ++ridx)
    {
        for (int cidx = 0; cidx < size; ++cidx)
            printf("%f ", output.at<float>(ridx,cidx));
        printf("\n");
    }
    printf("\n");

    // Corner-aligned maps: sample the input at 1/3 intervals.
    Mat rowMap(size, size, CV_32FC1), colMap(size, size, CV_32FC1);
    for (int ridx = 0; ridx < size; ++ridx)
        for (int cidx = 0; cidx < size; ++cidx)
        {
            rowMap.at<float>(ridx,cidx) = ridx * (static_cast<float>(inputSize-1) / (size-1));
            colMap.at<float>(ridx,cidx) = cidx * (static_cast<float>(inputSize-1) / (size-1));
        }
    remap(test, output, colMap, rowMap, INTER_LINEAR);
    for (int ridx = 0; ridx < size; ++ridx)
    {
        for (int cidx = 0; cidx < size; ++cidx)
            printf("%f ", rowMap.at<float>(ridx,cidx));
        printf("\n");
    }
    printf("\n");
    for (int ridx = 0; ridx < size; ++ridx)
    {
        for (int cidx = 0; cidx < size; ++cidx)
            printf("%f ", colMap.at<float>(ridx,cidx));
        printf("\n");
    }
    printf("\n");
    for (int ridx = 0; ridx < size; ++ridx)
    {
        for (int cidx = 0; cidx < size; ++cidx)
            printf("%f ", output.at<float>(ridx,cidx));
        printf("\n");
    }
    return 0;
}
```


And output:

```
1.000000 1.250000 1.750000 2.000000
1.500000 1.750000 2.250000 2.500000
2.500000 2.750000 3.250000 3.500000
3.000000 3.250000 3.750000 4.000000

0.000000 0.000000 0.000000 0.000000
0.333333 0.333333 0.333333 0.333333
0.666667 0.666667 0.666667 0.666667
1.000000 1.000000 1.000000 1.000000

0.000000 0.333333 0.666667 1.000000
0.000000 0.333333 0.666667 1.000000
0.000000 0.333333 0.666667 1.000000
0.000000 0.333333 0.666667 1.000000

1.000000 1.343750 1.656250 2.000000
1.687500 2.031250 2.343750 2.687500
2.312500 2.656250 2.968750 3.312500
3.000000 3.343750 3.656250 4.000000
```


The results for resize and remap should both be smooth in 1/3 intervals, but I can't tell what OpenCV is doing here; these results look completely wrong to me. Please enlighten me!



### Situation

OpenCV is aware of this issue. It is actually at the top of their volunteer tasks list and is tracked on the bug list as issue 3212.

### The output of cv::resize.

Firstly, when scaling an image, there is the question of which physical meaning to assign to the coordinate mapping, i.e. where the output sample positions fall relative to the input sample positions.

A similar issue occurs in the chroma subsampling step of JPEG compression. One might think that a constant integer scaling factor removes all ambiguity, but it does not. The question of "centered" versus "co-sited" mapping applies to every discrete image scaling process, regardless of scaling factor and algorithm. For a thoughtful discussion, it helps to study similar cases that have already been analyzed well.

I personally used INTER_AREA when scaling images, because it is one of the scaling definitions with the least ambiguity.

### The output of cv::remap.

What you are seeing is an artifact of cv::remap internally using an interpolation weight table with limited arithmetic precision.

In your particular example, the interpolation weights are quantized to 5 bits of precision.

That means every output value is computed as
(TopLeft * weight00 + TopRight * weight01 + BottomLeft * weight10 + BottomRight * weight11)
in which each of the four weights is a multiple of 1/32, i.e. one of 0, 1/32, 2/32, ..., 31/32, 1.

1.343750 = 1 * (21 / 32) + 2 * (11 / 32)
1.656250 = 1 * (11 / 32) + 2 * (21 / 32)
1.687500 = 1 * (21 / 32) + 3 * (11 / 32)
2.031250 = 1 * (14 / 32) + 2 * (7 / 32) + 3 * (7 / 32) + 4 * (4 / 32)

The clue to this artifact is mentioned in the OpenCV documentation for remap and convertMaps.

remap supports three data formats for its coordinate maps:

• a tuple of 16SC2 and 16UC1,
• a pair of two 32FC1 maps, for X and Y respectively,
• a single 32FC2 map containing (X, Y) pairs.

The first data format is not even explained in the documentation for remap. If one passes plain (x, y) coordinate values using the first format, remap will either fail an assertion or generate corrupted output.

The answer is hidden in the documentation for the convertMaps function.

Quoted:

... are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false ) contains indices in the interpolation tables.

It is obvious that the documentation can be improved, and that the behavior could have been deprecated or separated into a differently-named function. However, because this behavior has been available to users for more than a decade, it cannot simply be removed, as that would break a lot of applications that depend on it.



I hear you on the resize; it certainly can be ambiguous. But I ran through the logical options, and the OpenCV resize results still didn't make sense.

Sounds like remap needs a flag to turn off the weighting table and get more accurate results. I just made my own version of the remap function, which takes only ~30% more time on a 10000x10000 float image (using TBB and a 64-core machine). Another oddity: on the same run I tried the convertMaps function, but the "performant" maps HURT performance by a factor of 4 (making remap more than twice as slow as my version)!

Not sure if/how OpenCV uses threads, but this suggests to me that the performance improvement(?) may not be worth it to the typical user.

( 2014-09-13 03:29:12 -0500 )

I share your feelings. I used OpenCV at work for its scaling and rotating functions, but eventually I learned that it was not suitable for images larger than what it was designed for, so I went ahead and implemented my own. Unfortunately, the work I did during office hours belongs to my employer's intellectual property, so I can't contribute it unless someone else does a clean-room re-implementation. You can see my journey in my earlier question and the related issue ticket #1337.

( 2014-09-13 12:49:30 -0500 )
