
Mapping not working as expected when using remap

asked 2017-01-14 08:07:58 -0600

MrGiskard

updated 2017-01-14 08:09:31 -0600

I am trying to warp a 640x360 image via the OpenCV remap function (in Python 2.7). The steps executed are the following:

  1. Generate a curve and store its x and y coordinates in two separate arrays, curve_x and curve_y. I am attaching the generated curve as an image (plotted with pyplot); a rough sketch of how such arrays can be built is included after the mapping code below. [plot of the generated curve]

  2. Load the image via the OpenCV imread function

    original = cv2.imread('C:\\Users\\User\\Desktop\\alaskan-landscaps3.jpg')
    
  3. Implement a mapping function to translate the y-coordinate of each pixel upwards by a distance proportional to the curve height. As each column of y-coordinates must be squeezed into a smaller space, a number of pixels are removed during the mapping.

Code:

import math
import numpy as np

# assumed setup, not shown in the original post: image size (640x360) and empty maps;
# curve_y_points comes from step 1
x_size, y_size = 640, 360
map_x = np.zeros((y_size, x_size), np.float32)
map_y = np.zeros((y_size, x_size), np.float32)

#array to store the previous y-coordinate, used as a counter during the mapping process
floor_y = np.zeros((x_size), np.float32)
#for each row and column of picture
for i in range(0, y_size):
    for j in range(0,x_size): 
        #calculate the distance between the top of the curve at the given x-coordinate and the top of the picture
        height_above_curve = (y_size-1) - curve_y_points[j]
        #calculate a mapping factor, using the total height of the picture and the distance above the curve
        mapping_factor = (y_size-1)/height_above_curve
        # if there was no curve at given x-coordinate then do not change the pixel coordinate
        if(curve_y_points[j]==0):
            map_y[i][j]=j
        #if this is the first time the column is traversed, save the curve y-coordinate
        elif (floor_y[j]==0):
            #the pixel is translated upwards according to the height of the curve at that point
            floor_y[j]=i+curve_y_points[j]
            map_y[i][j]=i+curve_y_points[j] # new coordinate saved
        # use a modulo operation to only translate each nth pixel where n is the mapping factor. 
        # the idea is that in order to fit all pixels from the original picture into a new smaller space
        #(because the curve squashes the picture upwards) a number of pixels must be removed 
        elif  ((math.floor(i % mapping_factor))==0):
            #increment the "floor" counter so that the next group of pixels from the original image 
            #are mapped 1 pixel higher up than the previous group in the new picture
            floor_y[j]=floor_y[j]+1
            map_y[i][j]=floor_y[j]
        else:
            #for pixels that must be skipped, map them all to the last pixel actually translated to the new image
            map_y[i][j]=floor_y[j]
        #all x-coordinates remain unchanged as we only translate pixels upwards
        map_x[i][j] = j
#printout loop to test mappings at x=383
for j in range(0, 360):
    print('At x=383, y='+str(j)+' for curve_y_points[383]='+str(curve_y_points[383])+' and floor_y[383]='+str(floor_y[383])+' mapping is: '+str(map_y[j][383]))
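
For reference, here is a rough sketch of how the curve arrays from step 1 can be built (the exact curve function I used is not reproduced here; the sine bump below is only a stand-in):

import numpy as np
import matplotlib.pyplot as plt

x_size, y_size = 640, 360

# one x coordinate per image column
curve_x_points = np.arange(x_size)
# curve height per column: zero where there is no curve, a smooth bump elsewhere
curve_y_points = np.zeros(x_size, np.float32)
curve_y_points[100:500] = 60.0 * np.sin(np.linspace(0, np.pi, 400))

plt.plot(curve_x_points, curve_y_points)
plt.show()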

The original and final pictures are shown below

[original image]

[warped result]

I have two issues:

  1. As all the pixels are translated upwards, I would expect the bottom part of the picture to be black (or some other background colour), and I would expect this blank area to match the area below the curve.

  2. There is a hugely exaggerated upwards warping effect in the picture which I cannot explain. For example, a pixel that in the original picture was at around y=140 is now ...


1 answer


answered 2017-01-14 12:10:34 -0600

Tetragramm

You're building your maps wrong. This is in C++, but it's not too far off what you need in Python.

In this case I used the function 10*log(x+1)+x/2. Replace that with your own function; in your case, curve_y_points[j].

#include <opencv2/opencv.hpp>
#include <iostream>
#include <cmath>
using namespace cv;

Mat img = imread("image.jpg");
Mat mapX, mapY;
mapX.create(img.rows, img.cols, CV_32F);
for (int x = 0; x < img.cols; ++x)
{
    mapX.col(x).setTo(x);
}
mapY.create(img.rows, img.cols, CV_32F);
for (int y = 0; y < img.rows; ++y)
{
    for (int x = 0; x < img.cols; ++x)
    {
        mapY.at<float>(y, x) = (float)y*img.rows / (img.rows - (10*log((float)x + 1.0f)+x/2.0f));
        if (x == 100)
            std::cout << y << "  " << mapY.at<float>(y, x) << "\n";
    }
}
Mat remImg;
remap(img, remImg, mapX, mapY, INTER_LINEAR, BORDER_CONSTANT);
imshow("img", img);
imshow("remapped", remImg);
waitKey();

[remapped result]
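
For reference, a rough Python equivalent of the maps above (just a sketch; it uses NumPy broadcasting instead of explicit loops and keeps the same 10*log(x+1)+x/2 example, so substitute your curve_y_points for the curve array):

import cv2
import numpy as np

img = cv2.imread('image.jpg')
rows, cols = img.shape[:2]

# map_x: every row is simply 0..cols-1, the x coordinates are unchanged
map_x = np.tile(np.arange(cols, dtype=np.float32), (rows, 1))

# curve height at each column (same example function as the C++ above)
x = np.arange(cols, dtype=np.float32)
curve = 10.0 * np.log(x + 1.0) + x / 2.0

# map_y(y, x) = y * rows / (rows - curve(x)): for each destination pixel,
# this is the source row that remap will sample from
y = np.arange(rows, dtype=np.float32).reshape(-1, 1)
map_y = y * rows / (rows - curve)

remapped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
cv2.imshow('img', img)
cv2.imshow('remapped', remapped)
cv2.waitKey()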


Comments

That worked, so thank you very much for this. I'd like to understand what I was doing wrong, though. I see that with this method the entries, e.g. for column 383 in the map_y array, go from 0 to 803. Since there are only 360 rows in the picture, how is a number such as 803 interpreted? I thought that in each row of the map_y array you are supposed to insert the destination y-coordinate; in other words, I thought that if you had a pixel in the middle of the top row of the original image (i.e. x=320, y=359 for a 360x640 picture) and you had the number 200 at map_y[359][320], that pixel would move downwards by 159 pixels.

MrGiskard ( 2017-01-14 12:44:11 -0600 )

So if you look at the remap function, there is the option BORDER_CONSTANT. What that means is that when a pixel is picked from beyond the borders of the image, it is simply replaced with (0,0,0), or with whatever you put in the next parameter (which is optional and defaults to (0,0,0)).
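
For example (reusing img, map_x and map_y from the code above; the green value is only an illustration):

# default border value: pixels sampled from outside the image become (0,0,0), i.e. black
black_bg = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
# explicit borderValue: those pixels become green instead
green_bg = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 255, 0))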

I'm not quite sure about your example, and I think you may have two errors that partly cancel out. First, you do know that the top left corner is (0,0), correct? +x is to the right of the image, and +y is to the bottom. So map_y[359][320] is the middle column, bottom row. By putting 200 as the value, it reaches up to the original image at row 200 and inserts that into the final image at [359][320]. So it does move downward, because you inserted the source coordinate, not the destination coordinate.
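
A tiny standalone illustration of that lookup direction (a hypothetical blank 360x640 image, not your picture):

import cv2
import numpy as np

src = np.zeros((360, 640, 3), np.uint8)
src[200, 320] = (255, 255, 255)             # single white pixel at row 200

# identity maps, except the bottom-middle destination pixel samples from row 200
map_x = np.tile(np.arange(640, dtype=np.float32), (360, 1))
map_y = np.tile(np.arange(360, dtype=np.float32).reshape(-1, 1), (1, 640))
map_y[359, 320] = 200

dst = cv2.remap(src, map_x, map_y, cv2.INTER_NEAREST)
# dst[359, 320] is now white: the white pixel appears lower down, because the
# maps hold source coordinates, not destination coordinates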

Tetragramm ( 2017-01-14 13:46:27 -0600 )

I did not realize at first that the top-left corner was (0,0) because, as you observed, inserting the source coordinate did in fact raise parts of the image, so it was not apparent that I was doing something wrong. I did suspect that I had the wrong coordinate system, but inverting the rows in my loop caused the image to flip over, so I wrongly deduced that (0,0) was in fact the bottom-left corner.

I did read the OpenCV documentation on remap while working on this but I think the logic behind it wasn't expressed as clearly as what you posted. So again, thank you for all the assistance that you provided.

MrGiskard ( 2017-01-14 14:03:33 -0600 )
