2017-01-14 14:03:33 -0500 commented answer Mapping not working as expected when using remap

I did not realize at first that the top left corner was (0,0) because, as you observed, inserting the source coordinate did in fact raise parts of the image, so it was not apparent that I was doing something wrong. I did suspect that I had the wrong coordinate system, but inverting the rows in my loop caused the image to flip over, so I wrongly deduced that (0,0) was in fact the bottom left corner. I did read the OpenCV documentation on remap while working on this, but I think the logic behind it wasn't expressed as clearly as what you posted. So again, thank you for all the assistance that you provided.

2017-01-14 12:44:11 -0500 commented answer Mapping not working as expected when using remap

That worked, so thank you very much for this. I'd like to understand what I was doing wrong, though. I see that with this method the entries for, e.g., column 383 in the map_y array go from 0 to 803. Since there are only 360 rows in the picture, how is a number such as 803 interpreted? I thought that in each row of the map_y array you are supposed to insert the destination y-coordinate; in other words, I thought that if you had a pixel in the middle of the top row of the original image (i.e. x=320, y=359 for a 360x640 picture) and you had the number 200 at map_y[359][320], that pixel would move downwards by 159 pixels.

2017-01-14 08:09:31 -0500 received badge ● Editor (source)

2017-01-14 08:07:58 -0500 asked a question Mapping not working as expected when using remap

I am trying to warp a 640x360 image via the OpenCV remap function (in Python 2.7).
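A NumPy-only sketch (not cv2.remap itself, and only nearest-neighbour lookup with clipping instead of cv2's interpolation and border handling) of the rule the comments above converge on: the maps are inverse maps, so for each destination pixel (y, x), map_y and map_x give the source coordinate to sample.

```python
import numpy as np

def remap_nearest(src, map_x, map_y):
    """Sketch of the remap sampling rule: dst[y, x] = src[map_y[y, x], map_x[y, x]].
    Out-of-range coordinates are clipped here; cv2.remap instead applies a
    border mode, so this is only an approximation."""
    h, w = src.shape[:2]
    ys = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    xs = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    return src[ys, xs]

src = np.arange(12).reshape(3, 4)
xs, ys = np.meshgrid(np.arange(4), np.arange(3))

# Identity maps reproduce the source unchanged.
identity = remap_nearest(src, xs, ys)

# Adding 1 to map_y shifts content UP: every destination row samples from
# one source row further down.
shifted = remap_nearest(src, xs, ys + 1)
```

This also explains map values larger than the image height: they fall outside the source and are handled by the border rule, not treated as destinations.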
The steps executed are the following:

Generate a curve and store its x and y coordinates in two separate arrays, curve_x and curve_y. I am attaching the generated curve as an image (using pyplot).

Load the image via the OpenCV imread function:

    original = cv2.imread('C:\\Users\\User\\Desktop\\alaskan-landscaps3.jpg')

Implement a mapping function to translate the y-coordinate of each pixel upwards by a distance proportional to the curve height. As each column of y-coordinates must be squeezed into a smaller space, a number of pixels are removed during the mapping. Code:

    #array to store previous y-coordinate, used as a counter during mapping process
    floor_y = np.zeros((x_size), np.float32)
    #for each row and column of picture
    for i in range(0, y_size):
        for j in range(0, x_size):
            #calculate distance between top of the curve at given x coordinate and top
            height_above_curve = (y_size-1) - curve_y_points[j]
            #calculate a mapping factor, using total height of picture and distance above curve
            mapping_factor = (y_size-1)/height_above_curve
            #if there was no curve at given x-coordinate then do not change the pixel coordinate
            if (curve_y_points[j] == 0):
                map_y[i][j] = j
            #if this is the first time the column is traversed, save the curve y-coordinate
            elif (floor_y[j] == 0):
                #the pixel is translated upwards according to the height of the curve at that point
                floor_y[j] = i + curve_y_points[j]
                map_y[i][j] = i + curve_y_points[j]  #new coordinate saved
            #use a modulo operation to only translate each nth pixel where n is the mapping factor.
            #the idea is that in order to fit all pixels from the original picture into a new smaller space
            #(because the curve squashes the picture upwards) a number of pixels must be removed
            elif ((math.floor(i % mapping_factor)) == 0):
                #increment the "floor" counter so that the next group of pixels from the original image
                #are mapped 1 pixel higher up than the previous group in the new picture
                floor_y[j] = floor_y[j] + 1
                map_y[i][j] = floor_y[j]
            else:
                #for pixels that must be skipped, map them all to the last pixel actually translated to the new image
                map_y[i][j] = floor_y[j]
            #all x-coordinates remain unchanged as we only translate pixels upwards
            map_x[i][j] = j

    #printout function to test mappings at x=383
    for j in range(0, 360):
        print('At x=383,y=' + str(j) + ' for curve_y_points[383]=' + str(curve_y_points[383]) + ' and floor_y[383]=' + str(floor_y[383]) + ' mapping is:' + str(map_y[j][383]))

The original and final pictures are shown below. I have two issues:

As all the pixels are translated upwards, I would expect the bottom part of the picture to be black - or some other background colour - and that this blank area should match the area below the curve.

There is a hugely exaggerated upwards warping effect in the picture which I cannot explain. For example, a pixel that in the original picture was at around y=140 is now ...

2017-01-11 15:20:01 -0500 commented question Error when calling estimateTransformation of ThinPlateSplineShapeTransformer

Anyone? Anyone at all?

2017-01-08 13:34:59 -0500 received badge ● Student (source)

2017-01-08 10:10:24 -0500 commented answer How to access ThinPlateSpline ShapeTransformer functions in python

Thanks for your help Berak, I have managed to create the DMatch object. I'm having other difficulties with implementing the TPS class, but since this post had a very specific subject I've decided to close it and post a new question specific to the problems I'm having.
Thanks for your assistance.

Regards, Savvas

2017-01-08 10:07:42 -0500 asked a question Error when calling estimateTransformation of ThinPlateSplineShapeTransformer

I am trying to implement an image warp using the ThinPlateSplineShapeTransformer in OpenCV using Python. I am using a C++ example posted earlier in the forum (link) but I am encountering various problems due to the differences in the OpenCV Python API. As in the linked example, I am working with a single image onto which I will define a small number of source points and the corresponding target points. The end result should be a warped copy of the image.

    tps = cv2.createThinPlateSplineShapeTransformer()
    sourceshape = np.array([[200,10],[400,10]], np.int32)
    targetshape = np.array([[250,10],[450,30]], np.int32)
    matches = list()
    matches.append(cv2.DMatch(1,1,0))
    matches.append(cv2.DMatch(2,2,0))
    tps.estimateTransformation(sourceshape, targetshape, matches)

But I am getting an error in the estimateTransformation method as follows:

    cv2.error: D:\Build\OpenCV\opencv-3.1.0\modules\shape\src\tps_trans.cpp:193: error: (-215) (pts1.channels()==2) && (pts1.cols>0) && (pts2.channels()==2) && (pts2.cols>0) in function cv::ThinPlateSplineShapeTransformerImpl::estimateTransformation

I can understand that something is incorrect in the data structures that I have passed to estimateTransformation, and I'm guessing it has to do with the channels, since the rows and columns seem to be correct, but I do not know how I can satisfy the assertion (pts1.channels()==2), since the parameter is an array of points which I am creating and not an array generated from an image load. I'd be grateful for any pointers to a TPS implementation with Python, or indeed any help on how to resolve this particular issue.
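For what it's worth, one reading of the failed assertion - a guess from the assertion text, not from any Python documentation I could find - is that each point set must reach C++ as a Mat with N columns and 2 channels, i.e. a float32 NumPy array of shape (1, N, 2) rather than a plain (N, 2) array. A NumPy-only sketch of that reshape:

```python
import numpy as np

# Hypothetical fix: reshape each (N, 2) point set to (1, N, 2) float32 so the
# underlying Mat would have pts.cols == N and pts.channels() == 2, matching
# the (pts1.channels()==2) && (pts1.cols>0) assertion.
sourceshape = np.array([[200, 10], [400, 10]], np.float32).reshape(1, -1, 2)
targetshape = np.array([[250, 10], [450, 30]], np.float32).reshape(1, -1, 2)

# Note: DMatch indices index into these point sets, so for two points the
# C++ examples use 0-based indices (0 and 1), not (1 and 2).
```

If that guess is right, the same tps.estimateTransformation(sourceshape, targetshape, matches) call should then get past the assertion.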
I've tried to find the Python documentation for the ThinPlateSplineShapeTransformer class but it has proved impossible - all I've found is the C++ docs, and the only thing I have to go on are the results of the help() function - apologies if I am missing something obvious.

2017-01-08 08:41:15 -0500 commented answer How to access ThinPlateSpline ShapeTransformer functions in python

Hi Berak, I have this link, which is in fact an older question posted in the forum. As you can see, a number of DMatch objects were created and then passed on to the estimateTransformation method, but they are "dummy" ones in the sense that they do not refer to any real matches between two images - the objective is to warp a single image by moving a set of control points on that same image.

2017-01-08 08:25:03 -0500 commented answer How to access ThinPlateSpline ShapeTransformer functions in python

Thank you - I was expecting my IDE to display the options, since it should automatically read all members of the classes in cv2, but this did not happen - I don't know why. I am trying to execute a Thin Plate Spline transformation on a single image using two pre-calculated sets of source and target points. I have seen that the estimateTransformation method requires a set of matches as its third argument, but as I am not in fact using a second image it is not relevant. In similar C++ implementations I have seen that they explicitly create a set of "dummy" DMatch objects with identical index values, which they then pass to the parameter, but in Python I cannot see how I can create a DMatch object - the definition does not appear to exist. How can I call the function?

2017-01-07 19:03:53 -0500 asked a question How to access ThinPlateSpline ShapeTransformer functions in python

I'd like to use the TPS shape transformer in a Python project but the bindings appear to be incomplete - I can see the constructor but the other functions (applyTransformation, warpImage) don't seem to exist. Do Python bindings exist for these?
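The "dummy" DMatch pattern discussed in the comments above can be sketched without OpenCV installed - plain (queryIdx, trainIdx, distance) tuples stand in for cv2.DMatch here, and the cv2 call shown in the comment is an assumption mirroring the C++ examples, not verified behaviour:

```python
# Each "match" only declares that point i of the source set corresponds to
# point i of the target set; the distance field is irrelevant here, so 0.
n_points = 2
matches = [(i, i, 0.0) for i in range(n_points)]

# With the cv2 bindings available, the equivalent would presumably be:
#   matches = [cv2.DMatch(i, i, 0) for i in range(n_points)]
```

The point of the pattern is that no second image is ever involved: the match list is pure index bookkeeping pairing each source control point with its moved counterpart.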
2016-11-06 10:28:46 -0500 received badge ● Enthusiast

2016-10-30 13:13:27 -0500 commented question Shape Transformers and Interfaces 2

Hi there. I'm planning on working with the OpenCV TPS function myself for a project and I was wondering whether you figured out what the problem was?

2016-10-09 16:02:57 -0500 received badge ● Scholar (source)

2016-10-09 16:02:50 -0500 commented answer Arguments of ShapeTransformer estimateTransformation method

Thanks for clarifying this for me!

2016-10-09 15:45:57 -0500 commented answer Arguments of ShapeTransformer estimateTransformation method

If I understand it correctly then, the DMatch array would have been more relevant if I had a second "target" image and I was trying to morph another ("source") image into it, rather than the open-ended "drag and drop" scheme I am trying to implement?

2016-10-08 03:41:42 -0500 asked a question Arguments of ShapeTransformer estimateTransformation method

Greetings, I'm a computer programmer working on a personal image processing project using OpenCV. The goal is to implement an image warp function which allows you to pick a point on an image and drag it along, with the image distorting to fit around it. From what I've been reading I got the idea that a thin plate spline transformation can be used to achieve this effect. I have found the ThinPlateSplineShapeTransformer class in the OpenCV documentation, but even after reading the algorithm description in various papers (I do not have a Maths background) I have difficulty understanding the parameters required by the estimateTransformation method. I presume that the first one is a set of initial points and the second is a set of target points, where the only change between the two will be the point which I will "drag" to a new position. Is this correct? I cannot comprehend the third argument, though, which contains a set of "matches".
As I only have a single image (or point set) to work on, how can I compare it with a "second" image to find common points so that I can then pass them on as an argument? All I know is the new coordinate of the point which I will move. I have looked at the algorithm again, and while I get the gist of the vector maths and have tracked the relevant lines of the source code that implement them, I could not find a link between that and this requirement for a set of matches. Any help or pointers to sources that can clarify the way this works would be welcome. Thanks!

2016-10-06 12:49:22 -0500 received badge ● Supporter (source)

2016-04-17 03:27:00 -0500 commented question Error when compiling videoio/src/cap_ffmpeg_impl.hpp

I used 3.0 because I don't have much experience with Linux and the tutorials I had found used that. It turns out that using 3.1 made the problem go away - the compilation completed OK. Thanks for your help.

2016-04-16 13:20:30 -0500 asked a question Error when compiling videoio/src/cap_ffmpeg_impl.hpp

I am compiling OpenCV 3.0.0 but I get multiple "xxx was not declared in this scope" errors when compiling the cap_ffmpeg_impl.hpp file (e.g. pix_fmt_yuv422p, codec_pix_fmt). I have built ffmpeg using the --enable-shared option (as advised on other forums) and ffmpeg appears to run without any problems. If a patch has been issued for this please let me know. Any help would be greatly appreciated.