2014-06-12 09:30:01 -0600 | received badge | ● Student (source) |
2014-04-24 11:37:54 -0600 | commented question | map image contour defined by non-symmetric trapezoid to a square destination image Well, the src image I want to deform into a square shape is not a 2D- or 3D-transformed plane, so warping the image according to a geometrical transformation is not what I am looking for. I want to apply the content of an image between 4 control points so that it is interpolated onto a plane, just like applying a texture to a plane in OpenGL/DirectX by providing custom UV texture coordinates for an image. |
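For what it's worth, the UV-style behaviour described in this comment can be approximated with cv2.remap by bilinearly interpolating the 4 corner coordinates over the destination grid; a minimal sketch, assuming OpenCV's Python bindings, with warp_quad_to_square as a hypothetical helper and the corner ordering an assumption:

```python
import cv2
import numpy as np

def warp_quad_to_square(img, corners, size):
    """Sample the quad defined by `corners` onto a size x size square by
    bilinearly interpolating the corner coordinates, like UV texture
    coordinates on a quad (not a projective warp)."""
    # corners: top-left, top-right, bottom-right, bottom-left (assumed order)
    tl, tr, br, bl = [np.asarray(c, dtype=np.float32) for c in corners]
    u = np.linspace(0.0, 1.0, size, dtype=np.float32)
    uu, vv = np.meshgrid(u, u)                        # (size, size) UV grids
    top = tl * (1 - uu)[..., None] + tr * uu[..., None]
    bot = bl * (1 - uu)[..., None] + br * uu[..., None]
    src = top * (1 - vv)[..., None] + bot * vv[..., None]  # (size, size, 2)
    map_x = np.ascontiguousarray(src[..., 0])
    map_y = np.ascontiguousarray(src[..., 1])
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# illustrative usage:
# square = warp_quad_to_square(img, [(120, 80), (480, 60), (510, 400), (90, 420)], 256)
```

Unlike a homography, this interpolation is affine along each scanline, which matches the "UV coordinates on a plane" behaviour the comment asks for rather than a geometric perspective warp.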
2014-04-24 09:46:32 -0600 | received badge | ● Teacher (source) |
2014-04-23 18:44:16 -0600 | answered a question | Colour recognition after an image is captured Are you referring to finding the dominant color in an image and choosing between green, red, or blue? As a first step, you can get the color distribution (histogram) of your image using calcHist: docs.opencv.org/modules/imgproc/doc/histograms.html?highlight=calchist#calchist |
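A minimal sketch of that first step, assuming OpenCV's Python bindings and a BGR image loaded with imread (the file path and the dominant-channel heuristic are illustrative, not part of the original answer):

```python
import cv2
import numpy as np

img = cv2.imread("input.png")            # BGR image; path is illustrative
names = ("blue", "green", "red")
means = []
for ch in range(3):
    # 256-bin histogram of one channel over the full 0..255 range
    hist = cv2.calcHist([img], [ch], None, [256], [0, 256])
    # weighted mean intensity of this channel, computed from its histogram
    means.append(float((hist.ravel() * np.arange(256)).sum() / hist.sum()))
print("dominant channel:", names[int(np.argmax(means))])
```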
2014-04-23 16:01:58 -0600 | asked a question | map image contour defined by non-symmetric trapezoid to a square destination image Hi, I was wondering if there is a function in OpenCV that would allow me to deform an image in this way: take the 4 points that define the trapezoid boundary of a region in an image and apply them as texture coordinates of a plane, so that the region is interpolated correctly onto a square plane (warping the region's interior into the new square region). I can do this in OpenGL by defining the UV coordinates of a texture applied to a plane, but I was wondering if the equivalent exists in OpenCV. Remap() seems to take as arguments a table of correlated points in both images for each axis. In my case, the two images might not be the same size, and the src image points do not lie on a symmetrical boundary. Also, the source plane is not necessarily the destination plane transformed in 2D, so no geometrical manipulation applies. Thank you |
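If a projective mapping of the four corners is acceptable for this question, the closest built-in route is getPerspectiveTransform() plus warpPerspective(); a minimal sketch, assuming the Python bindings, with the corner values and output size purely illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")            # source image; path is illustrative
# 4 corners of the trapezoid region in the source, ordered
# top-left, top-right, bottom-right, bottom-left (assumed ordering)
src_pts = np.float32([[120, 80], [480, 60], [510, 400], [90, 420]])
side = 256                               # side length of the square output
dst_pts = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

# 3x3 homography mapping the trapezoid onto the square
H = cv2.getPerspectiveTransform(src_pts, dst_pts)
square = cv2.warpPerspective(img, H, (side, side))
```

Note the caveat: this is a geometric (homography) warp, whereas the follow-up comment above asks for UV-style bilinear interpolation, which remap can provide instead.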
2014-04-14 18:39:11 -0600 | received badge | ● Editor (source) |
2014-04-14 18:26:26 -0600 | asked a question | reframing an image using a 3D transform - need help Hi, I am detecting a square pattern in an image and retrieving its pose using SolvePnP(), which gives me a translation vector in pixel units and a rotation vector. I would now like to transform my source image using this translation and rotation so that I can display only the sub-part containing the pattern, "flattened out" in 2D. The result would be a square image of the pattern as it appears in the source image but re-aligned in 2D, which I would obtain using warpPerspective(). I tried getPerspectiveTransform() + warpPerspective() instead, and it works partially: the sub-part is indeed retrieved, but since this gives me a 2D transform and the pattern in the scene is rotated in 3D, it does not compensate properly and I get a square image of a plane at an angle. I looked at http://jepsonsblog.blogspot.com/2012/11/rotation-in-3d-using-opencvs.html to build the perspective transform out of the 3D translation and rotation, but I get confusing results when building the image transformation. If I skip the translation and rotation components, using only a transformation matrix (trans) composed of: |
I get a black image. To give you an idea of the actual values from my calibration, I have the camera matrix, A2, A1, and the final perspective image transform (trans). There is clearly something wrong with the last column of this transformation matrix; can you point out to me what it is? I don't understand. Thank you very much, |
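For reference, a minimal sketch of the matrix chain the linked blog post describes, assuming OpenCV's Python bindings; pose_to_homography is a hypothetical helper and the usage lines are illustrative. Since the translation ends up in the last column of the composed matrix, a wrong sign or scale on tvec (particularly its z component, which must keep the plane in front of the camera) is a plausible cause of a black output:

```python
import cv2
import numpy as np

def pose_to_homography(K, rvec, tvec, w, h):
    """Compose the 3x3 warp from the blog post: lift 2D points to the
    z=0 plane, apply the pose, then re-project with the intrinsics."""
    R3, _ = cv2.Rodrigues(rvec)                  # 3x3 rotation from SolvePnP
    A1 = np.array([[1, 0, -w / 2],               # 2D -> 3D, centered on image
                   [0, 1, -h / 2],
                   [0, 0, 0],
                   [0, 0, 1]], dtype=np.float64)
    R = np.eye(4)
    R[:3, :3] = R3                               # 4x4 rotation
    T = np.eye(4)
    T[:3, 3] = np.ravel(tvec)                    # 4x4 translation; the z term
                                                 # must be positive (in front of
                                                 # the camera) or the output is black
    A2 = np.hstack([K, np.zeros((3, 1))])        # 3D -> 2D with the intrinsics
    return A2 @ T @ R @ A1                       # (3x4)(4x4)(4x4)(4x3) = 3x3

# Illustrative usage; invert H or pass cv2.WARP_INVERSE_MAP if the mapping
# direction is the other way around for your setup:
# H = pose_to_homography(cameraMatrix, rvec, tvec, w, h)
# flat = cv2.warpPerspective(img, H, (w, h), flags=cv2.WARP_INVERSE_MAP)
```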