
You need to invert your rotation matrix in order to get a source pixel for every pixel in your destination image. It should not be too hard to find a matrix H with [x, y] = H * [x', y'] (hint: it looks very similar to your rotation matrix above).

Then you iterate over every pixel in the destination image and compute the corresponding source pixel coordinates with H. If the source coordinates are out of bounds, you assign some default color like white or black; otherwise you take the source pixel's color and write it to your destination image. You usually won't get integer values for the source coordinates. For a pure rotation the result should be fine if you just round to the nearest integer, because there are no distortions, but the result is nicer if you use an interpolation method (e.g. linear), where the destination pixel color is a weighted blend of several neighboring source pixels.
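A minimal sketch of this inverse-mapping loop, using nearest-neighbor rounding. To keep it self-contained it works on a row-major grayscale array instead of a cv::Mat, and it assumes rotation about the image center with black (0) as the default color; the function name rotateNearest is just for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Rotate a row-major grayscale image by `angle` radians about its center,
// using inverse mapping: for every destination pixel, find its source pixel.
std::vector<unsigned char> rotateNearest(const std::vector<unsigned char>& src,
                                         int w, int h, double angle)
{
    std::vector<unsigned char> dst(w * h, 0);  // default color: black
    const double cx = (w - 1) / 2.0, cy = (h - 1) / 2.0;
    const double c = std::cos(angle), s = std::sin(angle);

    for (int yd = 0; yd < h; ++yd) {
        for (int xd = 0; xd < w; ++xd) {
            // Apply the INVERSE rotation (rotation by -angle) to the
            // destination coordinates to find the source coordinates.
            double dx = xd - cx, dy = yd - cy;
            double xs =  c * dx + s * dy + cx;
            double ys = -s * dx + c * dy + cy;

            // Round to the nearest integer source pixel.
            int xi = (int)std::lround(xs);
            int yi = (int)std::lround(ys);

            // Bounds check: keep the default color if we land outside.
            if (xi >= 0 && xi < w && yi >= 0 && yi < h)
                dst[yd * w + xd] = src[yi * w + xi];
        }
    }
    return dst;
}
```

With a real cv::Mat you would replace the manual indexing with img.at<uchar>(y, x) reads and writes, but the structure of the loop stays the same.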

You can access the pixels of a cv::Mat img with Vec3b rgbVal = img.at<Vec3b>(y,x) for color images and uchar grayVal = img.at<uchar>(y,x) for grayscale images (note the row-first (y, x) order).
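For the "nicer" interpolated variant mentioned above, you sample the source image at the fractional coordinates instead of rounding. A sketch of bilinear sampling, again on a self-contained row-major grayscale array (sampleBilinear is a hypothetical helper; the caller must ensure xs and ys are non-negative and inside the image):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Sample a row-major grayscale image at fractional coordinates (xs, ys)
// using bilinear interpolation. Indexing is (y, x), as with cv::Mat::at;
// with OpenCV you would read src via img.at<uchar>(y, x) instead.
double sampleBilinear(const std::vector<unsigned char>& src,
                      int w, int h, double xs, double ys)
{
    // Top-left integer corner and the fractional offsets within the cell.
    int x0 = (int)std::floor(xs), y0 = (int)std::floor(ys);
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double fx = xs - x0, fy = ys - y0;

    // Blend the four neighboring source pixels, weighted by distance.
    double top = (1 - fx) * src[y0 * w + x0] + fx * src[y0 * w + x1];
    double bot = (1 - fx) * src[y1 * w + x0] + fx * src[y1 * w + x1];
    return (1 - fy) * top + fy * bot;
}
```

The destination pixel then gets this blended value instead of a single rounded source pixel, which smooths out the jagged edges that nearest-neighbor rotation produces.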