
OpenCV: Find original coordinates of a rotated point

asked 2014-07-12 09:16:17 -0600 by Vion

updated 2014-07-14 06:23:29 -0600 by berak

I have the following problem. I'm searching for eyes within an image using HaarClassifiers. Due to the rotation of the head I'm trying to find eyes within different angles. For that, I rotate the image by different angles. For rotating the frame, I use the code (written in C++):

Point2i rotCenter;
rotCenter.x = scaledFrame.cols / 2;
rotCenter.y = scaledFrame.rows / 2;

Mat rotationMatrix = getRotationMatrix2D(rotCenter, angle, 1);

warpAffine(scaledFrame, scaledFrame, rotationMatrix, Size(scaledFrame.cols, scaledFrame.rows));

This works fine and I am able to extract two ROI Rectangles for the eyes. So, I have the top/left coordinates of each ROI as well as their width and height. However, these coordinates are the coordinates in the rotated image. I don't know how I can backproject this rectangle onto the original frame.

Assume I have obtained the eye-pair ROIs for the unscaled frame (full_image), but still rotated:

eye0_roi and eye1_roi

How can I rotate them back so that they map to their correct positions in the original frame?

Best regards, Andre


2 answers


answered 2014-07-14 06:04:38 -0600

A piece of code I am using for backprojecting detection rectangles can be seen below. I think you will be able to apply this to your needs.

// Use a rectangle representation on the frame but warp back the coordinates
// Retrieve the 4 corners detected in the rotated image
Point p1 ( temp[j].x, temp[j].y ); // Top left
Point p2 ( (temp[j].x + temp[j].width), temp[j].y ); // Top right
Point p3 ( (temp[j].x + temp[j].width), (temp[j].y + temp[j].height) ); // Down right
Point p4 ( temp[j].x, (temp[j].y + temp[j].height) ); // Down left

// Add the 4 points to a matrix structure
Mat coordinates = (Mat_<double>(3,4) << p1.x, p2.x, p3.x, p4.x,
                                        p1.y, p2.y, p3.y, p4.y,
                                        1,    1,    1,    1);

// Build the inverse transformation matrix (rotation by the negated angle)
Point2f pt(frame.cols/2., frame.rows/2.);
Mat r = getRotationMatrix2D(pt, -(degree_step*(i+1)), 1.0);
Mat result = r * coordinates;

// Retrieve the new coordinates from the transformed matrix
Point p1_back, p2_back, p3_back, p4_back;
p1_back.x=(int)result.at<double>(0,0);
p1_back.y=(int)result.at<double>(1,0);

p2_back.x=(int)result.at<double>(0,1);
p2_back.y=(int)result.at<double>(1,1);

p3_back.x=(int)result.at<double>(0,2);
p3_back.y=(int)result.at<double>(1,2);

p4_back.x=(int)result.at<double>(0,3);
p4_back.y=(int)result.at<double>(1,3);

// Draw a rotated rectangle by lines, using the reverse warped points
line(frame, p1_back, p2_back, color, 2);
line(frame, p2_back, p3_back, color, 2);
line(frame, p3_back, p4_back, color, 2);
line(frame, p4_back, p1_back, color, 2);

This works for detecting rotated faces for me and then transforming them back to the original, non-rotated image.

Good luck!


answered 2014-07-12 11:20:52 -0600 by Guanta

Just guessing here: take the inverse of your rotation matrix (see invertAffineTransform: http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html?highlight=getrotationmatrix#invertaffinetransform) and multiply the ROI points with it (or use warpAffine) -> this transforms the points back.



Stats

Asked: 2014-07-12 09:16:17 -0600

Seen: 5,730 times

Last updated: Jul 14 '14