# multiplying homography and image matrices

I have been trying to understand how warpPerspective() works. I understand that, among other things, it multiplies the homography matrix by the source image matrix. I have the source image matrix generated by

Mat img_src = imread(imageName, IMREAD_GRAYSCALE);


and the homography matrix generated by

Mat h = findHomography(pts_src, pts_dst);


where pts_src and pts_dst are each filled with four points, e.g.

 pts_dst.push_back(Point2f(0,0));


This works well with warpPerspective(), but the code

Mat im=h*img_src;
imshow("Image", im);


compiles fine, yet imshow("Image", im) generates an error:

Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2))


I guess this is probably a type mismatch between img_src and h; h seems to be of type double, because only

cout << h.at<double>(i,j) << endl;


works on it.

How could this be sorted out?


You've got the idea wrong: it is not the pixel values but their positions that are transformed, and it is also not a matrix multiplication of the whole image.

In the end, it's a case of remapping.

( 2020-04-06 01:19:17 -0500 )

Thank you. I've just started learning OpenCV, yet from the introductory lesson I am under the strong impression that the pixels' positions (i.e., their coordinates) do not change, but their values do. That is, the value of pixel (x, y) of the source image is assigned to pixel (x', y') of the destination image according to a certain rule. It is indeed remapping (in the general sense of the word), and for simple remappings like reflection or scaling (as in your link) no matrix is needed.

Anyway, terminology aside, what is going on under the hood of warpPerspective(), which takes the source image matrix, the destination image matrix, and the homography matrix as its arguments? And how can I sort out the error I am getting?

( 2020-04-06 03:30:20 -0500 )

The error is from the matrix multiplication, not from imshow() (both arguments to gemm() need to be of a floating-point type, and a.cols == b.rows must hold, which is clearly not the case here).

And yes, once you start moving pixels around, you have to interpolate the results.

> value of pixel(x,y) of source image is assigned to value of pixel(x',y') of destination image according to a certain rule.

exactly.

( 2020-04-06 03:58:01 -0500 )

You've got me there! Thanks. Indeed, a.cols != b.rows. Multiplying a 3x3 homography matrix by a 640x480 image matrix was a silly mistake.

Do you think that converting the homography matrix h from double to float and then looping through the image matrix, applying h * img_src.at<uint8_t>(r,c) to each pixel, would sort this bit out?

( 2020-04-06 04:49:24 -0500 )

No, not the pixel value again, but the position (in homogeneous coordinates, with z = 1), e.g.:

Mat p = h * (Mat_<double>(3, 1) << c, r, 1);   // homogeneous point (x, y, 1)
double x = p.at<double>(0) / p.at<double>(2);  // perspective divide
double y = p.at<double>(1) / p.at<double>(2);

( 2020-04-06 06:28:17 -0500 )

Thank you, trying to digest it. I guess z is the third coordinate of 3D space. Where shall I take it from? Is z = 1? Hasn't the z coordinate already been taken into account when calculating H from the two sets of four points?

( 2020-04-06 14:59:57 -0500 )

Homogeneous 2D/3D coordinates; please consult your maths book.

( 2020-04-06 17:54:17 -0500 )

I consulted it, but I still do not get how to calculate z in my case. And my maths book still confirms that homogeneous 2D/3D coordinates are taken into account at the previous step, when calculating the homography matrix. The formula for cv::warpPerspective() (dense perspective transform) on page 313 of O'Reilly's Learning OpenCV by Adrian Kaehler suggests that z = 1, if I read it correctly (I cannot copy it here, as it is a two-storey construction with low indices).

( 2020-04-07 04:38:22 -0500 )

p.z = 1 before the multiplication; after multiplying by H, z is generally no longer 1, which is why you divide x and y by it.

( 2020-04-07 04:41:33 -0500 )