# ProjectPoints not working

I have an image with a disparity map that I reproject to 3D. After running some algorithms to extract a bounding box in 3D, I reproject each corner back to 2D to find the minimum bounding box, but the results I get are totally wrong. I have verified that the corners in 3D are in the right positions, but once reprojected to 2D they are wrong. I have been trying to figure out the problem for days with no progress.

1) Reconstruct the scene in 3D
2) Run an algorithm to get bounding boxes in 3D
3) Reproject the corners of each bounding box to 2D (this projection is where the error occurs)
4) Get the minimum enclosing bounding box in 2D


Does anyone have any idea what is going on? I am only using the function cv::projectPoints().

Original Image Size : 1392 x 512

Calibrated Image Size : 1242 x 375 (This is the image I am working with)

EDIT: These are the portions of the code I think are relevant.

```cpp
// Get the Q matrix
stereoRectify(K1, D1, K2, D2, cv::Size(1392, 512), R, T,
              R1, R2, P1, P2, Q,
              cv::CALIB_ZERO_DISPARITY, 0, cv::Size(1242, 375), 0, 0);

// Project the image to 3D
disparity = disparity / 500.f;
cv::reprojectImageTo3D(disparity, out, Q, true);

// Do some processing
// ....

// Reproject back to the image
// opencv_cloud is a vector of Point3f containing the 8 corners of a bounding box
// opencv_crd is a vector of Point2f containing the projected points
cv::projectPoints(opencv_cloud, rvec, tvec, K1, cv::Mat(), opencv_crd);
rect.push_back(cv::boundingRect(cv::Mat(opencv_crd)));
```


So I used the left camera matrix to reproject the points. rvec and tvec are both [0, 0, 0]. I have tried replacing cv::Mat() with the left camera's distortion coefficients, but it seems to have no effect. Could it be the image resolution? Is it reprojecting onto the image at the original resolution?

EDIT: After scaling the camera matrix as suggested by Tetragramm, the results I get are much better.



There are many things that could be wrong. Could you show us the section of code that does the projection and what you're passing into it? Check the image size, camera matrix, and distortion matrix, and see if they are correct.

( 2016-10-05 18:10:17 -0500 )

Hi, I have edited my post. I'm still trying out some random things.

( 2016-10-05 22:04:06 -0500 )

The image size is almost certainly the problem. You can rescale the camera matrix by multiplying the first row by scale_x and the second row by scale_y. If you did any cropping, that's just a change to the principal point, applied either before or after the scale depending on what exactly you did.

( 2016-10-05 22:10:18 -0500 )

Great, it worked! I have edited my post to include the new result. Looks like I have a lot to learn. Just to confirm, the camera matrices I pass to stereoRectify() should be the original matrices, right? Because stereoRectify() rectifies the original images. Why must I scale the camera matrix, and is there a function that does this for me? Does the OpenCV book talk about this? Would stereoCalibrate() be the function returning the calibrated/rectified camera matrix? I'd like to know so I can read up on the theory.

( 2016-10-05 22:58:58 -0500 )

I will be honest, I know very little about the stereo functions. I don't know why you have an image of a size different from what you started with. If you're using the left camera matrix, why aren't you using the left camera image? Looking at the docs for stereoRectify, I think you're supposed to be using the first three columns of P1 as your camera matrix.

To scale the camera matrix, it's just multiplication, so... the multiply function?

( 2016-10-05 23:14:20 -0500 )