Different point clouds when using cv::stereoRectify

asked 2017-12-07 19:17:33 -0600

Humam Helfawi

updated 2017-12-08 14:25:14 -0600

I have a well-calibrated stereo camera. I ran an experiment where, in the first round, I computed the point cloud directly from the non-rectified images (I did this manually for a small set of points that a human had marked as matched).

Then I did the same using the rectified images and the same points (of course I picked them from the rectified images, not the original ones).
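To make the setup concrete, here is a rough sketch of what I mean by the rectified-image path. This is not my actual code; the calibration outputs K1, D1, K2, D2, R, T and the manually matched rectified pixels pts1r/pts2r are placeholders:

```
#include <opencv2/opencv.hpp>
#include <vector>

// Rough sketch (not my actual code): triangulate manually matched points
// from the rectified images. K1, D1, K2, D2, R, T are the stereo-calibration
// outputs; pts1r/pts2r are the matched pixels picked in the rectified images.
cv::Mat triangulateRectified(const cv::Mat& K1, const cv::Mat& D1,
                             const cv::Mat& K2, const cv::Mat& D2,
                             const cv::Mat& R, const cv::Mat& T,
                             cv::Size imageSize,
                             const std::vector<cv::Point2f>& pts1r,
                             const std::vector<cv::Point2f>& pts2r)
{
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    // P1/P2 are the 3x4 projection matrices of the rectified pair.
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, pts1r, pts2r, points4D);

    // Homogeneous -> Euclidean (Nx3).
    cv::Mat cloud;
    cv::convertPointsFromHomogeneous(points4D.t(), cloud);
    return cloud;
}
```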

However, the output point clouds did not match! There was a large translation and some rotation between them.

My question is: is this expected behavior? In other words, should I apply some further step to one of the point clouds to transform it into the origin/frame of the other one?

I did all the work in OpenCV 3.3.1, mostly using examples from the repository.

P.S. I have not shown my code because I just want to make sure this is not expected behavior before I start questioning my implementation.


Comments

Could you please show us how you are manually computing the disparity and the depth? Only then will we be able to tell where the difference comes from. If you are using the same fundamental matrix, your results should be the same!

Balaji R ( 2017-12-07 21:20:16 -0600 )

To answer this it's helpful to trace the whole process; could you post here:

  1. the left camera image without any annotation
  2. the right camera image without any annotation
  3. the left camera image annotated with the points you compared
  4. the right camera image annotated with the points you compared
  5. the left rectified image
  6. the right rectified image
  7. the stereo disparity map or the depth map
  8. the point cloud data
  9. your notes showing how you manually calculated results, and those results
opalmirror ( 2017-12-08 14:23:07 -0600 )

Thanks guys, I will edit the post to clarify it more. However, I did not use any depth algorithm. I just picked some matched points and triangulated them twice: once from the rectified images using cv::triangulatePoints, and once from the original points using R and T from the calibration (the same R and T that I compute the F matrix from).

Humam Helfawi ( 2017-12-08 14:28:12 -0600 )
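For comparison, here is a rough sketch (placeholder names, not the actual code from the post) of the second path described in the comment above: triangulating the same matches from the original, unrectified images using the calibrated R and T directly. It assumes the calibration matrices are CV_64F, as returned by cv::stereoCalibrate:

```
#include <opencv2/opencv.hpp>
#include <vector>

// Rough sketch (placeholder names): triangulate matches picked in the
// original, unrectified images using the calibrated rotation R and
// translation T between the two cameras.
cv::Mat triangulateUnrectified(const cv::Mat& K1, const cv::Mat& D1,
                               const cv::Mat& K2, const cv::Mat& D2,
                               const cv::Mat& R, const cv::Mat& T,
                               const std::vector<cv::Point2f>& pts1,
                               const std::vector<cv::Point2f>& pts2)
{
    // Undistort the clicked pixels so a plain pinhole projection applies,
    // but keep them in pixel units by re-applying the camera matrices.
    std::vector<cv::Point2f> pts1u, pts2u;
    cv::undistortPoints(pts1, pts1u, K1, D1, cv::noArray(), K1);
    cv::undistortPoints(pts2, pts2u, K2, D2, cv::noArray(), K2);

    // Projection matrices of the original pair: the left camera is the
    // origin, the right camera is displaced by [R | T] from the calibration.
    cv::Mat P1 = K1 * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat RT;
    cv::hconcat(R, T, RT);              // 3x4 = [R | T]
    cv::Mat P2 = K2 * RT;

    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, pts1u, pts2u, points4D);

    // Homogeneous -> Euclidean (Nx3), expressed in the left camera frame.
    cv::Mat cloud;
    cv::convertPointsFromHomogeneous(points4D.t(), cloud);
    return cloud;
}
```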