Ask Your Question

How to triangulate Points from a Single Camera multiple Images?

asked 2017-04-10 08:55:18 -0600

updated 2017-04-10 09:24:06 -0600

I have a single calibrated camera pointing at a checkerboard from different locations, with known:

  1. Camera intrinsics: fx, fy, cx, cy
  2. Distortion coefficients: k1, k2, k3, p1, p2, etc.
  3. Camera rotation and translation (R, T) from an IMU

After undistortion, I have computed point correspondences of the checkerboard corners across all the images, along with the known camera-to-camera rotation and translation vectors.

How can I estimate the 3D points of the checkerboard from all the images?

I think OpenCV has functions to do this, but I'm not able to understand how to use them:

1) cv::sfm::triangulatePoints (from the sfm contrib module)

2) cv::triangulatePoints (from calib3d)

How do I compute the 3D points using OpenCV?
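For two views, cv::triangulatePoints takes the two 3x4 projection matrices P = K[R|t] and the matched pixel coordinates, and solves a linear least-squares (DLT) system. As a rough sketch of what that computation does (plain numpy here, not the OpenCV call itself):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two views by linear least squares (DLT).
    P1, P2: 3x4 projection matrices K @ [R | t]; x1, x2: (u, v) pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * p3^T - p1^T
        x1[1] * P1[2] - P1[1],   # v1 * p3^T - p2^T
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # solution: right singular vector with
    X = Vt[-1]                   # the smallest singular value
    return X[:3] / X[3]          # dehomogenize

# Example: two poses of the same camera with a one-meter baseline
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

point = np.array([0.5, -0.2, 5.0])  # ground-truth 3D point
def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
recovered = triangulate_dlt(P1, P2, proj(P1, point), proj(P2, point))
```

cv::triangulatePoints works the same way at the interface level: you pass the two 3x4 projection matrices plus 2xN arrays of matched points, and it returns 4xN homogeneous coordinates that you dehomogenize by dividing through by the fourth row.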


1 answer


answered 2017-04-12 21:39:08 -0600

Tetragramm

Take a look at this unfinished contrib module. It's far enough along to have what you are asking for.

You put in the 2D image point from each image, along with the camera matrix, distortion coefficients, rvec, and tvec, and you get out the 3D location of the point.

I should really add some sample code to the readme now that I have it working, but for now here's a sample.

vector<Mat> localt;          // vector of tvecs, one for each image
vector<Mat> localr;          // vector of rvecs, one for each image
vector<Point2f> trackingPts; // location of the point of interest in each image
Mat cameraMatrix;            // for a single camera you only need one camera matrix; for multiple cameras use vector<Mat>, one per image
Mat distMatrix;              // likewise, one distortion matrix for a single camera; vector<Mat> for multiple
Mat state;                   // output of the calculation: the 3D position
Mat cov;                     // optional output: uncertainty covariance

mapping3d::calcPosition(localt, localr, trackingPts, cameraMatrix, distMatrix, state, cov);

Comments

Thank you for your awesome contribution!

Balaji R ( 2017-04-12 22:23:44 -0600 )

It would be very useful if you added some references (papers) for this implementation! Any useful pointers are also welcome!

Balaji R ( 2017-04-13 11:25:58 -0600 )

@Tetragramm that module looks quite nice. Why not contribute it gradually and let users enjoy the already existing functionality?

StevenPuttemans ( 2017-04-14 06:51:34 -0600 )

I will definitely do that soon. Until a week ago it only had a minimum of functionality, and it's still lacking good documentation. I'll get that done soon and then submit it to contrib.

Tetragramm ( 2017-04-14 16:34:43 -0600 )

Great! I'm also looking forward to having this. Are you also going to create a Python interface?

Giacomo ( 2017-04-25 07:37:18 -0600 )

I think it has a Python interface already. At least, I declared everything with CV_WRAP and put the python marker in the CMAKE file.

Sorry, I'm trying to get a machine learning run going, then I can work on this while it's running.

Tetragramm ( 2017-04-25 20:31:16 -0600 )

I actually compiled your fork and would like to know how to call it from python. Could you help me with that?

Giacomo ( 2017-05-11 05:35:21 -0600 )

Well, I think you do cv2.mapping3d.functionName.

I followed instructions for making python bindings, but since I don't use python... Take a look at the examples to find how you call the various parameter types.

If there's anything weird, could you post it here? I'm actually writing better documentation and examples now, so I could just include it.

Tetragramm ( 2017-05-11 17:35:35 -0600 )

I am almost there, but I get "calibration.cpp:292: error: (-201) Input matrix must be 1x3, 3x1 or 3x3 in function cvRodrigues2" for the rotation matrix vector. I am passing a numpy array of rotation matrices (numpy matrices of size 3x3), but it seems to be expecting something different. The problem seems to be that I am passing an array of 33 elements, each one of size 3x3, but once passed to calcPosition, if I print the size of _rvecs it is [3 x 33].

Giacomo ( 2017-05-24 10:01:05 -0600 )

It is expecting the same format as calibrateCamera outputs, which is not the 3x3 size. Try using Rodrigues to convert the 3x3 to the proper shape.

I'll add a check in my code that will use Rodrigues or not as appropriate so it doesn't matter. I'm not sure if the python/c++ interface is doing anything funny though. I don't see how it would end up as 3x33 in the c++ code.

Tetragramm ( 2017-05-24 17:15:09 -0600 )
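The 3x3-to-rotation-vector conversion suggested above is exactly what cv2.Rodrigues performs. As a minimal plain-numpy sketch of that conversion (it skips the angle ≈ π edge case that a full implementation must handle):

```python
import numpy as np

def rot_to_rvec(R):
    """3x3 rotation matrix -> 3x1 Rodrigues vector (axis * angle), the
    format calibrateCamera outputs. Sketch only: angle ~ pi not handled."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.zeros((3, 1))
    # rotation axis from the skew-symmetric part of R
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return (angle * axis).reshape(3, 1)

# Example: 0.3 rad rotation about the z axis
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
rvec = rot_to_rvec(R)
```

In practice, calling cv2.Rodrigues(R) on each 3x3 matrix and passing the resulting 3x1 vectors is the straightforward fix for the cvRodrigues2 error above.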

Ok thanks, that helped :) Now I got here: cameraTranslation = (-cameraRotation * tvec); but the sizes don't match. cameraRotation is 3x3 but tvec is 1x3. I could transpose tvec, but it's not correct. Probably the tvecs should have another shape. What shape should they have in the input to calcPosition()?

Giacomo ( 2017-05-24 18:13:35 -0600 )

Hmm, another thing to check in the method. The inputs I've been using are 1 column, 3 rows, i.e. [X; Y; Z], in the same coordinate system calibrateCamera would output.

Tetragramm ( 2017-05-24 20:05:17 -0600 )
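With that convention (x_cam = R·x_world + t, rvec and tvec as calibrateCamera outputs them), the camera's world-space position, which the -cameraRotation * tvec line earlier is computing, works out to C = -Rᵀt, assuming cameraRotation there is the camera-to-world rotation Rᵀ. A quick numpy check of the convention:

```python
import numpy as np

# calibrateCamera-style extrinsics map world points into the camera frame:
#   x_cam = R @ x_world + t, with t a 3x1 column [X; Y; Z]
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])      # example pose: 90 deg rotation about z
t = np.array([[1.], [2.], [3.]])

C = -R.T @ t            # camera position in world coordinates
residual = R @ C + t    # the camera center must map to the camera origin
```

If residual is zero, the shapes and conventions match; a 1x3 tvec from Python will need reshaping to the 3x1 column form before this arithmetic works.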

Also, I just pushed the sample I've been working on. I've only tried it in Visual Studio, but I don't think there's anything Windows-specific in it.

Tetragramm ( 2017-05-24 20:12:27 -0600 )

Stats

Asked: 2017-04-10 08:55:18 -0600

Seen: 2,838 times

Last updated: Apr 12 '17