
3D reconstruction with known camera parameters

asked 2017-06-01 13:55:13 -0600

Let's say I have 8 pictures of an object from different views. For each of the 8 pictures I know the corresponding camera position and orientation. What would be the best way to attempt a 3D reconstruction of the object?


Comments

If you know the intrinsic parameters and the pose, you can use stereoRectify. Then you retrieve 3D using a stereo algorithm, StereoBM or SGBM. It is only stereo, not multi-view.

You can use the sfm module too (no need for extrinsic parameters). It is multi-view, but the algorithm estimates the pose itself.

LBerger ( 2017-06-01 14:37:57 -0600 )
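A minimal sketch of the two-view pipeline described in that comment, assuming calibrated intrinsics (K1, D1, K2, D2) and a known relative pose (R, T) of the second camera with respect to the first; all variable names are illustrative:

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    // Rectify, compute dense disparity with SGBM, then back-project to 3D.
    cv::Mat stereoTo3d(const cv::Mat& img1, const cv::Mat& img2,
                       const cv::Mat& K1, const cv::Mat& D1,
                       const cv::Mat& K2, const cv::Mat& D2,
                       const cv::Mat& R, const cv::Mat& T)
    {
        cv::Size size = img1.size();
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, size, R, T, R1, R2, P1, P2, Q);

        // Remap both images into the rectified (row-aligned) geometry.
        cv::Mat m1x, m1y, m2x, m2y, rect1, rect2;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, size, CV_32FC1, m1x, m1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, size, CV_32FC1, m2x, m2y);
        cv::remap(img1, rect1, m1x, m1y, cv::INTER_LINEAR);
        cv::remap(img2, rect2, m2x, m2y, cv::INTER_LINEAR);

        cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 128, 5);
        cv::Mat disp, xyz;
        sgbm->compute(rect1, rect2, disp);
        disp.convertTo(disp, CV_32F, 1.0 / 16.0);   // SGBM outputs 16 * disparity
        cv::reprojectImageTo3D(disp, xyz, Q, true); // CV_32FC3, rectified cam-1 frame
        return xyz;
    }

This only covers one stereo pair; with 8 views you would repeat it per pair and fuse the resulting point clouds using the known poses.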

what you're trying to do is called: "structure from motion"

berak ( 2017-06-01 15:55:01 -0600 )
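For the SfM route, opencv_contrib's sfm module exposes a one-call pipeline. A hedged sketch, assuming the contrib modules were built with Ceres Solver (Rs, Ts, and points3d come back as vectors of cv::Mat):

    #include <opencv2/sfm.hpp>
    #include <vector>

    // Estimates the camera poses AND sparse structure from the images alone.
    void runSfm(const std::vector<cv::String>& imagePaths, cv::Mat& K)
    {
        std::vector<cv::Mat> Rs, Ts, points3d;
        cv::sfm::reconstruct(imagePaths, Rs, Ts, K, points3d,
                             /*is_projective=*/true);
        // Rs/Ts: estimated per-view pose; points3d: sparse 3D point cloud.
    }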

Of course, SfM would be an approach. But since I already know the camera pose for each image, even in a common world coordinate system, I think SfM would be a waste of computing time in this case...

Syntax134 ( 2017-06-04 16:10:00 -0600 )
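Given known poses, the direct alternative to SfM is plain triangulation of matched points. A sketch using the stock cv::triangulatePoints (two views at a time; R and t map world to camera coordinates; names are illustrative):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Build the 3x4 projection matrix P = K * [R|t] from a known pose.
    cv::Mat projectionMatrix(const cv::Mat& K, const cv::Mat& R, const cv::Mat& t)
    {
        cv::Mat Rt;
        cv::hconcat(R, t, Rt);
        return K * Rt;
    }

    std::vector<cv::Point3f> triangulatePair(const cv::Mat& P1, const cv::Mat& P2,
                                             const std::vector<cv::Point2f>& pts1,
                                             const std::vector<cv::Point2f>& pts2)
    {
        cv::Mat pts4d;                                 // 4xN homogeneous points
        cv::triangulatePoints(P1, P2, pts1, pts2, pts4d);
        pts4d.convertTo(pts4d, CV_32F);

        std::vector<cv::Point3f> pts3d;
        for (int i = 0; i < pts4d.cols; ++i) {
            cv::Mat c = pts4d.col(i) / pts4d.at<float>(3, i);  // dehomogenize
            pts3d.emplace_back(c.at<float>(0), c.at<float>(1), c.at<float>(2));
        }
        return pts3d;
    }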

OK, it's a waste of time. You know the camera pose. I'm very interested to know: how do you process the images without intrinsic parameters?

LBerger ( 2017-06-04 16:34:55 -0600 )

1 answer


answered 2017-06-01 17:36:51 -0600

Tetragramm

For multi-view triangulation, I'm working on THIS module (the mapping3d module). You find the same point in two or more of the images, and it will help you find the 3D locations, given all the camera information.

It's not perfect (the inputs need to be in a pretty specific format) but it does work.

Unlike structure from motion, this module assumes you already know the camera position and orientation, so it's fast.

Right now it only has methods for single points and can't create a 3D model from the images, but that may come.
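For reference, the textbook way to triangulate a single point from N known views is a linear (DLT) least-squares solve. This sketch is that standard method, not the module's own code; Ps[i] is the 3x4 projection matrix K_i * [R_i|t_i] of view i and pts[i] the matching pixel:

    #include <opencv2/core.hpp>
    #include <vector>

    // Linear multi-view triangulation. All Ps[i] must be 3x4 CV_64F.
    cv::Point3d triangulateNViews(const std::vector<cv::Mat>& Ps,
                                  const std::vector<cv::Point2d>& pts)
    {
        cv::Mat A(2 * (int)Ps.size(), 4, CV_64F);
        for (size_t i = 0; i < Ps.size(); ++i) {
            // Each view contributes two linear constraints on the homogeneous X:
            //   u * P.row(2) - P.row(0) = 0,   v * P.row(2) - P.row(1) = 0
            cv::Mat r0 = pts[i].x * Ps[i].row(2) - Ps[i].row(0);
            cv::Mat r1 = pts[i].y * Ps[i].row(2) - Ps[i].row(1);
            r0.copyTo(A.row(2 * (int)i));
            r1.copyTo(A.row(2 * (int)i + 1));
        }
        cv::Mat w, u, vt;
        cv::SVD::compute(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
        cv::Mat X = vt.row(3).t();   // null vector: smallest singular value
        X /= X.at<double>(3);        // dehomogenize
        return { X.at<double>(0), X.at<double>(1), X.at<double>(2) };
    }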


Comments

Thanks for the link, sounds interesting!

Syntax134 ( 2017-06-04 16:10:52 -0600 )

Dear @Tetragramm, thanks for your work on the mapping3d module. I find it very interesting. But I still can't find a way to make it work, especially the calcObjectPosition function. Could you please paste an example of usage? In particular, what are the shapes and details of the input parameters? Following the doc has not helped; I still run into errors (different shapes of inputs, Mat types, etc.). It would be very helpful!

freelist ( 2018-09-24 17:41:03 -0600 )

What you need to know is that you have N cameras with known parameters, and M points on an object.

  • imagePointsPerView is a vector with M elements, each of which is a vector with N cv::Point2f. So imagePointsPerView[m][n] is the pixel location of the mth point in the nth camera.
  • objectPoints is a vector of M cv::Point3f. objectPoints[m] is the 3D location of the mth object point, in a reference model. Just like you would input to solvePnP to find the R|t of the camera, except this function now finds the R|t of the object, relative to the model.
  • cameraMatrices is either a vector of N camera matrices or a single camera matrix, just like you get out of calibrateCamera. You can use a single camera matrix if your cameras are all identical, or are actually the same physical camera.
Tetragramm ( 2018-09-25 18:50:14 -0600 )
  • distortionMatrices has the same layout as cameraMatrices, but holds the distortion coefficients.
  • tvecs is a vector of N cv::Mat: the locations of the cameras, in the same coordinate system as the model, just like you get from solvePnP.
  • rvecs has the same layout as tvecs, but holds the rotation vectors.
  • tvecObject and rvecObject are the outputs, both cv::Mat, just like you get from solvePnP (see the shape sketch after this comment).
Tetragramm ( 2018-09-25 19:07:16 -0600 )
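Putting the two comments together, a shape-only sketch of how the inputs might be assembled. calcObjectPosition lives in the experimental mapping3d module, so the call in the trailing comment is an assumed argument order, not a confirmed signature:

    #include <opencv2/core.hpp>
    #include <vector>

    // Shape-only sketch: M model points seen by N cameras.
    void buildInputs(int M, int N)
    {
        // imagePointsPerView[m][n] = pixel of model point m in camera n;
        // negative coordinates mark points that camera n does not see.
        std::vector<std::vector<cv::Point2f>> imagePointsPerView(
            M, std::vector<cv::Point2f>(N, cv::Point2f(-1.f, -1.f)));

        std::vector<cv::Point3f> objectPoints(M);           // model-frame 3D points
        cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);  // shared intrinsics
        cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);  // shared distortion

        // One pose per camera, in the model/world frame, as from solvePnP.
        std::vector<cv::Mat> rvecs, tvecs;
        for (int n = 0; n < N; ++n) {
            rvecs.push_back(cv::Mat::zeros(3, 1, CV_64F));
            tvecs.push_back(cv::Mat::zeros(3, 1, CV_64F));
        }

        cv::Mat rvecObject, tvecObject;  // outputs: pose of the object
        // Assumed argument order -- check the mapping3d headers before use:
        // cv::mapping3d::calcObjectPosition(imagePointsPerView, objectPoints,
        //     cameraMatrix, distCoeffs, rvecs, tvecs, rvecObject, tvecObject);
    }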

A trivial example is to set up your cameras and calibrate them (camera matrix and distortion), and place a Charuco board in the center of your space. Use estimatePoseCharucoBoard to get all the camera tvecs and rvecs.

You'll need to do a bit of work to build the objectPoints vector, but there's an example in charuco.cpp.

Then when you move the board, you detect the corners the same way you do for estimatePoseCharucoBoard.

The result is the rotation and translation of the board relative to where you set it originally.

Tetragramm ( 2018-09-25 19:08:56 -0600 )
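A hedged sketch of that workflow with the pre-4.7 cv::aruco API (board construction omitted; all names illustrative):

    #include <opencv2/aruco/charuco.hpp>
    #include <vector>

    // Detect the Charuco board in one image and recover its pose.
    bool boardPose(const cv::Mat& image,
                   const cv::Ptr<cv::aruco::CharucoBoard>& board,
                   const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                   cv::Mat& rvec, cv::Mat& tvec)
    {
        std::vector<int> markerIds;
        std::vector<std::vector<cv::Point2f>> markerCorners;
        cv::aruco::detectMarkers(image, board->dictionary, markerCorners, markerIds);
        if (markerIds.empty()) return false;

        // Interpolate the chessboard corners from the detected markers
        // (sub-pixel accurate when intrinsics are supplied).
        std::vector<cv::Point2f> charucoCorners;
        std::vector<int> charucoIds;
        cv::aruco::interpolateCornersCharuco(markerCorners, markerIds, image, board,
                                             charucoCorners, charucoIds,
                                             cameraMatrix, distCoeffs);

        // Pose of the board in the camera frame, as rvec/tvec like solvePnP.
        return cv::aruco::estimatePoseCharucoBoard(charucoCorners, charucoIds, board,
                                                   cameraMatrix, distCoeffs, rvec, tvec);
    }

Note that estimatePoseCharucoBoard returns the board's pose in the camera frame; the camera's pose in the board/world frame is the inverse, R' = R^T and t' = -R^T * t (with R = Rodrigues(rvec)).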

Dear @Tetragramm, thanks for the detailed explanation. That is actually what I was trying to do; my settings are exactly the ones you detailed. imagePointsPerView is an NxM vector<vector<cv::Point2f>>, objectPoints is an M-element vector<cv::Point3f>, and so on. But it keeps failing with OpenCV Error: Sizes of input arguments do not match, basically due to wrong sizes of either tvecs or the object model points. Moreover, I had to comment out line 196 in positionCalc.cpp, because it kept failing the assertion CV_Assert( tvecs.size() == pts.checkVector( 2, CV_64F, true ) ); yet my elements in tvecs (one per camera pose) are all CV_64F, arranged as 3x1 cv::Mat. So I cannot figure out why this assertion returns false.

freelist ( 2018-09-26 04:52:08 -0600 )

Line 196 is, I believe, correct. It verifies that there are as many elements in the vector<cv::Mat> named tvecs as there are points in pts.

Which shows me where my comment was incorrect. I will post the correction here, then edit the original to help anyone who finds this with a search.

  • imagePointsPerView is a vector with M elements, each of which is a vector with N cv::Point2f. So imagePointsPerView[m][n] is the pixel location of the mth point in the nth camera. Negative valued points are ignored, as they cannot be real pixels.

This reverses n and m from what I originally said. My apologies. I did this to make it easier to discard model points if they don't have enough views.

Tetragramm ( 2018-09-26 18:18:20 -0600 )
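In code, the invariant that assertion enforces looks like the snippet below, under the corrected ordering. One thing that may also be worth checking against the error above: checkVector(2, CV_64F, true) requires double-precision points, so a vector<cv::Point2f> (CV_32F) would make it return -1 even with the right sizes, whereas cv::Point2d would match.

    #include <cassert>
    #include <opencv2/core.hpp>
    #include <vector>

    // Shape invariant behind the CV_Assert: one pose per camera (N), and one
    // pixel entry per camera for every model point (M x N overall).
    void checkShapes(const std::vector<std::vector<cv::Point2d>>& imagePointsPerView,
                     const std::vector<cv::Mat>& rvecs,
                     const std::vector<cv::Mat>& tvecs)
    {
        assert(rvecs.size() == tvecs.size());        // N cameras
        for (const auto& pts : imagePointsPerView)   // M model points
            assert(pts.size() == tvecs.size());      // N entries each
    }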

Dear @Tetragramm, thanks for your R&D and contributions. Now, a few years later, I just wanted to check in with you on whether you still see your mapping3d package as the best way to go here, basically aiming to enable 3D reconstruction from ChArUco coordinates. I also see that a pipeline could be developed that uses ChArUco sub-pixel localizations as input to the camera-motion part of OpenCV's Structure from Motion module. I think both packages could deliver the matrices needed to transform point clouds from the camera coordinate systems into a shared world coordinate system, but maybe you could give an update on the tradeoffs, and e.g. whether mapping3d works with OpenCV 4?

legel ( 2020-08-03 14:18:40 -0600 )
