
Transformation from 2D image point to 3D world coordinate

asked 2014-06-27 09:29:41 -0600

Juergen

updated 2014-06-27 10:55:58 -0600

Hi guys,

I am having trouble "re-projecting" a 2D image point into the 3D world coordinate system. I calibrated the camera with 15 pictures (using the sample provided by OpenCV) to find the intrinsic parameters. For the extrinsic parameters I did the following:

I created a pattern in which the metric positions (in millimeters) of the feature points are known. Then I called cv::solvePnP with the image points (feature points) and the corresponding feature points in world coordinates (z = 0, since I used a planar pattern). I got the rotation and translation vectors, and with cv::Rodrigues I turned the rotation vector into a 3x3 rotation matrix. The values look fine, but when I try to recover the 3D point belonging to a 2D image point, only the z-coordinate fits. I tried the following to get my 3D point: 3dP = R^T (K^(-1) * 2dP - t)

R := 3x3 rotation matrix
K := 3x3 camera matrix (intrinsic parameters)
t := 3x1 translation vector
3dP := world coordinates
2dP := image coordinates
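A minimal NumPy sketch of the pinhole model behind that formula (the K, R, t below are synthetic values chosen only for illustration, not a real calibration). Back-projecting K^(-1) * 2dP only gives a viewing ray; a scale factor s (the point's depth in camera coordinates) is needed, and for a planar pattern it can be fixed by requiring z = 0 in world coordinates:

```python
import numpy as np

# Illustrative intrinsics and pose -- synthetic values, not a real calibration
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(10.0)                      # small tilt about the x-axis
R = np.array([[1.0,            0.0,             0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([10.0, -5.0, 500.0])             # camera ~500 mm from the pattern

P_world = np.array([25.0, 40.0, 0.0])         # known pattern point, z = 0

# Forward projection: X_c = R*P + t, homogeneous pixel p = K*X_c / depth
X_c = R @ P_world + t
p_hom = K @ X_c / X_c[2]                      # [u, v, 1]

# Back-projection: X_c = s * K^(-1) * p, so P = R^T (s * K^(-1) p - t).
# s is unknown per pixel; the planar constraint P_z = 0 determines it.
r = R.T @ np.linalg.solve(K, p_hom)           # ray direction in world frame
w = R.T @ t
s = w[2] / r[2]
P_rec = s * r - w
print(P_rec)                                  # recovers [25, 40, 0]
```

Dropping s (implicitly setting it to 1) generally returns a different point on the same viewing ray, which would explain coordinates that do not match the pattern.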

Is there some stupid mistake in there?

Thanks a lot! Juergen


1 answer


answered 2014-06-30 03:05:36 -0600

Juergen

Does nobody have an idea?

Is there another way to get an object point (3D) from an image point (2D) when I know the camera matrix, rotation matrix, distortion coefficients, and translation vector?
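One standard route, given K, R (e.g. from cv::Rodrigues), and t, is to intersect the pixel's viewing ray with the known pattern plane z = 0. A sketch in NumPy (the helper name and the pose values are made up for illustration; it assumes the pixel has already been undistorted):

```python
import numpy as np

def backproject_to_plane(p_hom, K, R, t):
    """Intersect the viewing ray of pixel p_hom = [u, v, 1] with the z = 0
    plane in world coordinates. Hypothetical helper for illustration."""
    r = R.T @ np.linalg.solve(K, p_hom)   # ray direction in world frame
    w = R.T @ np.asarray(t, dtype=float)
    s = w[2] / r[2]                       # depth that puts the point on z = 0
    return s * r - w

# Illustrative pose: camera looking straight down at the pattern plane
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])          # 1 m above the z = 0 plane

# Project a known plane point forward, then recover it from the pixel
P = np.array([50.0, -30.0, 0.0])
X_c = R @ P + t
p = K @ X_c / X_c[2]
print(backproject_to_plane(p, K, R, t))   # -> [50., -30., 0.]
```

In a real pipeline the pixel would first be undistorted with the distortion coefficients (e.g. cv::undistortPoints) before being back-projected.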


Comments

I think the remark by mathieu in this topic more or less answers your problem: it seems that using only 15 images for calibration can yield inaccurate calibration matrices.

StevenPuttemans ( 2014-06-30 09:13:26 -0600 )
