
Project Image Plane on Object Plane

asked 2016-04-28 12:37:53 -0600

Fivezero0505

I am trying to find the coordinates of a point in my image (Xpixel, Ypixel) on an object plane... basically the reverse of cv::projectPoints.

My camera is already calibrated, so I know the:

  1. intrinsic matrix

  2. distortion coefficients

  3. translation vector

  4. rotation matrix

I want to find the corners of my image (so (0,0); (0,height); ...) on the object plane. The object plane with Z = 0 is calibrated with a chessboard.

Eventually I want to use the equation of the line from the camera centre (in world coordinates) through the image plane (through the correct pixel). Since the cameras don't move, I will use the calibration data for all the frames of that camera.

What I have tried:

  1. M * W with W = [R|t] (M = camera matrix, R = rotation matrix, t = translation vector)
  2. findHomography, then coordinate in image * H = coordinate in world coordinates
  3. adjusting 3D coordinates until they eventually projected onto the right pixel (I was desperate)

I really need to find that line, camera ray if you will. The start point should be the position of the camera in world coordinates and the end point should be on a plane I choose (any plane will do, but a variable Z would be a bonus). Or if possible, just the equation in world coordinates of the line starting from the camera centre through the right pixel in the image ...
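Here is a rough sketch (C++/OpenCV) of the back-projection I mean, assuming cameraMatrix, distCoeffs, rvec and tvec come straight from calibrateCamera and the chessboard defines the Z = 0 plane; the function name and variables are placeholders, not code I already have:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Back-project a pixel onto the world plane Z = 0 (sketch).
cv::Point3d pixelToPlaneZ0(const cv::Point2d& px,
                           const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                           const cv::Mat& rvec, const cv::Mat& tvec)
{
    // 1. Undistort and normalize the pixel: ray direction in camera coordinates.
    std::vector<cv::Point2d> src{px}, norm;
    cv::undistortPoints(src, norm, cameraMatrix, distCoeffs);
    cv::Mat dirCam = (cv::Mat_<double>(3, 1) << norm[0].x, norm[0].y, 1.0);

    // 2. Camera centre and ray direction in world (chessboard) coordinates.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat C = -R.t() * tvec;          // camera centre in world coordinates
    cv::Mat dirWorld = R.t() * dirCam;  // ray direction in world coordinates

    // 3. Intersect the line C + s * dirWorld with the plane Z = 0.
    double s = -C.at<double>(2) / dirWorld.at<double>(2);
    cv::Mat P = C + s * dirWorld;
    return { P.at<double>(0), P.at<double>(1), P.at<double>(2) };
}
```

For another plane Z = z0 the same line is intersected with s = (z0 - C.z) / dirWorld.z.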

Every time I try something and project my 3D coordinate back (with projectPoints), the result isn't the same ...

Some questions, just in case I understand something wrong (a quick check is sketched after this list):

  1. the rvecs from calibrateCamera can be 'converted' to the rotation matrix with cv::Rodrigues
  2. the tvecs from calibrateCamera are the position of the camera in world coordinates relative to the origin (determined by the chessboard)
  3. pixel coordinates run from 0 to width or height, with no negative values (when used for projection)
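As the quick check mentioned above, projecting the known chessboard corners back with projectPoints and comparing them with the detected pixels should show whether these conventions are right; objectCorners and imageCorners below are placeholders for the vectors used during calibration of that view:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

// Sanity check (sketch): with the right conventions, every reprojected corner
// should land within roughly a pixel of the detected one.
void checkReprojection(const std::vector<cv::Point3f>& objectCorners,
                       const std::vector<cv::Point2f>& imageCorners,
                       const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                       const cv::Mat& rvec, const cv::Mat& tvec)
{
    std::vector<cv::Point2f> reprojected;
    cv::projectPoints(objectCorners, rvec, tvec, cameraMatrix, distCoeffs, reprojected);

    for (size_t i = 0; i < objectCorners.size(); ++i)
        std::cout << "detected " << imageCorners[i]
                  << " -> reprojected " << reprojected[i] << std::endl;
}
```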

2 answers


answered 2016-05-22 07:35:11 -0600

Fivezero0505

Welp, the answer is quite simple.

Take more images. When using multiple images in multiple poses, a better intrinsic matrix can be calculated.

So two steps:

  1. The first image is the main image and is used to determine the x, y, z axes for all the cameras (so one image visible to all)
  2. More images for each camera. This improves the estimate of the intrinsic matrix and in turn improves the projection of 3D points onto the image (see the sketch after this list)
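A rough sketch of those two steps (C++; all names here are placeholders and the chessboard corners are assumed to be detected already):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Sketch: calibrate one camera from many chessboard views, then pin its pose
// to the shared "main" view so all cameras use the same world axes.
void calibrateAndPose(const std::vector<std::vector<cv::Point3f>>& objectPointsPerView,
                      const std::vector<std::vector<cv::Point2f>>& imagePointsPerView,
                      const std::vector<cv::Point3f>& mainObjectPoints,
                      const std::vector<cv::Point2f>& mainImagePoints,
                      cv::Size imageSize,
                      cv::Mat& cameraMatrix, cv::Mat& distCoeffs,
                      cv::Mat& rvecWorld, cv::Mat& tvecWorld)
{
    // Step 2: many poses per camera -> better intrinsics and distortion estimate.
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPointsPerView, imagePointsPerView, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);

    // Step 1: the image visible to every camera fixes the x, y, z axes;
    // solvePnP with the refined intrinsics gives this camera's pose in those axes.
    cv::solvePnP(mainObjectPoints, mainImagePoints,
                 cameraMatrix, distCoeffs, rvecWorld, tvecWorld);
}
```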

answered 2016-04-28 23:05:02 -0600

Tetragramm

Your assumption number 2 is wrong. tvecs is the position of the chessboard in camera coordinates. To convert from one to the other, get the rotation matrix with cv::Rodrigues, then t = -R.t() * tvecs.

If that's not the problem, can you include a sample image with the four things you mentioned?

