# How to get real world 2D coordinates of rectified pixels?

Hello, I have a system with a camera looking vertically down at a table, and there is a robot with a target on it. I have already calibrated the camera, so I have the extrinsic and intrinsic parameters and can get a rectified image. Now I compute the mass center of the target and get its pixel coordinates (u, v).

My question is: how do I get the real 2D world coordinates (x, y)? I don't need the z value because it is always the same (10 cm). Could anyone explain this to me, please?



I tend to do this as simply as I can, so this may not be exactly what you're looking for.

Use your distortion coefficients to undistort an image, and find known world points (say, the corners of the table) in the undistorted image. Calculate the perspective matrix from undistorted pixel coordinates to world (x, y) locations.

Find the center of mass of your target, run the pixel location through the undistortPoints function and apply the perspective matrix. It should now be in real world xy coordinates.

I don't think there's a single function that does exactly what you're asking. The image alone doesn't contain enough information to know how far away the object is, until you add the constraints that it's always at z = 10 cm and that the camera extrinsics are known. It might be a good idea to add one at some point, though; this comes up a lot.


Oh, right. A thought: you can just use the projectPoints function to do the first section. That turns the 3D world points (x, y, 10 cm) into image coordinates. Then you find the perspective matrix from image to world (x, y).

(2016-01-16 18:03:53 -0500)


## Stats

Asked: 2016-01-16 13:50:17 -0500


Last updated: Jan 16 '16