
One point (u,v) to actual (x,y) w.r.t. camera frame?

asked 2019-05-27 09:57:28 -0600 by opencvr

Hello,

I am detecting the center of a circle and getting its (u,v) pixel coordinates in the image. Now I want to convert this (u,v) into actual world coordinates (x,y) w.r.t. the camera_frame. I do not need the z coordinate.

How can I get this? What is a possible way to do it?

I could not find anything relevant to this. I have my camera calibration matrix.

thanks.


Comments

Do you not need the z coordinate because you know how far away it is? Or at least the location of the camera relative to some surface? If so, you're good.

If you know the location and orientation of a camera relative to a plane, you can find the location on the plane easily. To know the location and orientation, you must either have external info, or be able to see a minimum of four known locations on the plane to use solvePnP.

Tetragramm ( 2019-05-27 17:43:25 -0600 )
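A minimal sketch of that solvePnP route, with placeholder plane points, pixel locations, and intrinsics (none of these values come from this thread):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

using namespace cv;

int main()
{
    // Four known points on the plane (world units, z = 0) and the
    // pixel locations where they were detected. Placeholder values.
    std::vector< Point3f > object_pts = {
        { 0.0f, 0.0f, 0.0f }, { 0.1f, 0.0f, 0.0f },
        { 0.1f, 0.1f, 0.0f }, { 0.0f, 0.1f, 0.0f } };
    std::vector< Point2f > image_pts = {
        { 320.f, 240.f }, { 420.f, 238.f }, { 424.f, 338.f }, { 318.f, 342.f } };

    // Placeholder intrinsics; substitute your calibration results.
    Mat camera_matrix = ( Mat_< double >( 3, 3 ) <<
        615, 0, 320,
        0, 615, 240,
        0,   0,   1 );
    Mat dist_coeffs = Mat::zeros( 5, 1, CV_64F );

    Mat rvec, tvec;
    solvePnP( object_pts, image_pts, camera_matrix, dist_coeffs, rvec, tvec );
    // rvec/tvec now describe the plane's pose in the camera frame.
    return 0;
}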

@Tetragramm for z, I can tell the robot arm (camera on hand) to look at the plane from above at a certain height every time, so z and the orientation are fixed. I also have the camera intrinsics cx, cy, fx, fy, so can I get the actual point as (x, y) = ((u - cx)/fx, (v - cy)/fy)? Is this the correct method?

opencvr ( 2019-05-28 04:08:34 -0600 )

1 answer


answered 2019-05-28 19:16:33 -0600 by Tetragramm

Based on your clarification, you can do what you need. First, you calculate the line-of-sight vector in space. If you don't have the distortion matrix, you can ignore that part.

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

using namespace cv;

// Back-project one image point into a line-of-sight ray in the camera frame.
// rvec and tvec are converted here because they are needed in the next step
// (rotating the ray into the world frame).
Mat lineOfSight( const Point2f& _pt, InputArray _cameraMatrix,
                 InputArray _distortionMatrix, InputArray _rvec, InputArray _tvec )
{
    Mat tvec = _tvec.getMat();
    Mat rvec = _rvec.getMat();
    Mat camera_matrix = _cameraMatrix.getMat();
    const Mat distortion_matrix = _distortionMatrix.getMat();

    // Undistort the point; passing camera_matrix as the last argument keeps
    // the result in pixel coordinates rather than normalized coordinates.
    std::vector< Point2f > pts_in, pts_out;
    pts_in.push_back( _pt );
    undistortPoints( pts_in, pts_out, camera_matrix, distortion_matrix, noArray(), camera_matrix );

    // Homogeneous pixel coordinates of the undistorted point.
    Mat los( 3, 1, CV_64F );
    los.at< double >( 0 ) = pts_out[0].x;
    los.at< double >( 1 ) = pts_out[0].y;
    los.at< double >( 2 ) = 1;

    // Work in double precision before inverting.
    if ( camera_matrix.type() != CV_64F )
        camera_matrix.convertTo( camera_matrix, CV_64F );
    if ( rvec.type() != CV_64F )
        rvec.convertTo( rvec, CV_64F );
    if ( tvec.type() != CV_64F )
        tvec.convertTo( tvec, CV_64F );

    // K^-1 * pixel = ray direction in the camera frame.
    los = camera_matrix.inv() * los;
    return los;
}

This gives you a vector pointing out from the center of the camera. You need to rotate it by the orientation of the camera, and use the position of the camera as the origin of the vector. If you calibrated your camera using OpenCV, use this function as a guide: HERE. Otherwise you'll need to use the information you have (presumably from the robot arm). From there it's just basic geometry.
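A minimal sketch of that geometry, assuming rvec/tvec describe the world-to-camera transform (as returned by solvePnP or calibrateCamera, in CV_64F) and that the target plane is the world plane z = 0:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

using namespace cv;

// Intersect the camera-frame ray 'los' with the world plane z = 0.
Point3d rayToPlane( const Mat& los, const Mat& rvec, const Mat& tvec )
{
    Mat R;
    Rodrigues( rvec, R );            // 3x3 rotation, world -> camera

    Mat cam_pos = -R.t() * tvec;     // camera center in world coordinates
    Mat dir = R.t() * los;           // ray direction in world coordinates

    // Solve cam_pos.z + s * dir.z = 0 for the scale s along the ray.
    double s = -cam_pos.at< double >( 2 ) / dir.at< double >( 2 );

    Mat p = cam_pos + s * dir;
    return Point3d( p.at< double >( 0 ), p.at< double >( 1 ), 0.0 );
}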


Comments

@Tetragramm but what should I input as _rvec and _tvec? What I have is _camera_matrix (fx, cx, fy, cy), the input point (u,v) as _pt, and _distortionMatrix.

opencvr ( 2019-05-29 03:16:26 -0600 )

Camera calibration with a chessboard gives me only _camera_matrix and _distortion_matrix.

opencvr ( 2019-05-29 09:30:41 -0600 )

You said you know the orientation and position of the camera. Is that from an OpenCV function that gives you rvec and tvec? Then use those.

Otherwise, it's not OpenCV, and I can't help.

Oh, except there's apparently a function called calibrateHandEye, just for the task of matching robot arms to the camera.

Tetragramm ( 2019-05-29 18:04:08 -0600 )
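A rough sketch of how calibrateHandEye might be wired up (the function exists in OpenCV 4.1+; the pose lists are assumed to come from the robot controller and from solvePnP on the calibration target):

#include <opencv2/calib3d.hpp>
#include <vector>

using namespace cv;

// Each vector holds one entry per calibration image: the gripper pose in
// the robot base frame (from the robot controller) and the target pose in
// the camera frame (e.g. rvec/tvec from solvePnP on the chessboard).
void estimateCamToGripper( const std::vector< Mat >& R_gripper2base,
                           const std::vector< Mat >& t_gripper2base,
                           const std::vector< Mat >& R_target2cam,
                           const std::vector< Mat >& t_target2cam,
                           Mat& R_cam2gripper, Mat& t_cam2gripper )
{
    // Solves the AX = XB hand-eye equation for the fixed
    // camera-to-gripper transform.
    calibrateHandEye( R_gripper2base, t_gripper2base,
                      R_target2cam, t_target2cam,
                      R_cam2gripper, t_cam2gripper );
}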

@Tetragramm now I see that camera calibration gives me rvec, tvec pairs for every chessboard pattern image, whose location changes w.r.t. the camera. It is a huge 25 x 6 matrix, but for the calculation I need a 3x4 matrix? I am still not getting it.

opencvr ( 2019-05-30 04:12:09 -0600 )

This sort of question is best answered by the documentation for the function HERE. calibrateCamera treats the corner of the chessboard as the center of the world, and tells you where your camera is located in each of the calibration images. Check out this tutorial to see how to combine that information with the video stream to find where the camera is at any given point.

Tetragramm ( 2019-05-30 22:18:28 -0600 )
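Regarding the 25 x 6 matrix from the comment above: each row is one image's rvec and tvec, and each such pair expands into one 3x4 extrinsic matrix. A small sketch, assuming one rvec/tvec pair from calibrateCamera:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

using namespace cv;

// Build the 3x4 extrinsic matrix [R|t] from one image's rvec/tvec pair.
// calibrateCamera returns one such pair per calibration image; each pair
// describes where the chessboard sat relative to the camera in that image.
Mat extrinsicMatrix( const Mat& rvec, const Mat& tvec )
{
    Mat R;
    Rodrigues( rvec, R );    // 3x1 rotation vector -> 3x3 rotation matrix

    Mat Rt;
    hconcat( R, tvec, Rt );  // 3x4 [R|t]
    return Rt;
}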

@Tetragramm as my object is placed on the table, the distance from the camera to the table, z, is known. Since I do not need the orientation, I omitted the first equation (so R, t) from the equations in this attached photo link text, as I do not need the complete pose.

Then, using the detected point (u,v) and fx, cx, fy, cy, if (x,y) is the real-world point in camera_frame:

x = ((u - cx)/fx) * z, y = ((v - cy)/fy) * z

Can I use it like this? Is this the correct method, or am I still missing something or understanding it wrong?

opencvr ( 2019-06-03 08:48:48 -0600 )
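A minimal sketch of the formula being asked about, with placeholder values standing in for the real intrinsics, detection, and height; note the caveats in the reply below:

#include <cstdio>

int main()
{
    double u = 400.0, v = 250.0;    // detected circle center (pixels)
    double fx = 615.0, fy = 615.0;  // focal lengths (pixels); placeholders
    double cx = 320.0, cy = 240.0;  // principal point (pixels); placeholders
    double z = 0.50;                // camera-to-table distance (meters)

    // x = ((u - cx) / fx) * z,  y = ((v - cy) / fy) * z
    double x = ( u - cx ) / fx * z;
    double y = ( v - cy ) / fy * z;

    std::printf( "x = %.4f m, y = %.4f m\n", x, y );
    return 0;
}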

That would work as long as the camera is pointed straight down at the table. And as long as you're ok with the center of the camera being the (0,0) of your coordinate system. And as long as you don't have any distortion.

If you do have distortion, use what's in my post, but multiply the final result by z/los.at<double>(2), and then the first two values of los are your x and y.

Tetragramm ( 2019-06-03 19:53:16 -0600 )
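A minimal sketch of that scaling step, assuming los comes from the lineOfSight sketch in the answer above and z is the known camera-to-table distance:

#include <opencv2/core.hpp>

using namespace cv;

// Scale the camera-frame ray so its z component equals the known
// camera-to-table distance; x and y then fall out directly.
Point2d pointOnTable( const Mat& los, double z )
{
    Mat p = los * ( z / los.at< double >( 2 ) );
    return Point2d( p.at< double >( 0 ), p.at< double >( 1 ) );
}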

@Tetragramm as the attached images of my setup show, the camera is pointed exactly straight down at the table.

link text

link text

link text

"the center of the camera being the (0,0) of your coordinate system"-----is this mean by *camera coordinate system? as in my case it should be, realsense left imager (0,0) right?*

So after getting (x,y,z) in the above camera_frame, I transform it to the robot base_link and give that pose to the end effector. Sometimes it goes to that position, and sometimes there is a random error of around 20 mm or 10 mm in x, y.

opencvr ( 2019-06-04 08:24:22 -0600 )

I am transforming this (x,y,z) value directly to base_link, as the camera frame is attached to the robot (ROS URDF and TF).

That is why I think the camera extrinsic parameters (R,t) are not required (others do this using known coordinates on the plane and get the camera pose, but in my case I do not need world coordinates on the plane). Or am I still understanding R,t wrong? Can this justify omitting the first equation from this link text?

Is this error because I omitted the first equation, where R,t appear? But according to my setup, it should work without R,t?

opencvr ( 2019-06-04 08:54:44 -0600 )

First, a question. You wish to move to an X,Y. Is it true that the farther the X,Y is from the center of the camera, the larger the error? Or is the error unconnected to that distance?

Tetragramm ( 2019-06-04 18:25:54 -0600 )
