OpenCV Q&A Forum — http://answers.opencv.org/questions/ — Copyright OpenCV foundation, 2012-2018.

Question: one point (u,v) to actual (x,y) w.r.t camera frame?
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/
Asked by opencvr (Mon, 27 May 2019):

Hello,
I am detecting the center of a circle and getting its (u,v) pixel coordinates in the image. Now I want to convert this (u,v) into actual world coordinates (x,y) with respect to the camera frame. I do not need the z coordinate.
How can I get this? What is a possible way to do it?
I could not find anything relevant to this. I have my camera calibration matrix.
thanks.
Comment by opencvr (Tue, 28 May 2019):
@Tetragramm For z, I can tell the robot arm (camera in hand) to look at the plane from above at a certain height every time, so z and the orientation are fixed. I also have the camera intrinsics cx, cy, fx, fy, so I can get the actual (x,y) = ((u-cx)/fx, (v-cy)/fy). Is this the correct method?
Comment by Tetragramm (Mon, 27 May 2019):
Do you not need the z coordinate because you know how far away it is? Or at least the location of the camera relative to some surface? If so, you're good.
If you know the location and orientation of a camera relative to a plane, you can find the location on the plane easily. To know the location and orientation you must either have external info, or be able to see a minimum of four known locations on the plane to use solvePnP.
Comment by berak (Mon, 27 May 2019):
https://en.wikipedia.org/wiki/Pinhole_camera_model
Answer by Tetragramm (Tue, 28 May 2019):
Based on your clarification, you can do what you need. First you calculate the line-of-sight vector in space. If you don't have the distortion matrix, you can skip that part.
```
Mat tvec = _tvec.getMat();
Mat rvec = _rvec.getMat();
Mat camera_matrix = _cameraMatrix.getMat();
const Mat distortion_matrix = _distortionMatrix.getMat();

// Undistort the single input pixel; passing camera_matrix as the last
// argument keeps the output in pixel coordinates.
std::vector<Point2f> pts_in, pts_out;
pts_in.push_back(_pt);
undistortPoints(pts_in, pts_out, camera_matrix, distortion_matrix, noArray(), camera_matrix);

// Homogeneous pixel coordinates of the undistorted point.
Mat los(3, 1, CV_64F);
los.at<double>(0) = pts_out[0].x;
los.at<double>(1) = pts_out[0].y;
los.at<double>(2) = 1;

// Make sure everything is double precision before inverting.
if (camera_matrix.type() != CV_64F)
    camera_matrix.convertTo(camera_matrix, CV_64F);
if (rvec.type() != CV_64F)
    rvec.convertTo(rvec, CV_64F);
if (tvec.type() != CV_64F)
    tvec.convertTo(tvec, CV_64F);

// Back-project: K^-1 * [u, v, 1]^T is the line of sight in the camera frame.
los = camera_matrix.inv() * los;
```
This gives you a vector pointing out from the center of the camera. You need to rotate it by the orientation of the camera, and use the position of the camera as the origin of the vector. If you calibrated your camera using OpenCV, use this function as a guide: [HERE](https://github.com/Tetragramm/opencv_contrib/blob/master/modules/mapping3d/src/positionCalc.cpp#L156). Otherwise you'll need to use the information you have (presumably from the robot arm). From there it's just basic geometry.
Comment by Tetragramm (Sat, 06 Jul 2019):
It is much more accurate to get the depth at the particular pixel. That accounts for all the error in whether the table is perpendicular, or whether you're actually looking at something raised off the table.
Comment by opencvr (Fri, 05 Jul 2019):
@Tetragramm Now I understand. I am getting depth directly from the camera. So, is getting the depth **at the particular (x,y) pixel** from the camera necessary, or will a general depth to the table perpendicular to the camera (which I think the RealSense camera gives) also work? Which is correct?
Comment by Tetragramm (Thu, 04 Jul 2019):
They have depth. You do not.
If you know the distance to the table (t), and the table is parallel to the image plane (R), you can calculate the depth at any point on the table. So really, you do know R and t; it's just that they're simple.
So saying that the planes are not parallel is saying you don't know R. Then you need something else to fill that gap.
Comment by opencvr (Thu, 04 Jul 2019):
@Tetragramm Just a small question: if you look at this function [link text](https://github.com/IntelRealSense/librealsense/blob/5e73f7bb906a3cbec8ae43e888f182cc56c18692/include/librealsense2/rsutil.h#L46), it uses the depth, a 2D pixel point (x,y), and known camera intrinsics to get a 3D real-world point (so it can project any 2D point to a 3D point), and this is w.r.t. the camera regardless of the camera's position, without being parallel to the image plane (or table). So why do we need the R,t extrinsics of the camera when the camera is not parallel to the table?
Comment by opencvr (Thu, 13 Jun 2019):
@Tetragramm t: [0,0,z]. I am taking z from the depth directly, as we discussed above with los *= distance.
Comment by Tetragramm (Wed, 12 Jun 2019):
When the camera is parallel to the table, yes, you can make the assumption that R is zero. t must obviously be [0,0,z], to account for the height.
Comment by opencvr (Wed, 12 Jun 2019):
@Tetragramm Should I consider this the right method?
Comment by opencvr (Mon, 10 Jun 2019):
@Tetragramm Thank you for your assistance. In most positions it works correctly, except in some cases when y is negative. So I am assuming that **taking R,t as 0 is the right method** when the camera is parallel to the table. Maybe some error comes from calibration: 1. camera, 2. hand-eye.
Comment by Tetragramm (Thu, 06 Jun 2019):
Nope, that's not the right calculation. And yes, it could also be the non-flatness of the table. But unless that slope is pretty big, or you're really close, I wouldn't expect that to make as much difference as you see.

```
los.at<double>(0) = (xr - center.x) / focal_length1;
los.at<double>(1) = (yr - center.y) / focal_length2;
los.at<double>(2) = 1;
los *= distance;
```
Comment by opencvr (Thu, 06 Jun 2019):
I have also noticed that in the negative y direction there is a little slope between the ground and the table surface, so the camera plane and the table plane are not exactly parallel in the negative y direction.
Assuming both planes are parallel, **can I say that the method of taking R,t as zero is right?**
Comment by opencvr (Thu, 06 Jun 2019):
@Tetragramm The RealSense camera is factory calibrated, although I have done the calibration, have the distortion matrix, and am using it. Actually, my circular object is very flat and hollow ([link text](https://ibb.co/LSSTy0Q)), so I can take its center as the object center when it is lying on a flat plane.
I am also calculating x,y at depth z:

```
los.at<double>(0) = ((xr - center.x) / focal_length1) * distance;
los.at<double>(1) = ((yr - center.y) / focal_length2) * distance;
los.at<double>(2) = distance;
```

Also, I have noticed that there is always an error in the y position when the object is placed in the negative y direction from the camera; in the positive y direction there is no error in either x or y. Can we say this is because the camera plane and the table plane are not exactly parallel?
Comment by Tetragramm (Wed, 05 Jun 2019):
There are two main possibilities; it could be either, or both.
First is camera distortion. Distortion typically gets worse farther from the center. Just calibrate your camera as in the OpenCV tutorial, save the distortion matrix, and then use the undistortPoints function.
Second is how you are calculating your x and y. If you simply take a vector of length z in the direction of your point, you will fall short, because that point lies on a sphere. You need to multiply the vector so that its z coordinate is of length z.
Comment by opencvr (Wed, 05 Jun 2019):
@Tetragramm Here is an attached photo of my workspace coordinates: https://ibb.co/09Y9fJM
Also, from this link [link text](https://docs.opencv.org/master/d9/dab/tutorial_homography.html) I found that the homography matrix is the transformation between two planes. Since the two planes in my case are parallel, can we assume it is an identity matrix?
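For reference, the plane-induced homography in that tutorial has the general form (with R, t the relative pose, n the plane's unit normal, and d the distance to the plane):

```latex
H = K \left( R - \frac{t\, n^{\top}}{d} \right) K^{-1}
```

When the pose is trivial (R the identity, t along the plane normal), the middle factor reduces to a diagonal scaling, which is consistent with treating the mapping between parallel planes as a simple scale rather than a general homography.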
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213916#post-id-213916I have tested almost 10 different positions. When the object is near the camera center (e.g. x = 13 mm, y = 58 mm) there is no error in the tool position, but when I put the object away from the camera center it does not reach the exact position; every time the position error is larger in y and smaller in x.
I have read the book **"Multiple View Geometry"**, where I found that if the camera plane and the table plane on which the object is lying are parallel, then we can directly convert the object (u,v) to camera_frame (x,y) **using R,t as an identity matrix**. Can I take this as proof that my **applied method is right?**
Then what could be wrong?Wed, 05 Jun 2019 03:42:25 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213916#post-id-213916Comment by Tetragramm for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213897#post-id-213897First, a question. You wish to move to an X,Y. Is it true that the farther you are from the destination, the larger the error? Or is the error unconnected to the distance of the X,Y from the center of the camera?Tue, 04 Jun 2019 18:25:54 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213897#post-id-213897Comment by opencvr for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213885#post-id-213885I am transforming this (x,y,z) value directly to base_link, as the camera frame is attached to the robot (ROS URDF and TF).
That is why I think the camera extrinsic parameters (R,t) are not required (others do this using known coordinates on the plane and get the camera pose, but in my case I do not need world coordinates on the plane). Or am I still understanding R,t wrong? **Can this be a justification for omitting the 1st equation from this** [link text](https://ibb.co/SXZCQRH)?
Is this error because I omitted the first equation, where R,t also appear? But according to my setup, it should work without R,t?Tue, 04 Jun 2019 08:54:44 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213885#post-id-213885Comment by opencvr for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213877#post-id-213877@Tetragramm as shown in the attached images of my setup, the camera is pointed exactly straight down at the table.
[link text](https://ibb.co/5kr2b4f)
[link text](https://ibb.co/Bq6XN7f)
[link text](https://ibb.co/4mrVd84)
**"the center of the camera being the (0,0) of *your coordinate system*"**: does this mean the **camera coordinate system**? In my case that would be the RealSense left imager (0,0), right?
So after getting (x,y,z) in the above camera_frame I transform it to the robot base_link and give that POSE to the end effector. Sometimes it goes to that position, and sometimes there is a random error of around 20 mm, 10 mm in x, y.Tue, 04 Jun 2019 08:24:22 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213877#post-id-213877Comment by Tetragramm for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213849#post-id-213849That would work as long as the camera is pointed straight down at the table. And as long as you're OK with the center of the camera being the (0,0) of your coordinate system. And as long as you don't have any distortion.
If you do have distortion, use what's in my post, but multiply the final result by z/los.at<double>(2); the first two values of los are then your x and y.Mon, 03 Jun 2019 19:53:16 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213849#post-id-213849Comment by opencvr for the answer above.
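The scaling described here (multiply by z / los.at&lt;double&gt;(2)) can be written out on a plain array. A minimal sketch, assuming `los` holds the line-of-sight ray from the answer's snippet and `z` is the known camera-to-table distance (in whatever units you want the result in):

```cpp
#include <cassert>
#include <cmath>

// Scale the line-of-sight ray so it intersects the plane at depth z.
// Afterwards los[0] and los[1] are the metric x and y offsets from
// the camera center, and los[2] == z.
void scaleToDepth(double los[3], double z) {
    const double s = z / los[2];
    los[0] *= s;
    los[1] *= s;
    los[2] *= s;
}
```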
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213827#post-id-213827@Tetragramm as my object is placed on the table, **the distance from the camera to the table is z, which is known**. As I do not need orientation, from the equations in this attached photo [link text](https://ibb.co/SXZCQRH) I omitted the first equation (so R,t), since I do not need the complete POSE.
Then, using the detected point (u,v) and fx, cx, fy, cy, and if (x,y) is the real-world point in camera_frame:
x = ((u - cx)/fx) * z, y = ((v - cy)/fy) * z
Can I use it like this? Is this the correct method, or am I still missing something or understanding it wrong?Mon, 03 Jun 2019 08:48:48 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213827#post-id-213827Comment by Tetragramm for the answer above.
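With made-up numbers the formula is easy to sanity-check, as long as the parenthesization subtracts the principal point before dividing by the focal length (i.e. (u - cx)/fx, not u - cx/fx). A small sketch with hypothetical intrinsics:

```cpp
#include <cassert>
#include <cmath>

// Pinhole back-projection at a known depth z (result in the same
// units as z). Subtract the principal point first, then divide by
// the focal length, then scale by the depth.
double backprojectX(double u, double cx, double fx, double z) {
    return (u - cx) / fx * z;
}

double backprojectY(double v, double cy, double fy, double z) {
    return (v - cy) / fy * z;
}
```

For example, with fx = 600 px, cx = 320 px, a detection at u = 400 px and the camera 600 mm above the table, x = ((400 - 320)/600) * 600 = 80 mm; the misread u - cx/fx would give a wildly different value.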
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213718#post-id-213718This sort of question is best answered by the documentation for the function [HERE](https://docs.opencv.org/4.1.0/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d). calibrateCamera treats the corner of the chessboard as the center of the world and tells you where your camera is located in each of the calibration images. Check out [this tutorial](https://docs.opencv.org/4.1.0/d7/d53/tutorial_py_pose.html) to see how to use that information and the video stream to find where the camera is at any given point.Thu, 30 May 2019 22:18:28 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213718#post-id-213718Comment by opencvr for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213698#post-id-213698@Tetragramm now I see that camera calibration gives me rvec, tvec tuples for every chessboard pattern image, whose location changes w.r.t. the camera. It is a **huge** 25 x 6 matrix, but for the calculation I need a 3 x 4 matrix? I am still not getting it.Thu, 30 May 2019 04:12:09 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213698#post-id-213698Comment by Tetragramm for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213686#post-id-213686You said you know the orientation and position of the camera. Is that from an OpenCV function that gives you rvec and tvec? Then use those.
Otherwise, it's not OpenCV, and I can't help.
Oh, except there's apparently a function called [calibrateHandEye](https://docs.opencv.org/4.1.0/d9/d0c/group__calib3d.html#gaebfc1c9f7434196a374c382abf43439b) just for the task of matching robot arms to the camera.Wed, 29 May 2019 18:04:08 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213686#post-id-213686Comment by opencvr for the answer above.
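Whatever produces the hand-eye transform, mapping a camera-frame point into the robot base frame is a composition of homogeneous transforms (in a ROS setup, TF performs exactly this). A hedged sketch with plain row-major 4x4 matrices, not tied to any particular library:

```cpp
#include <cassert>
#include <cmath>

// out = A * B for row-major 4x4 homogeneous transforms.
void compose(const double A[16], const double B[16], double out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k)
                s += A[4 * r + k] * B[4 * k + c];
            out[4 * r + c] = s;
        }
}

// Apply transform T to a 3D point p (implicit homogeneous w = 1).
void transformPoint(const double T[16], const double p[3], double out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = T[4 * r] * p[0] + T[4 * r + 1] * p[1]
               + T[4 * r + 2] * p[2] + T[4 * r + 3];
}
```

A point measured in the camera frame would then map to the base frame as p_base = T_base_gripper * T_gripper_camera * p_camera, where T_gripper_camera is what hand-eye calibration provides.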
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213676#post-id-213676Camera calibration with the chessboard gives me only _camera_matrix and _distortion_matrix.Wed, 29 May 2019 09:30:41 -0500http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213676#post-id-213676Comment by opencvr for the answer above.
http://answers.opencv.org/question/213559/one-point-uv-to-actual-xy-wrt-camera-frame/?comment=213647#post-id-213647@Tetragramm but what should I input as _rvec and _tvec? What I have is _camera_matrix (fx, cx, fy, cy), the input points (u,v) as _pt, and _distortionMatrix.Wed, 29 May 2019 03:16:26 -0500