bendikiv's profile - activity

2021-02-27 04:35:35 -0600 received badge  Popular Question (source)
2018-04-14 08:33:41 -0600 commented question [OPENCV GPU] How can I convert GpuMat and Vector<Point2f>

I'm trying to do exactly the same thing as you, @kjoshi. Have you solved it?

2018-04-06 12:03:41 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

It works! The concatenated position estimates drift quite a lot, but the whole thing works now, finally! The (major) fix

2018-04-05 11:27:55 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

Original 2D feature = [73, 149], the projected point from 3D = [149, 73]. Ok, except the vector is flipped. But, if origina

2018-04-05 11:17:31 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

Nice. Ok, I seem to be making some progress now, most of the projected 2D points (from the 3D features using cv::project

2018-04-05 08:29:25 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

I see, thanks. And then I guess it's the same for cv::projectPoints(), using P1 and a zeroed distCoeffs matrix? I w

2018-04-05 05:18:52 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

struct DUO_STEREO {
    double M1[9], M2[9];   // 3x3 - Camera matrices (left, right)
    double D1[8], D2[8];

2018-04-05 05:14:00 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

Shouldn't the cameraMatrix in solvePnP be the 3x3 cameraMatrix from stereoRectify()? Anyhow, I get the stereo parameters

2018-04-04 07:08:23 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

I've become aware that the cv::solvePnP function, when given non-empty Q and M input variables, assumes that the fea

2018-03-21 15:16:51 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

Thanks! I still struggle with getting sensible results from my algorithm (the tvec from solvePnP still doesn't make sens

2018-03-20 09:11:16 -0600 received badge  Supporter (source)
2018-03-20 09:11:06 -0600 marked best answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

I know that there exist many posts regarding pose estimation using solvePnP/solvePnPRansac, and I've read most of them, but my case differs slightly from what seems to be the standard scenario. Even though I think I've understood it, I just can't seem to get it to work, and I'd like someone to correct me if I'm doing anything wrong. This post became quite long, but please bear with me.

I'm trying to use solvePnPRansac to calculate the motion of a stereo camera from one frame/time instance to the next. I detect features in the first frame and track them to the next frame. I'm also using a stereo camera that comes with an SDK which provides me with the corresponding 3D coordinates of the detected features. These 3D points are wrt. the camera coordinate system. In other words, the 2D/3D points in the two consecutive frames correspond to the same features, but if the camera moves between the frames, the coordinates change (even the 3D points, since they are relative to the camera origin).

I believe that the 3D input points of solvePnPRansac should be given wrt. a world frame, but since I don't have a world frame, I do the following:

1) For the very first frame: I set the initial camera pose as the world frame, since I need a constant reference for computing relative movement. This means that the 3D points calculated in this frame now equal the world points, and that the movement of the camera is relative to the initial camera pose.

2) Call solvePnPRansac with the world points from the first frame together with the 2D features detected in the second frame as inputs. It returns rvec and tvec.

Now for my first question: is tvec the vector from the camera origin (/the second frame) to the world origin (/the first frame), given in the camera's coordinate system?

Second question: I want the vector from the world frame to the camera/second frame, given in world-frame coordinates (this should equal the translation of the camera relative to the original pose = world frame), so I need to use translation = -(R)^T * tvec, where R is the rotation matrix given by rvec?

Now I'm a little confused as to which 3D points I should use in the further calculations. Should I transform the 3D points detected in the second frame (which are given wrt. the camera) to the world frame? If I combine tvec and rvec into a homogeneous transformation matrix T (which would represent the homogeneous transformation from the world frame to the second frame), the transformation should be 3Dpoints_at_frame2_in_worldcoordinates = T^(-1) * 3Dpoints_at_frame2_in_cameracoordinates
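
For what it's worth, here is a small stdlib-only sketch of the two operations discussed above (recovering the camera position via -R^T * tvec, and mapping a camera-frame point back to the world frame), under OpenCV's solvePnP convention p_cam = R * p_world + tvec. The pose values are made up purely for illustration:

```python
import math

# Stdlib-only 3x3 matrix / 3-vector helpers, just for this sketch.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

# Made-up pose standing in for solvePnP output: p_cam = R * p_world + tvec
R = rot_z(math.radians(30.0))
tvec = [0.5, -0.2, 1.0]

# Camera position in world coordinates: C = -R^T * tvec
C = [-x for x in matvec(transpose(R), tvec)]

# Sanity check: the camera centre maps to the camera-frame origin.
origin = [a + b for a, b in zip(matvec(R, C), tvec)]   # ~[0, 0, 0]

# Mapping a camera-frame point back to world coordinates (the T^(-1) step):
# p_world = R^T * (p_cam - tvec)
p_world = [1.0, 2.0, 3.0]
p_cam = [a + b for a, b in zip(matvec(R, p_world), tvec)]
p_back = matvec(transpose(R), [a - b for a, b in zip(p_cam, tvec)])  # ~p_world
```

So yes, under that convention the inverse of T (rotate by R^T after subtracting tvec) takes camera-frame points to world coordinates.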

If I do this, I can capture a new image (third frame), track the 2D features detected in the second frame to the third frame, compute the corresponding 3D points (which are given wrt. the third frame) and call solvePnPRansac with "3Dpoints_at_frame2_in_worldcoordinates" and the ... (more)

2018-03-20 07:48:20 -0600 commented answer composeRT input/output question

I see! Thanks!

2018-03-20 07:45:35 -0600 marked best answer composeRT input/output question

I have two frame transformations, one from frame 0 to frame 1 and one from frame 1 to frame 2, and I would like to concatenate these into the transformation from frame 0 to frame 2. I'm computing the pose of a moving camera; the first transformation (0-1) represents the previous pose of the camera (the transformation from its initial pose), and the 1-2 transformation is the newest change of pose, represented by the tvec and rvec obtained from solvePnPRansac.

However, I cannot just try out different inputs in my code and see if the output seems correct, since my system currently contains a lot of noise, so I would like to have the math check out before implementing it in my application. But when I try to use the formulas given in the documentation with different rvecs and tvecs, I can't get the outputs (rvec3/tvec3) I want. These are the formulas:

  rvec3 = rodrigues⁻¹( rodrigues(rvec2) · rodrigues(rvec1) )
  tvec3 = rodrigues(rvec2) · tvec1 + tvec2

I've tried with the following rvecs/tvecs:

  • rvec1: Rotation from frame 1 to frame 0
  • tvec1: vector from the origin of frame 0 to the origin of frame 1, given in frame 0 coordinates
  • rvec2: Rotation from frame 2 to frame 1
  • tvec2: vector from the origin of frame 1 to the origin of frame 2, given in frame 1 coordinates

I want to end up with:

  • rvec3: Rotation from frame 0 to frame 2 (or inversed, doesn't matter)
  • tvec3: vector from frame 0 to frame 2 given in frame 0 coordinates (or negated)

However, with these vectors, I can't get the formulas in the documentation to make sense. The rvec3 formula works: with rvec1/rvec2 as above, rvec3 = (rvec2*rvec1)⁻¹ equals the rotation from frame 2 to frame 0. However, the computation of tvec3 doesn't add up:

The formula says tvec3 = rvec2 * tvec1 + tvec2, but rvec2 * tvec1 doesn't make sense with my vectors: it says to rotate a vector given in frame 0 coordinates from frame 2 to frame 1. A vector given in frame 0 coordinates needs to be multiplied by a rotation matrix representing the rotation from frame 0 to some other frame, but that is not the case here. And it hasn't made sense with any other vectors I've tried, for that matter.
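
For the record, the documented formulas do check out numerically once (rvec1, tvec1) is read as the coordinate map from frame 0 into frame 1 (p1 = R1 * p0 + tvec1, i.e. rvec/tvec in the same sense solvePnP returns them), rather than as the vector between the origins expressed in frame 0. A stdlib-only check with made-up poses:

```python
import math

# Minimal 3x3 helpers so the check needs nothing beyond the stdlib.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

# Convention assumed here: (R1, tvec1) maps frame-0 coordinates into frame-1
# coordinates, p1 = R1 * p0 + tvec1; likewise (R2, tvec2) maps frame 1 into
# frame 2. Pose values are arbitrary.
R1, tvec1 = rot_z(math.radians(20.0)),  [1.0, 0.0, 0.0]
R2, tvec2 = rot_z(math.radians(-35.0)), [0.0, 2.0, 0.5]

# The documented composition: R3 = R2 * R1, tvec3 = R2 * tvec1 + tvec2
R3 = matmul(R2, R1)
tvec3 = [a + b for a, b in zip(matvec(R2, tvec1), tvec2)]

# Verify against chaining the two maps on an arbitrary frame-0 point:
# R2 * (R1 * p0 + tvec1) + tvec2 == R3 * p0 + tvec3
p0 = [0.3, -1.2, 2.0]
p1 = [a + b for a, b in zip(matvec(R1, p0), tvec1)]
p2_chained = [a + b for a, b in zip(matvec(R2, p1), tvec2)]
p2_direct = [a + b for a, b in zip(matvec(R3, p0), tvec3)]   # same point
```

In that reading, rvec2 * tvec1 is fine: tvec1 is a frame-1 quantity (the image of the frame-0 origin), so rotating it into frame 2 is exactly what the composition needs.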

Could someone help me with these calculations? Thanks!

2018-03-20 07:45:35 -0600 received badge  Scholar (source)
2018-03-15 08:23:20 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

I see, but how should I model/compute that? The paper suggests Gaussian noise, but in that case, with what parameters? B

2018-03-14 17:03:30 -0600 commented answer Stereo camera pose estimation from solvePnPRansac using 3D points given wrt. the camera coordinate system

Ok, so numStdDev is an algorithm parameter that is chosen by the user, but what exactly is the pixelError? Is it the ave