OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018.
Mon, 03 Apr 2017 17:47:27 -0500

solvePnP issue with sudden rotation change, with occluded point/s
http://answers.opencv.org/question/137636/solvepnp-issue-with-sudden-rotation-change-with-occluded-points/

I have a problem where I am getting drastically different poses from solvePnP if the points list includes a point that in reality is meant to be occluded, but sometimes peeks through (it is an optical sensor, and very sensitive).
I have a camera matrix that is just an identity matrix, and a bunch of known 3D model coordinate points for a bunch of sensors on a solid object. The sensors provide their positions in the 'camera's image perspective.
My sensors are arranged in two rings, 440mm apart (parallel plane rings). Our 'camera' sees sensors from a certain direction only, meaning only the sensors along the ring closest to the 'camera' are normally visible.
My data below shows that cycle '6' has the following image points and 3D model points (mm units); the unique sensor ID is the first number.
Format: ID, [image point], [model point].
1,[0.122001, 0.0337334],[-56.31, -27.12, 0]
2,[0.135507, 0.0344581],[-38.97, -48.86, 0]
3,[0.0428851, 0.0347298],[13.91, 60.93, 0]
4,[0.0472973, 0.0344505],[-13.91, 60.93, 0]
5,[0.0595242, 0.0333484],[-38.97, 48.86, 0]
6,[0.0791165, 0.0331144],[-56.31, 27.12, 0]
8,[0.0790406, 0.033673],[56.31, 27.12, 0]
15,[0.141493, 0.389969],[-13.91, -60.93, -440]
16,[0.136751, 0.397388],[-38.97, -48.86, -440]
17,[0.101998, 0.407393],[-62.5, 0, -440]
26,[0.0415029, 0.387616],[13.91, 60.93, -440]
Sensors ready: 11
T-vec:
[121.287298603187;
43.82786025370395;
1268.803812947211]
R-vec after conversion to Euler angles via Rodrigues, doing this:
cv::Rodrigues(RotVec, rot);
rvec_euler = cv::RQDecomp3x3(rot, output_a, output_b);
rvec_euler = [-2.22604, -86.8052, 92.9033]
solvePnP output pose (units: metres and degrees); note that I also applied a negative sign to roll and yaw:
x: 0.121287, y: -0.043828, z: 1.268804, r: 2.226044, p: -86.805202, y: -92.903265
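As an aside on the numbers above: with pitch around -86.8 degrees this pose sits very close to gimbal lock, where roll and yaw become nearly coupled and the Euler extraction is ill-conditioned. A minimal hand-rolled ZYX Euler round-trip illustrates why (this convention is an assumption of mine and may not match what cv::RQDecomp3x3 returns):

```cpp
#include <cmath>

struct Mat3 { double m[3][3]; };

// Build R = Rz(yaw) * Ry(pitch) * Rx(roll), angles in radians.
Mat3 eulerToR(double yaw, double pitch, double roll) {
    double cy = std::cos(yaw),   sy = std::sin(yaw);
    double cp = std::cos(pitch), sp = std::sin(pitch);
    double cr = std::cos(roll),  sr = std::sin(roll);
    Mat3 R;
    R.m[0][0] = cy*cp; R.m[0][1] = cy*sp*sr - sy*cr; R.m[0][2] = cy*sp*cr + sy*sr;
    R.m[1][0] = sy*cp; R.m[1][1] = sy*sp*sr + cy*cr; R.m[1][2] = sy*sp*cr - cy*sr;
    R.m[2][0] = -sp;   R.m[2][1] = cp*sr;            R.m[2][2] = cp*cr;
    return R;
}

// Recover the angles.  As pitch approaches +/-90 degrees, cos(pitch) -> 0,
// so R.m[0][0], R.m[1][0], R.m[2][1] and R.m[2][2] all shrink toward zero
// and roll/yaw come from atan2 of near-zero terms: tiny changes in R can
// then swing the recovered roll and yaw wildly, even when R itself barely
// moved.
void rToEuler(const Mat3& R, double& yaw, double& pitch, double& roll) {
    pitch = std::asin(-R.m[2][0]);
    roll  = std::atan2(R.m[2][1], R.m[2][2]);
    yaw   = std::atan2(R.m[1][0], R.m[0][0]);
}
```

So even a modest change in the underlying rotation matrix between cycles can show up as a large jump in the roll and yaw columns of the Euler readout.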
Then I have cycle '7'; in this cycle the data does not contain sensor ID 8, which happens to be "behind" the other sensors, on the side of the object facing away from the 'camera' in this scenario.
1,[0.122055, 0.0337258],[-56.31, -27.12, 0]
2,[0.135553, 0.0344731],[-38.97, -48.86, 0]
3,[0.0430438, 0.0347223],[13.91, 60.93, 0]
4,[0.0471538, 0.0344656],[-13.91, 60.93, 0]
5,[0.0595696, 0.0333635],[-38.97, 48.86, 0]
6,[0.0790861, 0.0330465],[-56.31, 27.12, 0]
15,[0.141408, 0.389986],[-13.91, -60.93, -440]
16,[0.136812, 0.397423],[-38.97, -48.86, -440]
17,[0.101968, 0.407419],[-62.5, 0, -440]
26,[0.0415104, 0.387521],[13.91, 60.93, -440]
Sensors ready: 10
T-vec:
[116.5373520148447;
44.7917891685647;
1274.362770182497]
rvec_euler = [-58.9083, -82.1685, 149.584]
Pose for cycle 7 (units: metres and degrees):
x: 0.116537, y: -0.044792, z: 1.274363, r: 58.908338, p: -82.168507, y: -149.583959
I noticed a difference of exactly 56.6 degrees in both the X-rotation and the Z-rotation axes. How or why does this happen when sensor 8 appears in the image? What could cause such significant changes to the pose? My colleague and I have both checked over the 3D coordinates, the sensor IDs, etc. to confirm the low-level data, and it seems fine.
Is there some trick to the pose output, or to the way I am doing the Rodrigues conversion, that is causing a sign-inversion or ambiguity issue? Is it better to somehow logically exclude sensors that are occluded from the 'camera' view? The X/Y/Z positions are fine, by the way; it is just the wild rotation jumps that we are having issues with.
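On the last point: filtering occluded sensors before calling solvePnP is the usual fix, since a sensor that only "peeks through" contributes a badly mismatched 2D-3D correspondence that can drag the whole pose. A minimal visibility-test sketch (the radial-normal assumption about the ring geometry is mine, not from the question):

```cpp
// Each ring sensor faces outward along its radial direction, so its
// model-frame outward normal is (x, y, 0) normalized.  After rotating it
// into the camera frame with the 3x3 rotation matrix from cv::Rodrigues
// (camera looking down +z), the sensor can only legitimately be visible
// when that normal points back toward the camera, i.e. has negative z.
bool facesCamera(const double R[3][3], const double nModel[3],
                 double cosMargin = 0.0) {
    // z-component of R * nModel: the cosine of the angle between the
    // rotated normal and the camera's +z axis.
    double nz = R[2][0]*nModel[0] + R[2][1]*nModel[1] + R[2][2]*nModel[2];
    // A slightly negative cosMargin also rejects grazing sensors that only
    // peek through, which is exactly what destabilizes the pose here.
    return nz < cosMargin;
}
```

Running this test with the previous cycle's rotation before assembling the point lists would drop sensor 8 automatically whenever it faces away from the 'camera'.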
Asked by kyranf, Mon, 03 Apr 2017 17:47:27 -0500
http://answers.opencv.org/question/137636/

Finding 3D coordinate when all 3 coordinates can vary in the object coordinate system
http://answers.opencv.org/question/29067/finding-3d-coordinate-when-all-3-coordinates-can-vary-in-the-object-coordinate-system/

I have the 3D coordinates of 4 coplanar points on my target in the object coordinate system. I also have their 2D coordinates in every frame of a video. I have also calculated the intrinsic parameters (M) of the camera, and the R (rotation) and t (translation) matrices between the object coordinate system and the camera coordinate system, using solvePnP(). I have read [the complete process here](http://stackoverflow.com/questions/12299870/computing-x-y-coordinate-3d-from-image-point), which is very clear; it is also similar to the process I followed. Therefore I wanted to use the same equation
s [u v 1]<sup>T</sup> = M ( R [X Y Z]<sup>T</sup> + t)
for calculating my 3D coordinates, but I have no known constant (as the link assumes) for calculating s. My target rotates about the x-axis in the OpenCV coordinate system. My questions are:

1. Can anyone suggest a way to find s? Is it definitely mandatory for this calculation, or can I use s = 1?
2. Are there any other methods for calculating the 3D point with the parameters I have?

Asked by silentvalley, Wed, 26 Feb 2014 01:47:22 -0600
http://answers.opencv.org/question/29067/

extract 3D points
http://answers.opencv.org/question/13222/extruct-3d-points/
Hi,
Is there any function in the OpenCV API to get the 3D coordinates of the 3D model, to use in the solvePnP function? How can I get the 3D coordinates?

Asked by pocahentez, Sun, 12 May 2013 10:02:32 -0500
http://answers.opencv.org/question/13222/
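There is no OpenCV function that supplies model coordinates for you: the objectPoints argument of solvePnP is a set of 3D coordinates you define yourself, by measuring each feature in a frame fixed to the object. A minimal sketch for the common planar-grid case (the helper below is my own, not an OpenCV API):

```cpp
#include <array>
#include <vector>

// objectPoints for solvePnP are model coordinates you choose yourself:
// pick an origin fixed to the object and measure every feature in it.
// For a flat rows x cols grid (e.g. chessboard inner corners) with a given
// spacing, the usual choice is the Z = 0 plane, first corner at the origin.
std::vector<std::array<double, 3>> gridObjectPoints(int rows, int cols,
                                                    double spacing) {
    std::vector<std::array<double, 3>> pts;
    pts.reserve(static_cast<std::size_t>(rows) * cols);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            pts.push_back({c * spacing, r * spacing, 0.0});  // Z = 0 plane
    return pts;
}
```

For a non-planar object the idea is the same: measure (or take from the CAD model) each feature's X, Y, Z in an object-fixed frame, and solvePnP then returns the pose of that frame relative to the camera.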