solvePnP issue with sudden rotation change, with occluded point/s

I have a problem where I am getting drastically different poses from solvePnP if the point list includes a point that is in reality meant to be occluded but sometimes peeks through (it's an optical sensor, very sensitive).

My camera matrix is just an identity matrix, and I have a set of known 3D model coordinates for a number of sensors mounted on a solid object. The sensors report their own positions in the 'camera' image plane.

My sensors are arranged in two rings, 440mm apart (parallel plane rings). Our 'camera' sees sensors from a certain direction only, meaning only the sensors along the ring closest to the 'camera' are normally visible.
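For context, the solvePnP call itself is essentially the following (a minimal sketch rather than my exact code; the function and variable names are only illustrative, but the identity camera matrix and empty distortion coefficients match my real setup):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Minimal sketch of the pose estimation call described above.
    // The identity camera matrix and empty distortion coefficients reflect that
    // the sensors already report normalised 'camera'-plane coordinates.
    void estimatePose(const std::vector<cv::Point3f>& modelPoints,  // 3D sensor positions (mm)
                      const std::vector<cv::Point2f>& imagePoints,  // reported 2D positions
                      cv::Mat& rvec, cv::Mat& tvec)
    {
        cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);  // identity intrinsics
        cv::Mat distCoeffs;                                  // empty => no distortion
        cv::solvePnP(modelPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
    }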

My data below shows that cycle '6' has the following image points and 3D model points (mm units); the unique sensor ID is the first number:

Format: ID, image points, model points.

    1,[0.122001, 0.0337334],[-56.31, -27.12, 0]
    2,[0.135507, 0.0344581],[-38.97, -48.86, 0]
    3,[0.0428851, 0.0347298],[13.91, 60.93, 0]
    4,[0.0472973, 0.0344505],[-13.91, 60.93, 0]
    5,[0.0595242, 0.0333484],[-38.97, 48.86, 0]
    6,[0.0791165, 0.0331144],[-56.31, 27.12, 0]
    8,[0.0790406, 0.033673],[56.31, 27.12, 0]
    15,[0.141493, 0.389969],[-13.91, -60.93, -440]
    16,[0.136751, 0.397388],[-38.97, -48.86, -440]
    17,[0.101998, 0.407393],[-62.5, 0, -440]
    26,[0.0415029, 0.387616],[13.91, 60.93, -440]
    Sensors ready: 11

tvec:

[121.287298603187;
 43.82786025370395;
 1268.803812947211]

rvec after conversion to Euler angles using Rodrigues and RQDecomp3x3, like this:

    cv::Mat rot, output_a, output_b;
    cv::Rodrigues(RotVec, rot);                                       // rotation vector -> 3x3 rotation matrix
    cv::Vec3d rvec_euler = cv::RQDecomp3x3(rot, output_a, output_b);  // Euler angles in degrees

rvec_euler = [-2.22604, -86.8052, 92.9033]

solvePnP output pose (units in metres and degrees); note that I have also applied a negative sign to roll and yaw:

    x: 0.121287, y: -0.043828, z: 1.268804, r: 2.226044, p: -86.805202, y: -92.903265
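To be explicit about that last conversion, this is roughly how the printed pose is assembled from the solvePnP outputs (an illustrative sketch only; the struct and function names are made up): tvec is divided by 1000 to go from mm to metres, and the roll and yaw signs are flipped; as the numbers above show, the y translation also ends up negated in the printout.

    #include <opencv2/core.hpp>

    struct Pose { double x, y, z, r, p, yaw; };

    // Sketch of the reporting conversion: tvec (mm) -> metres, with roll, yaw
    // (and y) negated for our convention. Illustrative only.
    Pose toReportedPose(const cv::Mat& tvec, const cv::Vec3d& rvec_euler)
    {
        Pose pose;
        pose.x =  tvec.at<double>(0) / 1000.0;
        pose.y = -tvec.at<double>(1) / 1000.0;
        pose.z =  tvec.at<double>(2) / 1000.0;
        pose.r   = -rvec_euler[0];   // roll (degrees)
        pose.p   =  rvec_euler[1];   // pitch (degrees)
        pose.yaw = -rvec_euler[2];   // yaw (degrees)
        return pose;
    }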

Then I have cycle '7'. In this cycle the data doesn't contain sensor ID 8, which happens to be "behind" the other sensors, on the other side of the object facing away from the 'camera' in this scenario.

    1,[0.122055, 0.0337258],[-56.31, -27.12, 0]
    2,[0.135553, 0.0344731],[-38.97, -48.86, 0]
    3,[0.0430438, 0.0347223],[13.91, 60.93, 0]
    4,[0.0471538, 0.0344656],[-13.91, 60.93, 0]
    5,[0.0595696, 0.0333635],[-38.97, 48.86, 0]
    6,[0.0790861, 0.0330465],[-56.31, 27.12, 0]
    15,[0.141408, 0.389986],[-13.91, -60.93, -440]
    16,[0.136812, 0.397423],[-38.97, -48.86, -440]
    17,[0.101968, 0.407419],[-62.5, 0, -440]
    26,[0.0415104, 0.387521],[13.91, 60.93, -440]
    Sensors ready: 10

tvec:

[116.5373520148447;
 44.7917891685647;
 1274.362770182497]

rvec_euler = [-58.9083, -82.1685, 149.584]

Pose for cycle 7 (units in metres and degrees):

    x: 0.116537, y: -0.044792, z: 1.274363, r: 58.908338, p: -82.168507, y: -149.583959

I noticed a difference of exactly 56.6 degrees in both the X rotation and the Z rotation. How or why does this happen when sensor 8 appears in the image? What could cause such significant changes to the pose? My colleague and I have both checked over the 3D coordinates, the sensor IDs, etc. to confirm the underlying data, and it seems fine.

Is there some trick to the pose output, or to the way I'm doing the Rodrigues conversion, that is causing a sign-inversion or ambiguity issue? Is it better to somehow logically exclude sensors that are occluded from view of the 'camera'? The X, Y, Z positions are fine, by the way; it's just the wild rotation jumps that we are having issues with.
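For reference, this is the kind of 'logical exclusion' I have in mind: a simple back-face check that drops sensors whose outward normal faces away from the 'camera', using the pose from the previous cycle. This is only a sketch under assumptions I haven't stated above (per-sensor normals in the model frame, a previous rvec/tvec being available), and the names are made up:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Rough visibility check: keep a sensor only if its outward-facing normal
    // points back towards the camera, given the previous cycle's rvec/tvec.
    std::vector<int> visibleSensorIndices(const std::vector<cv::Point3f>& modelPoints,
                                          const std::vector<cv::Vec3d>& modelNormals,
                                          const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat Rmat;
        cv::Rodrigues(rvec, Rmat);          // rotation vector -> 3x3 rotation matrix
        cv::Matx33d R = Rmat;
        cv::Vec3d t(tvec.at<double>(0), tvec.at<double>(1), tvec.at<double>(2));

        std::vector<int> keep;
        for (size_t i = 0; i < modelPoints.size(); ++i)
        {
            const cv::Point3f& mp = modelPoints[i];
            cv::Vec3d p = R * cv::Vec3d(mp.x, mp.y, mp.z) + t;  // sensor position in camera frame
            cv::Vec3d n = R * modelNormals[i];                  // sensor normal in camera frame

            // The camera centre is the origin of the camera frame, so p is also
            // the viewing ray towards the sensor; keep the sensor if its normal
            // faces back along that ray.
            if (n.dot(p) < 0.0)
                keep.push_back(static_cast<int>(i));
        }
        return keep;
    }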