
kyranf's profile - activity

2017-04-07 04:10:37 -0500 received badge  Enthusiast
2017-04-04 08:54:30 -0500 commented question solvePnP issue with sudden rotation change, with occluded point/s

Thanks Eduardo, and yeah, technically it's 2 × 14, for a total of 28 sensors; my sketch was not accurate about the exact number of sensors per ring. I've double-checked the 2D → 3D coordinate matching, and that all looks fine. I'll check out the P3P method; it shouldn't be too much effort to find the top and bottom sensors furthest apart to provide it.

2017-04-04 08:07:30 -0500 received badge  Student (source)
2017-04-04 06:17:46 -0500 commented question solvePnP issue with sudden rotation change, with occluded point/s

Images: sensor setup, to show you what I mean by rings.

Cycle 6 debug image, which includes sensor 8 (it is shown basically right on top of sensor 6 in the image plot)

Cycle 7 debug image, which does not have sensor 8 and shows a vastly different rotation vector

2017-04-04 06:06:24 -0500 commented question solvePnP issue with sudden rotation change, with occluded point/s

@Eduardo thanks, I'd like to add some images that show the 'image points' and show you visually the difference between cycle 6 and 7, where sensor 8 is seen basically 'behind' one of the other sensor points. In no situation can a sensor point exist behind the camera, by the way; sensor points only exist in front, within the field of view (theoretically 0-180 degrees in both X and Y FOV). I can't add images directly to the question because I don't have enough karma yet. I will upload to Imgur.

2017-04-04 03:30:42 -0500 commented answer Using cv::solvePnP on Lighthouse Data

Okay, well I just made a question that is related to the second part of my comment (here)

2017-04-03 17:55:50 -0500 received badge  Editor (source)
2017-04-03 17:47:27 -0500 asked a question solvePnP issue with sudden rotation change, with occluded point/s

I have a problem where I am getting drastically different poses from solvePnP if the points list includes a point that in reality should be occluded, but which sometimes peeks through (it's an optical sensor, very sensitive).

I have a camera matrix that is just an identity matrix, and a set of known 3D model coordinates for the sensors on a solid object. The sensors report their positions in the 'camera's' image perspective.

My sensors are arranged in two rings, 440mm apart (parallel plane rings). Our 'camera' sees sensors from a certain direction only, meaning only the sensors along the ring closest to the 'camera' are normally visible.

My data below shows cycle '6' with the following image points and 3D model points (mm units); the unique sensor ID is the first number:

Format: ID, img points, model points.

    1,[0.122001, 0.0337334],[-56.31, -27.12, 0]
    2,[0.135507, 0.0344581],[-38.97, -48.86, 0]
    3,[0.0428851, 0.0347298],[13.91, 60.93, 0]
    4,[0.0472973, 0.0344505],[-13.91, 60.93, 0]
    5,[0.0595242, 0.0333484],[-38.97, 48.86, 0]
    6,[0.0791165, 0.0331144],[-56.31, 27.12, 0]
    8,[0.0790406, 0.033673],[56.31, 27.12, 0]
    15,[0.141493, 0.389969],[-13.91, -60.93, -440]
    16,[0.136751, 0.397388],[-38.97, -48.86, -440]
    17,[0.101998, 0.407393],[-62.5, 0, -440]
    26,[0.0415029, 0.387616],[13.91, 60.93, -440]
    Sensors ready: 11



Rotation vector after conversion to Euler angles using Rodrigues, like this:

    cv::Mat rot;
    cv::Rodrigues(RotVec, rot);
    rvec_euler = cv::RQDecomp3x3(rot, output_a, output_b);

rvec_euler = [-2.22604, -86.8052, 92.9033]

solvePnP output pose (units as metres and degrees); note that I also applied a negative sign to roll and yaw:

    x: 0.121287, y: -0.043828, z: 1.268804, r: 2.226044, p: -86.805202, y: -92.903265

Then I have cycle '7'; in this cycle the data doesn't contain sensor ID 8, which happens to be "behind" the other sensors, on the far side of the object facing away from the 'camera' in this scenario.

    1,[0.122055, 0.0337258],[-56.31, -27.12, 0]
    2,[0.135553, 0.0344731],[-38.97, -48.86, 0]
    3,[0.0430438, 0.0347223],[13.91, 60.93, 0]
    4,[0.0471538, 0.0344656],[-13.91, 60.93, 0]
    5,[0.0595696, 0.0333635],[-38.97, 48.86, 0]
    6,[0.0790861, 0.0330465],[-56.31, 27.12, 0]
    15,[0.141408, 0.389986],[-13.91, -60.93, -440]
    16,[0.136812, 0.397423],[-38.97, -48.86, -440]
    17,[0.101968, 0.407419],[-62.5, 0, -440]
    26,[0.0415104, 0.387521],[13.91, 60.93, -440]
    Sensors ready: 10



rvec_euler = [-58.9083, -82.1685, 149.584]

Pose for cycle 7, units as metres and degrees:

x: 0.116537, y: -0.044792, z: 1 ...
2017-04-03 05:34:22 -0500 commented answer Using cv::solvePnP on Lighthouse Data

@Tetragramm, okay, so with my X and Y angles (elevation and azimuth) I do those equations, then follow your answer, and I should get reasonable values? My current method is actually working now, but it's probably overly complicated. Also, solvePnP seems to give very "noisy" results with data that appears quite stable. Is it normally quite sensitive? What are good known solutions for filtering a pose like the one that comes out of solvePnP?

2017-03-30 18:09:00 -0500 commented answer Using cv::solvePnP on Lighthouse Data

@Tetragramm (or Rupert), could you please describe how I come up with this "line of sight" vector? If I get sweep data from the lighthouse that represents 0-180 degrees (times, with known rotation speed), how do I turn that into something usable for solvePnP as discussed in this question/answer?

2017-03-30 15:21:33 -0500 commented question Using cv::solvePnP on Lighthouse Data

@RupertVanDaCow, I am working on almost exactly the same thing as you. I'm getting crazy Z values that are way off in distance, but the general X/Y positions appear good to me. Rotations I haven't even looked at yet, but they look crazy too. How are you converting the laser sweep time to an "image point" for feeding into solvePnP? I'm basically saying the "camera" has a 0-180 degree field of view, and tick times from sync pulse to laser give some small value, like 90.0868 (out of 180) for about the "middle" of the image. Can you help me with getting the data from sweep times to solvePnP?