OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Tue, 04 Apr 2017 03:30:42 -0500

Using cv::solvePnP on Lighthouse Data
http://answers.opencv.org/question/129892/using-cvsolvepnp-on-lighthouse-data/

Hello,
I've got a project going, where I try to get the pose of a 3d-tracker, which utilizes the lighthouse basestations from Valve.
The basestations provide laser-sweeps across the tracking-volume and my tracker records the timings when a laser-plane hits one of its ir-sensors. These timings can then be converted into degrees, based on the fact that the laser-planes rotate at exactly 3600 RPM.
Since I know exactly where my sensors are placed on the tracker I should be able to get the pose using the `cv::solvePnP` function.
But I can't figure out what kind of camera-matrix and distortion coefficients I should use.
Since a basestation has neither a lens nor a 2d-image-sensor I can't think of a way to calculate the focal-length needed for the camera-matrix.
First I tried the `imagewidth/2 * cot(fov/2)` formula, assuming an "image width" of 120, since this is the "domain" of my readings, which leads to a focal length of 34.641px. But the results were completely off.
I then tried to calculate a focal length for a given scenario (tracker 1m in front of the basestation), which gave me a focal length of 56.62px. If I place my tracker about 1 meter in front of a basestation the results are plausible, but if I move away from that "sweet spot" the results are again completely off.
But since I have no lens there should be no distortion, or am I wrong about that?
If anyone could give me a hint I would be very grateful.
Thu, 23 Feb 2017 16:17:19 -0600

Comment by kyranf (Thu, 30 Mar 2017 15:16:46 -0500):
@RupertVanDaCow, I am working on almost exactly the same thing as you. I'm getting crazy Z values that are way off in distance, but the general X/Y positions appear good to me. Rotations I haven't even looked at yet, but they look crazy too. How are you converting the laser sweep time to an "image point" for feeding into solvePnP? I'm basically saying the "camera" has a 0-180 degree field of view, and tick times from sync pulse to laser are some small value, like 90.0868 (out of 180) for about the "middle" of the image. Can you help me with getting the data from sweep times to solvePnP?
Comment by Tetragramm (Thu, 23 Feb 2017 18:09:37 -0600):
If you have the direction in degrees, you're already "past" the camera matrix, as it were. I'll say more later, sorry.
Answer by Tetragramm (Thu, 23 Feb 2017 19:50:34 -0600):
Ok, so take your vector (and your two angles make a 3D line of sight, which is a vector). It's a `Mat(3,1,CV_64F)`.

Arbitrarily define your camera matrix as the identity matrix. (If you had an actual camera, you'd use its matrix here.) It's a `Mat(3,3,CV_64F)`.

Multiply your vector by the camera matrix: `LOS = camMat*LOS;`

Now divide your LOS by `LOS.at<double>(2)`.

`LOS.at<double>(0)` is now your x value, and `LOS.at<double>(1)` is your y.

You can put these into solvePnP with the identity camera matrix and I think you'll get good results.
Comment by RupertVanDaCow (Fri, 24 Feb 2017 19:33:55 -0600):
Sorry if I sound a bit repetitive, but I really can't understand what's wrong with tan...
If you think of [this image](https://upload.wikimedia.org/wikipedia/commons/4/45/Unitcircledefs.svg) as the view along the y-axis, with the z-axis pointing to the right and the x-axis pointing up, and I am interested in the x-offset for x-degrees (which is Theta in this image) at z=1, the image tells me this length is tan(Theta). I don't rotate my point, but rather slide it on the projection plane, maintaining its z=1.
And if I change my view to be oriented along the x-axis, the same scheme applies for getting the y-offset at z=1.
Comment by kyranf (Mon, 03 Apr 2017 05:34:22 -0500):
@Tetragramm, okay, so with my X and Y angles (elevation and azimuth) I do those equations, then follow your answer, and I should get reasonable values? My current method is actually working now, but it's probably overly complicated. Also, solvePnP seems to give very "noisy" results with data that appears quite stable. Is it normally quite sensitive? What are good known solutions for filtering a pose like the one that comes out of solvePnP?
Comment by kyranf (Thu, 30 Mar 2017 15:37:49 -0500):
Tetragramm (or Rupert), could you please describe to me how I come up with this "line of sight" vector? If I get sweep data from the lighthouse that represents 0-180 degrees (times, with known rotation speed), how do I make this into something usable for solvePnP as discussed in this question/answer?
Comment by Tetragramm (Thu, 30 Mar 2017 18:11:26 -0500):
Basic geometry:

    x = cos(elevation) * cos(azimuth)
    y = cos(elevation) * sin(azimuth)
    z = sin(elevation)
Comment by Tetragramm (Fri, 24 Feb 2017 20:34:52 -0600):
Oops. You're correct. I made a mistake in my scratch program I was testing with and had the wrong angles, so of course the tan didn't match.
Now remember that this only works for the identity camera matrix. You're not really using a camera model here.
Comment by Tetragramm (Fri, 24 Feb 2017 17:48:39 -0600):
Nope. You're forgetting that the lengths of the vectors are affected by the rotations. Basically, the length of your hypotenuse changes with both x and y, as does the "adjacent" you're using for tan.
Look [HERE](http://docs.opencv.org/master/d9/d0c/group__calib3d.html) for an explanation of the model, or google "pinhole camera model".
Comment by RupertVanDaCow (Fri, 24 Feb 2017 04:29:14 -0600):
Ok, but can you tell me what/where my error is in this thought:
I want the intersection of my LOS ray at z=1. I know that the LOS is the intersection of the yz-plane rotated around the y-axis by x-degrees and the xz-plane rotated around the x-axis by y-degrees.
The rotation around the y-axis should therefore not affect the y-component of my intersection point. The x-coordinate should be the distance from the yz-plane, which (correct me if I am wrong here) is tan(x-degrees) * z. Since this should be analogous for the rotation around the x-axis, it should boil down to:
The intersection point for x-degrees, y-degrees and z=1 is (tan(y-degrees), tan(x-degrees), 1).
Comment by Tetragramm (Thu, 23 Feb 2017 21:45:13 -0600):
It is redundant in this case, but I'm describing the general case if someone else comes along.
The x/y angles are not the tangent of the x and y. It's also not the tangents of the normalized unit LOS, though that's closer. Your identity camera matrix makes the focal length 1 pixel. So you take the vector intersecting the plane at 1 pixel distance (that's the 1 in the z component) and you get the location on the screen in pixel units.
Comment by RupertVanDaCow (Thu, 23 Feb 2017 21:19:23 -0600):
Ok, first of all, thanks for your help.

> Multiply your vector by the camera matrix. LOS = camMat*LOS;

But isn't this redundant if I define the camera matrix as the identity matrix?
And about that angle conversion:
If I take my x/y angles, translate them into a LOS vector, and then divide by the z-component, wouldn't that be equal to simply taking the tangent of x and y as the new x and y?
Comment by Tetragramm (Mon, 03 Apr 2017 18:20:10 -0500):
I have no idea what you're doing. Why don't you start a new question with context and code snippets and more details about what exactly isn't working?
Comment by kyranf (Tue, 04 Apr 2017 03:30:42 -0500):
Okay, well I just made a question related to the second part of my comment, [here](http://answers.opencv.org/question/137636/solvepnp-issue-with-sudden-rotation-change-with-occluded-points/).