OpenCV Q&A Forum

Using cv::solvePnP on Lighthouse Data
http://answers.opencv.org/question/129892/using-cvsolvepnp-on-lighthouse-data/

Hello,
I've got a project going where I try to get the pose of a 3D tracker that uses the Lighthouse base stations from Valve.
The base stations sweep laser planes across the tracking volume, and my tracker records the timings at which a laser plane hits one of its IR sensors. These timings can then be converted into angles, since the laser planes rotate at exactly 3600 RPM.
Since I know exactly where the sensors are placed on the tracker, I should be able to get the pose using the `cv::solvePnP` function.
But I can't figure out what kind of camera matrix and distortion coefficients I should use.
Since a base station has neither a lens nor a 2D image sensor, I can't think of a way to calculate the focal length needed for the camera matrix.
First I tried the `imagewidth/2 * cot(fov/2)` formula, assuming an "image width" of 120, since that is the domain of my readings; this gives a focal length of 34.641 px. But the results were completely off.
I then tried to calculate a focal length for one given scenario (the tracker 1 m in front of the base station), which gave me a focal length of 56.62 px. If I place the tracker about 1 meter in front of a base station the results are plausible, but if I move away from that "sweet spot" the results are again completely off.
But since I have no lens, there should be no distortion. Or am I wrong about that?
If anyone could give me a hint I would be very grateful.
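One way to sidestep the camera-matrix problem entirely (a sketch, not a definitive answer; the sweep convention and all sensor angles below are assumptions made up for illustration): model the base station as an ideal pinhole with focal length 1 and convert each pair of sweep angles into a point on a virtual image plane via the tangent. Then the pose solver never needs a focal length in pixels.

```python
import numpy as np

# Hypothetical sweep timings already converted to angles, in degrees.
# Assumption: each rotor sweeps 0..180 degrees across the volume, so a
# sensor hit exactly on the base station's optical axis reads 90.
sweep_angles_deg = np.array([
    [92.0, 88.5],   # (horizontal, vertical) angle pair for sensor 0
    [95.3, 91.2],   # sensor 1
    [87.1, 89.9],   # sensor 2
    [90.0, 90.0],   # sensor 3, dead centre
])

# Treat the base station as an ideal pinhole with focal length 1: a ray
# at angle theta off the optical axis lands at tan(theta) on a virtual
# image plane at unit distance.
offsets = np.deg2rad(sweep_angles_deg - 90.0)
image_points = np.tan(offsets)  # unitless "normalized image coordinates"

# These points can then go straight into cv2.solvePnP together with the
# known 3D sensor positions, cameraMatrix = np.eye(3) and
# distCoeffs = None (no lens, so no distortion); no focal length in
# pixels is ever needed, and there is no distance "sweet spot".
print(image_points)
```

Because the mapping is exact trigonometry rather than a pixel-grid approximation, it holds at any distance, which would explain why a focal length fitted at 1 m breaks down elsewhere.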
RupertVanDaCow, Thu, 23 Feb 2017 16:17:19 -0600

What does the projection matrix provided by the calibration represent?
http://answers.opencv.org/question/26596/what-does-projection-matrix-provided-by-the-calibration-represent/

I'm using a tool from ROS/OpenCV to perform the [camera calibration](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration). The procedure ends up providing a camera matrix, distortion parameters, a rectification matrix, and a projection matrix.
As far as I know, the projection matrix is the intrinsic parameter matrix of the camera multiplied by the extrinsic parameter matrix of the camera.
The extrinsic parameter matrix itself provides the roto-translation of the camera frame with respect to the world frame.
If these assumptions are correct... how is the projection matrix computed by OpenCV? I'm not defining any world frame!
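A numpy sketch of why no world frame is needed in the monocular case, using the projection matrix from the calibration output below (the 3D point is illustrative):

```python
import numpy as np

# Projection matrix reported by the calibration (monocular case).
P = np.array([
    [297.051453, 0.0,        387.628900, 0.0],
    [0.0,        369.280731, 227.051305, 0.0],
    [0.0,        0.0,        1.0,        0.0],
])

# For a single camera, P = [K' | 0]: the right-hand column is zero, so
# no roto-translation to a world frame is baked in. The "extrinsics"
# are the identity, and K' is the effective intrinsic matrix of the
# rectified (undistorted) image. Only stereo pairs get a non-zero
# fourth column.
K_new = P[:, :3]
assert np.allclose(P[:, 3], 0.0)

# Projecting a point expressed in the CAMERA frame, the only frame in
# play here. Coordinates are illustrative, in metres.
X = np.array([0.1, -0.05, 2.0])
u, v, w = K_new @ X
print(u / w, v / w)  # pixel coordinates in the rectified image
```

In other words, for this monocular tool "world frame" simply means the camera frame, which is why you never had to define one.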
camera matrix:

```
414.287922   0.000000 382.549277
  0.000000 414.306025 230.875006
  0.000000   0.000000   1.000000
```

distortion:

```
-0.278237 0.063338 -0.001382 0.000732 0.000000
```

rectification:

```
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
```

projection:

```
297.051453   0.000000 387.628900 0.000000
  0.000000 369.280731 227.051305 0.000000
  0.000000   0.000000   1.000000 0.000000
```

matteo, Thu, 16 Jan 2014 10:44:50 -0600

The coordinate system of the pinhole camera model
http://answers.opencv.org/question/31470/the-coordinate-system-of-pinhole-camera-model/

Recently I have been studying the pinhole camera model for several days, but I was confused by the differences between the model provided by OpenCV and the one in "Multiple View Geometry in Computer Vision", a well-known textbook.
I know that the following figure shows a simplified model which switches the positions of the image plane and the camera frame, basically for better illustration and understanding. Taking the principal point (u0, v0) into consideration, the relation between the two frames is
x = f(X/Z) + u0 and
y = f(Y/Z) + v0.
![image description](/upfiles/1397050375379081.png)
However, I was really confused, because normally the image coordinate system has the form of the fourth-quadrant coordinate system shown in the following figure.
Can I directly substitute the (x, y) of the following definition into the above "equivalent" pinhole model? That model is not really persuasive to me.
![image description](/upfiles/13970504447802913.gif)
Besides, if an object is in the (+X, +Y) quadrant of the camera coordinate system (with Z > f, of course), then in the equivalent model it should appear on the right half of the image plane. However, in an image taken by a real camera, such an object is supposed to be located on the left half. Therefore, this model does not seem reasonable to me.
Finally, I tried to derive the projection based on the original model, shown in the following figure.
![image description](/upfiles/13970504813232063.png)
The result is
x1 = -f(X/Z) and
y1 = -f(Y/Z). Then I tried to find the relation between the (x2, y2) coordinate system and the camera coordinate system. The result is
x2 = -f(X/Z) + u0 and
y2 = -f(Y/Z) + v0.
Between the (x3, y3) coordinate system and the camera coordinate system, the result is
x3 = -f(X/Z) + u0 and
y3 = f(Y/Z) + v0.
No matter which coordinate system I tried, none of them has the form
x = f(X/Z) + u0 and
y = f(Y/Z) + v0, which is what some CV textbooks provide.
Besides, the projection results in the (x2, y2) or (x3, y3) coordinate systems are also not reasonable, for the same reason: an object in the (+X, +Y, +Z) region of the camera coordinate system should "appear" on the left half of the image taken by a camera.
Could anyone point out what I have misunderstood? I will try the derivation a few more times and post the answer once someone helps me figure this out.
Thank you in advance!!
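The sign puzzle above can be checked numerically. A small sketch (all intrinsics are made-up values, not from any real camera) of how the textbook form x = f(X/Z) + u0 arises: it uses the virtual image plane in front of the pinhole, which is the physical plane rotated 180 degrees about the optical axis, together with a camera +Y axis that points downwards.

```python
# Illustrative intrinsics; not from any real camera.
f, u0, v0 = 800.0, 320.0, 240.0

# Camera-frame point: +X to the right, +Y DOWNWARDS, +Z forwards (the
# OpenCV convention). The downward +Y axis is what makes the textbook
# formula come out without a sign flip.
X, Y, Z = 0.2, 0.1, 2.0

# Physical image plane behind the pinhole at z = -f: the image is
# inverted, hence the minus signs (this is the x1 = -f X/Z case above).
x_phys = -f * X / Z
y_phys = -f * Y / Z

# Equivalent virtual plane at z = +f: rotating the physical plane by
# 180 degrees about the optical axis flips both signs, which yields the
# textbook form x = f X/Z + u0, y = f Y/Z + v0.
x_img = f * X / Z + u0
y_img = f * Y / Z + v0

assert x_img == -x_phys + u0 and y_img == -y_phys + v0
print(x_img, y_img)  # right of and below the principal point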
AlexAlexofNTUWed, 09 Apr 2014 08:37:31 -0500http://answers.opencv.org/question/31470/checkerboard depth location and parameter uncertaintyhttp://answers.opencv.org/question/27648/checkerboard-depth-location-and-parameter-uncertainty/When I use Camera Calibration Toolbox for Matlab, I found checkerboard placement has some impact on the variance of the measured parameters. Specifically, if the checkerboard is placed relatively far away from the camera, the uncertainty of both focal length and principal axis increase. The placement is far away enough that the checkerboard only occupies maybe a quarter of the image.
My questions are how the toolbox compute the variances, why placing the checkerboard far away can increase the variances.
Also, even the checkerboard is placed relatively close to the camera, there is still depth variation due to the orientation of the plane and some small travel of the plane in depth direction. If the focus does not change, I would assume that the calibration works for all depth besides the calibration depth, since the physical property of the camera has not changed at all. Is that a safe assumption?
Thank you in advance.small_potatoMon, 03 Feb 2014 17:15:34 -0600http://answers.opencv.org/question/27648/Will this work... Camera calibration without human interventionhttp://answers.opencv.org/question/19462/will-this-work-camera-calibration-without-human-intervention/Instead of having a physical checkerboard.
If I was to create a small clear plastic grill with a bunch of holes filled with black plastic, stick a really bright LED behind to cast shadows/circles on a wall, can these sharp cast shadows be used as the calibration pattern? hbtSun, 25 Aug 2013 23:50:12 -0500http://answers.opencv.org/question/19462/