OpenCV Q&A Forum

Estimation of ball position (3d reconstruction)
http://answers.opencv.org/question/157326/estimation-of-ball-position-3d-reconstruction/

I have a small white ball in the camera view. The ball is in front of the camera but higher than the camera (Yball > Ycam). The view is corrected with undistort(). I am trying to calculate the X and Y position of the ball relative to the camera position. What I know:<br>
- the exact distance from the optical center of the camera to the ball,<br>
- the shift dX,dY of the ball in pixels from the center of the camera view,<br>
- the focal lengths in pixels (fx and fy from the camera matrix determined by camera calibration).<br>
<br>
I try to calculate the physical X and Y position relative to the camera using dX, dY, fx, fy and the distance. If the ball is near the center of the image, I can accurately calculate the position using fx and fy. But I noticed that if the ball is off to the side, near the edge of the view (with the same physical Y position and the same distance from the camera as before), then the ball appears higher in the image. Here's an example:
1) Ball X coordinate at the center<br>
<a href='http://wstaw.org/w/4vQf/'><img src='http://wstaw.org/m/2017/06/07/point_x_center_jpg_300x300_q85.jpg'></a>
<a href='http://wstaw.org/w/4vQk/'><img src='http://wstaw.org/m/2017/06/07/point_x_center_top_1_jpg_300x300_q85.jpg'></a><br>
<br>
2) Ball X coordinate large (near the right side of the image)<br>
<a href='http://wstaw.org/w/4vQh/'><img src='http://wstaw.org/m/2017/06/07/point_x_side_jpg_300x300_q85.jpg'></a>
<a href='http://wstaw.org/w/4vQl/'><img src='http://wstaw.org/m/2017/06/07/point_x_side_top_1_jpg_300x300_q85.jpg'></a><br>
<br>
In the second example, the 'top' coordinate changed from 433px to 396px, but the ball is at the same physical Y as in the previous example and at the same distance from the optical center. So if I use fy (focal length y) to calculate the Y position, my estimated position comes out different.<br>
Could you please help me: what am I doing wrong? How should I calculate the position of the ball, and which parameter did I not take into account?
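For what it's worth, the usual explanation of this effect is that the pinhole model relates pixel offsets to X/Z and Y/Z (depth along the optical axis), not to the straight-line range to the ball. If the known "distance" is the Euclidean range from the optical center, a formula like Y = dY*distance/fy only holds near the image center, where range ≈ Z. A minimal numpy sketch of the range-based back-projection; the intrinsics, range, and test pixel are made-up values, not taken from the question:

```python
import numpy as np

# Made-up intrinsics and range; substitute your own calibration values.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
distance = 2.0      # Euclidean range from the optical center to the ball, in m

def ball_position(u, v):
    """Back-project an undistorted pixel (u, v) to 3D camera coordinates,
    given the Euclidean range from the optical center to the ball."""
    # Direction of the ray through the pixel, in normalized camera coordinates.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Scale the *unit* ray by the range. Using dY*distance/fy instead treats
    # the range as the depth Z, which only holds near the image center;
    # that is exactly the discrepancy described above.
    return distance * ray / np.linalg.norm(ray)

X, Y, Z = ball_position(500.0, 200.0)   # made-up pixel away from the center
```

At the principal point the ray is (0, 0, 1) and the result reduces to (0, 0, distance); everywhere else the returned point lies at exactly the given range, regardless of how far off-axis the pixel is.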
Marcin, Wed, 07 Jun 2017 01:56:07 -0500
http://answers.opencv.org/question/157326/

Is undistortPoints so noisy or is calibration the problem?
http://answers.opencv.org/question/53682/is-undistortpoints-so-noisy-or-is-calibration-the-problem/

Hi,
I am trying to do a sparse stereo reconstruction.
I am currently trying out a "synthetic" experiment to find out how much noise I get in the 3D reconstruction given "perfect" landmark detection data.
The steps:
1. I take intrinsics I got from 2 real cameras. (The cameras have a resolution of 1600 x 1200)
2. I place the cameras "virtually" 40 cm apart. Both look in the same direction (identity rotation matrix).
3. I define a couple of 3d points around 1.30m in front of the cameras.
4. I use projectPoints to get 2d image points for each camera.
5. I use undistortPoints on the 2d image points. (I pass the intrinsics as the last argument of the function to get the 2d points back in "pixel coordinates".)
6. I use the undistorted 2d points to triangulate 3d points.
Then I measure the error between the original 3d points and the triangulated ones (L2 distance).
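For reference, the whole synthetic pipeline above can be sketched with plain numpy (no OpenCV); the intrinsics below are made up, the two-term radial distortion is of roughly the quoted magnitude, and with identity rotations triangulation reduces to depth-from-disparity:

```python
import numpy as np

# Made-up intrinsics (the question does not give the full camera matrix);
# two radial terms k1, k2 of the OpenCV distortion model.
fx = fy = 1000.0
cx, cy = 800.0, 600.0
k1, k2 = -0.58, 0.40
baseline = 400.0                 # mm; cameras 40 cm apart, identity rotation

def distort(xn, yn):
    r2 = xn*xn + yn*yn
    f = 1 + k1*r2 + k2*r2*r2
    return xn*f, yn*f

def project(P, tx):
    # Step 4: pinhole projection for a camera shifted tx along X.
    x, y, z = P[0] - tx, P[1], P[2]
    xd, yd = distort(x/z, y/z)
    return fx*xd + cx, fy*yd + cy

def undistort(u, v, iters=20):
    # Step 5: invert the distortion model by fixed-point iteration
    # (the same scheme undistortPoints uses internally).
    xd, yd = (u - cx)/fx, (v - cy)/fy
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn*xn + yn*yn
        f = 1 + k1*r2 + k2*r2*r2
        xn, yn = xd/f, yd/f
    return xn, yn

def triangulate(uvL, uvR):
    # Step 6: with identity rotations, depth follows from the disparity.
    xl, yl = undistort(*uvL)
    xr, _ = undistort(*uvR)
    Z = baseline / (xl - xr)
    return np.array([xl*Z, yl*Z, Z])

P = np.array([100.0, 50.0, 1300.0])   # step 3: a point ~1.3 m in front
Pt = triangulate(project(P, 0.0), project(P, baseline))
err = np.linalg.norm(P - Pt)          # L2 error, in mm
```

With enough undistortion iterations and a point near the image center, the round trip is essentially exact; the interesting question is what happens when the iteration budget is small and the point is near the frame.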
Results (in mm):
Given the following distortion coefficients of the two cameras:
distL = [-0.5844; 0.401; 0; 0; -0.2274]
distR = [-0.6186; 0.2751; 0; 0; 0.03579]
I get **error of 16 to 22 mm in the 3d reconstruction** of points close to the frame of one of the camera images.
I get 3 mm on points whose image-point counterparts lie a little closer to the center.
When I make **distortion coefficients** all **zero** the reconstruction is more or less **perfect**.
This leads me to the assumption that undistortPoints is very unstable and a major source of noise.
If you get that much noise after a single reconstruction step, I cannot imagine how SfM can work.
Especially if you consider that in a real system, landmark detection will also be a major source of error.
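One plausible culprit: undistortPoints inverts the distortion polynomial by fixed-point iteration, and older OpenCV versions hard-coded a small iteration count (5, with no convergence check). With a k1 around -0.58 that iteration converges slowly for points near the frame, which would produce exactly the center-good/border-bad pattern reported above. A numpy sketch with made-up intrinsics of similar magnitude:

```python
import numpy as np

fx = fy = 1000.0
k1, k2 = -0.58, 0.40              # same order of magnitude as distL above

def distort(xn, yn):
    # Forward two-term radial model in normalized camera coordinates.
    r2 = xn*xn + yn*yn
    f = 1 + k1*r2 + k2*r2*r2
    return xn*f, yn*f

def undistort_fixed(xd, yd, iters):
    # Fixed-point inversion with a hard iteration budget and no
    # convergence test, as in old cvUndistortPoints.
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn*xn + yn*yn
        f = 1 + k1*r2 + k2*r2*r2
        xn, yn = xd/f, yd/f
    return xn, yn

def roundtrip_px(xn, yn, iters=5):
    # Distort, invert with the limited budget, report the residual in pixels.
    xu, yu = undistort_fixed(*distort(xn, yn), iters)
    return fx * np.hypot(xu - xn, yu - yn)

center_err = roundtrip_px(0.05, 0.05)   # near the principal point
border_err = roundtrip_px(0.80, 0.60)   # toward the corner of a 1600x1200 frame
```

Near the center the residual after 5 iterations is negligible, while near the frame it is on the order of pixels; a few pixels of undistortion error at ~1.3 m depth is consistent with the 16-22 mm reconstruction error quoted above.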
Furthermore, there are many published results on stereo reconstruction that report nearly micrometer precision.
Are the distortion coefficients shown above somehow degenerate?
The calibration values came from a real calibration procedure. I got an RMS of 2.1 pixels (which is probably bad). But I thought that if I use these values in a "virtual environment" it should not matter what values they are, shouldn't it?
![image description](/upfiles/14220122233019551.png)
In the picture you can see the view of the "right" camera (where the points lie more towards the frame of the camera)
Blue points are projected perfect 3d coordinates with lens distortion.
Green points are the projections of the triangulated points with lens distortion.
Thick red are projected perfect 3d coordinates WITHOUT lens distortion (a ground truth for undistort).
Thin red are projected perfect 3d coordinates after undistortion.
wolfomaniac, Fri, 23 Jan 2015 04:47:13 -0600
http://answers.opencv.org/question/53682/