3D coordinates of a colored tracked object with stereo vision

Hello, there are a lot of topics about 2D to 3D, but I couldn't find my problem in any of them.

So I used the stereo_camera_calibration to find the parameters of my cameras (I followed this blog: http://blog.martinperis.com/2011/01/opencv-stereo-camera-calibration.html). Then I use this relation to deduce the 3D coordinates of my object:

    import numpy as np

    # (x, y): pixel position of the object in the left image, dx: disparity between the two views
    vect = np.array([[x], [y], [dx], [1]])
    result = np.dot(self.Q, vect)
    print("X =", result[0] / result[3], "Y =", result[1] / result[3], "Z =", result[2] / result[3])

where x and y are the coordinates of the object in the image, dx is the difference between the x coordinates seen by the 2 cameras (the disparity), and Q is the reprojection matrix that the OpenCV calibration produced.
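
If it helps to cross-check, I believe cv2.perspectiveTransform does the same multiplication and homogeneous division when given the 4x4 Q matrix. Here is a minimal, self-contained sketch using my Q from the calibration quoted below; the x, y and dx values are just made-up examples:

    import cv2
    import numpy as np

    # My Q matrix from the calibration (same values as quoted below)
    Q = np.array([[1., 0., 0., -3.563561719e+02],
                  [0., 1., 0., -2.623305859e+02],
                  [0., 0., 0.,  6.474889e+02],
                  [0., 0., 4.7919830e-02, 1.2482114258e+00]])

    x, y, dx = 350.0, 260.0, 30.0  # example pixel coordinates and disparity

    # perspectiveTransform expects points shaped (N, 1, 3) and does the division by W itself
    pts = np.array([[[x, y, dx]]], dtype=np.float32)
    xyz = cv2.perspectiveTransform(pts, Q)
    print(xyz)  # -> [[[X, Y, Z]]]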

What I get: X = [-81.16746711], Y = [87.00418513], Z = [-826.69658138]. I don't understand how to use those results. When I move the object, the coordinates do follow its increases and decreases in position.

At the moment I am just focusing on trying to get the Z coordinate right.

I looked into the Q matrix that the calibration gave me, and here is what I don't understand:

Q =

    1.    0.    0.               -3.563561719e+02
    0.    1.    0.               -2.623305859e+02
    0.    0.    0.                6.474889e+02
    0.    0.    4.7919830e-02     1.2482114258e+00

1/Q[3][2] = 21 cm, which is the distance between my cameras. How can some coefficients of the matrix be in pixels while others are in cm? Isn't that a problem for the calculations? How can I find a relation between my results and the coordinates of the object in the real world?
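
For reference, my understanding from the stereoRectify documentation is that Q has the layout sketched below, which would mean the depth works out to roughly Z = f*Tx/dx: f is in pixels, Tx (the baseline) is in the calibration units (cm in my case), so the pixel units cancel against the disparity and Z comes out in cm. A sketch of that reasoning with my values (the disparity d is hypothetical):

    import numpy as np

    # Layout of Q according to the stereoRectify docs:
    #     [ 1  0   0         -cx        ]
    #     [ 0  1   0         -cy        ]
    #     [ 0  0   0          f         ]
    #     [ 0  0  -1/Tx  (cx - cx')/Tx  ]
    # cx, cy: principal point (pixels), f: focal length (pixels),
    # Tx: baseline in calibration units (cm here)
    Q = np.array([[1., 0., 0., -3.563561719e+02],
                  [0., 1., 0., -2.623305859e+02],
                  [0., 0., 0.,  6.474889e+02],
                  [0., 0., 4.7919830e-02, 1.2482114258e+00]])

    f = Q[2, 3]           # focal length, in pixels
    Tx = -1.0 / Q[3, 2]   # baseline, about -20.87 cm (sign depends on camera order)

    d = 30.0              # hypothetical disparity, in pixels
    Z = f * Tx / d        # pixels cancel, so Z is in cm
    print("f =", f, "px; Tx =", Tx, "cm; Z =", Z, "cm")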

EDIT: The relation between disparity and real depth is not linear, which explains why just fitting a single coefficient didn't solve my problem. Is it possible to calculate the absolute distance between the camera and an object? Or maybe I need to place a landmark near my object and deduce the relative distance between the object and the landmark?
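
To make "absolute distance" concrete, this is what I am trying to compute: reproject the pixel and disparity through Q, then take the Euclidean norm of the resulting point (a sketch; the x, y, dx measurement is made up):

    import numpy as np

    Q = np.array([[1., 0., 0., -3.563561719e+02],
                  [0., 1., 0., -2.623305859e+02],
                  [0., 0., 0.,  6.474889e+02],
                  [0., 0., 4.7919830e-02, 1.2482114258e+00]])

    x, y, dx = 350.0, 260.0, 30.0            # hypothetical measurement
    X, Y, Z, W = Q.dot(np.array([x, y, dx, 1.0]))
    point = np.array([X, Y, Z]) / W          # 3D point in calibration units (cm)

    distance = np.linalg.norm(point)         # camera-to-object distance in cm
    print("distance =", distance, "cm")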