# 3D coordinates of a colored tracked object with stereo vision

Hello, there are a lot of topics about going from 2D to 3D, but I couldn't find my problem among them.

So I used stereo camera calibration to find the parameters of my cameras (I followed this blog: [http://blog.martinperis.com/2011/01/o...]). Then I use this relation to deduce the 3D coordinates of my object:

```
# self.Q is the 4x4 reprojection matrix from the stereo calibration
vect = [[x], [y], [dx], [1]]
result = dot(self.Q, vect)  # numpy.dot
print("X =", result[0] / result[3], "Y =", result[1] / result[3], "Z =", result[2] / result[3])
```

where x and y are the coordinates of the object in the image, dx is the disparity (the difference between the object's x coordinate in the two cameras), and Q is the OpenCV reprojection matrix.
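For reference, here is a minimal self-contained sketch of that reprojection. The focal length, principal point, and baseline below are made-up placeholder values, not my calibration; Q follows the layout documented for cv2.stereoRectify (with the same principal point assumed in both rectified images):

```python
import numpy as np

# Hypothetical calibration values: focal length f (pixels), principal
# point (cx, cy), and translation Tx between the cameras (negative for
# a standard left/right rig; its units set the units of X, Y, Z).
f, cx, cy, Tx = 700.0, 320.0, 240.0, -0.12

# Layout of Q as produced by cv2.stereoRectify:
Q = np.array([[1.0, 0.0, 0.0, -cx],
              [0.0, 1.0, 0.0, -cy],
              [0.0, 0.0, 0.0,   f],
              [0.0, 0.0, -1.0 / Tx, 0.0]])

x, y, dx = 350.0, 260.0, 40.0        # pixel position and disparity
vect = np.array([x, y, dx, 1.0])
result = Q.dot(vect)
X, Y, Z = result[:3] / result[3]     # homogeneous division, as in the post
print(X, Y, Z)                       # Z comes out as f * |Tx| / dx
```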

What I get: X = [-81.16746711], Y = [87.00418513], Z = [-826.69658138]. I don't understand how to use these results. When I move the object, the coordinates do follow its increases and decreases.

At the moment I am just focusing on getting Z right.

How can I find the relation between my results and the coordinates of the object in the world?

EDIT: The relation between disparity and real depth is not linear, which explains why just applying a coefficient didn't solve my problem. Is it possible to calculate the absolute distance between the camera and an object? Or do I need to use a landmark near my object and deduce the relative distance between the object and the landmark?
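The inverse relation is easy to see from the pinhole stereo model, Z = f * B / d, so no single multiplicative coefficient can map disparity to depth. A tiny sketch with made-up focal length and baseline values:

```python
# Depth from disparity under the pinhole stereo model: Z = f * B / d,
# with f the focal length in pixels and B the baseline. The numbers
# below are made up, purely to show the inverse (non-linear) mapping.
f_px, B = 700.0, 0.12
depths = {d: f_px * B / d for d in (10.0, 20.0, 40.0, 80.0)}
for d, Z in depths.items():
    print("disparity", d, "-> depth", Z)
# Doubling the disparity halves the depth: the relation is 1/d, not linear.
```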

You may need to divide your disparity result by 16; look at the documentation for StereoBM and StereoSGBM. Another thing: you will need to adjust your calibration matrices if they were calibrated at a different image resolution than the one at which you are performing stereo correspondence. Also, there is a built-in function that does all of this for you: reprojectImageTo3D.

Thank you for your message. reprojectImageTo3D is useful, but it is more understandable to me to write the calculation out explicitly... Because of that, though, I get an error.