OpenCV Q&A Forum (Copyright OpenCV foundation, 2012-2018)

3D coordinates of a colored tracked object with stereo vision
http://answers.opencv.org/question/57859/3d-coordinates-of-a-colored-tracked-object-with-stereo-vision/

Hello, there are a lot of topics about 2D to 3D, but I couldn't find my problem in those.
So I used stereo camera calibration to find the parameters of my cameras (I followed this blog: http://blog.martinperis.com/2011/01/opencv-stereo-camera-calibration.html). Then I use this relation to deduce the 3D coordinates of my object:
import numpy as np  # `dot` here is numpy.dot

vect = np.array([[x], [y], [dx], [1]])
result = np.dot(self.Q, vect)
print("X=", result[0] / result[3], " Y=", result[1] / result[3], " Z=", result[2] / result[3])
where x and y are the coordinates on the image, dx is the difference between the x coordinates seen by the two cameras (the disparity), and Q is the OpenCV reprojection matrix.
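As a side note, with a Q matrix of the form cv2.stereoRectify produces, this multiplication reduces to the classic triangulation formula Z = f * B / dx, so depth varies as 1/disparity (and dx = 0 divides by zero). A sketch with made-up calibration values (f, B, cx0, cy0 are illustrative, not from this setup):

```python
import numpy as np

# Illustrative calibration values (not from this question): focal length f in
# pixels, baseline B in metres, principal point (cx0, cy0) in pixels.
f, B = 700.0, 0.12
cx0, cy0 = 320.0, 240.0

# Ideal reprojection matrix of the form produced by cv2.stereoRectify
# (sign conventions for the baseline term vary, which can flip the sign of Z):
Q = np.array([[1.0, 0.0, 0.0,    -cx0],
              [0.0, 1.0, 0.0,    -cy0],
              [0.0, 0.0, 0.0,       f],
              [0.0, 0.0, 1.0 / B, 0.0]])

x, y, dx = 400.0, 260.0, 35.0               # pixel position and disparity
X, Y, Z, W = Q @ np.array([x, y, dx, 1.0])  # homogeneous multiply, W = dx / B
X, Y, Z = X / W, Y / W, Z / W

# Depth is hyperbolic in disparity: halving dx doubles Z, which is why
# fitting a single linear coefficient cannot work.
assert np.isclose(Z, f * B / dx)
```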
What I get:

X= [-81.16746711]  Y= [87.00418513]  Z= [-826.69658138]

I don't understand how to use these results. When the object moves, the coordinates follow its increases and decreases. At the moment I am just focusing on trying to get Z right.

How can I find a relation between my results and the coordinates of the object in the world?
EDIT: The relation between disparity and real depth is not linear, which explains why just fitting a single coefficient didn't solve my problem. Is it possible to calculate the absolute distance between the camera and an object? Or maybe I need to use a landmark near my object to deduce the relative distance between the object and the landmark? (Thu, 19 Mar 2015)
Comment by scarlett (Tue, 24 Mar 2015):
Thank you for your message. reprojectImageTo3D is useful, but it is more understandable to me to write the calculation out explicitly. Because of that I get an error, though.
Comment by Der Luftmensch (Fri, 20 Mar 2015):
You may need to divide your disparity result by 16; look at the documentation for StereoBM and StereoSGBM. Another thing: you will need to adjust your calibration matrices if they were calibrated at a different image resolution than the one at which you are performing stereo correspondence. Also, there is a built-in function that does this for you: reprojectImageTo3D.
Answer by scarlett (Tue, 24 Mar 2015):
OK, so it seems a division by zero was hidden somewhere. I changed my calculation to:

dx = float(cx - self.last_left_image_pos[0])
W = abs(dx * self.Q[3][2]) + self.Q[3][3]
X = (cx + self.Q[0][3]) / (W * self.ratio)
Y = (cy + self.Q[1][3]) / (W * self.ratio)
Z = self.Q[2][3] / (W * self.ratio)

with self.ratio = 105.32.

Then I get some accurate results. Don't ask me about self.ratio: I think it is linked to the resolution, because I don't need to change it even when my calibration parameters change, but I am still trying to figure it out.
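As a sanity check, the explicit per-row form in this answer (leaving out the unexplained self.ratio) agrees with the full homogeneous Q multiplication when Q has the ideal stereoRectify structure. A quick sketch with made-up calibration numbers (f, B, cx0, cy0 are hypothetical):

```python
import numpy as np

# Hypothetical calibration values, not taken from the thread.
f, B = 700.0, 0.12
cx0, cy0 = 320.0, 240.0
Q = np.array([[1.0, 0.0, 0.0,    -cx0],
              [0.0, 1.0, 0.0,    -cy0],
              [0.0, 0.0, 0.0,       f],
              [0.0, 0.0, 1.0 / B, 0.0]])

cx, cy, dx = 400.0, 260.0, 35.0

# The answer's explicit per-row form (self.ratio omitted, abs() kept as written):
W = abs(dx * Q[3][2]) + Q[3][3]
X = (cx + Q[0][3]) / W
Y = (cy + Q[1][3]) / W
Z = Q[2][3] / W

# It matches the full homogeneous multiply, row for row:
Xh, Yh, Zh, Wh = Q @ np.array([cx, cy, dx, 1.0])
assert np.allclose([X, Y, Z], [Xh / Wh, Yh / Wh, Zh / Wh])
```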