OpenCV Q&A Forum (http://answers.opencv.org/questions/). Copyright OpenCV foundation, 2012-2018.

**Finding depth of an object using 2 cameras** (http://answers.opencv.org/question/228501/finding-depth-of-an-object-using-2-cameras/)

Hi,
I am new to stereo imaging and am learning to find the depth of an object.
I have 2 cameras, mounted separately, both looking at a cardboard surface.
**Given** :
- 8 points marked on the cardboard surface.
- Captured one image from each camera.
- Identified (x,y) coordinates of all 8 points in both the images.
**Problem** : Find the depth of each point i.e. distance of each point from the cameras.
*I tried solving it using the following approach, but I got a weird result*:
1. Noted down 8 common points from both the left and right images captured from 2 different cameras.
2. Determined Fundamental Matrix between both the images using 8 points.
3. The fundamental matrix F relates corresponding points on the image planes of the two cameras, in image coordinates (pixels).
- Opencv Function : cv::findFundamentalMat()
- Input to the function : 8 common points from both images
- Output = Fundamental matrix of 3x3
4. Performed stereo rectification
- It reprojects the image planes of our two cameras so that they reside in the exact same plane, with image rows perfectly aligned into a frontal parallel configuration.
- Opencv Function : cv::stereoRectifyUncalibrated()
- Input to the function : 8 common points from the images and the fundamental matrix
- Output = Rectification matrices H1 and H2 for both the images.
5. Determined depth of a point
- Trying to find the depth of a point which is approximately 39 feet (468 inches) away from the camera.
- The formula for depth is Z = (f * T) / (xl – xr)
- Z is depth, f is the focal length, T is the distance between the cameras, and xl and xr are the x coordinates of the point in the left and right image respectively.
- Following are the values taken for the variables :
- f = From the camera intrinsics I determined, I got fx and fy, so I computed f = sqrt(fx*fx + fy*fy)
- T = The 2 cameras are kept 36 feet apart, i.e. 432 inches, so I set T = 432
- xl and xr are the x values of the point from the left and right images, perspective-transformed using the rectification matrices H1 and H2.
- But I got a very weird result.
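As a sanity check on the step-5 formula, here is a minimal Python sketch with hypothetical numbers (the 1000-pixel focal length is an assumption, not a value from the question). Note that in the standard formula, f should be the focal length in pixels along x (fx from the intrinsic matrix), since the disparity xl – xr is measured in pixels along x, and T and Z must share the same length units:

```python
# Sanity check for Z = (f * T) / (xl - xr).
# f is the focal length in *pixels* (use fx, since disparity is along x);
# T and Z are both in inches here.

def depth_from_disparity(f_px, T, disparity):
    """Depth of a point from focal length (px), baseline, and disparity (px)."""
    return (f_px * T) / disparity

def expected_disparity(f_px, T, Z):
    """Disparity we should observe for a point at depth Z."""
    return (f_px * T) / Z

f_px = 1000.0   # hypothetical fx in pixels (an assumption for illustration)
T = 432.0       # baseline: 36 ft = 432 in
Z = 468.0       # target depth: 39 ft = 468 in

d = expected_disparity(f_px, T, Z)
print(round(d, 1))                              # -> 923.1 pixels of disparity
print(round(depth_from_disparity(f_px, T, d), 1))  # round trip -> 468.0
```

Also worth noting: cv::stereoRectifyUncalibrated() rectifies only up to a projective ambiguity, so the metric formula above generally assumes a calibrated setup (cv::stereoCalibrate() / cv::stereoRectify()) rather than rectification from the fundamental matrix alone.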
You can look at the screenshot of my experimentation and result.
![image description](/upfiles/158601511677758.png)
So could someone tell me whether the approach I am taking is right or wrong?

*Asked by cvsolver, Sat, 04 Apr 2020 (http://answers.opencv.org/question/228501/)*

**Running the stereo_match sample** (http://answers.opencv.org/question/119922/runing-stereo-match-simple/)

Hello, I have a problem running the OpenCV stereo_match sample. I typed the following command in the terminal:
./stereo_match 0000L.png 0000R.png -i intrinsics.yml -e extrinsics.yml -p cloud.asc
and here is the output:
Command-line parameter error: The max disparity (--maxdisparity=<...>) must be a positive integer divisible by 16
Demo stereo matching converting L and R images into disparity and point clouds
Usage: stereo_match <left_image> <right_image> [--algorithm=bm|sgbm|hh|sgbm3way] [--blocksize=<block_size>]
[--max-disparity=<max_disparity>] [--scale=scale_factor>] [-i=<intrinsic_filename>] [-e=<extrinsic_filename>]
[--no-display] [-o=<disparity_image>] [-p=<point_cloud_file>]
How do I run it with the correct arguments? Also, how do I choose the values for --blocksize, --max-disparity, and --scale?
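For reference, the first line of that output is the actual complaint: --max-disparity must be a positive integer divisible by 16. A minimal Python sketch of one common heuristic for choosing it (roughly image width divided by 8, rounded up to a multiple of 16; the widths below are hypothetical examples):

```python
def num_disparities(image_width, fraction=8):
    """Heuristic max-disparity: about width/fraction, rounded up to a multiple of 16."""
    return ((image_width // fraction) + 15) & -16

# Hypothetical image widths; the OpenCV sample derives this from the input image.
print(num_disparities(640))    # -> 80
print(num_disparities(1280))   # -> 160
```

With a value like that, a command along the lines of `./stereo_match 0000L.png 0000R.png --algorithm=sgbm --max-disparity=80 --blocksize=5 -i=intrinsics.yml -e=extrinsics.yml -p=cloud.asc` should pass the parameter check. Blocksize is typically a small odd number (roughly 3 to 11 for SGBM, larger for BM), and --scale can usually stay at its default unless you want to resize the input; exact defaults may differ between OpenCV versions.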
Thank you.

*Asked by Lafi, Wed, 21 Dec 2016 (http://answers.opencv.org/question/119922/)*

**Would shot detection and ball tracking be easier with a stereo camera versus a solo camera?** (http://answers.opencv.org/question/103916/would-shot-detection-and-ball-tracking-be-easier-with-stereo-camera-versus-solo-camera/)

Assume two camera configurations on a basketball court:
1) Solo: One smartphone on a tripod.
2) Stereo: Using a stereo camera like the Bumblebee (or a custom rig) with two cameras, the sensors/lenses 12 inches apart.
The computer vision goals are: (1) track the basketball; (2) track a player; (3) detect made shots; (4) detect shot distance; and (5) detect shot angle.
Are any of these goals easier with a stereo camera (configuration #2), or are they as easily achievable with a solo camera (configuration #1)?
EDIT: This paper (http://www.ai.sri.com/~beymer/vsam/iccv99.pdf) suggests a stereo camera would offer advantages over a solo camera, but the paper is also quite old (1999). Is player and shot tracking better with a stereo camera, or are solo cameras equally effective?
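One way to reason about the stereo option quantitatively: depth uncertainty for a stereo rig grows roughly quadratically with distance, dZ ≈ (Z² / (f·B))·dd, where B is the baseline, f the focal length in pixels, and dd the disparity error. A rough Python sketch with hypothetical numbers (the 1500-pixel focal length and 1-pixel disparity error are assumptions, not measured values):

```python
def depth_error(Z, f_px, baseline, disparity_err=1.0):
    """Approximate depth uncertainty dZ ~ Z^2 / (f * B) * dd.
    Z and baseline must share the same length units (feet here)."""
    return (Z * Z) / (f_px * baseline) * disparity_err

f_px = 1500.0   # hypothetical focal length in pixels
B = 1.0         # 12-inch baseline, expressed in feet

for Z in (10.0, 25.0, 40.0):          # distances in feet
    print(Z, round(depth_error(Z, f_px, B), 2))
# -> 10.0 0.07
# -> 25.0 0.42
# -> 40.0 1.07
```

Under these assumptions, a 12-inch baseline gives on the order of a foot of depth uncertainty per pixel of disparity noise at 40 ft, so stereo mainly helps with the metric goals (shot distance and angle), while image-plane goals like tracking the ball or a player are achievable with a solo camera.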
*Asked by Crashalot, Fri, 07 Oct 2016 (http://answers.opencv.org/question/103916/)*