Stereo Map "Zoomed In"

asked 2019-02-11 04:27:33 -0500 by WannabeEngr

updated 2019-02-11 04:42:39 -0500

I am trying to produce a depth map from two RGB cameras, using code from a GitHub repository: (c)

I successfully finished its stereo calibration with 50+ image pairs. But when I run its Main_Stereo program, it gives me a stereo map that is somewhat "zoomed in". I tried to find which part is causing it and found that the inputs to stereo.compute (GRAYR and GRAYL) are already zoomed in. I think the problem is in the stereo rectification, or maybe in the stereo calibration, and I don't know how to fix it. I want the stereo/disparity map (FILTEREDCOLORDEPTH) to have the same resolution as my video capture frames (CAMR and CAML).

This is my code for stereo mapping; I have highlighted the parts that produce the outputs:

import numpy as np
import cv2
from sklearn.preprocessing import normalize

def coords_mouse_disp(event,x,y,flags,param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        # Average the disparity over a 3x3 window around the clicked pixel
        average = 0
        for u in range(-1,2):
            for v in range(-1,2):
                average += disp[y+u,x+v]
        average = average/9
        Distance= -593.97*average**(3) + 1506.8*average**(2) - 1373.1*average + 522.06
        Distance= np.around(Distance*0.01,decimals=2)
        print('Distance: '+ str(Distance)+' m')


#Parameters for Distortion Calibration 

# Termination criteria
criteria =(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
criteria_stereo= (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Prepare object points
objp = np.zeros((9*6,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all images
objpoints= []   # 3d points in real world space
imgpointsR= []   # 2d points in image plane
imgpointsL= []

# Start calibration from the camera
print('Starting calibration for the 2 cameras... ')
# Call all saved images
for i in range(0,50):   # Put the amount of pictures you have taken for the calibration 
    t= str(i)
    ChessImaR= cv2.imread('images/chessboard-R'+t+'.png',0)    # Right side
    ChessImaL= cv2.imread('images/chessboard-L'+t+'.png',0)    # Left side
    retR, cornersR = cv2.findChessboardCorners(ChessImaR,
                                               (9,6),None)  # Define the number of chessboard corners
    retL, cornersL = cv2.findChessboardCorners(ChessImaL,
                                               (9,6),None)  # Left side
    if (retR is True) and (retL is True):
        # Refine the corner locations and store the point correspondences
        corners2R = cv2.cornerSubPix(ChessImaR, cornersR, (11,11), (-1,-1), criteria)
        corners2L = cv2.cornerSubPix(ChessImaL, cornersL, (11,11), (-1,-1), criteria)
        objpoints.append(objp)
        imgpointsR.append(corners2R)
        imgpointsL.append(corners2L)

# Determine the new values for different parameters
#   Right Side
retR, mtxR, distR, rvecsR, tvecsR = cv2.calibrateCamera(objpoints, imgpointsR,
                                                        ChessImaR.shape[::-1], None, None)
hR,wR= ChessImaR.shape[:2]
OmtxR, roiR= cv2.getOptimalNewCameraMatrix(mtxR,distR,(wR,hR),1,(wR,hR))

#   Left Side
retL, mtxL, distL, rvecsL, tvecsL = cv2.calibrateCamera(objpoints, imgpointsL,
                                                        ChessImaL.shape[::-1], None, None)
hL,wL= ChessImaL.shape[:2]
OmtxL, roiL= cv2.getOptimalNewCameraMatrix(mtxL,distL,(wL,hL),1,(wL,hL))

print('Cameras Ready to use')

#Calibrate the Cameras for Stereo ...


Didn't have time to go through the whole code, but as a hint, is the stereo calibration OK? It seems that the Left_Stereo_Map and Right_Stereo_Map are completely off.

kbarni ( 2019-02-11 07:05:13 -0500 )

Yes, the last image above showed GRAYR and GRAYL, which were the inputs to the stereo map, and they were wrong from the very start. I'm thinking of two things: the mistake could be in the parameters/values computed after the chessboard calibration images are loaded, or in the stereo rectification part. I can't really check Left_Stereo_Map and Right_Stereo_Map through cv2.imshow :( I don't know what to configure now.

WannabeEngr ( 2019-02-11 07:13:50 -0500 )

I just really need to know how to make the GRAYL and GRAYR images match the video capture frames (CAML and CAMR), so I can get rid of the "zoom" in my depth map.

WannabeEngr ( 2019-02-11 07:16:43 -0500 )

Yes, it's a problem with the rectification maps. You cannot display them directly with imshow, but what's interesting is the values at the corners. Try to print Left_Stereo_Map[0] and Left_Stereo_Map[1] at the coordinates (0,0) and (W,H), where W and H are the width and height of the matrices.

Normally the values should be close to 0 and to the width and height of the CAML matrix, but in your case they will probably be around the centre.

Try to understand the remap function too.

kbarni ( 2019-02-12 10:05:53 -0500 )

I'm sorry, how can I print them at a certain coordinate? The width and height of the CAML matrix would be 480x320, I guess.

WannabeEngr ( 2019-02-12 11:03:27 -0500 )

Please, this is not a python for dummies forum!

Ok, here's a hint...

kbarni ( 2019-02-13 07:27:37 -0500 )

From the camera, you'll typically get an image with pincushion, barrel, or some other lens distortion. Calibration recovers these distortion parameters and may encode a zoom to clip the off-sensor areas. The undistortion and rectification steps keep the centre points of the left and right images fixed, but modify the rest of each image through local zoom and other transformations.

stereoRectify rotates/zooms (especially the R image) around the centre pixels to produce epipolar-aligned images suitable for stereo matching and disparity generation. I see you set rectify_scale = 0 (the alpha parameter): with alpha = 0, stereoRectify crops to the valid pixels only, which is exactly a "zoom in"; with alpha = 1 it keeps every source pixel instead, at the cost of black borders. The scale is a floating-point value from 0.0 to 1.0, and is probably your best adjustment point to bring the output scale closer to what you expect.

opalmirror ( 2019-02-19 14:33:53 -0500 )