Bilou563's profile - activity

2016-08-30 09:32:36 -0600 asked a question Differences between uncalibrated and calibrated stereo

I would like to obtain a depth map with a stereo microscope equipped with cameras. If I understand correctly, there is a calibrated method based on a chessboard and an uncalibrated one based on matched feature points. I tried both methods and the undistorted images are clearly different:

image description

I have some difficulty explaining the differences, since I used more than 30 images per camera in several configurations. The angle between the two cameras is large and imposed by the stereo-microscope geometry, but the rotation is very pronounced in the picture...

And a final question: for the depth map, is the distance measured from the lens or from the camera sensor?
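
For reference, here is a minimal sketch of the two routes as I understand them (assuming C1, dist1, C2, dist2, R and T come from the chessboard calibration, F from the matched points, and left_img is one input picture; the right camera is handled the same way):

import cv2

size = (2048, 2048)  # my image size

# calibrated route: metric rectification from the stereoCalibrate results
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(C1, dist1, C2, dist2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(C1, dist1, R1, P1, size, cv2.CV_32FC1)
left_rect = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)

# uncalibrated route: projective rectification from the fundamental matrix only
ret, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, size)
left_rect_u = cv2.warpPerspective(left_img, H1, size)

# the two results need not coincide: the uncalibrated route is only defined up
# to a projective ambiguity, so some distortion between them is expected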

2016-03-23 03:27:56 -0600 asked a question Opencv - Depth map from uncalibrated stereo system

I am trying to get a depth map with an uncalibrated method. I can obtain the fundamental matrix from corresponding points found with SIFT and cv2.findFundamentalMat. Then cv2.stereoRectifyUncalibrated gives me the rectification homographies. Finally I use cv2.warpPerspective to rectify the images and compute the disparity, but the result does not lead to a good depth map. The values are very high, so I am wondering whether I should use warpPerspective directly, or whether I should compute a rotation matrix from the homographies obtained with stereoRectifyUncalibrated.

Part of the code:

# imports needed by this snippet
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Obtain the corresponding points with SIFT (OpenCV 2.4 API)
sift = cv2.SIFT()

###find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(dst1,None)
kp2, des2 = sift.detectAndCompute(dst2,None)

###FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)

good = []
pts1 = []
pts2 = []

### ratio test as per Lowe's paper
for m, n in matches:
    if m.distance < 0.8*n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)


pts1 = np.array(pts1)
pts2 = np.array(pts2)

# Compute the fundamental matrix
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)


# Keep the inliers, then rectify with stereoRectifyUncalibrated and warpPerspective
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]

pts1 = np.int32(pts1)
pts2 = np.int32(pts2)

# reshape to the flat (2N, 1) layout these bindings accept
p1fNew = pts1.reshape((pts1.shape[0] * 2, 1))
p2fNew = pts2.reshape((pts2.shape[0] * 2, 1))

retBool, rectmat1, rectmat2 = cv2.stereoRectifyUncalibrated(p1fNew, p2fNew, F, (2048, 2048))

dst11 = cv2.warpPerspective(dst1, rectmat1, (2048, 2048))
dst22 = cv2.warpPerspective(dst2, rectmat2, (2048, 2048))


# calculation of the disparity (note: StereoBM.compute expects (left, right),
# and its CV_16S output is scaled by a factor of 16)
stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET, ndisparities=16*10, SADWindowSize=9)
disp = stereo.compute(dst22.astype(np.uint8), dst11.astype(np.uint8)).astype(np.float32)
plt.imshow(disp); plt.colorbar(); plt.clim(0, 400)  # ;plt.show()
plt.savefig("0gauche.png")

# plot the depth, using the focal length C1[0,0] and the baseline T[0]
# from the stereo calibration

plt.imshow(C1[0,0]*T[0]/disp, cmap='hot'); plt.clim(0, 500); plt.colorbar(); plt.show()

Here are the rectified pictures from the uncalibrated method (with warpPerspective):

image description

Here are the rectified pictures from the calibrated method:

image description

I don't know why the difference between the two kinds of pictures is so large... and for the calibrated method, the images don't even seem aligned... strange. The disparity map of the uncalibrated method:

image description

And the depth map is calculated as C1[0,0]*T[0]/disp, with T from stereoCalibrate, but the values are very high...
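
In case it helps, here is a sketch of how the depth could be computed while masking invalid pixels (assuming disp16 is the raw StereoBM output, which is CV_16S, scaled by 16 and negative where matching failed):

import numpy as np

disp = disp16.astype(np.float32) / 16.0   # undo the fixed-point scaling

f = C1[0, 0]    # focal length in pixels, from the calibration
B = abs(T[0])   # baseline: X component of the stereo translation

depth = np.full(disp.shape, np.nan, dtype=np.float32)
valid = disp > 0                     # avoid dividing by zero or invalid values
depth[valid] = f * B / disp[valid]   # Z = f * B / d for a rectified pair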

2016-03-18 10:36:02 -0600 commented answer Stereo vision - Tilted camera and triangulation landmark

I used the product of R2 and R1.T from each calibrateCamera (choosing the view with minimum reprojection error) to get the rotation matrix, and then:

P1 = np.dot(C1, np.hstack((np.identity(3), np.zeros((3, 1)))))

P2 = np.dot(C2, np.hstack((R_0, T_0)))

for i in range(Coord1.shape[0]):
    z = cv2.triangulatePoints(P1, P2, Coord1[i,], Coord2[i,])

Is that not correct?
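
(A sketch of what I do afterwards, since triangulatePoints returns 4xN homogeneous coordinates; Coord1 and Coord2 here are assumed to be my Nx2 pixel coordinates:)

import numpy as np
import cv2

# triangulate all points at once: the function takes 2xN arrays of pixel coords
X_h = cv2.triangulatePoints(P1, P2, Coord1.T.astype(np.float32), Coord2.T.astype(np.float32))
X = X_h[:3] / X_h[3]   # divide by the homogeneous coordinate to get 3xN XYZ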

2016-03-18 10:09:05 -0600 commented answer Stereo vision - Tilted camera and triangulation landmark

Yes, could I use rvecs and tvecs?

2016-03-16 12:55:29 -0600 commented answer Stereo vision - Tilted camera and triangulation landmark

Could I use rvecs1 from my left camera calibration?

2016-03-16 08:59:33 -0600 received badge  Supporter (source)
2016-03-16 06:47:13 -0600 asked a question Stereo vision - Tilted camera and triangulation landmark

I am using a stereo system and I am trying to get the world coordinates of some points by triangulation.

My cameras are set at an angle, so the Z axis (the depth direction) is not normal to my surface. Is that why, when I observe a flat surface, I get not a constant depth but a roughly linear variation? I want the depth relative to the baseline direction, so do I have to rotate my points?

image description
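
To make the question concrete, here is a sketch of what I mean by rotating the points (X is assumed to be my 3xN array of triangulated points): fit a plane to the cloud and rotate it so the fitted normal becomes the Z axis; for a flat surface the rotated Z values should then be nearly constant.

import numpy as np

Xc = X - X.mean(axis=1, keepdims=True)        # center the 3xN point cloud
_, _, Vt = np.linalg.svd(Xc.T, full_matrices=False)
n = Vt[2]                                     # plane normal: least-variance axis

# rotation mapping the fitted normal onto the Z axis (Rodrigues formula;
# the antiparallel case is ignored in this sketch)
z = np.array([0.0, 0.0, 1.0])
v = np.cross(n, z); c = np.dot(n, z); s = np.linalg.norm(v)
if s < 1e-12:
    R_align = np.eye(3)
else:
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R_align = np.eye(3) + K + K.dot(K) * ((1 - c) / s**2)

X_rot = R_align.dot(X)   # X_rot[2] should now be roughly constant on a flat surface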

2016-03-15 06:23:26 -0600 received badge  Editor (source)
2016-03-15 05:41:44 -0600 asked a question Triangulation origin with stereo system

I am using a stereo system and I am trying to get the world coordinates of some points. I calibrate each camera separately, then compute the rotation matrix and translation vector, and finally I triangulate, but I am not sure where the origin of the world coordinates is.

As you can see in my figure, the values correspond to depth, but they should all be close to 400 since the surface is flat. So I suppose the origin is the left camera, which is why the depth varies...

image description

A piece of my code with my projection matrices and the triangulation function:

# C1 and C2 are the camera matrices (left and right)
# R_0 and T_0 are the transformation between the cameras
# Coord1 and Coord2 are the corresponding coordinates of the left and right images
P1 = np.dot(C1, np.hstack((np.identity(3), np.zeros((3, 1)))))

P2 = np.dot(C2, np.hstack((R_0, T_0)))

for i in range(Coord1.shape[0]):
    z = cv2.triangulatePoints(P1, P2, Coord1[i,], Coord2[i,])

My cameras are set at an angle, so the Z axis (the depth direction) is not normal to my surface. I want the depth relative to the baseline direction, so do I have to rotate my points?

image description
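
If the origin really is the left camera, here is a sketch of how I could express the points relative to the chessboard instead (assuming rvecs1[i] and tvecs1[i] are the left-camera extrinsics of one calibration view, and X is the 3xN triangulated points in the left-camera frame):

import numpy as np
import cv2

R_cam, _ = cv2.Rodrigues(rvecs1[0])   # board-to-camera rotation for view 0
t_cam = tvecs1[0].reshape(3, 1)

# calibrateCamera maps board coords to camera coords as Xc = R*Xb + t,
# so the inverse gives the points in the board frame
X_board = R_cam.T.dot(X - t_cam)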

Thanks for your help ;)

2016-03-09 06:55:54 -0600 commented question Change of landmark origin in calibration

It works ;)

2016-03-07 12:12:27 -0600 asked a question Problems with rectified pictures for stereo vision

I am trying to do stereo matching but it hasn't worked well so far. I determined the tvecs and rvecs for each camera, so I can use these relationships to get the translation vector and rotation matrix:

R = Rright.Rleft.T and T = Tright - R.Tleft
with the averaged values of Tleft, Tright, Rleft and Rright
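
In code, this computation looks roughly like the sketch below (rvecs1/tvecs1 and rvecs2/tvecs2 are assumed to be the per-view extrinsics from calibrateCamera for the left and right cameras, with matching views; only one view is shown, before averaging):

import numpy as np
import cv2

Rl, _ = cv2.Rodrigues(rvecs1[0])   # left-camera rotation for one view
Rr, _ = cv2.Rodrigues(rvecs2[0])   # right-camera rotation for the same view

R = Rr.dot(Rl.T)                   # R = Rright * Rleft^T
T = tvecs2[0] - R.dot(tvecs1[0])   # T = Tright - R * Tleft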

But I have a problem during the rectification step... The scale of the rectified pictures has changed... and I don't know how to fix it...

A piece of my code for the left picture:

# get the new camera matrix for the rectification
# (C1 and dist1 are the camera matrix and the distortion coefficients)
C1new = cv2.getOptimalNewCameraMatrix(C1, dist1, (2048, 2048), alpha=1)[0]

# build the rectification map and remap
# (R1 is the rectification rotation, e.g. from cv2.stereoRectify)
left_maps = cv2.initUndistortRectifyMap(C1, dist1, R1, C1new, (2048, 2048), cv2.CV_32FC2)
a = left_maps[0]
left_img_remap = cv2.remap(left_stereo_image2, a[:,:,0], a[:,:,1], cv2.INTER_CUBIC)

And I got this:

image description

To improve my results I tried to pad the images with empty arrays so that the whole picture is shown, but when I do that and change the size in getOptimalNewCameraMatrix and initUndistortRectifyMap, I get:

image description

This time I get the whole pictures, but the scale between them is not right... so I can't compute the disparity... Thanks for your help ;)
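
An alternative I am considering (a sketch, assuming R and T come from stereoCalibrate): let stereoRectify compute consistent projection matrices for both cameras, with alpha and newImageSize controlling how much of the source image is kept, so both rectified pictures share the same scale.

import cv2

size = (2048, 2048)
new_size = (2048, 2048)   # could be made larger to keep the full field of view

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    C1, dist1, C2, dist2, size, R, T, alpha=1, newImageSize=new_size)

map1x, map1y = cv2.initUndistortRectifyMap(C1, dist1, R1, P1, new_size, cv2.CV_32FC1)
left_rect = cv2.remap(left_stereo_image2, map1x, map1y, cv2.INTER_CUBIC)
# same calls with C2, dist2, R2, P2 for the right picture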

2016-03-06 12:44:39 -0600 received badge  Enthusiast
2016-03-05 16:01:18 -0600 commented question Change of landmark origin in calibration

OK thanks, that could be a good explanation. I will try it and let you know ;)

2016-03-04 04:08:17 -0600 asked a question Change of landmark origin in calibration

I am trying to do stereo matching but it hasn't worked well so far. So I am trying to improve the procedure, and when I run the calibration I noticed that the origin of the landmark can change from one picture to another... Could this affect the quality of the procedure?

An example is shown below: image description

The real-world landmark is drawn in white with the three lines. Thanks for your help ;)
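
If it is the corner ordering that flips between pictures, one workaround (a sketch; normalize_corner_order is my own helper, not an OpenCV function) is to force a consistent ordering after detection:

import numpy as np

def normalize_corner_order(corners):
    # corners is the (N, 1, 2) array from cv2.findChessboardCorners;
    # if the pattern was detected rotated by 180 degrees, reverse it so the
    # origin corner stays on the same side in every picture (heuristic)
    first, last = corners[0, 0], corners[-1, 0]
    if first[0] > last[0] or (first[0] == last[0] and first[1] > last[1]):
        corners = corners[::-1].copy()
    return corners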

2016-01-25 11:18:44 -0600 asked a question Python - OpenCV stereocalibration error

I am trying to use the stereo part of OpenCV (version 3.0.0-dev). I can run the calibration for each camera, so I get the two camera matrices and distortion vectors. But when I apply the stereoCalibrate function, the calculation completes but the RMS error is very high: 40! So the results seem wrong... but I don't know where the problem is...

import numpy as np
import cv2
import matplotlib.pyplot as plt
import os
import sys, getopt
from glob import glob

a = 9  # chessboard pattern: inner corners per row
b = 5  # inner corners per column


# CAM1 calibration

args, img_mask = getopt.getopt(sys.argv[1:], '', ['save=', 'debug=', 'square_size='])
args = dict(args)
img_mask = '/home/essais-2015-3/Bureau/test_22_01/calib/gauche/*.tif'
img_names = sorted(glob(img_mask))
debug_dir = args.get('--debug')
square_size = float(args.get('--square_size', 1))

pattern_size = (a,b)
pattern_points = np.zeros( (np.prod(pattern_size), 3), np.float32 )
pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points *= square_size

obj_points = []

img_points1 = []
h, w = 0, 0
for fn in img_names:
    print 'processing %s...' % fn,
    img = cv2.imread(fn, 0)
    h, w = img.shape[:2]
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        term = ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1 )
        cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)
    if debug_dir:
        vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        cv2.drawChessboardCorners(vis, pattern_size, corners, found)
        path, name, ext = splitfn(fn)  # splitfn comes from OpenCV's samples (common.py)
        cv2.imwrite('%s/%s_chess.bmp' % (debug_dir, name), vis)
    if not found:
        print 'chessboard not found'
        continue
    img_points1.append(corners.reshape(-1, 2))
    obj_points.append(pattern_points)
print 'ok'
rms, C1, dist1, rvecs1, tvecs1 = cv2.calibrateCamera(obj_points, img_points1, (w, h), None, None)
print "RMS:", rms
print "camera matrix:\n", C1
print "distortion coefficients: ", dist1.ravel()
cv2.destroyAllWindows()


# CAM2 calibration

args, img_mask = getopt.getopt(sys.argv[1:], '', ['save=', 'debug=', 'square_size='])
args = dict(args)
img_mask = '/home/essais-2015-3/Bureau/test_22_01/calib/droite/*.tif'
img_names = sorted(glob(img_mask))
debug_dir = args.get('--debug')
square_size = float(args.get('--square_size', 1))

pattern_size = (a,b)
pattern_points = np.zeros( (np.prod(pattern_size), 3), np.float32 )
pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points *= square_size

obj_points = []

img_points2 = []
h, w = 0, 0
for fn in img_names:
    print 'processing %s...' % fn,
    img = cv2.imread(fn, 0)
    h, w = img.shape[:2]
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        term = ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1 )
        cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term)
    if debug_dir:
        vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
        cv2.drawChessboardCorners(vis, pattern_size, corners, found)
        path, name, ext = splitfn(fn)
        cv2.imwrite('%s/%s_chess.bmp' % (debug_dir, name), vis)
    if not found:
        print 'chessboard not found'
        continue
    img_points2.append(corners.reshape(-1, 2))
    obj_points.append(pattern_points)

print 'ok'
rms, C2, dist2, rvecs2, tvecs2= cv2.calibrateCamera(obj_points, img_points2, (w, h), None, None)
print "RMS:", rms
print "camera matrix:\n", C2
print "distortion coefficients: ", dist2.ravel()
cv2.destroyAllWindows()


# stereo calibration
rms, C1, dist1, C2, dist2, R, T, E, F = cv2.stereoCalibrate(obj_points, img_points1, img_points2, C1, dist1, C2, dist2, (w, h))
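
One thing I may try (a sketch; the flag comes from the OpenCV 3 API): fix the already-calibrated intrinsics so stereoCalibrate only estimates R and T, so the RMS reflects the stereo geometry alone.

flags = cv2.CALIB_FIX_INTRINSIC   # keep C1, dist1, C2, dist2 as given
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 100, 1e-5)

rms, C1, dist1, C2, dist2, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points1, img_points2, C1, dist1, C2, dist2, (w, h),
    flags=flags, criteria=criteria)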