
Julien Seinturier's profile - activity

2018-06-15 02:51:22 -0600 received badge  Famous Question (source)
2016-07-29 00:30:56 -0600 received badge  Notable Question (source)
2015-10-22 14:53:25 -0600 received badge  Popular Question (source)
2014-04-25 03:53:34 -0600 commented answer Allied Vision VIMBA and OpenCV

Thanks a lot. But have you ever tested your code with the VIMBA SDK, which has now replaced PvAPI?

2014-04-25 02:48:59 -0600 asked a question Allied Vision VIMBA and OpenCV

Hello,

I saw that OpenCV can use Allied Vision's PvAPI to grab data from GigE cameras, but can anyone give me a sample of code that grabs a frame from a camera and stores it in a cv::Mat object?

Moreover, PvAPI has now been replaced by the VIMBA SDK from Allied Vision. Has anyone already used this SDK with OpenCV, and does anyone have a sample of code that grabs a frame from a camera and stores it in a cv::Mat object?

Thanks a lot,

Julien

2014-02-14 02:26:37 -0600 received badge  Student (source)
2014-02-14 01:55:34 -0600 received badge  Editor (source)
2014-02-14 01:45:01 -0600 asked a question Stereo correspondence between left and right images

Hello,

I'm using OpenCV to set up a stereo odometry module. The whole module is written in Java, but I'm using OpenCV to detect tracks across successive stereo pairs. Here is my algorithm:

  1. Rectify the images using the stereo calibration parameters

  2. Detect matches between the previous left and current left stereo images using optical flow

  3. Compute the disparity between the current left and current right stereo images

  4. Compute the 3D positions of the 2D points in the current left stereo image that were matched with the previous left image

  5. Project the computed 3D points onto the current right stereo image.

After step 5, I should have 2D matched points on the current left and current right stereo images.

Step 4 computes the 3D points using this code:

 // Build (x, y, disparity) triples as input for perspectiveTransform with Q
 vector<Point3d> point3dInput;
 point3dInput.reserve(stereoLeftFeatureLocations->size());

 for(size_t i = 0; i < stereoLeftFeatureLocations->size(); i++){
   const Point2f& p = (*stereoLeftFeatureLocations)[i];
   point3dInput.push_back(Point3d(p.x, p.y,
                                  disp.at<short>(cvRound(p.y), cvRound(p.x))));
 }

// Compute 3D points from disparity
vector<cv::Point3d> points3d;

perspectiveTransform(point3dInput, points3d, stereoRectQ);

where:

  • stereoLeftFeatureLocations is a pointer to a vector of Point2f representing 2D points within the left stereo image, obtained from the optical flow computation

  • disp is the disparity computed between the current rectified left and right stereo images

  • stereoRectQ is the Q matrix returned by the rectification parameters computation.

The obtained 3D points seem valid at this stage.

In step 5, I project the 3D points onto the current right stereo image using this code:

cv::Mat decomposedCameraMatrix;
cv::Mat decomposedRotMatrix;
cv::Mat decomposedRotVector;
cv::Mat decomposedTransVector;
cv::Mat decomposedTransVectorEuclidian = cv::Mat(3, 1, CV_64F);

// Decompose stereo right projection matrix
cv::decomposeProjectionMatrix(getStereoRightProjectionMatrix(), decomposedCameraMatrix,
                              decomposedRotMatrix, decomposedTransVector);

// From rotation matrix to rotation vector
cv::Rodrigues(decomposedRotMatrix, decomposedRotVector);

// From homogeneous to Euclidean translation vector
// (decomposedTransVector is a 4x1 vector, so index its rows, not its columns)
decomposedTransVectorEuclidian.at<double>(0, 0) =   decomposedTransVector.at<double>(0, 0)
                                                  / decomposedTransVector.at<double>(3, 0);

decomposedTransVectorEuclidian.at<double>(1, 0) =   decomposedTransVector.at<double>(1, 0)
                                                  / decomposedTransVector.at<double>(3, 0);

decomposedTransVectorEuclidian.at<double>(2, 0) =   decomposedTransVector.at<double>(2, 0)
                                                  / decomposedTransVector.at<double>(3, 0);

cv::projectPoints(points3d, decomposedRotVector, decomposedTransVectorEuclidian, 
                  decomposedCameraMatrix, distortion, rightProjectedPoints);

My problem is that the projected points seem to lie at the same coordinates in the left and right images (it looks as if the translation between the images is not applied).

Could anyone help me? Maybe my stereo matching method is flawed, but I have no other ideas at this time,

thanks.

Julien