
Jacek's profile - activity

2013-03-11 19:52:30 -0500 received badge  Nice Answer (source)
2012-12-31 11:50:26 -0500 received badge  Nice Answer (source)
2012-10-21 12:51:34 -0500 received badge  Nice Answer (source)
2012-09-25 11:02:36 -0500 answered a question 5-points algorithm in opencv ?

Some time ago I implemented the 5-point algorithm (following Nistér's solution: see An Efficient Solution to the Five-Point Relative Pose Problem). The code uses two additional packages: Eigen (for singular value decomposition) and rpoly.cpp (the Jenkins-Traub real roots finder), and has about 500 lines of code (part of the code was generated in Matlab).

Experiments with synthetic data showed that it's more accurate and less sensitive to noise than the 7-point or 8-point algorithms. And, as the theory says, it doesn't suffer from planar degeneracy.

If you are interested I can share the source code so it can be modified and added to OpenCV.

2012-09-24 01:59:18 -0500 answered a question Extract point cloud points from the output matrix of Linear Triangulation method

Maybe I'm missing something, but it's not possible to perform metric reconstruction from 2 uncalibrated images (unless you make some additional assumptions, e.g. you know/estimate the location of a vanishing point, or you know something about the camera calibration). You can make a reconstruction only up to a projective transformation. See the projective reconstruction theorem in the Hartley book.

How do you calculate the camera matrices from the fundamental matrix? What formula are you using? Without knowing the camera intrinsic matrix it's not possible. Are you really sure this step is correct?

triangulatePoints returns homogeneous coordinates of the 3D points, one point per column.

The homogeneous coordinates (X, Y, Z, W) correspond to the Euclidean coordinates (X/W, Y/W, Z/W).
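As a minimal sketch of that conversion (plain Python, no OpenCV needed — in practice you'd apply it to each column of the 4xN matrix returned by triangulatePoints):

```python
def homogeneous_to_euclidean(X, Y, Z, W):
    """Convert a homogeneous 3D point (X, Y, Z, W) to Euclidean (X/W, Y/W, Z/W)."""
    if W == 0:
        raise ValueError("point at infinity (W == 0) has no Euclidean equivalent")
    return (X / W, Y / W, Z / W)

# One column of the triangulatePoints output (illustrative values):
print(homogeneous_to_euclidean(2.0, 4.0, 6.0, 2.0))  # (1.0, 2.0, 3.0)
```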

2012-09-24 01:41:35 -0500 answered a question best-matches

To filter out incorrect matches you can use the epipolar constraint. Use the findFundamentalMat function with the FM_RANSAC estimation method to estimate the fundamental matrix from the matching points in the 2 images. RANSAC can deal with a situation where you have many outliers.

The function returns the estimated fundamental matrix and a vector indicating which pairs of matching points are inliers and which are outliers.
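The underlying constraint can be sketched in plain Python: a match (p1, p2) is consistent with the epipolar geometry when the algebraic residual p2^T F p1 is near zero. The toy F below (a hypothetical value encoding horizontal epipolar lines, as in a rectified stereo pair) is for illustration only; in practice F comes from findFundamentalMat:

```python
def epipolar_residual(F, p1, p2):
    """Algebraic epipolar residual p2^T * F * p1 for image points (x, y).
    F is a 3x3 fundamental matrix as nested lists; a near-zero residual
    means the pair is consistent with the epipolar geometry."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# Toy F for a rectified pair: the residual reduces to y2 - y1,
# so only matches lying on the same row pass the test.
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
print(abs(epipolar_residual(F, (10, 5), (7, 5))) < 1e-9)   # True  (inlier)
print(abs(epipolar_residual(F, (10, 5), (7, 40))) < 1e-9)  # False (outlier)
```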

2012-09-24 01:28:49 -0500 answered a question camera calibration opencv error

I think boardSize is incorrect in your code. boardSize should be equal to the number of internal corners, so in your case it should be cv::Size boardSize(5,4).
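A quick sketch of the rule (the 6x5 board below is a hypothetical example, assuming the board in question has one more square than internal corner in each dimension):

```python
def internal_corners(squares_x, squares_y):
    """A chessboard with squares_x by squares_y squares has one fewer
    internal corner per dimension, which is what boardSize must hold."""
    return (squares_x - 1, squares_y - 1)

# A board with 6x5 squares -> cv::Size boardSize(5, 4)
print(internal_corners(6, 5))  # (5, 4)
```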

2012-09-21 01:33:01 -0500 commented question Extract point cloud points from the output matrix of Linear Triangulation method

Can you please be more specific? What is the matrix you refer to? Is it a fundamental matrix for the 2 images?

2012-09-20 02:10:47 -0500 answered a question pose estimation using RANSAC

Camera matrix = [fx 1 cx; 1 fy cy; 0 0 1] is wrong; it should be [fx 0 cx; 0 fy cy; 0 0 1] (or [fx s cx; 0 fy cy; 0 0 1] if you have a camera with non-zero skew s, but this is seldom the case).

Besides that, the procedure you described looks OK. Only the reprojection error of 10 is huge; I'm not sure if this is on purpose, but you should usually use a reprojection error threshold of 1 or 2 pixels.
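To see why the zeros matter, here is a minimal sketch of building the intrinsic matrix and projecting a 3D camera-space point with it (the fx/fy/cx/cy values are made-up illustration values, not from the question):

```python
def make_intrinsic(fx, fy, cx, cy, s=0.0):
    """Build the 3x3 camera intrinsic matrix [fx s cx; 0 fy cy; 0 0 1]."""
    return [[fx, s, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

def project(K, X, Y, Z):
    """Project a 3D point (camera coordinates) to pixel coordinates."""
    u = K[0][0] * X + K[0][1] * Y + K[0][2] * Z
    v = K[1][1] * Y + K[1][2] * Z
    return (u / Z, v / Z)

K = make_intrinsic(fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# A point on the optical axis must project to the principal point;
# with 1s in the off-diagonal slots it would not.
print(project(K, 0.0, 0.0, 1.0))  # (320.0, 240.0)
```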

2012-09-18 02:54:58 -0500 answered a question missing region in disparity map

I think it's normal behaviour that the region with x-coordinates between 0 and max_disparity is not reconstructed.

Suppose on the rectified left image you have a point with x-coordinate x0, where 0 < x0 < max_disparity. Potential matches for this point on the rectified right image have x-coordinates in the range [x0 - max_disparity, x0]. But if x0 < max_disparity, part of this range has negative x-coordinates and is not visible on the rectified right image, so the disparity cannot be calculated.
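The range argument above can be sketched in a few lines of Python (the specific x0 and max_disparity values are illustrative only):

```python
def candidate_disparities(x0, max_disparity, image_width):
    """Disparities d for which the candidate match x0 - d still lies
    inside the right image (x-coordinate >= 0 and < width)."""
    return [d for d in range(max_disparity + 1) if 0 <= x0 - d < image_width]

# A pixel with x0 < max_disparity cannot be tested at every disparity:
print(len(candidate_disparities(x0=5, max_disparity=64, image_width=640)))    # 6
print(len(candidate_disparities(x0=100, max_disparity=64, image_width=640)))  # 65
```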

And one piece of advice: for better results use the StereoSGBM algorithm. It usually gives much better results than the simple block matching (StereoBM) algorithm.

2012-09-18 02:37:32 -0500 received badge  Supporter (source)
2012-09-17 10:45:52 -0500 received badge  Nice Answer (source)
2012-09-15 15:53:56 -0500 answered a question 3D scale is poor.

The reason for your issue could be that the disparity computed by SGBM is scaled by a factor of 16.

See this excerpt from OpenCV documentation:

disp – Output disparity map. It is a 16-bit signed single-channel image of the same size as the input image. It contains disparity values scaled by 16. So, to get the floating-point disparity map, you need to divide each disp element by 16.

The distance to the camera (Z coordinate) is inversely proportional to the disparity, so you can equivalently multiply the Z-coordinate by 16. Taking this scaling coefficient into account you have 6 * 16 = 96, which is pretty close to 100.
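A small sketch of the scaling effect (the focal length and baseline below are hypothetical values for illustration; the 1/16 fixed-point scale is the part that matters):

```python
def depth_from_disparity(raw_disp, fx, baseline, scale=16.0):
    """Depth Z = fx * baseline / disparity. SGBM returns fixed-point
    disparities scaled by 16, so divide the raw value first."""
    true_disp = raw_disp / scale
    return fx * baseline / true_disp

fx, baseline = 700.0, 0.1
raw = 112  # raw SGBM output; the true disparity is 112 / 16 = 7.0
z_wrong = fx * baseline / raw                    # forgetting the scale: 16x too close
z_right = depth_from_disparity(raw, fx, baseline)
print(z_right / z_wrong)  # 16.0
```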

2012-09-12 02:09:10 -0500 answered a question Use stereo matching algorithms with more than one image pair?

I don't think any of the OpenCV stereo algorithms allows using more than 1 pair of images. StereoSGBM is designed to work with 2 images only. I'm also not aware of any available stereo software that works with many image pairs taken from the same locations but under varying lighting conditions.

Potentially SGBM could be changed to use more than 1 pair of images (assuming all pairs are taken from the same camera positions). One step in the SGBM algorithm computes a correspondence score between a block of pixels on the left image and a block of pixels on the right. It should be sufficient to change only this step of the algorithm, so that the correspondence between blocks of pixels in each image pair is taken into account. And it could be a good research topic to find out the best way to combine correspondence scores from many image pairs (e.g. take the minimum score, or maybe the sum of scores from each image pair?). But this requires some code changes to the SGBM algorithm.
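The two candidate combination rules mentioned above can be sketched as follows (this is a research-question sketch, not part of OpenCV — the function name and modes are hypothetical):

```python
def combined_block_cost(costs, mode="sum"):
    """Combine matching costs for the same (pixel, disparity) candidate
    computed over several image pairs taken under different lighting.
    'sum' trusts every pair equally; 'min' keeps the pair where the
    block matched best (robust if one lighting condition is bad)."""
    if mode == "sum":
        return sum(costs)
    if mode == "min":
        return min(costs)
    raise ValueError("unknown mode: " + mode)

# Costs from three lighting conditions for one candidate disparity:
print(combined_block_cost([12.0, 3.0, 8.0], "sum"))  # 23.0
print(combined_block_cost([12.0, 3.0, 8.0], "min"))  # 3.0
```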

2012-09-12 01:47:59 -0500 answered a question Camera intrinsic matrix

There's no simple single function that will give you the camera intrinsic parameters. The calibration process requires a few steps:

  1. Acquisition of calibration images (with the chessboard pattern or with the circle pattern)
  2. Detecting the chessboard corners (or blobs, in the case of the circle pattern) with subpixel accuracy
  3. Finding the camera calibration parameters (intrinsic matrix and distortion coefficients) with calibrateCamera

The easiest approach is to use the example provided with OpenCV - calibration.exe. You can use it with your own images; it'll do the calibration for you and give you the camera intrinsic matrix and distortion coefficients.

Read the tutorial below; it explains how the OpenCV calibration example works and how to use it with your own calibration images: http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv

2012-08-29 07:20:55 -0500 commented answer Disparity Map Quality using Graph Cuts
2012-08-29 03:40:58 -0500 commented answer RANSAC and 2D point clouds

If the distortion is not too big you may still use ICP. ICP doesn't require that the 2 point clouds be exactly identical. One point cloud may contain points corrupted by noise and ICP will still work.

The catch is that ICP requires the point clouds to be roughly aligned. So if corresponding points in your point clouds are quite close you may use ICP. Otherwise you'll need some method to roughly align the point clouds (e.g. compute some kind of feature descriptor for each point, use these descriptors to match points between the two clouds, and roughly align the clouds using these matches) and then use ICP for the final registration.
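To make the "roughly aligned" requirement concrete, here is a toy ICP skeleton restricted to 2D translation (real ICP also estimates rotation; this sketch only shows the match-then-update loop and is not a usable registration tool):

```python
import math

def icp_translation(source, target, iters=10):
    """Toy 2D ICP, translation only: repeatedly match each source point
    to its nearest target point, then shift all source points by the
    mean residual. Converges only if the clouds start roughly aligned."""
    pts = list(source)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force).
        pairs = [(p, min(target, key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2))
                 for p in pts]
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        if math.hypot(dx, dy) < 1e-12:
            break
        pts = [(p[0] + dx, p[1] + dy) for p in pts]
    return pts

target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(0.3, 0.2), (1.3, 0.2), (0.3, 1.2)]  # roughly aligned: shifted by (0.3, 0.2)
aligned = icp_translation(source, target)
print(all(abs(a[0] - t[0]) < 1e-6 and abs(a[1] - t[1]) < 1e-6
          for a, t in zip(aligned, target)))  # True
```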

2012-08-29 03:27:20 -0500 answered a question Disparity Map Quality using Graph Cuts

It seems the GraphCut stereo algorithm is not included in the latest OpenCV version (at least I wasn't able to find anything in the documentation).

I can suggest using StereoSGBM(). The Semi-Global Block Matching stereo algorithm gives very good results (similar to the best global methods, like GraphCut) but at a speed comparable to simple local methods (like block matching algorithms). I've tried it myself and the results were really very good.

2012-08-28 15:09:55 -0500 received badge  Editor (source)
2012-08-28 15:07:40 -0500 answered a question Stereo vision basics

OpenCV algorithms for computing stereo correspondence (StereoSGBM, StereoBM) require rectified images. If you use them on non-rectified images they won't work correctly.

Rectification is a transformation of the images such that projections of the same scene point have equal y-coordinates on both images. E.g. if the projection of some 3D point onto the left image has y-coordinate = 150, it'll have y-coordinate = 150 on the second image as well. So rectification simplifies finding corresponding points (and subsequently simplifies computation of the depth map).

If you want to build your stereo vision application use the following approach:

  1. Calibrate the cameras. The simplest approach is to use the calibration procedure (stereo_calib.cpp/stereo_calib.exe) distributed with the OpenCV examples. This will give you the intrinsic and extrinsic camera parameters. From the calibration procedure you should get a reprojection error on the level of 0.5 pixels. A much bigger reprojection error (above 1 pixel) usually means that there are some issues with the calibration.
  2. Use the stereo camera parameters estimated by the stereo calibration procedure to rectify ALL images. You may verify the correctness of this step by looking at the rectified images: projections of the same scene point on both images should have the same y-coordinate.
  3. Use a stereo correspondence function on the RECTIFIED images. I can suggest using StereoSGBM - it usually produces better results than the simple StereoBM procedure.

The function cvInitUndistortRectifyMap needs to be called only once - after you get the stereo camera parameters from the stereo calibration procedure. This function computes the mapping needed to rectify images. Then you rectify ALL images using the mapping returned by cvInitUndistortRectifyMap.

2012-08-28 12:06:14 -0500 answered a question RANSAC and 2D point clouds

I don't think RANSAC is a good idea in your case. RANSAC can be used when you have a number of measurements (e.g. pairs of corresponding points from 2 sets) containing some outliers (e.g. incorrectly matched points). The more outliers you have, the more RANSAC iterations are needed to estimate the parameters with a given confidence. In your scenario, choosing matching points randomly, you'd need a really HUGE number of iterations to ensure a proper matching is found. The computational cost may be prohibitive.
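The standard iteration-count formula N >= log(1 - p) / log(1 - w^s) makes the explosion concrete (w is the inlier ratio, s the minimal sample size, p the desired confidence; the inlier ratios below are illustrative):

```python
import math

def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC iterations needed so that, with probability
    `confidence`, at least one sample of `sample_size` points is all
    inliers: N >= log(1-p) / log(1 - w^s)."""
    good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - good_sample))

# A perspective transform needs s = 4 point pairs.
print(ransac_iterations(0.5, 4))   # 72 iterations: fine with 50% inliers
# With essentially random pairings the effective inlier ratio is tiny
# and the iteration count becomes astronomically large:
print(ransac_iterations(0.01, 4) > 100_000_000)  # True
```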

Can you give more details about your problem? Why are you trying to estimate a perspective transform? Are these 2D points projections of some 3D points?

2012-08-24 14:58:22 -0500 received badge  Teacher (source)
2012-08-24 05:19:19 -0500 answered a question Are SURF feature descriptors computed differently in 2.3.1 and 2.4.2? (bug or feature ???)

I've found the following issue on the bug tracker: SURF: different number of keypoints between OpenCV 2.3 & 2.4 (Bug #1911)

It seems that there are some differences in the SURF feature detector in OpenCV 2.4. One change is that the extended parameter is set to true by default, so the descriptor length is 128, not 64. It also seems that in 2.4 more keypoints are detected - but whether this is intentional or an error/bug is not clear yet.

See: http://code.opencv.org/issues/1911