
gozari's profile - activity

2018-02-27 08:40:20 -0600 received badge  Popular Question (source)
2017-08-24 18:33:19 -0600 received badge  Good Question (source)
2015-08-18 03:27:09 -0600 received badge  Famous Question (source)
2015-01-08 04:30:51 -0600 received badge  Notable Question (source)
2014-12-05 03:23:59 -0600 asked a question OpenCV, CUDA problem

I have been working with OpenCV for a while. Recently, I decided to use the GPU functions, which rely on CUDA, to improve performance. I compiled OpenCV to be able to use the GPU features. But when I run my project, right at the beginning, before the main() function even starts, I get this message:

"The procedure entry point clGetPlatformInfo could not be located in the dynamic link library nvcuda.dll".

Since exactly the same project works with the normal (non-GPU) OpenCV build, do you have any idea what the problem could be? Thanks! I am using CUDA 5.0, OpenCV 2.4.6, VC++ 2010 and Windows 7 64-bit. I run it on a system with an NVIDIA GeForce GTS 450, and the driver is updated to the latest version.

Note: I checked some of the opencv_test_... programs. They seem to be fine. However, opencv_test_ocl.exe throws the same error message.
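To rule out the GPU module itself, this is the kind of minimal check I would run once the DLL issue is solved (only a sketch against the OpenCV 2.4 gpu module; the file is hypothetical and not part of my project):

#include <iostream>
#include <opencv2/gpu/gpu.hpp>   // OpenCV 2.4 GPU (CUDA) module

int main()
{
    // Number of CUDA-capable devices visible to the OpenCV gpu module.
    // Returns 0 if OpenCV was built without CUDA or no device/driver is found.
    int devices = cv::gpu::getCudaEnabledDeviceCount();
    std::cout << "CUDA-enabled devices: " << devices << std::endl;

    if (devices > 0)
    {
        cv::gpu::DeviceInfo info(0);
        std::cout << "Device 0: " << info.name() << std::endl;
    }
    return 0;
}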

2014-10-09 10:26:38 -0600 received badge  Popular Question (source)
2014-05-17 15:53:54 -0600 received badge  Nice Question (source)
2014-05-13 11:26:05 -0600 received badge  Editor (source)
2014-05-13 11:25:31 -0600 asked a question Inverse Perspective Mapping with Known Rotation and Translation

Hi,

I need to obtain a new view of an image from a desired point of view (a generalization of the bird's-eye view). Imagine we change the camera's pose by a known rotation and translation: what would the new image of the same scene be?

To put it another way: how can we compute the homography matrix given the rotation and translation matrices?
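To make the question concrete, here is a sketch of what I am trying to do. For a planar scene I believe the induced homography is H = K (R - t*n^T / d) K^-1, with n the plane normal and d its distance, but I am not sure I am applying it correctly; K, R, t, n, d and the helper name poseToHomography below are just my own placeholders:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

// Sketch: homography induced by a known camera motion (R, t) for a planar
// scene with unit normal n at distance d (in the first camera's frame).
// All matrices are CV_64F: K and R are 3x3, t and n are 3x1.
Mat poseToHomography(const Mat& K, const Mat& R, const Mat& t,
                     const Mat& n, double d)
{
    // H = K * (R - t * n^T / d) * K^-1   (plane-induced homography)
    Mat H = K * (R - t * n.t() / d) * K.inv();
    return H / H.at<double>(2, 2);        // normalize so that H(2,2) = 1
}

// Usage: warp the original image into the new view.
// warpPerspective(img, newView, poseToHomography(K, R, t, n, d), img.size());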

I really appreciate any help!

2014-01-27 07:23:17 -0600 commented answer From Fundamental Matrix To Rectified Images

I tried the code with different kinds of features. The best result was with SURF.

Here is one of the cases that I have tested:

left image: http://i42.tinypic.com/10sembn.jpg

Right image: http://i42.tinypic.com/6gkdia.jpg

Matches: http://i41.tinypic.com/66ynbd.jpg

Fundamental matrix test on the left image: http://i39.tinypic.com/b84x10.jpg

Rectified image : http://i42.tinypic.com/2jczivt.jpg

In this specific case I set minHessian to 250. I also checked all the possible combinations of R and T; the other 3 results were just black images.

2014-01-27 03:47:21 -0600 commented answer From Fundamental Matrix To Rectified Images

Thank you for the answer! I tried it, but the problem is still there. :( Another question: does the same procedure work for RGB images?

2014-01-27 02:48:35 -0600 commented question From Fundamental Matrix To Rectified Images

I move the camera horizontally (almost).

2014-01-24 15:24:24 -0600 received badge  Student (source)
2014-01-24 08:48:12 -0600 asked a question From Fundamental Matrix To Rectified Images

I have stereo photos coming from the same camera and I am trying to use them for 3D reconstruction.

To do that, I extract SURF features and calculate the fundamental matrix. Then I get the essential matrix and, from there, the rotation matrix and translation vector. Finally, I use them to obtain rectified images.

The problem is that it works only with some specific parameters. If I set minHessian to 430, I get pretty nice rectified images, but any other value gives me just a black image or obviously wrong images.

In all cases, the fundamental matrix seems to be fine (I draw epipolar lines on both the left and right images). However, I cannot say the same about the essential matrix, rotation matrix and translation vector, even though I tried all 4 possible combinations of R and T.
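For reference, this is roughly how I check F with epipolar lines (only a sketch; drawEpilines and its arguments are my own names, built around computeCorrespondEpilines):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace cv;
using namespace std;

// For each point in one image, draw the corresponding epipolar line in the other.
void drawEpilines(const Mat& F, const vector<Point2f>& pts, int whichImage, Mat& otherImg)
{
    vector<Vec3f> lines;
    computeCorrespondEpilines(pts, whichImage, F, lines);  // lines l = F*x (or F^T*x)
    for (size_t i = 0; i < lines.size(); i++)
    {
        float a = lines[i][0], b = lines[i][1], c = lines[i][2];
        // intersect the line a*x + b*y + c = 0 with the left and right image borders
        Point p0(0, cvRound(-c / b));
        Point p1(otherImg.cols, cvRound(-(c + a * otherImg.cols) / b));
        line(otherImg, p0, p1, Scalar(255), 1);
    }
}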

Here is my code. Any help or suggestion would be appreciated. Thanks!


#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SurfFeatureDetector, SurfDescriptorExtractor

using namespace cv;
using namespace std;

Mat img_1 = imread( "images/imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "images/imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );


if( !img_1.data || !img_2.data )
{ return -1; }

//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 430;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );

//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );

//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
namedWindow( "Matches", CV_WINDOW_NORMAL );
imshow("Matches", img_matches );
waitKey(0);


//-- Step 4: calculate Fundamental Matrix
vector<Point2f> imgpts1, imgpts2;
for( unsigned int i = 0; i < matches.size(); i++ )
{
    // queryIdx is the "left" image
    imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
    // trainIdx is the "right" image
    imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
Mat F = findFundamentalMat(imgpts1, imgpts2, FM_RANSAC, 0.1, 0.99);

//-- Step 5: calculate Essential Matrix
double data[] = {1189.46,    0.0, 805.49,
                     0.0, 1191.78, 597.44,
                     0.0,     0.0,   1.0};   // camera matrix
Mat K(3, 3, CV_64F, data);
Mat_<double> E = K.t() * F * K;

//-- Step 6: calculate Rotation Matrix and Translation Vector
Matx34d P;
//decompose E 
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
Mat_<double> R = svd_u * Mat(W) * svd_vt; //
Mat_<double> T = svd_u.col(2); //u3

if (!CheckCoherentRotation (R)) {
std::cout<<"resulting rotation is not coherent\n";
return 0;
}


//-- Step 7: Reprojection Matrix and rectification data
Mat R1, R2, P1_, P2_, Q;
Rect validRoi[2];
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);

stereoRectify(K, D, K, D, img_1.size(), R, T, R1, R2, P1_, P2_, Q, CV_CALIB_ZERO_DISPARITY, 1, img_1.size(),  &validRoi[0], &validRoi[1] );
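For completeness, the four (R, T) combinations I mentioned come from the same SVD of E; this is roughly how I form them when testing (the _cand names are just placeholders of mine):

// The four possible decompositions of E (HZ, result 9.19): two rotations and
// two translation signs. Only one combination places the reconstructed points
// in front of both cameras.
Mat_<double> R1_cand = svd_u * Mat(W)     * svd_vt;   // U * W   * V^T
Mat_<double> R2_cand = svd_u * Mat(W.t()) * svd_vt;   // U * W^T * V^T
Mat_<double> T1_cand =  svd_u.col(2);                 //  u3
Mat_<double> T2_cand = -svd_u.col(2);                 // -u3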