
matteo's profile - activity

2019-06-28 08:36:24 -0600 received badge  Famous Question (source)
2017-06-25 13:49:15 -0600 received badge  Famous Question (source)
2017-02-23 13:07:52 -0600 received badge  Notable Question (source)
2016-04-17 13:50:39 -0600 received badge  Taxonomist
2016-03-11 21:17:50 -0600 received badge  Notable Question (source)
2015-12-11 16:11:55 -0600 received badge  Famous Question (source)
2015-10-22 06:52:55 -0600 received badge  Popular Question (source)
2015-10-09 22:05:21 -0600 received badge  Popular Question (source)
2015-03-19 02:22:57 -0600 received badge  Famous Question (source)
2014-12-09 13:50:21 -0600 marked best answer Wrong matrix from getAffineTransform

I need to use the affine transformation algorithm to find the matrix describing the affine transform between two images (from which I took 3 matching points) and afterwards multiply it by some coordinates (representing points in the first frame) to find the corresponding points in the second. I'm experimenting with the OpenCV function getAffineTransform on 3 points that stay still. This is my code:

Point2f terp1[] = { Point2f(1,1), Point2f(30,1), Point2f(30,30) };
Point2f terp2[] = { Point2f(1,1), Point2f(30,1), Point2f(30,30) };

Mat A_m( 2, 3, CV_32FC1 );
A_m = getAffineTransform(terp1, terp2);

PrintMatrix(A_m);

I expect the matrix to be 1 0 0 / 0 1 0, but instead I receive (from the PrintMatrix function) huge numbers that make no sense, such as 2346027296 32673544 32900240 // 2346027296 32673544 32900240.

Why?
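
A plausible explanation (a sketch, assuming the custom PrintMatrix reads the elements as float or int): getAffineTransform returns a CV_64F (double) matrix, and the assignment replaces the header of A_m, silently discarding the CV_32FC1 allocation; reading doubles through the wrong type yields exactly this kind of garbage.

#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Point2f src[] = { Point2f(1,1), Point2f(30,1), Point2f(30,30) };
    Point2f dst[] = { Point2f(1,1), Point2f(30,1), Point2f(30,30) };

    // The returned matrix is CV_64F regardless of any prior allocation.
    Mat A = getAffineTransform(src, dst);

    std::cout << A << std::endl;             // operator<< handles the type itself
    std::cout << A.at<double>(0,0) << "\n";  // element access must use double
    return 0;
}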

2014-12-09 13:47:57 -0600 marked best answer Feature matching of segmented images

Hi everybody, I'm using this example proposed by OpenCV for feature extraction on images of segmented roads (one translated with respect to the other). My problem is that it finds 0 matching points: how is that possible? Thanks.

2014-12-09 13:07:28 -0600 marked best answer FLANN algorithm cannot find any descriptor

I'm acquiring images from a video stream (from a simulated camera in a simulated environment, V-REP): my goal is to find matching points in order to compute the affine transformation matrix between two frames. I'm trying to use the Feature Matching with FLANN algorithm proposed by OpenCV (http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann) with 2 Mat-type images acquired through cv_bridge (http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages), since I read them from a ROS topic. This is my code:

vector< DMatch > MatchFinder(Mat img_1, Mat img_2)
{
    int minHessian = 400;
    unsigned int i;

    SurfFeatureDetector detector( minHessian );

    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_1, descriptors_2;
    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;

    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );  // ERROR

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance < 2*min_dist )
            good_matches.push_back( matches[i] );
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
                 good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    imshow( "Good Matches", img_matches );
    ROS_INFO("size matchings: %u", good_matches.size());
    for( i = 0; i < good_matches.size(); i++ )
    {
        printf( "-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n",
                i, good_matches[i].queryIdx, good_matches[i].trainIdx );
    }

    waitKey(1);
    return good_matches;
}

At the point where ERROR is written I receive this error:

OpenCV Error: Unsupported format or combination of formats (type=0) in buildIndex_, file /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp, line 299
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp:299: error: (-210) type=0 in function buildIndex_

Aborted (core dumped)

This happens because in the second frame the algorithm doesn't manage to find any descriptors! I'm wondering: why? I'm running my code with the camera standing still, so the images are exactly the same.
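
For what it's worth, a minimal guard (a sketch, not the accepted answer): miniflann's buildIndex_ reports type=0 when it is handed an empty descriptor matrix, which is what compute() produces when detect() finds no keypoints, so checking before matching at least avoids the crash:

if( descriptors_1.empty() || descriptors_2.empty() )
{
    ROS_WARN("no descriptors in at least one frame, skipping match");
    return std::vector<DMatch>();
}
matcher.match( descriptors_1, descriptors_2, matches );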

2014-10-26 03:29:38 -0600 received badge  Notable Question (source)
2014-09-17 13:34:26 -0600 marked best answer Unsupported format or combination of formats in buildIndex using FLANN algorithm

I'm acquiring images from a video stream (from a simulated camera): my goal is to find matching points in order to compute the affine transformation matrix between two frames afterwards. I'm trying to use the Feature Matching with FLANN algorithm proposed by OpenCV (http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann) with 2 Mat-type images acquired through cv_bridge (http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages), since I read them from a ROS topic. This is my code:

 vector< DMatch > MatchFinder(Mat img_1, Mat img_2)
 {
int minHessian = 400;
unsigned int i;

SurfFeatureDetector detector( minHessian );

std::vector<KeyPoint> keypoints_1, keypoints_2;

detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );

//-- Step 2: Calculate descriptors (feature vectors)

SurfDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3: Matching descriptor vectors using FLANN matcher

FlannBasedMatcher matcher;

std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );  // ERROR

double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( i = 0; i < descriptors_1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );

//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;

for( i = 0; i < descriptors_1.rows; i++ )
{
    if( matches[i].distance < 2*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}

 //-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
           good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
           vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Show detected matches

imshow( "Good Matches", img_matches );
ROS_INFO("size matchings: %u", good_matches.size());
for(  i = 0; i < good_matches.size(); i++ )
{
  printf( "-- Good Match [%d] Keypoint 1: %d  -- Keypoint 2: %d  \n", i, ...
good_matches[i].queryIdx, good_matches[i].trainIdx );
}

 waitKey(1);
 return good_matches;
 }

At the point where ERROR is written I receive this error:

OpenCV Error: Unsupported format or combination of formats (type=0) in buildIndex_, file /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp, line 299
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/buildd/ros-fuerte-opencv2-2.4.2-1precise-20130312-1306/modules/flann/src/miniflann.cpp:299: error: (-210) type=0 in function buildIndex_

Aborted (core dumped)

But if I try the same code loading 2 JPG images, it works! What's wrong?
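
One plausible cause, given that JPGs work but the streamed frames don't (an assumption, not a confirmed diagnosis): SURF needs an 8-bit image, and a cv_bridge conversion that doesn't force an encoding can hand over a Mat on which detect() finds no keypoints, leaving an empty (type=0) descriptor matrix. Forcing a known encoding is a cheap check (msg here stands for the subscribed sensor_msgs::Image):

cv_bridge::CvImagePtr cv_ptr =
    cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::MONO8);
Mat img = cv_ptr->image;   // now guaranteed CV_8UC1, which SURF accepts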

2014-06-10 06:22:01 -0600 received badge  Notable Question (source)
2014-05-04 17:25:48 -0600 received badge  Popular Question (source)
2014-04-07 11:58:42 -0600 asked a question Wrong result from Hartley and Zisserman Triangulation

I adapted the code of this interesting post in this way (only for one point):

Mat A(4,3,CV_32FC1);

A.at<float>(0,0) = u.at<float>(0,0)*P.at<float>(2,0) - P.at<float>(0,0);
A.at<float>(0,1) = u.at<float>(0,0)*P.at<float>(2,1) - P.at<float>(0,1);
A.at<float>(0,2) = u.at<float>(0,0)*P.at<float>(2,2) - P.at<float>(0,2);

A.at<float>(1,0) = u.at<float>(1,0)*P.at<float>(2,0) - P.at<float>(1,0);
A.at<float>(1,1) = u.at<float>(1,0)*P.at<float>(2,1) - P.at<float>(1,1);
A.at<float>(1,2) = u.at<float>(1,0)*P.at<float>(2,2) - P.at<float>(1,2);

A.at<float>(2,0) = u1.at<float>(0,0)*P1.at<float>(2,0) - P1.at<float>(0,0);
A.at<float>(2,1) = u1.at<float>(0,0)*P1.at<float>(2,1) - P1.at<float>(0,1);
A.at<float>(2,2) = u1.at<float>(0,0)*P1.at<float>(2,2) - P1.at<float>(0,2);

A.at<float>(3,0) = u1.at<float>(1,0)*P1.at<float>(2,0) - P1.at<float>(1,0);
A.at<float>(3,1) = u1.at<float>(1,0)*P1.at<float>(2,1) - P1.at<float>(1,1);
A.at<float>(3,2) = u1.at<float>(1,0)*P1.at<float>(2,2) - P1.at<float>(1,2);

Mat B(4,1,CV_32FC1);

B.at<float>(0,0) = -(u.at<float>(0,0)*P.at<float>(2,3) - P.at<float>(0,3));
B.at<float>(1,0) = -(u.at<float>(1,0)*P.at<float>(2,3) - P.at<float>(1,3));
B.at<float>(2,0) = -(u1.at<float>(0,0)*P1.at<float>(2,3) - P1.at<float>(0,3));
B.at<float>(3,0) = -(u1.at<float>(1,0)*P1.at<float>(2,3) - P1.at<float>(1,3));

Mat X;

solve(A,B,X,DECOMP_SVD);

where u and u1 are the points in the first and second image respectively, and P and P1 are the projection matrices obtained as C*[R|t], where C is the intrinsic parameter matrix of the camera and R, t are the rotation and translation describing the camera pose.

The problem is that the result I get, the coordinates of the point in the world, is not correct: there is no overflow or anything similar, it is just wrong... for example I get a depth of 35 m instead of 2 m, or, if I move the camera closer to the 3D feature point, the depth increases. What am I doing wrong? PS. The camera is already calibrated and the features are retrieved from a rectified image.
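
For context, the linear system the code above sets up is the standard DLT construction: each image coordinate contributes one row, with the homogeneous scale of the world point fixed to 1. In LaTeX notation:

u = \frac{P_{11}X + P_{12}Y + P_{13}Z + P_{14}}{P_{31}X + P_{32}Y + P_{33}Z + P_{34}}
\quad\Longrightarrow\quad
(u P_{31} - P_{11})X + (u P_{32} - P_{12})Y + (u P_{33} - P_{13})Z = P_{14} - u P_{34}

and analogously for v and for (u1, P1), stacking into the 4x3 system A [X Y Z]^T = B that solve(A, B, X, DECOMP_SVD) solves in the least-squares sense. If the system itself is built correctly (as it appears to be), the usual suspect is the convention of [R|t]: it must map world coordinates into the camera frame (world-to-camera), not describe the camera pose in the world; using the pose directly inverts the geometry and produces exactly this kind of plausible-but-wrong depth.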

2014-03-31 09:25:12 -0600 asked a question triangulatePoints with moving monocamera

Well, I'm trying to use the function triangulatePoints in order to retrieve the depth of a point. First, I'm adopting this process not with a stereo rig but with a monocular camera: the camera moves through the environment and I select two frames and a point present in both of them (of course I know the position of the camera at all times). Assuming this procedure is correct, I am wondering which camera frame the 3D point output by that function is expressed in.

2014-03-26 12:02:38 -0600 commented question Random values printing a Mat

I'm stupid... thanks!

2014-03-26 11:40:39 -0600 asked a question Random values printing a Mat

Hi, I would like to print out a Mat matrix after having written the desired values into it. I'm doing it this way:

Mat Rot(3,3, CV_32FC1);

Rot.at<double>(0,0)=cos(roll)*cos(pitch);
Rot.at<double>(0,1)=cos(roll)*sin(pitch)*sin(yaw)-sin(roll)*cos(yaw);
Rot.at<double>(0,2)=cos(roll)*sin(pitch)*cos(yaw)+sin(roll)*sin(yaw);
Rot.at<double>(1,0)=sin(roll)*cos(pitch);
Rot.at<double>(1,1)=sin(roll)*sin(pitch)*sin(yaw)+cos(roll)*cos(yaw);
Rot.at<double>(1,2)=sin(roll)*sin(pitch)*cos(yaw)-cos(roll)*sin(yaw);
Rot.at<double>(2,0)=-sin(pitch);
Rot.at<double>(2,1)=cos(pitch)*sin(yaw);
Rot.at<double>(2,2)=cos(pitch)*cos(yaw);

    cout << " Rot " << Rot << endl;

    cout << " Rot(0,0) " <<   Rot.at<double>(0,0) << endl;

The strange thing is that, when I print out Rot, random values come out but the output of Rot(0,0) is correct!

Any idea? Thanks
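
The likely culprit (a sketch of the fix, an inference from the code above rather than the accepted answer): the Mat is allocated as CV_32FC1 (float) but written through at<double>, so every write spans two float cells and operator<< then prints garbage, while at<double>(0,0) happens to read back what was written. Matching the element type resolves it:

Mat Rot(3, 3, CV_64FC1);   // double storage to match the at<double> accesses

Rot.at<double>(0,0) = cos(roll)*cos(pitch);
// ... remaining entries exactly as above ...

cout << " Rot " << Rot << endl;   // now prints the expected values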

2014-03-11 13:02:47 -0600 marked best answer Fill holes of a binary image

Hi, my goal is to fill with white the CLOSED black zones, not the ones that end on the image boundaries and aren't enclosed by white pixels (I hope I'm clear). I tried opening (or closing) morphological operations, but they cause the closure of the open/connected black rectangular zones, which is highly unwanted. Can someone help me?
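
One common approach (a sketch, assuming white foreground on black background and that pixel (0,0) belongs to the open background): flood-fill the background from a border pixel, then OR the inverse of the fill with the original, so only the enclosed black zones turn white.

Mat filled = binary.clone();
floodFill(filled, Point(0,0), Scalar(255));   // open background becomes white

Mat holes;
bitwise_not(filled, holes);                   // only the closed black zones remain

Mat result = binary | holes;                  // original with its holes filled white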

2014-02-16 06:47:24 -0600 received badge  Popular Question (source)
2014-01-17 09:41:26 -0600 commented answer What does projection matrix provided by the calibration represent?

If K stands for the intrinsic matrix... no, in my case that's not true. I have just added the results of the calibration to the post.

2014-01-16 10:44:50 -0600 asked a question What does projection matrix provided by the calibration represent?

I'm using a tool from ROS/OpenCV to perform the camera calibration. The procedure ends up providing: camera matrix, distortion parameters, rectification matrix, and projection matrix. As far as I know, the projection matrix is the intrinsic parameter matrix of the camera multiplied by the extrinsic parameter matrix, and the extrinsic parameter matrix provides the roto-translation of the camera frame with respect to the world frame. If these assumptions are correct... how is the projection matrix computed by OpenCV? I'm not defining any world frame!

camera matrix
414.287922 0.000000 382.549277
0.000000 414.306025 230.875006
0.000000 0.000000 1.000000

distortion
-0.278237 0.063338 -0.001382 0.000732 0.000000

rectification
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000

projection
297.051453 0.000000 387.628900 0.000000
0.000000 369.280731 227.051305 0.000000
0.000000 0.000000 1.000000 0.000000
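
For reference, an assumption based on the ROS image_pipeline conventions (not something stated by the calibration output itself): for a monocular camera the rectification matrix is left as the identity and the projection matrix is built as

P = K_new [ I | 0 ]

where K_new is the camera matrix recomputed for the undistorted image. No world frame enters the computation; the "extrinsic" part is simply zero rotation and zero translation, which is consistent with the numbers above (the right column of the projection matrix is all zeros).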

2013-12-17 11:16:08 -0600 asked a question drawMatches for vector<vector<DMatch> > good_matches

I need to detect the matches between 2 images. I use the following code:

int MatchFinder(Mat img_1, Mat img_2, Mat& A )
{
vector<KeyPoint> keypoints_1;
vector<KeyPoint> keypoints_2;

Mat img_matches;
vector <Point2f> point1;
vector <Point2f> point2;

int minHessian = 400;

SiftFeatureDetector detector( minHessian );

detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );

//-- Step 2: Calculate descriptors (feature vectors)

SiftDescriptorExtractor extractor;

Mat descriptors_1, descriptors_2;

extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );


//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;

if ( descriptors_1.empty() )
    cvError(0,"MatchFinder","1st descriptor empty",__FILE__,__LINE__);


if ( descriptors_2.empty() )
    cvError(0,"MatchFinder","2nd descriptor empty",__FILE__,__LINE__);

////////////////////////////// -- Ratio check
std::vector<vector< DMatch > > matches;
matcher.knnMatch( descriptors_1, descriptors_2, matches, 2 );

std::vector<vector<DMatch> > good_matches;

double RatioT = 0.75;
//-- ratio Test
for(unsigned int i=0; i<matches.size(); i++)
{
   if((matches[i].size()==1)||(abs(matches[i][0].distance/matches[i][1].distance) < RatioT))
   {
      good_matches.push_back(matches[i]);
   }
}


 //-- Draw only "good" matches   //-- Show detected matches

// VISUALIZATION !?!?!

/////////////////////////////////// -- ratio end 1
if(good_matches.size()<4)
    {
    //cout << "less than 4 matching points!   "  <<endl;
    point1.clear();
    point2.clear();
    keypoints_1.clear();
    keypoints_2.clear();
    good_matches.clear();
    img_matches.release();

    return 1;
    }


else
{

    for(unsigned int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        point1.push_back( keypoints_1[ good_matches[i][0].queryIdx ].pt );
        point2.push_back( keypoints_2[ good_matches[i][0].trainIdx ].pt );
    }
      return 0;
   }

   }

I would now like to visualize the matches between the 2 images as if I had used drawMatches (I cannot use this function directly: I have a vector<vector<DMatch> > good_matches instead of a vector<DMatch> good_matches).

Any suggestion?
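
A minimal sketch: drawMatches wants a flat vector<DMatch>, so one option is to collapse the ratio-filtered vector<vector<DMatch> > by keeping the best (index 0) candidate of each entry.

std::vector<DMatch> flat_matches;
flat_matches.reserve(good_matches.size());
for(unsigned int i = 0; i < good_matches.size(); i++)
    flat_matches.push_back(good_matches[i][0]);   // best candidate only

Mat img_matches;
drawMatches(img_1, keypoints_1, img_2, keypoints_2, flat_matches, img_matches);
imshow("Good Matches", img_matches);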

2013-12-03 03:25:42 -0600 commented question BackgroundSubtractorMOG2 returns a white image

I'm not sure I understood... could you please give an example? By the way, this function IS in a loop (a ROS while loop), but the result is the one described above.

2013-12-02 11:42:28 -0600 asked a question BackgroundSubtractorMOG2 returns a white image

I have a camera rigidly attached to a tool. A portion of this tool is included in the field of view of the camera itself. The tool is always on the move. My goal is to retrieve from each image acquired by the camera the segmentation of the tool. I then want to apply BackgroundSubtractorMOG2 in order to highlight only the portion of the image that doesn't change over time (the tool).

The code of the function I'm using is the following (C++ code):

Mat Background_removal(Mat img)
{
    Mat img_fore;
    Mat background;
    Mat foreground;

    const int nmixtures = 10;
    const bool shadow = false;
    const int history = 1000;

    BackgroundSubtractorMOG2 mog(history, nmixtures, shadow);

    mog.operator()(img, foreground);

    return foreground;
}

My issue is that this function returns a completely white image. What am I doing wrong?
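
A likely cause (a sketch, assuming the function is called once per frame as the comment thread suggests): the subtractor is re-constructed on every call, so it never accumulates a background model and labels the whole frame as foreground. Keeping one instance alive across calls is the usual fix. Note also that in the 2.4 API the second constructor argument is the variance threshold, not the number of mixtures:

// One persistent subtractor, shared across all frames.
static BackgroundSubtractorMOG2 mog(1000 /*history*/, 16.0f /*varThreshold*/, false /*no shadows*/);

Mat Background_removal(const Mat& img)
{
    Mat foreground;
    mog(img, foreground);   // operator() updates the model and segments the frame
    return foreground;
}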

2013-09-09 03:20:11 -0600 commented question Dynamic colouration of zones visualized by a flying camera

What I have available: an image (img_1) representing a portion of a bigger image (img_2). I have to overlap img_1 on img_2 (in the right place and with the right proportions)... forget about the colours, that is not important in the end.

2013-09-06 05:33:21 -0600 asked a question Attach 2 images using the homography matrix

I have an image_1 representing a portion of another image_2. I have to attach the first (modified) to the second in the right position (overlapping image_1 onto the portion of image_2 representing the same thing). How can I do it?

I was thinking of using the FLANN matcher to get the homography matrix between image_1 and image_2, then modifying image_1, and then combining the two images using the matrix just found... the problem is that I really don't know how to perform this last step.
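
A sketch of that last step (assuming H comes from findHomography(points_1, points_2) and therefore maps image_1 coordinates into image_2 coordinates): warp image_1 into image_2's frame, then paste it through a warped mask.

Mat warped;
warpPerspective(image_1, warped, H, image_2.size());

// Warp an all-white mask the same way, so we know which pixels the
// patch actually covers, then copy only those onto image_2.
Mat mask(image_1.size(), CV_8UC1, Scalar(255));
Mat warped_mask;
warpPerspective(mask, warped_mask, H, image_2.size());
warped.copyTo(image_2, warped_mask);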

2013-09-06 03:41:39 -0600 asked a question Dynamic colouration of zones visualized by a flying camera

I have an image (img_tot) printed on a plane and a down-facing camera flying over it, acquiring a frame of a portion of that plane at every time step "k". I have to store all the images (one per time step) representing img_tot with the portion covered by frame(k) coloured in a certain way, as if I had to highlight all the zones of the plane visualized by the camera. How can I do this?
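
A sketch (assuming a homography H_k mapping frame(k) coordinates into img_tot coordinates is available, e.g. from feature matching as in the related question above): warp an all-white mask of each frame into img_tot's coordinates, accumulate it, and tint the covered region at every time step.

Mat coverage(img_tot.size(), CV_8UC1, Scalar(0));   // persistent across time steps

// For each time step k:
Mat frame_mask(frame.size(), CV_8UC1, Scalar(255));
Mat warped_mask;
warpPerspective(frame_mask, warped_mask, H_k, img_tot.size());
coverage |= warped_mask;                            // accumulate visited zones

Mat highlighted = img_tot.clone();
highlighted.setTo(Scalar(0, 0, 255), coverage);     // visited zones coloured red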

2013-08-21 07:24:29 -0600 asked a question Nearest coloured point

Is there any function that finds, for a given point, the nearest pixel with a desired colour?
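
Not a single function that I know of; a brute-force sketch (assuming a BGR image img, colour bounds lower/upper, and a query point given):

Mat mask;
inRange(img, lower, upper, mask);        // 255 where the colour matches

Point nearest(-1, -1);
double best = -1;
for(int y = 0; y < mask.rows; y++)
    for(int x = 0; x < mask.cols; x++)
    {
        if(!mask.at<uchar>(y, x)) continue;
        double dx = x - given.x, dy = y - given.y;
        double d2 = dx*dx + dy*dy;       // squared distance is enough for comparison
        if(best < 0 || d2 < best) { best = d2; nearest = Point(x, y); }
    }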

2013-08-21 04:14:12 -0600 commented question Error in transormation using FindHomography

OK, I tried it out: this function removes about 50 wrong matches out of about 280. The result doesn't change though; the shifts are still present... :(

2013-08-21 04:10:50 -0600 commented question Ratio check function with findHomography

Updated, sorry.

2013-08-20 13:07:42 -0600 asked a question Ratio check function with findHomography

I need to acquire a perspective matrix from two images taken by the same camera after it moves in 3D space... I'm taking inspiration from this OpenCV example on the use of findHomography. I would like to substitute the filter for the good_matches suggested there with the "ratio check" function:

class RatioCheck
{
public:
    RatioCheck(){};
    void filter(vector<vector<DMatch> >& nmatches, double RatioT);
};

void RatioCheck::filter(std::vector<vector<DMatch> > &nmatches, double RatioT=0.8)
{
    vector<vector<DMatch> > knmatches;
    for(unsigned int i=0; i<nmatches.size(); i++)
    {
        if((nmatches[i].size()==1)||
           (abs(nmatches[i][0].distance/nmatches[i][1].distance) < RatioT))
        {
            knmatches.push_back(nmatches[i]);
        }
    }
    nmatches=knmatches;
}

In principle I have to give the function a vector<vector<DMatch> >, but I actually have a vector<DMatch>... does anybody know how?
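
A minimal sketch of the missing piece: the ratio test needs the two best candidates per query descriptor, and knnMatch (unlike match) returns exactly that nested structure.

std::vector<std::vector<DMatch> > nmatches;
matcher.knnMatch(descriptors_1, descriptors_2, nmatches, 2);   // k = 2 neighbours

RatioCheck checker;
checker.filter(nmatches, 0.8);
// nmatches now holds only the entries passing the ratio test; take
// nmatches[i][0] wherever the findHomography example expects a DMatch.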