
nhatphongtran's profile - activity

2013-10-04 22:06:53 -0600 asked a question MacBeth Detection

What is the best way to train a cascade classifier to find this sort of artificial pattern?

[image: the MacBeth chart pattern]

I ran several training sessions with images from the googlecode haartraining tutorial, varying from 500 negatives / 1000 positives up to 1500 negatives / 4000 positives. The maximum false alarm rate varied between 0.4 and 0.5, and the minimum hit rate between 0.995 and 0.999. Several tests were conducted with 10-15 stages, each resulting in an acceptance ratio of < 0.001 at the very last stage.

I'm assuming my training data is not good, because with each resulting XML file the cascade classifier was unable to detect the MacBeth chart. So I want to ask the more experienced people here: how would you set up training for this pattern?

2013-10-01 20:03:12 -0600 commented question Detecting Artificial Patterns

It's weird that the XML is only 33 KB; I can't imagine 33 KB is enough to properly describe the pattern.

2013-10-01 13:14:55 -0600 commented question Detecting Artificial Patterns

Yeah, I'm just overlaying the results on top of the colored image. I see what you're saying about the shape, but it's not really for an AR application. It's more for auto-detection of that color chart in order to neutralize and calibrate the colors of the incoming image, so the pattern has to stay as it is.

2013-09-30 21:58:11 -0600 asked a question Detecting Artificial Patterns

Hi!

I trained a classifier to detect a MacBeth color chart, which is an artificial pattern and should be easy to detect. Training used Haar features from 1500 negatives and 4000 positives, with background photos from Google (http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/) that are similar to the environment where the pattern is most likely to be found.

The statistics of the training seemed very reasonable: [image: training statistics]

It went through all 10 stages and produced a 33 KB XML cascade description file.

However, when I try to find the pattern, it detects all sorts of things: [image: false detections]

Does anyone have an idea how to improve the settings or properly find that pattern?

[image: the pattern to detect]

2013-09-16 03:49:15 -0600 asked a question Keypoint Matching very sensitive to distortion/transformations

Hi guys!

From my understanding, keypoint matching with FLANN or BruteForce is somewhat transformation independent, so it should be able to easily match a pattern that is slightly transformed in space. I tested both the FLANN and BruteForce matchers in conjunction with SURF or FREAK, with various settings for the Hessian threshold and the minimum distance between keypoints used to select the "good" matches. However, when matching a QR code across photos in which the location, distance, and rotation changed only slightly, the results were pretty inconsistent.

In rare cases it finds a match like this one (woohoo!): [image: successful match]

Just placing the pattern somewhere else and slightly changing the rotation already causes trouble: [image: failed match]

So right now I'm looking for guidance on how to improve detection of patterns like this: should I keep trying different feature detectors and extractors, or start looking into training samples?

I'd appreciate any help!

Here's my code:

#include <cstdio>
#include <cfloat>
#include <iostream>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SurfFeatureDetector
#include <opencv2/calib3d/calib3d.hpp>

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE);
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE);

  if( !img_object.data || !img_scene.data )
  { std::cout<< " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 2000;

  SurfFeatureDetector detector( minHessian );
  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  FREAK extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Matching descriptor vectors
  BFMatcher matcher(cv::NORM_HAMMING, true);

  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = DBL_MAX;

  //-- Quick calculation of max and min distances between matches
  //   (with crossCheck enabled, matches.size() can be smaller than
  //    descriptors_object.rows, so iterate over the matches themselves)
  for( size_t i = 0; i < matches.size(); i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Keep only "good" matches (i.e. whose distance is less than 3*min_dist)
  std::vector< DMatch > good_matches;

  for( size_t i = 0; i < matches.size(); i++ )
  {
    if( matches[i].distance <= 3 * min_dist )
      good_matches.push_back( matches[i] );
  }

  Mat img_matches(img_scene.clone());
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               vector<char>(), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( unsigned int i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  //-- findHomography needs at least 4 point correspondences
  if( good_matches.size() < 4 )
  { std::cout << " --(!) Not enough good matches for a homography " << std::endl; return -1; }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from the image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = Point2f( 0, 0 );
  obj_corners[1] = Point2f( (float)img_object.cols, 0 );
  obj_corners[2] = Point2f( (float)img_object.cols, (float)img_object.rows );
  obj_corners[3] = Point2f( 0, (float)img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H );
  // ... (the rest of the post is truncated by the forum)