Ask Your Question

colin747's profile - activity

2015-03-09 04:34:38 -0500 received badge  Scholar (source)
2015-02-13 03:29:38 -0500 commented answer ORB - object needs to be very close to camera

Thanks for an interesting answer. I'd come to a similar conclusion that it was due to the quality of the camera, though I'm still not sure exactly which aspect of the camera is letting it down. I'd be reluctant to remove your answer as I still find it relevant to the problem and it may help people in the future. Would you have any resources on how to calculate this? It would be helpful if I could show on paper why this problem is occurring rather than asking people to take my word for it.

2015-02-11 08:24:27 -0500 asked a question ORB - object needs to be very close to camera

I have a program that takes a video feed over RTSP and checks for an object. The only problem is that the object needs to be about 6" from the camera, whereas with a wired webcam the object can be a few feet away. Both cameras are transmitting at the same resolution, so what is causing this problem?

Camera transmission specs:

Resolution: 640 x 480, FPS: 20, Bitrate: 500000

EDIT: An additional answer here may be of interest to others:

2015-02-03 08:16:19 -0500 received badge  Self-Learner (source)
2015-02-03 03:47:37 -0500 answered a question Best images for ORB feature point tracking

After playing about with different images I've come up with three images that are complex enough for the ORB detector to detect easily, but distinct enough that it doesn't confuse them with each other. (N.B. I only stopped at three as that's all I need for the moment, so I don't know if there is a limit on how distinctive you can make these.)

Image one:

Image One - multiple overlapped squares and circles

Image Two:

Image Two - multiple overlapped triangles

Image Three:

Image Three - multiple overlapped stars

I have no reason to doubt that these images would also work with other feature point detection algorithms, but I have not tested that. Hopefully this will be of some use to others.

N.B. There were small false positives detected within the video frames, which were filtered out by calculating the number of inliers within the homography and checking whether this was over a pre-set value (in my case 100).

2015-02-03 03:37:27 -0500 received badge  Enthusiast
2015-02-02 03:18:25 -0500 commented question Best images for ORB feature point tracking

Is there any advantage to that method over using ORB? I only ask as the markers can be changed to whatever suits; I will try the Harris method and report back.

2015-01-30 04:38:57 -0500 asked a question Best images for ORB feature point tracking

I have a program which detects objects in a video stream. The objects will be markers attached to physical objects, so in effect it is recognising the marker rather than the object itself.

My question is what type of marker would be best for this?

I'm aware that a complex, non-repeating pattern is best, but I'm not sure how this translates into practice. I thought that a QR-code-style marker would be good, and a single QR code is easily recognised with a very high accuracy rate, but when I start using multiple QR codes, one per object, the program finds it difficult to distinguish between them.

EDIT: As per the first comment I tried the following image, but could not get the program to detect it at all (it does detect the QR codes fine).

image description

2015-01-16 03:26:34 -0500 commented question OpenCV - match SURF points runtime error

I wrote the code using the OpenCV documentation example; when I try it in a separate program with just still images it works fine, so I'm assuming there is a problem in my code somewhere.

2015-01-14 09:59:41 -0500 asked a question OpenCV - match SURF points runtime error

I have a program which matches feature points found in a template image to those shown in the video feed. When I run the program I am getting the following error:

OpenCV Error: Assertion failed (i1 >= 0 && i1 < static_cast<int>(keypoints1.size())) in drawMatches, file bin/opencv-2.4.7/modules/features2d/src/draw.cpp, line 207
terminate called after throwing an instance of 'cv::Exception'
  what():  bin/opencv-2.4.7/modules/features2d/src/draw.cpp:207: error: (-215) i1 >= 0 && i1 < static_cast<int>(keypoints1.size()) in function drawMatches


This is the drawMatches call that is mentioned above:

drawMatches(img_1, templateKeypoints, frames, keypoints_1, good_matches, img_matches, cv::Scalar::all(-1), cv::Scalar::all(-1), std::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

From what I've read, I believe the problem is that if the feature points found in the video do not match the feature points in the template image, the program aborts.

min_dist = 100;
for (int i = 0; i < img_descriptors_1.rows; i++) {
    if (matches[i].distance <= 3 * min_dist) {
        good_matches.push_back(matches[i]);
    }
}

I am looking for the video feed to run constantly even if no matches are present.

EDIT: I've noticed that if I repeatedly run the program, I sometimes get an alternative error message:

OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findHomography, file /home/colin/bin/opencv-2.4.7/modules/calib3d/src/fundam.cpp, line 1074
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/colin/bin/opencv-2.4.7/modules/calib3d/src/fundam.cpp:1074: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findHomography

2014-03-18 10:00:15 -0500 asked a question OpenCV Haar Classifier - how does it know when an object has been matched in live video

I have a trained OpenCV Haar classifier, and I am using the sample face-detect program, supplying my classifier XML file as an argument. The program is working as expected; my question is, how does the program know when the object has been detected?

Does it use the Haar feature rectangles on the live video feed and check for a feature match within the XML?

2014-02-05 09:35:17 -0500 commented question Video frame rate always 90k

Nope this is the full video.

2014-02-03 08:03:24 -0500 asked a question Video frame rate always 90k

I'm trying to get the frame rate of a captured video, but it always returns 90000. Surely this can't be correct; is there a problem with my code that may be causing this value to be returned?

VideoCapture cap("/home/colin/downloads/20140203_133838.mp4"); // open the video file for reading
  if ( !cap.isOpened() ) {
    cout << "Cannot open the video file" << endl;
    return 1;
  }
  double fps = cap.get(CV_CAP_PROP_FPS); // get the frames per second of the video
  cout << "Frames per second : " << fps << endl;
  namedWindow("MyVideo", CV_WINDOW_AUTOSIZE); // create a window called "MyVideo"
  while(1) {
    Mat frame;
    bool bSuccess = cap.read(frame); // read a new frame from the video
    if (!bSuccess) {
      cout << "Cannot read the frame from video file" << endl;
      break;
    }
    imshow("MyVideo", frame); // show the frame in the "MyVideo" window
    if(waitKey(30) == 27) { // if escape key is pressed, exit program
      cout << "esc key is pressed by user" << endl;
      break;
    }
  }
2014-01-28 09:21:25 -0500 received badge  Editor (source)
2014-01-28 08:59:01 -0500 asked a question OpenCV compare FLANN points to saved FLANN points error

I'm looking to save an image along with its keypoints and descriptors, which are then loaded into another program and compared against another image for similarities.

However, I'm having trouble comparing the two images. The code below is how I'm detecting the points on the new image, and it is working fine:

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 400;
  SurfFeatureDetector detector( minHessian );
  std::vector<KeyPoint> keypoints_1;
  detector.detect( img_1, keypoints_1 );
  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;
  Mat descriptors_1;
  extractor.compute( img_1, keypoints_1, descriptors_1 );

I then read the image and points from the saved file, which seems to work fine (I'm getting no errors, but I'm worried that something might not be correct here):

  Mat img_keypoints_1, img_keypoints_2, descriptors_2;
  drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
  //-- Load image from file
  FileStorage fs2(("../surfTemplatePoints/Keypoints.yml"), FileStorage::READ);
  FileNode kptFileNode = fs2["templateImageOneKey"];
  FileNode desFileNode = fs2["templateImageOneDes"];
  read( kptFileNode, img_keypoints_2 );
  read( desFileNode, descriptors_2 );

I then try to compare the new image and points with the image and points loaded from the file; this is the point where I think I'm going drastically wrong:

  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_1, desFileNode, matches );

  double max_dist = 0; double min_dist = 100;

  for( int i = 0; i < descriptors_1.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  std::vector< DMatch > good_matches;

  for( int i = 0; i < descriptors_1.rows; i++ )
  { if( matches[i].distance <= max(2*min_dist, 0.02) )
    { good_matches.push_back( matches[i]); }
  }

  Mat img_matches;
  drawMatches( img_1, keypoints_1, fs2, kptFileNode,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

  imshow( "Good Matches", img_matches );

The error output is around 50 lines long, hence why I'm not posting all of it, but it seems to stem from this initial output:

loadAndMatchFlann.cpp: In function ‘int main(int, char**)’:
loadAndMatchFlann.cpp:43:54: error: no matching function for call to ‘cv::FlannBasedMatcher::match(cv::Mat&, cv::FileNode&, std::vector<cv::DMatch>&)’

Let me know if the full error log would help.