Mr.Mountain's profile - activity

2020-10-09 15:36:11 -0600 received badge  Nice Question (source)
2020-10-09 14:34:03 -0600 received badge  Popular Question (source)
2019-09-16 08:05:27 -0600 received badge  Popular Question (source)
2018-10-08 11:36:01 -0600 received badge  Popular Question (source)
2013-12-13 05:45:42 -0600 received badge  Good Question (source)
2013-06-14 04:59:19 -0600 received badge  Nice Question (source)
2013-02-15 04:03:28 -0600 asked a question Using FAST with Descriptor causes crash

Hello,

I was trying to use Features from Accelerated Segment Test (FAST) with a descriptor in OpenCV 2.4.3 C++. But every time I try to match the computed descriptor matrices I get a confusing error. I tried to investigate the source code, but I can't see why this assertion fails.

OpenCV Error: Assertion failed (trainDescCollection[iIdx].rows < IMGIDX_ONE) in knnMatchImpl, file C:/slave/WinInstallerMegaPack/src/opencv/modules/features2d/src/matchers.cpp, line 365

Here's some example code to run. The images are already loaded.

Mat* fstImageDescr = new Mat;
Mat* sndImageDescr = new Mat;
vector<cv::KeyPoint> fstKeypoints, sndKeypoints;

FastFeatureDetector *fastDete = new FastFeatureDetector;
SurfDescriptorExtractor* surfDesc = new SurfDescriptorExtractor;

fastDete->detect(*fstImage, fstKeypoints);
fastDete->detect(*sndImage, sndKeypoints);
surfDesc->compute(*fstImage, fstKeypoints, *fstImageDescr);
surfDesc->compute(*sndImage, sndKeypoints, *sndImageDescr);

vector<DMatch>* matches = new vector<DMatch>;
BFMatcher *matcher = new BFMatcher(NORM_L2);
matcher->match(*fstImageDescr, *sndImageDescr, *matches); //CRASHES
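
For reference, here is a guarded variant of the matching step I have been experimenting with. It is only a sketch: the empty-descriptor check and the cap on the number of FAST keypoints (via KeyPointsFilter::retainBest) are my own guesses at what might trigger the assertion, not a confirmed fix.

// Sketch: cap the (potentially huge) FAST keypoint count and skip the match
// if either descriptor matrix came out empty. The cap of 2000 is arbitrary.
cv::KeyPointsFilter::retainBest(fstKeypoints, 2000);
cv::KeyPointsFilter::retainBest(sndKeypoints, 2000);

surfDesc->compute(*fstImage, fstKeypoints, *fstImageDescr);
surfDesc->compute(*sndImage, sndKeypoints, *sndImageDescr);

if (fstImageDescr->empty() || sndImageDescr->empty()) {
    std::cerr << "Empty descriptor matrix, skipping match" << std::endl;  // needs <iostream>
} else {
    matcher->match(*fstImageDescr, *sndImageDescr, *matches);
}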

Thank you for your help.

2013-02-13 11:20:40 -0600 asked a question Using SIFT detector with ORB descriptor

Hello, I'm trying to use the SiftFeatureDetector with an OrbDescriptorExtractor in C++. The images are loaded in colour, but I guess that doesn't matter since they would be converted by the detector/descriptor?

SiftFeatureDetector sift;
sift.detect(fstImage,fstKeypoints1);
OrbDescriptorExtractor orbDesc;
orbDesc.compute(fstImage, fstKeypoints1, fstImageDescr);

This causes a bad_alloc coming from the vector of keypoints. I'm using 2.4.3 and can't tell why this is happening. When I try to use ORB with a SIFT descriptor the system crashes completely. Am I missing something?

UPDATE: Found one issue: http://code.opencv.org/issues/2521#note-2 Perhaps a second one should be opened ... or I was wrong and everything else is fine.
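
For comparison, here is a sketch of the factory-based setup I would also try (it assumes the opencv_nonfree module is available and linked; I have not confirmed that this avoids the bad_alloc):

// Sketch: create detector/extractor via the 2.4 factory interface.
// Requires "opencv2/nonfree/nonfree.hpp" and linking against opencv_nonfree.
cv::initModule_nonfree();  // registers SIFT/SURF with the Algorithm factory

cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SIFT");
cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("ORB");

std::vector<cv::KeyPoint> fstKeypoints1;
cv::Mat fstImageDescr;

detector->detect(fstImage, fstKeypoints1);
extractor->compute(fstImage, fstKeypoints1, fstImageDescr);  // ORB descriptors are binary, so match with NORM_HAMMING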

2013-02-13 11:19:23 -0600 commented answer CV findHomography assertion error - counter => 4

Thank you. =)

2013-01-23 18:02:25 -0600 received badge  Scholar (source)
2013-01-23 18:02:12 -0600 commented answer CV findHomography assertion error - counter => 4

I found out that, regardless of RANSAC/LMedS or anything else, the estimation of the homography always needs at least four points.

2013-01-20 11:13:05 -0600 asked a question CV findHomography assertion error - counter => 4

Hello, I'm currently finishing my evaluation tool for interest point detectors. In the last steps I ran into a confusing error.

Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray() )

The srcPoints and dstPoints are vector<Point2f> which store the corresponding points of the matched keypoints. So far nothing special; it's like in the tutorials.

But when I use RANSAC and the vector<Point2f> has fewer than four entries (size in the range [0, ..., 4[), I get an assertion error saying that the counter should be greater than or equal to four.

Question 1: Does the algorithm need at least four points to decide what belongs to the current model and to build the consensus set?

Question 2: Is there any documentation about this? (I took a look at the doc and the tutorials.)
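
A sketch of the guard I have in mind in the meantime (the four-point minimum matches what the assertion demands: a homography has eight degrees of freedom and each correspondence contributes two constraints):

// Sketch: only estimate the homography when enough correspondences exist.
cv::Mat H;
if (srcPoints.size() >= 4 && srcPoints.size() == dstPoints.size()) {
    H = cv::findHomography(srcPoints, dstPoints, CV_RANSAC, 3.0);
} else {
    std::cout << "Not enough correspondences for findHomography: "
              << srcPoints.size() << std::endl;
}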

Thanks for your help.

2013-01-20 07:56:03 -0600 asked a question Does Flann only store one approximated nearest DMatch for every entry in a query descriptor?

If I have several images and train a FlannBasedMatcher with their descriptors, will there only be one DMatch for every keypoint of the query image?
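
To illustrate what I mean, here is a sketch of the two calls I am comparing (as far as I understand the API, match() keeps exactly one best DMatch per query descriptor, while knnMatch() keeps k candidates; trainDescriptors and queryDescriptors are placeholder names):

// Sketch: one best match per query descriptor vs. k nearest matches.
cv::FlannBasedMatcher flann;
flann.add(trainDescriptors);   // std::vector<cv::Mat>, one Mat per trained image
flann.train();

std::vector<cv::DMatch> bestMatches;               // one DMatch per query keypoint
flann.match(queryDescriptors, bestMatches);

std::vector<std::vector<cv::DMatch> > knnMatches;  // k candidates per query keypoint
flann.knnMatch(queryDescriptors, knnMatches, 2);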

2013-01-06 05:29:27 -0600 answered a question missing library problem?

Open up the example and look at the imports. The package name tells you where the class is located. If it is a standard Java API class, then you should copy the import into the class where you use the rectangle.

If the import refers to a different package, try to locate the library that contains this class. Copy the library into a folder; if you use Eclipse, right-click the library and select "Add to Build Path".

If none of these solutions works, please give us more information, perhaps a snippet of the example file.

2013-01-05 15:44:50 -0600 received badge  Editor (source)
2013-01-05 15:44:09 -0600 asked a question Saving Matcher/Descriptors

Hello, I'm currently working on a small application which should index some images and store that index. I'm reading the image and getting the keypoints. For writing out the data I would use YAML.

vector<cv::KeyPoint> keypoints;
cv::Mat source = cv::imread("test.jpg");
Ptr<FeatureDetector> detector = new SurfFeatureDetector(400);
detector->detect(source , keypoints);

Now I have the keypoints. This is the first possible point to save them; if I do so, I can completely reconstruct any keypoint found. So no problem there.

But what I want is to store the descriptors. So let's go a little bit further:

cv::Mat sourceMatch;
Ptr<DescriptorExtractor> descriptorExtractor = new SurfDescriptorExtractor();
descriptorExtractor->compute(source , keypoints, sourceMatch);
Ptr<DescriptorMatcher> matcher = new BFMatcher(NORM_L2);

At this point I'm not sure how I could write the descriptors to the YAML file. Any idea? I could try to save the matcher, but this won't work:

vector<Mat> descriptors;
descriptors.push_back(sourceMatch);
matcher->train();
FileStorage* fileStorage = new FileStorage("index.yml", FileStorage::WRITE, "UTF-8");
matcher->write(*fileStorage);

This statement only gives me a YAML file with the header. And, as I would guess, even if I could write out the matcher, that wouldn't write out the contained descriptors. Does anyone have an idea how to save the descriptors?
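
For what it is worth, the direction I am currently looking at is writing the keypoints and the cv::Mat of descriptors directly instead of the matcher, roughly like this (sketch; the node names are arbitrary):

// Sketch: store keypoints and the descriptor matrix in the YAML file directly.
cv::FileStorage fs("index.yml", cv::FileStorage::WRITE, "UTF-8");
cv::write(fs, "keypoints", keypoints);   // vector<cv::KeyPoint>
fs << "descriptors" << sourceMatch;      // cv::Mat serializes as-is
fs.release();

// Reading it back:
cv::FileStorage in("index.yml", cv::FileStorage::READ);
std::vector<cv::KeyPoint> loadedKeypoints;
cv::Mat loadedDescriptors;
cv::read(in["keypoints"], loadedKeypoints);
in["descriptors"] >> loadedDescriptors;
in.release();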

thanks

2012-12-20 06:01:45 -0600 received badge  Student (source)
2012-12-20 05:02:12 -0600 asked a question Evaluation of Interest-Point-Detectors and Descriptors

I'm writing a qualitative evaluation study of interest point detectors and descriptors. I've read all the Mikolajczyk et al. papers as well as most of the surveys by Datta et al. Now I'm implementing an evaluation tool with OpenCV. I take two images, one referred to as the source image and one as the comparison image.

1.) Detectors: Mikolajczyk uses the repeatability criterion, correspondence count, matching score and other metrics to evaluate the performance of a detector. I would use repeatability and a correspondence count for matched regions.

2.) Descriptors: Here I would use the widely used Recall and Precision on the matched regions to describe the performance of the descriptor.

My question so far: Are these both good metrics for evaluation?
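
To make sure we are talking about the same definitions, this is how I understand the numbers (Mikolajczyk-style; the helper below is just my own summary of those formulas):

// Sketch of the metric definitions (needs <algorithm> and <iostream>):
//   repeatability = correspondences / min(#keypoints in image 1, #keypoints in image 2)
//   recall        = correct matches / correspondences
//   1 - precision = false matches / (correct matches + false matches)
struct EvalCounts {
    int keypoints1, keypoints2;   // detected regions in source / comparison image
    int correspondences;          // region pairs related by the ground-truth homography
    int correctMatches, falseMatches;
};

void printMetrics(const EvalCounts& c) {
    double repeatability = (double)c.correspondences / std::min(c.keypoints1, c.keypoints2);
    double recall        = (double)c.correctMatches  / c.correspondences;
    double oneMinusPrec  = (double)c.falseMatches    / (c.correctMatches + c.falseMatches);
    std::cout << "repeatability: " << repeatability
              << "  recall: " << recall
              << "  1-precision: " << oneMinusPrec << std::endl;
}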

Now I'm trying to implement this in OpenCV and need a good eye from somebody to tell me whether this code will do. Here I use the SURF detector and descriptor for testing the metrics. Recall and precision are not implemented yet, but there is a function for calculating them.

#include <opencv.hpp>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <stdlib.h>
#include <iostream>

int startSURF() {

    std::cout << "Starting " << std::endl;
    cv::Mat sourceImage = cv::imread("collection_1/DSCN5205.JPG",
            CV_LOAD_IMAGE_COLOR);
    cv::Mat comparisonImage = cv::imread("collection_1/DSCN5207.JPG",
            CV_LOAD_IMAGE_COLOR);

    if (!sourceImage.data) {
        std::cout << "Source-Image empty" << std::endl;
        return -1;
    } else if (!comparisonImage.data) {
        std::cout << "Comparison-Image empty" << std::endl;
        return -1;
    }

    //Detect keypoint with SURF
    int minHessian = 400;
    cv::Mat sourceMatchedImage, comparisonMatchedImage;
    std::vector<cv::KeyPoint> sourceKeypoints, comparisonKeypoints;

    cv::SurfFeatureDetector surfDetect(minHessian);
    surfDetect.detect(sourceImage, sourceKeypoints);
    surfDetect.detect(comparisonImage, comparisonKeypoints);

    //Calculate the SURF-Descriptor
    cv::SurfDescriptorExtractor surfExtractor;
    surfExtractor.compute(sourceImage, sourceKeypoints, sourceMatchedImage);
    surfExtractor.compute(comparisonImage, comparisonKeypoints,
            comparisonMatchedImage);

    //Flann-Matching   
    cv::FlannBasedMatcher flann;
    std::vector<cv::DMatch> matches;
    flann.match(sourceMatchedImage, comparisonMatchedImage, matches);

    //Repeatability and Correspondence-Counter    
    float repeatability;
    int corrCounter;
    cv::Mat h12;

    std::vector<cv::Point2f> srcK;
    std::vector<cv::Point2f> refK;

    for (int i = 0; i < matches.size(); i++) {
        srcK.push_back(sourceKeypoints[matches[i].queryIdx].pt);
        refK.push_back(comparisonKeypoints[matches[i].queryIdx].pt);
    }

    std::cout << "< Computing homography via RANSAC. Treshold-default is 3" << std::endl;
    h12 = cv::findHomography( srcK,refK, CV_RANSAC, 1 );

    cv::evaluateFeatureDetector(sourceImage, comparisonImage, h12,
            &sourceKeypoints, &comparisonKeypoints, repeatability, corrCounter);

    std::cout << "repeatability = " << repeatability << std::endl;
    std::cout << "correspCount = " << corrCounter << std::endl;
    std::cout << ">" << std::endl;

    std::cout << "Done. " << std::endl;
    return 0;    
}
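
Regarding the recall/precision part that is not implemented yet, this is the sketch I have in mind, based on the built-in evaluation helper (wrapping SURF in a VectorDescriptorMatcher is my assumption; I have not verified this end-to-end):

// Sketch: recall / 1-precision curve for the descriptor, reusing the images,
// keypoints and the homography h12 from startSURF() above.
cv::Ptr<cv::GenericDescriptorMatcher> gdm =
        new cv::VectorDescriptorMatcher(new cv::SurfDescriptorExtractor(),
                                        new cv::BFMatcher(cv::NORM_L2));

std::vector<cv::Point2f> recallPrecisionCurve;
cv::evaluateGenericDescriptorMatcher(sourceImage, comparisonImage, h12,
        sourceKeypoints, comparisonKeypoints,
        0, 0,                  // let the function allocate matches/mask internally
        recallPrecisionCurve, gdm);

for (size_t i = 0; i < recallPrecisionCurve.size(); ++i) {
    // each point is (1 - precision, recall)
    std::cout << "1-precision: " << recallPrecisionCurve[i].x
              << "  recall: "    << recallPrecisionCurve[i].y << std::endl;
}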

I'm uncertain whether this code works, because SURF gets a bad repeatability (e.g. 0.00471577) for my test images with a rotation of almost 45°. Does anybody see a problem with the code?

Is there a way to evaluate the detector without RANSAC? I did not find an already implemented method for this. Is the default of 3 a good threshold? I could override it, but the problem is that a good threshold can only be determined experimentally, and I need a robust default value for all detectors.

I think I definitely need the homography. But I never found a way ... (more)