
Isa's profile - activity

2020-04-12 12:16:53 -0500 received badge  Notable Question (source)
2018-11-19 03:48:18 -0500 received badge  Notable Question (source)
2017-06-06 08:15:59 -0500 received badge  Popular Question (source)
2017-03-29 22:22:01 -0500 received badge  Popular Question (source)
2014-07-03 11:51:28 -0500 received badge  Student (source)
2014-07-03 10:23:44 -0500 asked a question Why is SURF not working as expected?

I'm trying to use features2d SURF to detect a given image (a video frame) in a bigger panorama image. But either SURF isn't suitable for this task or I'm not using it correctly. What I get as output are images that look like this:

SURF output

which seems pretty poor to me. I did filter for the good matches only. I used this tutorial: homography tutorial

Along with this Java code from another question: homography tutorial in Java

To explain my code: panoramaImage is the big image I want to match every video frame against, and smallMats is a List of Mats containing the video frames.

Now my question: (1) is anything wrong with my code (besides it being ugly in places), or (2) is SURF just not suited to this task? If (2), please tell me about alternatives :)

Your answers are greatly appreciated, thanks!


    log.append(TAG + "----Sampling using SURF algo-----" + "\n");
    log.append(TAG + "----Detecting SURF keypoints-----" + "\n");

    MatOfKeyPoint keypointsPanorama = new MatOfKeyPoint();
    LinkedList<MatOfKeyPoint> keypointsVideo = new LinkedList<MatOfKeyPoint>();

    // detect panorama keypoints
    SURFfeatureDetector.detect(panoramaImage, keypointsPanorama);
    // detect video keypoints
    for (int i = 0; i < videoSize; i++) {
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        SURFfeatureDetector.detect(smallMats.get(i), keypoints);
        keypointsVideo.add(keypoints);
    }

    log.append(TAG + "----Drawing SURF keypoints-----" + "\n");
    Mat outPanorama = new Mat();
    Features2d.drawKeypoints(panoramaImage, keypointsPanorama, outPanorama);

    for (int i = 0; i < videoSize; i++) {
        Mat out = new Mat();
        Features2d.drawKeypoints(smallMats.get(i), keypointsVideo.get(i), out);
    }

    log.append(TAG + "----Extracting SURF keypoints-----" + "\n");

    Mat descriptorPanorama = new Mat();
    LinkedList<Mat> descriptorVideo = new LinkedList<Mat>();

    // extracting panorama descriptors
    SURFextractor.compute(panoramaImage, keypointsPanorama, descriptorPanorama);

    // extracting video descriptors
    for (int i = 0; i < videoSize; i++) {
        Mat descriptorVid = new Mat();
        SURFextractor.compute(smallMats.get(i), keypointsVideo.get(i), descriptorVid);
        descriptorVideo.add(descriptorVid);
    }

    log.append(TAG + "----Matching SURF keypoints-----" + "\n");

    LinkedList<MatOfDMatch> matches = new LinkedList<MatOfDMatch>();

    for (int i = 0; i < videoSize; i++) {
        MatOfDMatch match = new MatOfDMatch();
        FLANNmatcher.match(descriptorVideo.get(i), descriptorPanorama, match);

        // extract only good matches
        List<DMatch> matchesList = match.toList();

        double max_dist = 0.0;
        double min_dist = 100.0;

        for (int j = 0; j < descriptorVideo.get(i).rows(); j++) {
            double dist = matchesList.get(j).distance;
            if (dist < min_dist)
                min_dist = dist;
            if (dist > max_dist)
                max_dist = dist;
        }

        log.append(TAG + "-- Extracting good matches from video object nr: " + i
                + "\n");
        log.append(TAG + "-- Max dist : " + max_dist + "\n");
        log.append(TAG + "-- Min dist : " + min_dist + "\n");

        LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
        MatOfDMatch goodMatch = new MatOfDMatch();

        for (int j = 0; j < descriptorVideo.get(i).rows(); j++) {
            if (matchesList.get(j).distance < 3 * min_dist) {
                good_matches.addLast(matchesList.get(j));
            }
        }
        goodMatch.fromList(good_matches);
        matches.add(goodMatch);
    }

    log.append(TAG + "----Drawing SURF matches-----" + "\n");

    for (int i = 0; i < videoSize; i++) {
        Mat img_matches = new Mat();
        Features2d.drawMatches(smallMats.get(i), keypointsVideo.get(i),
                panoramaImage, keypointsPanorama, matches.get(i),
                img_matches, new Scalar(255, 0, 0), new Scalar(0, 0, 255),
                new MatOfByte(), 2);

        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();

        List<KeyPoint> keypoints_objectList = keypointsVideo.get(i)
                .toList ...
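The "good matches" filter in the snippet above does not depend on any OpenCV types and can be checked on plain distance values. A minimal sketch in plain Java (MatchFilter and goodMatches are made-up names for illustration; the 3 * min_dist threshold is the one from the homography tutorial):

```java
import java.util.ArrayList;
import java.util.List;

public class MatchFilter {

    // Keep only matches whose distance is below 3 * the minimum distance,
    // mirroring the "good matches" filter from the homography tutorial.
    public static List<Double> goodMatches(List<Double> distances) {
        double minDist = Double.MAX_VALUE;
        for (double d : distances) {
            if (d < minDist) minDist = d;
        }
        List<Double> good = new ArrayList<>();
        for (double d : distances) {
            if (d < 3 * minDist) good.add(d);
        }
        return good;
    }

    public static void main(String[] args) {
        List<Double> distances = List.of(0.1, 0.15, 0.5, 0.9, 0.12);
        // min distance is 0.1, so the threshold is 0.3
        System.out.println(goodMatches(distances));
    }
}
```

Note that with SURF this fixed threshold can still keep many weak matches; Lowe's ratio test on knnMatch results is a common alternative to this min-distance heuristic.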
2014-03-31 12:00:38 -0500 commented answer How to pass a MatOfKeyPoint and MatOfPoint2f to native code? (OpenCV 4 Android)

Thanks! I cast it to Mat and then used Andrey's answer to convert it to a vector of keypoints.

2014-03-30 09:53:53 -0500 asked a question How to pass a MatOfKeyPoint and MatOfPoint2f to native code? (OpenCV 4 Android)

I'm currently struggling with the Java Native Interface in Eclipse.

What I have:

With OpenCV, I detected the keypoints of a frame and got back an object of type MatOfKeyPoint (image is of type Mat):

private MatOfKeyPoint mKeypoints;
private FeatureDetector mDetector;
mDetector = FeatureDetector.create(FeatureDetector.FAST);
mDetector.detect(image, mKeypoints);

What I want:

My mKeypoints is now of type MatOfKeyPoint. I want to pass this object to native code so I can do the calculations faster. Afterwards, the native method should save its results in an object of type MatOfPoint2f.

How I tried to do it:

I wrote a method

private native void getSkylinePoints(long addrMatOfKeyPoint, long addrOutputMat);

and used it like this:


(mSkylinePoints is of type MatOfPoint2f and is not null)

My c++ code then looks like this:

    JNIEXPORT void JNICALL Java_..._getSkylinePoints(
            JNIEnv*, jobject, jlong addrMatOfKeyPoint, jlong addrOutputMat)
    {
        vector<KeyPoint>& keypoints = *(vector<KeyPoint>*)addrMatOfKeyPoint;
        vector<Point2f>& output = *(vector<Point2f>*)addrOutputMat;

        // without this line, it works
        if (!keypoints.empty())
            output.push_back(keypoints[0].pt);
    }


I know that vector<KeyPoint> in C++ corresponds to MatOfKeyPoint in Java, and vector<Point2f> in C++ corresponds to MatOfPoint2f in Java. I have also written native functions that pass an object of type Mat (Mat in Java, Mat in C++), and there it works.

The error I get:

Debugging C++ code in Eclipse is hard. All LogCat tells me is:

Tag: libc   Text: fatal signal 6 (SIGABRT) at 0x000358a (code=-6), thread 13757 (Thread-5023)

I think that you can't just do this

vector<KeyPoint>& keypoints  = *(vector<KeyPoint>*)addrMatOfKeyPoint;

as I did with Mat objects:

Mat& background  = *(Mat*)addrBackground;

Does anyone know how to do this? Thanks in advance for any help!
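One way to sidestep the cast entirely (a sketch, not the bindings' official mechanism): flatten the keypoints into a plain float[] on the Java side, seven floats per point (x, y, size, angle, response, octave, class_id, the field order MatOfKeyPoint stores per row), and pass that array through JNI instead of the Mat address. The packing itself needs no OpenCV; the KeyPoint class below is a made-up stand-in for org.opencv.core.KeyPoint, used only so the idea can be tested in isolation:

```java
public class KeyPointPacker {

    // Minimal stand-in for org.opencv.core.KeyPoint (illustration only).
    static class KeyPoint {
        float x, y, size, angle, response;
        int octave, classId;
        KeyPoint(float x, float y, float size, float angle,
                 float response, int octave, int classId) {
            this.x = x; this.y = y; this.size = size; this.angle = angle;
            this.response = response; this.octave = octave; this.classId = classId;
        }
    }

    // Pack keypoints as 7 floats each, matching the per-row layout
    // that MatOfKeyPoint uses (x, y, size, angle, response, octave, class_id).
    public static float[] pack(KeyPoint[] kps) {
        float[] out = new float[kps.length * 7];
        for (int i = 0; i < kps.length; i++) {
            out[i * 7]     = kps[i].x;
            out[i * 7 + 1] = kps[i].y;
            out[i * 7 + 2] = kps[i].size;
            out[i * 7 + 3] = kps[i].angle;
            out[i * 7 + 4] = kps[i].response;
            out[i * 7 + 5] = kps[i].octave;
            out[i * 7 + 6] = kps[i].classId;
        }
        return out;
    }

    public static void main(String[] args) {
        KeyPoint kp = new KeyPoint(10f, 20f, 3f, 90f, 0.5f, 1, -1);
        float[] packed = pack(new KeyPoint[] { kp });
        System.out.println(packed.length + " floats, x=" + packed[0]);
    }
}
```

On the native side, the jfloatArray can then be read with GetFloatArrayElements and rebuilt into a vector<KeyPoint>, avoiding the invalid vector<KeyPoint>* cast on the Mat address.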


2014-03-30 09:50:25 -0500 asked a question Errors displayed on logcat when running opencv sample projects

Hey all,

I downloaded the OpenCV for Android samples and ran them in Eclipse on a real device (a Nexus 7). They all work fine, but when I look at LogCat, I see about 20 errors per frame; the log is filled with them. Although all the samples work, this irritates me and makes it hard to debug my code.

Here a screenshot of the log: screenshot of logcat while running android OpenCV samples

Does anyone know where these errors come from and what they mean? Should I worry about them? If not, can I somehow hide them? If so, what should I do?

Another question: when running the JavaCameraView idle (the CameraPreview sample), I get only 15 fps at most. That's low, isn't it? Or is this always the case when using OpenCV through Java (over JNI) instead of C++?
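To put a number on the frame rate independently of the log noise, a tiny counter can be fed one timestamp per delivered frame (e.g. from System.nanoTime()). FpsCounter is a made-up helper, sketched here in plain Java so it can be tested with simulated timestamps:

```java
public class FpsCounter {
    private long windowStartNs;
    private int frames;
    private double lastFps;

    // Call once per delivered frame with a monotonic timestamp in ns;
    // recomputes the fps estimate once per second of elapsed time.
    public double tick(long nowNs) {
        if (frames == 0) windowStartNs = nowNs;
        frames++;
        long elapsed = nowNs - windowStartNs;
        if (frames > 1 && elapsed >= 1_000_000_000L) {
            // frames - 1 intervals span the elapsed window
            lastFps = (frames - 1) * 1e9 / (double) elapsed;
            frames = 0;
        }
        return lastFps;
    }

    public static void main(String[] args) {
        FpsCounter c = new FpsCounter();
        double fps = 0;
        // simulate 42 frames arriving every 50 ms (i.e. 20 fps)
        for (int i = 0; i <= 41; i++)
            fps = c.tick(i * 50_000_000L);
        System.out.println(fps); // settles at 20.0
    }
}
```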

Thanks in advance for any hints! Isa

2014-03-19 18:00:15 -0500 received badge  Critic (source)
2014-03-19 14:14:27 -0500 received badge  Editor (source)
2014-03-19 14:07:14 -0500 asked a question How to find the best match between two curves, part-to-whole

Hey all

I'm currently facing the following problem:

given: two curves, both approximated by sample points. One of them is slightly bigger than the other; in my case, the small curve has 20 sample points and the large one 25. They are not uniformly distributed, but each sample point lies in a certain range (point x1 in range 0-9, point x2 in range 10-19, etc.).

what I need: the best match between the two curves, under the assumption that the small curve is a subset of the bigger one.

I thought about a sliding-window algorithm, but I don't know how to make it rotation invariant. I need only translation and rotation, no scaling.

I already tried

estimateRigidTransform(Mat pointSetA, Mat pointSetB, bool fullAffine)

but this works only with point sets of equal size whose points correspond pairwise (x1 with y1, x2 with y2, etc.).


is no option, because it also calculates the perspective transform which I don't need.

Here is an image for clarification:

image description

(The image is only a sample; I didn't count the points.)

Does anyone know an elegant algorithm for this? Or any ideas how to write my own?

Thanks in advance for any help! Isa
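A sliding-window search becomes rotation invariant if each window is compared via a signature that depends only on the curve's shape, not its pose: segment lengths and turning angles are unchanged by translation and rotation. A plain-Java sketch under the stated sampling assumptions (CurveMatcher, signature, and bestOffset are made-up names; no scaling is handled, matching the question):

```java
import java.util.Arrays;

public class CurveMatcher {

    // Rotation/translation-invariant signature of a polyline:
    // the n-1 segment lengths followed by the n-2 turning angles.
    static double[] signature(double[][] pts) {
        int n = pts.length;
        double[] sig = new double[2 * n - 3];
        for (int i = 0; i < n - 1; i++) {
            sig[i] = Math.hypot(pts[i + 1][0] - pts[i][0],
                                pts[i + 1][1] - pts[i][1]);
        }
        for (int i = 1; i < n - 1; i++) {
            double a1 = Math.atan2(pts[i][1] - pts[i - 1][1],
                                   pts[i][0] - pts[i - 1][0]);
            double a2 = Math.atan2(pts[i + 1][1] - pts[i][1],
                                   pts[i + 1][0] - pts[i][0]);
            double d = a2 - a1;
            // wrap the turning angle into (-pi, pi]
            while (d > Math.PI)   d -= 2 * Math.PI;
            while (d <= -Math.PI) d += 2 * Math.PI;
            sig[n - 1 + (i - 1)] = d;
        }
        return sig;
    }

    // Slide the small curve over every window of the large curve and
    // return the start index with the smallest signature difference.
    public static int bestOffset(double[][] small, double[][] large) {
        double[] sigS = signature(small);
        int m = small.length;
        int best = -1;
        double bestErr = Double.MAX_VALUE;
        for (int off = 0; off + m <= large.length; off++) {
            double[] sigW = signature(Arrays.copyOfRange(large, off, off + m));
            double err = 0;
            for (int k = 0; k < sigS.length; k++) {
                double d = sigS[k] - sigW[k];
                err += d * d;
            }
            if (err < bestErr) { bestErr = err; best = off; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] large = { {0,0},{1,0},{2,1},{3,3},{4,2},{5,2},{6,4},{7,4},{8,5},{9,7} };
        // the small curve is large[3..7] rotated 90 degrees and shifted by (10, 10)
        double[][] small = { {7,13},{8,14},{8,15},{6,16},{6,17} };
        System.out.println(bestOffset(small, large)); // best window starts at index 3
    }
}
```

Once the best offset is known, the actual rigid transform for that window can be recovered from the now-corresponding point pairs (e.g. with estimateRigidTransform, which needs exactly such pairwise correspondences).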

2014-03-19 11:05:16 -0500 received badge  Scholar (source)
2014-03-19 10:49:00 -0500 received badge  Supporter (source)
2014-03-19 10:48:53 -0500 commented answer estimateRigidTransform returns an empty matrix

I see, that's even better. But I thought estimateRigidTransform doesn't use RANSAC?

2014-03-19 10:07:07 -0500 asked a question estimateRigidTransform returns an empty matrix

Hey all,

I'm trying to use estimateRigidTransform in my program to find the best transformation matrix between two point sets using only translation, rotation & uniform scaling.

My two point sets are of type

MatOfPoint2f mPointSetA;
MatOfPoint2f mPointSetB;

and both have a size of 20 (i.e. 20 rows). For those who wonder, I get them like this:

LinkedList<Point> listOfPointsA;
// fill list with points...


Now I'm calling:

Mat R = Video.estimateRigidTransform(mPointSetA, mPointSetB, false);

But the R that I get is of size 0x0! Could anybody tell me what I'm missing here?

Thanks in advance for any help! Isa
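For fixed correspondences and no scaling, the transform being searched for has a closed form, which makes it easy to sanity-check what a non-empty result should look like. A plain-Java, Kabsch-style sketch (RigidFit2D and fit are made-up names; no RANSAC, and unlike estimateRigidTransform it omits the uniform-scale term):

```java
public class RigidFit2D {

    // Closed-form least-squares rotation + translation between two
    // corresponding 2D point sets (no scaling). Returns
    // {cosTheta, sinTheta, tx, ty} so that for each pair
    // b ~= [cos -sin; sin cos] * a + t.
    public static double[] fit(double[][] a, double[][] b) {
        int n = a.length;
        double cax = 0, cay = 0, cbx = 0, cby = 0;
        for (int i = 0; i < n; i++) {
            cax += a[i][0]; cay += a[i][1];
            cbx += b[i][0]; cby += b[i][1];
        }
        cax /= n; cay /= n; cbx /= n; cby /= n;

        // accumulate dot and cross products of the centered pairs;
        // the optimal angle is atan2(sum cross, sum dot)
        double dot = 0, cross = 0;
        for (int i = 0; i < n; i++) {
            double ax = a[i][0] - cax, ay = a[i][1] - cay;
            double bx = b[i][0] - cbx, by = b[i][1] - cby;
            dot += ax * bx + ay * by;
            cross += ax * by - ay * bx;
        }
        double theta = Math.atan2(cross, dot);
        double c = Math.cos(theta), s = Math.sin(theta);
        double tx = cbx - (c * cax - s * cay);
        double ty = cby - (s * cax + c * cay);
        return new double[] { c, s, tx, ty };
    }

    public static void main(String[] args) {
        // b is a copy of a, rotated 90 degrees and shifted by (5, -2)
        double[][] a = { {0, 0}, {1, 0}, {1, 1}, {0, 2} };
        double[][] b = { {5, -2}, {5, -1}, {4, -1}, {3, -2} };
        double[] m = fit(a, b);
        // roughly cos=0, sin=1, t=(5, -2)
        System.out.printf("cos=%.3f sin=%.3f t=(%.3f, %.3f)%n", m[0], m[1], m[2], m[3]);
    }
}
```

If a solver like this recovers a clean transform from your two MatOfPoint2f sets but estimateRigidTransform still returns an empty matrix, the likely culprit is the RANSAC stage finding too few inliers, i.e. the correspondences between mPointSetA and mPointSetB are not pairwise consistent.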