ximobayo's profile - activity

2019-06-21 21:37:32 -0600 received badge  Notable Question (source)
2017-06-15 09:43:12 -0600 received badge  Popular Question (source)
2015-01-27 14:29:18 -0600 received badge  Nice Answer (source)
2013-08-11 16:50:38 -0600 commented question How to detect object from video using SVM

@StevenPuttermans Understood! Sorry about the bad usage.

2013-08-10 11:27:43 -0600 answered a question How to detect object from video using SVM

But what is the SVM learning from? A histogram? Some descriptor?

2013-08-09 07:29:56 -0600 answered a question What to do with DMatch value ?

goodMatches in that example is a vector used to record which features were found, and push_back is a method of the vector class that appends one more element at the end of the array.
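
For instance, a minimal sketch (matches and maxDist are just placeholder names here, not from the tutorial):

    // given std::vector<cv::DMatch> matches returned by some matcher
    std::vector<cv::DMatch> goodMatches;              // starts empty
    for (size_t i = 0; i < matches.size(); i++) {
        if (matches[i].distance < maxDist)            // keep only the close matches
            goodMatches.push_back(matches[i]);        // appends one more element at the end
    }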

2013-08-09 02:37:10 -0600 answered a question What to do with DMatch value ?

The struct DMatch tells, for a query descriptor (feature), which descriptor (feature) from the train set is most similar. So there are a queryIdx, a trainIdx (the pair of features the matcher decided are most similar) and a distance. The distance represents how far one feature is from the other (in some metric: NORM_L1, NORM_HAMMING, etc.).

So you can decide whether a match is correct by setting a threshold on the distance. For example, with SURF features you can use knnMatch to get the 2 nearest features; you then get a matrix of matches, and you can keep as good matches all those whose distance to the nearest neighbour is clearly smaller than the distance to the second one (a factor of about 0.6), I mean:

    matcher->knnMatch(desc1, trainDesc, matches, 2, Mat(), false);  // 2 nearest neighbours per query descriptor
    for (int i = 0; i < (int)matches.size(); i++) {
        // ratio test: accept only if the nearest match is clearly closer than the second one
        if (matches[i][0].distance <= matches[i][1].distance * 0.6) {
            goodMatches.push_back(i);   // record which query feature found a good match
        }
    }
2013-08-09 02:16:37 -0600 asked a question Best points for opticalFlow

Hi everyone

I am working on a tracker and one step involves computing the optical flow. I am using the LK function calcOpticalFlowPyrLK, and I try to track some points extracted beforehand with the SURF extractor, but I don't know if those features are the right ones to track with LK.
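
Roughly, this is what I am doing now (just a sketch; prevGray and nextGray are consecutive grayscale frames, and the names are mine):

    cv::SurfFeatureDetector detector(400);            // SURF from the nonfree module
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(prevGray, keypoints);

    std::vector<cv::Point2f> prevPts, nextPts;
    cv::KeyPoint::convert(keypoints, prevPts);        // LK works on Point2f, not KeyPoint

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);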

Does anyone know which are the best points to track with this function?

Thanks

2013-08-06 07:15:40 -0600 asked a question BFmatcher types error

Hi everyone.

I have some trouble with the BFMatcher. I build a matrix with some descriptors, but when I try knnMatch() it doesn't work due to a type error. I have a set of descriptors in mainDesc and I pick some of them:

Mat kdesc;                          // descriptors picked from mainDesc
kdesc.push_back(mainDesc.row(i));   // append the i-th descriptor as a new row of kdesc

The matcher is a class member. Does it need some initialization? Then I call knnMatch and the output is:

    OpenCV Error: Assertion failed (queryDescriptors.type() == trainDescCollection[0].type()) in knnMatchImpl, file /home/maxpower/opencv-2.4.6.1/modules/features2d/src/matchers.cpp, line 351
    terminate called after throwing an instance of 'cv::Exception'
      what(): /home/maxpower/opencv-2.4.6.1/modules/features2d/src/matchers.cpp:351: error: (-215) queryDescriptors.type() == trainDescCollection[0].type() in function knnMatchImpl

Regards

2013-08-04 14:58:19 -0600 commented answer FREAK descriptor type with selectPairs

Thank you! I will try it that way.

2013-08-04 14:57:56 -0600 received badge  Scholar (source)
2013-08-04 14:57:55 -0600 received badge  Supporter (source)
2013-08-03 06:29:32 -0600 received badge  Nice Answer (source)
2013-08-03 06:14:48 -0600 answered a question OpenNI with depth camera other than kinect

Hi, I had the same problem. I tried OpenNI but I didn't understand that kind of code; it is too crazy from my point of view.

The alternative that I have found is OpenKinect (libfreenect): it provides drivers and a library. With this code you can get both images:

#include "libfreenect.hpp"
#include <iostream>
#include <vector>
#include <cmath>
#include <pthread.h>
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>

using namespace cv;
using namespace std;

class Mutex {
public:
    Mutex() {
        pthread_mutex_init( &m_mutex, NULL );
    }
    void lock() {
        pthread_mutex_lock( &m_mutex );
    }
    void unlock() {
        pthread_mutex_unlock( &m_mutex );
    }
private:
    pthread_mutex_t m_mutex;
};

class MyFreenectDevice : public Freenect::FreenectDevice {
  public:
    MyFreenectDevice(freenect_context *_ctx, int _index)
        : Freenect::FreenectDevice(_ctx, _index), m_buffer_depth(FREENECT_DEPTH_11BIT),m_buffer_rgb(FREENECT_VIDEO_RGB), m_gamma(2048), m_new_rgb_frame(false), m_new_depth_frame(false),
          depthMat(Size(640,480),CV_16UC1), rgbMat(Size(640,480),CV_8UC3,Scalar(0)), ownMat(Size(640,480),CV_8UC3,Scalar(0))
    {
        for( unsigned int i = 0 ; i < 2048 ; i++) {
            float v = i/2048.0;
            v = std::pow(v, 3)* 6;
            m_gamma[i] = v*6*256;
        }
    }
    // Do not call directly even in child
    void VideoCallback(void* _rgb, uint32_t timestamp) {
        std::cout << "RGB callback" << std::endl;
        m_rgb_mutex.lock();
        uint8_t* rgb = static_cast<uint8_t*>(_rgb);
        rgbMat.data = rgb;
        m_new_rgb_frame = true;
        m_rgb_mutex.unlock();
    };
    // Do not call directly even in child
    void DepthCallback(void* _depth, uint32_t timestamp) {
        std::cout << "Depth callback" << std::endl;
        m_depth_mutex.lock();
        uint16_t* depth = static_cast<uint16_t*>(_depth);
        depthMat.data = (uchar*) depth;
        m_new_depth_frame = true;
        m_depth_mutex.unlock();
    }

    bool getVideo(Mat& output) {
        m_rgb_mutex.lock();
        if(m_new_rgb_frame) {
            cv::cvtColor(rgbMat, output, CV_RGB2BGR);
            m_new_rgb_frame = false;
            m_rgb_mutex.unlock();
            return true;
        } else {
            m_rgb_mutex.unlock();
            return false;
        }
    }

    bool getDepth(Mat& output) {
            m_depth_mutex.lock();
            if(m_new_depth_frame) {
                depthMat.copyTo(output);
                m_new_depth_frame = false;
                m_depth_mutex.unlock();
                return true;
            } else {
                m_depth_mutex.unlock();
                return false;
            }
        }

  private:
    std::vector<uint8_t> m_buffer_depth;
    std::vector<uint8_t> m_buffer_rgb;
    std::vector<uint16_t> m_gamma;
    Mat depthMat;
    Mat rgbMat;
    Mat ownMat;
    Mutex m_rgb_mutex;
    Mutex m_depth_mutex;
    bool m_new_rgb_frame;
    bool m_new_depth_frame;
};



int main(int argc, char **argv) {
    bool die(false);
    string filename("snapshot");
    string suffix(".png");
    int i_snap(0),iter(0);

    Mat depthMat(Size(640,480),CV_16UC1);
    Mat depthf  (Size(640,480),CV_8UC1);
    Mat rgbMat(Size(640,480),CV_8UC3,Scalar(0));
    Mat ownMat(Size(640,480),CV_8UC3,Scalar(0));

    // Note: the next two lines had to be changed because Freenect::Freenect is no
    // longer a template; the template parameter moved to createDevice. Instead of
    //   Freenect::Freenect<MyFreenectDevice> freenect;
    //   MyFreenectDevice& device = freenect.createDevice(0);
    // use these two lines:
    Freenect::Freenect freenect;
    MyFreenectDevice& device = freenect.createDevice<MyFreenectDevice>(0);

    namedWindow("rgb",CV_WINDOW_AUTOSIZE);
    namedWindow("depth",CV_WINDOW_AUTOSIZE);
    device.startVideo();
    device.startDepth();
    while (!die) {
        device.getVideo(rgbMat);
        device.getDepth(depthMat);
        cv::imshow("rgb", rgbMat);
        depthMat.convertTo(depthf, CV_8UC1, 255.0/2048.0);
        cv::imshow("depth",depthf);
        char k = cvWaitKey(5);
        if( k == 27 ){
            cvDestroyWindow("rgb");
            cvDestroyWindow("depth");
            break;
        }
        if( k == 8 ) {
            std::ostringstream file;
            file << filename << i_snap << suffix;
            cv::imwrite(file.str(),rgbMat);
            i_snap++;
        }
        if(iter >= 1000) break;
        iter++;
    }

    device.stopVideo();
    device.stopDepth();
    return 0;
}
2013-08-02 15:35:42 -0600 received badge  Teacher (source)
2013-08-02 15:17:38 -0600 answered a question simple measure over an image

So if you know the points with their (x,y) coordinates, e.g. (x1,y1) and (x2,y2),
the distance is sqrt((x2-x1)^2 + (y2-y1)^2).

That is, if you mean the 2D spatial distance in the image.
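
A quick sketch of that formula in code (p1 and p2 are just example points):

    cv::Point2f p1(10.f, 20.f), p2(30.f, 40.f);       // two example points in the image
    float dx = p2.x - p1.x, dy = p2.y - p1.y;
    float dist = std::sqrt(dx * dx + dy * dy);        // Euclidean (L2) distance in pixels, needs <cmath>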

2013-08-02 14:07:39 -0600 answered a question simple measure over an image

Do you mean spatial distance? What do you mean exactly? Do you need all the distances between all the points?

2013-08-02 03:42:18 -0600 answered a question Object Detection with Freak (Hamming)

Each match returns the distance to the matched descriptor, so you can compare the nearest match with the next one. I mean:

We have 1 well-known feature from our object and we want to check it against a set of input features.

After computing the descriptors you call knnMatch. Now we can check how good the nearest match is: if the distance to the nearest is less than the distance to the second match * factor, we accept it as the feature.

For example, with SURF features the authors, as I remember from their paper, use 0.6 as the factor. So, following this criterion, if the distance of the nearest match is less than or equal to the distance of the 2nd-nearest match * 0.6, the feature is found.
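
A rough sketch of this test with a Hamming matcher for FREAK (0.6 is just the example factor; queryDesc and trainDesc are assumed to be the computed descriptor Mats):

    cv::BFMatcher matcher(cv::NORM_HAMMING);                   // FREAK descriptors are binary
    std::vector<std::vector<cv::DMatch> > matches;
    matcher.knnMatch(queryDesc, trainDesc, matches, 2);        // 2 nearest neighbours per query

    std::vector<cv::DMatch> goodMatches;
    for (size_t i = 0; i < matches.size(); i++) {
        if (matches[i].size() == 2 &&
            matches[i][0].distance <= matches[i][1].distance * 0.6f)
            goodMatches.push_back(matches[i][0]);              // nearest is clearly the best
    }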

2013-08-02 03:25:18 -0600 asked a question FREAK descriptor type with selectPairs

Hi everyone!

I am developing a tracker, I was using SURF features and descriptors, and I want to try the FREAK descriptor.

I have read the authors' paper and I understand the procedure more or less, but I have some doubts about the OpenCV implementation.

The return of selectPairs() is an int vector, but to match them I think unsigned char is needed, because with other types I get a runtime error.

What I do now is:

- extract features with ORB (for example)
- select pairs
- copy the result into a Mat (there I have to cast from int to unsigned char)
- call match() with that Mat
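
The cast step looks roughly like this (just a sketch of what I mean; the variable names are mine, not from my real code):

    // pairs selected by FREAK come back as int indices
    std::vector<int> pairs = freak.selectPairs(trainImages, trainKeypoints, 0.7, true);
    cv::Mat pairsInt(pairs, true);                    // CV_32S column Mat (deep copy)
    cv::Mat pairsU8;
    pairsInt.convertTo(pairsU8, CV_8U);               // cast int -> unsigned char before matching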

Am I doing it correctly? Is there a significant speed-up with FLANN instead of brute force? (The sets to match are small.)

Regards

2013-07-29 11:53:39 -0600 received badge  Editor (source)
2013-07-29 11:48:24 -0600 answered a question Best way to detect that eyes are closed.

Try matchTemplate with normalized cross correlation; I think there will be a big difference when comparing the two cases (eyes open vs. closed). So you can set a minimum threshold.
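
Something like this sketch (eyeTemplate would be a crop of the eye region; 0.9 is only an example threshold):

    cv::Mat result;
    cv::matchTemplate(frameGray, eyeTemplate, result, CV_TM_CCORR_NORMED);  // normalized cross correlation
    double minVal, maxVal;
    cv::minMaxLoc(result, &minVal, &maxVal);
    bool looksLikeTemplate = maxVal > 0.9;            // the score drops a lot when the eye state differs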