
nkint's profile - activity

2020-11-16 15:01:15 -0600 received badge  Great Question (source)
2018-12-13 05:28:44 -0600 received badge  Good Answer (source)
2018-05-14 04:10:13 -0600 received badge  Notable Question (source)
2018-02-05 18:01:02 -0600 received badge  Notable Question (source)
2016-11-21 06:26:51 -0600 received badge  Popular Question (source)
2016-03-30 06:30:04 -0600 received badge  Popular Question (source)
2015-12-16 15:34:45 -0600 received badge  Famous Question (source)
2015-10-13 08:05:00 -0600 received badge  Taxonomist
2014-12-09 13:43:04 -0600 marked best answer optical flow state of art in version 2.4.2

Hi, I've just updated my OpenCV to version 2.4.2.

In the documentation I no longer see the Horn-Schunck algorithm and I'm wondering why (I have some snippets that use it).

I've also read about an excellent method called "Combining Local and Global" that mixes Lucas-Kanade and Horn-Schunck, and I'm wondering where to find an implementation of it compatible with OpenCV, or whether it is planned for release in a future version.
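
For context (not an answer to the CLG question), the dense optical flow routine that is documented in 2.4 is calcOpticalFlowFarneback; a minimal sketch of a call, with placeholder parameter values rather than tuned ones, looks like this:

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>

// prev and next are consecutive single-channel 8-bit frames (convert with cvtColor first).
// flow is filled with a CV_32FC2 matrix holding one (dx, dy) vector per pixel.
void denseFlow(const cv::Mat& prev, const cv::Mat& next, cv::Mat& flow)
{
    cv::calcOpticalFlowFarneback(prev, next, flow,
                                 0.5,   // pyramid scale
                                 3,     // pyramid levels
                                 15,    // averaging window size
                                 3,     // iterations per level
                                 5,     // pixel neighborhood for polynomial expansion
                                 1.1,   // Gaussian sigma for that expansion
                                 0);    // flags
}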

2014-10-08 06:56:42 -0600 received badge  Nice Answer (source)
2014-02-22 04:46:51 -0600 answered a question Comparing two HOG descriptors vectors

A similar question is asked here:

http://stackoverflow.com/questions/11626140/extracting-hog-features-using-opencv

They just compute a HOG-to-HOG distance by accumulating the error: nothing complicated, just an accumulated error between two float arrays (the two HOG descriptors must be of the same size, of course).
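
A minimal sketch of that kind of accumulated distance between two descriptors, e.g. the vectors filled by HOGDescriptor::compute (the squared-error accumulation is just one reasonable choice of metric, an assumption here):

#include <cmath>
#include <stdexcept>
#include <vector>

// Accumulated (Euclidean) distance between two HOG descriptor vectors of equal length.
double hogDistance(const std::vector<float>& a, const std::vector<float>& b)
{
    if (a.size() != b.size())
        throw std::runtime_error("the two HOG descriptors must have the same size");

    double acc = 0.0;
    for (size_t i = 0; i < a.size(); ++i)
    {
        double d = a[i] - b[i];
        acc += d * d;           // accumulate the squared per-bin error
    }
    return std::sqrt(acc);
}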

2014-01-31 09:24:01 -0600 commented question Extract point for calcOpticalFlowPyrLK and cluster

Yes, but sometimes the objects merge into one big blob and I'd like to split them via clustering and optical flow.

2014-01-29 07:36:54 -0600 asked a question Extract point for calcOpticalFlowPyrLK and cluster

Hi!

I want to use calcOpticalFlowPyrLK to calculate optical flow inside some blobs detected with the MOG background subtractor. Since LK optical flow is sparse, I have to give it some points as input.

Reading around on the net, I see 3 possibilities:

  1. Use lattice points, dividing the blob regions into a grid
  2. Use goodFeaturesToTrack
  3. Detect corners with cornerHarris

Which method is preferred? Are there any benchmarks or comparisons?

I then want to cluster the points with, for example, k-means. On what should I cluster? The position of the point and the optical flow intensity? Or something else?
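
For concreteness, a minimal sketch of option 2 followed by k-means on (x, y, flow magnitude) — the feature choice and the parameter values are just one possibility, not an established recommendation:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <cmath>
#include <vector>

// prevGray / currGray: consecutive grayscale frames.
// blobMask: 8-bit foreground mask from the background subtractor (restricts the features).
// Returns one cluster label per successfully tracked point, clustering on (x, y, |flow|).
std::vector<int> clusterTrackedPoints(const cv::Mat& prevGray, const cv::Mat& currGray,
                                      const cv::Mat& blobMask, int K)
{
    std::vector<int> result;

    // Option 2 from the list above: pick good features, but only inside the blobs.
    std::vector<cv::Point2f> prevPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 5, blobMask);
    if (prevPts.empty())
        return result;

    // Sparse Lucas-Kanade optical flow for those points.
    std::vector<cv::Point2f> nextPts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err);

    // One sample per successfully tracked point: (x, y, flow magnitude).
    cv::Mat samples(0, 3, CV_32F);
    for (size_t i = 0; i < prevPts.size(); ++i)
    {
        if (!status[i]) continue;
        cv::Point2f d = nextPts[i] - prevPts[i];
        float row[3] = { prevPts[i].x, prevPts[i].y,
                         std::sqrt(d.x * d.x + d.y * d.y) };
        samples.push_back(cv::Mat(1, 3, CV_32F, row));
    }
    if (samples.rows < K)
        return result;

    // k-means on those samples; labels come back as a CV_32S column vector.
    cv::Mat labels;
    cv::kmeans(samples, K, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 20, 1.0),
               3, cv::KMEANS_PP_CENTERS);
    labels.copyTo(result);
    return result;
}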

Thanks in advance

2014-01-29 03:07:23 -0600 commented answer Unable to build documentation

If you are on Mac OS X using Homebrew, consider this for Sphinx: https://gist.github.com/terenceponce/3786784

2014-01-22 10:14:42 -0600 received badge  Teacher (source)
2014-01-22 09:45:44 -0600 received badge  Necromancer (source)
2014-01-22 08:22:25 -0600 answered a question How to training HOG and use my HOGDescriptor?

Hi! I have found this repository:

https://github.com/DaHoC/trainHOG

It seems to be a nice tutorial on how to train a HOG detector using SVMlight 6.02. Even though I haven't tried it myself, I would give it a try!

2014-01-22 08:07:24 -0600 asked a question HOG people detect example image

Hi! I'm attempting to dive into people detection using HOG.

At some point I plan to train my own detector, but first I want to give the standard people detector a try. So I'm starting with the peopledetect.cpp sample in the OpenCV source tree.

I'm using OpenCV 2.4.3, and in samples/cpp I have this example (I think it is the same as in the newest version, but I'm copying it here to be sure):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <stdio.h>
#include <string.h>
#include <ctype.h>

#include <iostream>

using namespace cv;
using namespace std;

// static void help()
// {
//     printf(
//             "\nDemonstrate the use of the HoG descriptor using\n"
//             "  HOGDescriptor::hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());\n"
//             "Usage:\n"
//             "./peopledetect (<image_filename> | <image_list>.txt)\n\n");
// }

int main(int argc, char** argv)
{

    std::cout << "OPENCV version: " << CV_MAJOR_VERSION << " " << CV_MINOR_VERSION << std::endl; 

    Mat img;
    FILE* f = 0;
    char _filename[1024];

    if( argc == 1 )
    {
        printf("Usage: peopledetect (<image_filename> | <image_list>.txt)\n");
        return 0;
    }
    img = imread(argv[1]);

    if( img.data )
    {
        strcpy(_filename, argv[1]);
    }
    else
    {
        f = fopen(argv[1], "rt");
        if(!f)
        {
            fprintf( stderr, "ERROR: the specified file could not be loaded\n");
            return -1;
        }
    }

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    namedWindow("people detector", 1);

    for(;;)
    {
        char* filename = _filename;
        if(f)
        {
            if(!fgets(filename, (int)sizeof(_filename)-2, f))
                break;
            //while(*filename && isspace(*filename))
            //  ++filename;
            if(filename[0] == '#')
                continue;
            int l = (int)strlen(filename);
            while(l > 0 && isspace(filename[l-1]))
                --l;
            filename[l] = '\0';
            img = imread(filename);
        }
        printf("%s:\n", filename);
        if(!img.data)
            continue;

        fflush(stdout);
        vector<Rect> found, found_filtered;
        double t = (double)getTickCount();
        // run the detector with default parameters. to get a higher hit-rate
        // (and more false alarms, respectively), decrease the hitThreshold and
        // groupThreshold (set groupThreshold to 0 to turn off the grouping completely).
        hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);
        t = (double)getTickCount() - t;
        printf("tdetection time = %gms\n", t*1000./cv::getTickFrequency());

        std::cout << "found: " << found.size() << std::endl;

        size_t i, j;
        for( i = 0; i < found.size(); i++ )
        {
            Rect r = found[i];
            for( j = 0; j < found.size(); j++ )
                if( j != i && (r & found[j]) == r)
                    break;
            if( j == found.size() )
                found_filtered.push_back(r);
        }
        for( i = 0; i < found_filtered.size(); i++ )
        {
            Rect r = found_filtered[i];
            // the HOG detector returns slightly larger rectangles than the real objects.
            // so we slightly shrink the rectangles to get a nicer output.
            r.x += cvRound(r.width*0.1);
            r.width = cvRound(r.width*0.8);
            r.y += cvRound(r.height*0.07);
            r.height = cvRound(r.height*0.8);
            rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);
        }
        imshow("people detector", img);
        int c = waitKey(0) & 255;
        if( c == 'q' || c == 'Q' || !f)
            break;
    }
    if(f)
        fclose(f);
    return 0;
}

With the image I have it does not find anything, so probably I have to change some parameters, e.g. the size of the detected people in ... (more)
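
For what it's worth, the usual knobs are the arguments of detectMultiScale; a hedged sketch of a more permissive call than the sample's, reusing the sample's hog, img and found variables (the values are illustrative guesses, not recommendations from the sample itself):

// Inside the sample's main loop, replacing the detectMultiScale call above.
// Note: the default people detector uses a 64x128 detection window, so people
// much smaller than that in the image will not be found regardless of parameters.
hog.detectMultiScale(img, found,
                     0,                 // hit_threshold: lower admits weaker detections
                     cv::Size(4, 4),    // win_stride: finer stride than the sample's 8x8
                     cv::Size(16, 16),  // padding around the detection window
                     1.05,              // scale step between pyramid levels
                     2);                // group_threshold (0 disables grouping)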

2013-11-18 08:02:48 -0600 asked a question exception in countNonZero (to see if 2 Mats are equal)

Hi! I have this exception

 OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in countNonZero

and I don't understand why. I use countNonZero in this function:

bool compare_images(string file1, string file2) {
    cout << "comapre_images > file1: "<<file1<<", file2: "<<file2<<endl;
    cv::Mat m1 = cv::imread(file1);
    cv::Mat m2 = cv::imread(file2);

    if(m1.empty() || m2.empty()) {
        cout << "WARNING: one of the two file is empty" << endl;
        return false;
    }

    if (m1.cols != m2.cols || m1.rows != m2.rows || m1.dims != m2.dims) {
        cout << "WARNING: the two images differs on size" << endl;
        return false;
    }

    cv::Mat diff;
    cv::compare(m1, m2, diff, cv::CMP_NE);

    int nz = cv::countNonZero(diff);

    std::cout << "comparing " << file1 << " and " << file2
              << " the diff pixels are:" << nz;
    return nz==0;
}
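
The assertion comes from the fact that countNonZero only accepts single-channel matrices, while compare on two 3-channel images (as loaded by imread) produces a 3-channel diff. A minimal hedged fix, replacing the countNonZero line in the function above, is to flatten the channels before counting:

// countNonZero only accepts single-channel matrices; reshape(1) reinterprets the
// 3-channel diff as a single-channel matrix with three times as many columns,
// so any differing channel value is counted.
int nz = cv::countNonZero(diff.reshape(1));
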
2013-09-03 10:13:16 -0600 asked a question camera calibration with partially occluded patterns

Hi! In the documentation of the calibrateCamera method I found:

Although, it is possible to use partially occluded patterns, or even different patterns in different views. Then, the vectors will be different.

Where can I find more information about this?

2013-08-30 10:09:47 -0600 asked a question cv::undistort and values of the distortion coefficients

Hi!

I'm porting a little script for lens distortion correction to OpenCV.

This program does the undistortion with the Brown distortion model, and it uses parameters produced by a (closed-source) camera calibration package whose values are in the so-called photogrammetric representation.

The main difference I have noticed so far is that it expresses the focal length and principal point in mm, while the OpenCV undistort function takes values in pixels. OK, I have the pixel size and I can do this conversion.

But even after this conversion, cv::undistort still gives me an image that is not correctly undistorted.

I think there is some scaling factor that I'm not considering. So I'm asking: what are the units of the distortion coefficients? Are they in radians? Or is there some other conversion I have to do? Any advice?

EDIT: I report the parameter names from the calibration log and how I'm using them (referring to the OpenCV cv::undistort documentation):

Camera interior orientation: focal length (mm), principal point (mm). Radial distortion parameters: k1, k2, k3. Decentring distortion parameters: p1, p2. Affinity/non-orthogonality parameters: b1, b2.

I'm using the focal length parameter (scaled to pixels) for fx and fy, and the principal point (scaled to pixels as well) as cx, cy. k1, k2, k3, p1, p2 are used as they are; b1, b2 are not used.
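
For concreteness, a minimal sketch of this mapping as described above (the pixel-size conversion and the parameter names are assumptions taken from the log description; note that OpenCV expects the distortion coefficients in the order k1, k2, p1, p2, k3, which differs from the log's grouping):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Convert photogrammetric parameters (mm) to the pixel-based model used by cv::undistort.
// focal_mm, cx_mm, cy_mm, k1..k3, p1, p2 come from the calibration log;
// pixelSizeMm is the physical size of one sensor pixel (an assumption of this sketch).
cv::Mat undistortFromLog(const cv::Mat& src,
                         double focal_mm, double cx_mm, double cy_mm,
                         double k1, double k2, double k3,
                         double p1, double p2,
                         double pixelSizeMm)
{
    double f_px  = focal_mm / pixelSizeMm;
    double cx_px = cx_mm / pixelSizeMm;
    double cy_px = cy_mm / pixelSizeMm;

    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        f_px, 0,    cx_px,
        0,    f_px, cy_px,
        0,    0,    1);

    // OpenCV coefficient order: k1, k2, p1, p2, k3.
    cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << k1, k2, p1, p2, k3);

    cv::Mat dst;
    cv::undistort(src, dst, cameraMatrix, distCoeffs);
    return dst;
}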

2013-08-20 07:57:11 -0600 asked a question Common pre processing in blob extracting

Hi! I'm trying to do some blob extraction, and the usual procedure I've seen in a lot of example code is:

  • after some background subtraction/modeling/etc. we have a binary cv::Mat I
  • apply some processing like blur, erode, dilate
  • apply findContours
  • filter blobs (e.g. with contourArea)

I'd like to know whether there is a common way of handling the pre-processing before findContours, or whether it can be skipped because it is computationally too expensive.
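
For reference, a minimal sketch of that pipeline (kernel sizes and the area threshold are arbitrary placeholders, not recommended values):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// fgMask: binary foreground mask from the background subtractor.
// Returns the contours whose area exceeds minArea.
std::vector<std::vector<cv::Point> > extractBlobs(const cv::Mat& fgMask, double minArea)
{
    cv::Mat clean;

    // Light pre-processing: median blur to kill isolated noise pixels,
    // then erode/dilate to remove speckles and restore blob size.
    cv::medianBlur(fgMask, clean, 5);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::erode(clean, clean, kernel);
    cv::dilate(clean, clean, kernel);

    // findContours modifies its input, so "clean" is expendable here.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(clean, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Keep only reasonably large blobs.
    std::vector<std::vector<cv::Point> > blobs;
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > minArea)
            blobs.push_back(contours[i]);

    return blobs;
}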

2013-08-09 06:23:27 -0600 asked a question Systematically explore all parameters of BackgroundSubtractor objects

Hi! I have some video with illumination changes (and no objects) on which I'd like to test some BackgroundSubtractor objects (MOG, MOG2, and the gpu-module ones too: FGDStatModel, GMG_GPU), and find out which one is most robust against illumination changes. The problem is that each algorithm has a lot of settings, and tuning the parameters could take ages.

For now I've written a simple class that tries out all the algorithms, and I'm manually trying different combinations of parameters. But I'm looking for a more systematic way of testing them.

Any kind of advice?
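
A minimal sketch of a brute-force grid over two MOG2 parameters, scoring each combination by how many foreground pixels it reports on the object-free clip (the parameter ranges, the scoring function and the file name are assumptions of this sketch, written against the 2.4-style API):

#include <opencv2/core/core.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

// Score one parameter combination: total number of foreground pixels reported
// over an object-free clip (lower is better, since everything should be background).
static long scoreMOG2(const char* videoPath, int history, float varThreshold)
{
    cv::VideoCapture cap(videoPath);
    cv::BackgroundSubtractorMOG2 mog2(history, varThreshold, /*bShadowDetection=*/false);

    cv::Mat frame, fgMask;
    long falseForeground = 0;
    while (cap.read(frame))
    {
        mog2(frame, fgMask);                      // OpenCV 2.4-style operator()
        falseForeground += cv::countNonZero(fgMask);
    }
    return falseForeground;
}

int main()
{
    const int   histories[]  = { 100, 300, 500 };
    const float thresholds[] = { 8.f, 16.f, 32.f };

    for (int h = 0; h < 3; ++h)
        for (int t = 0; t < 3; ++t)
            printf("history=%d varThreshold=%.0f -> %ld foreground pixels\n",
                   histories[h], thresholds[t],
                   scoreMOG2("illumination_clip.avi", histories[h], thresholds[t]));
    return 0;
}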

2013-07-26 05:22:05 -0600 asked a question are there some samples of legacy code for tracking?

Hi! I'm interested in tracking, and I've found some tracking classes in the legacy module (blob tracking, condensation, Kalman post-processing, etc.).

The files I'm talking about are listed on GitHub here: https://github.com/Itseez/opencv/tree/master/modules/legacy/src

I'd like to test them. Are there any samples? Or just some notes about why those files were moved to the legacy/deprecated module?

Thanks.

2013-07-26 05:16:05 -0600 commented answer Best method to track multiple objects?

Do you have an example of a particle filter for multiple-object tracking?

2013-07-26 05:08:14 -0600 commented answer syntax for particle filter in opencv 2.4.3

Thank you, very nice. Is there a place where I can download your patch? I don't exactly understand your fixes from the diff output.

2013-06-11 06:34:33 -0600 commented answer OpenCV on Mac OS X 10.8 Mountain Lion

Why does this post have so few votes? It is super important, and it worked for me after days of trouble with the Mac compilers.

2013-06-10 04:08:29 -0600 commented answer reading opencv + qt code

Not many links are following...

2013-06-03 07:22:42 -0600 commented answer reading opencv + qt code

Thanks! I hope some links will follow : )

2013-06-03 06:28:23 -0600 asked a question reading opencv + qt code

Hi! I've managed to get Qt and OpenCV running together with threads, and I'm quite proud of it. But I still have some problems and uncertainties, and I think I could learn and clarify a lot by reading the source of some Qt + OpenCV application and seeing how other people make these kinds of decisions. I didn't find much on the net, so I'm wondering if someone can suggest an open-source project that uses OpenCV heavily.

2013-05-29 05:22:04 -0600 commented question OpenCV + CUDA + OSX (10.8.3)

I'm having the same problem (on another machine with Ubuntu it was easy to resolve, but I still want to compile it on my Mac). How did you add that flag? In CMake?

2013-05-27 04:02:25 -0600 commented answer install OpenCV with CUDA on Mac

No. It is very annoying because I have written an application under Ubuntu that uses the GPU a lot, but I can't run it on my Mac. I hope to find a solution... just in case, keep in touch.

2013-04-29 13:17:17 -0600 asked a question install OpenCV with CUDA on Mac

Hi! I'm trying to install OpenCV 2.4.5 on Mac OS X 10.8 with no success. I'm using OpenCV 2.4.5 because it is the latest, but if there are issues with that version I can downgrade, no problem. (I have already installed the command line tools from Xcode and the CUDA toolkit from NVIDIA.)

I'm stuck on setting the right compiler for CUDA.

What I've done is: download OpenCV, run CMake, run make, and get this error:

clang: error: unsupported option '-dumpspecs'

(same error with manual CMake and with MacPorts)

After some googling I've found that the problem could be CUDA_HOST_COMPILER. I've tried changing it to /usr/bin/gcc and /usr/bin/llvm-g++; the build gets further, but after a little while I get another error:

cc1plus: error: unrecognized command line option "-Wno-narrowing"

I can post all the output from cmake and make if needed.

What can I do? I need to compile OpenCV with CUDA, but I have no other constraints such as a particular version of OpenCV or gcc or clang or llvm (I normally develop under Ubuntu, so I don't understand the differences between those compilers in depth).

These are my system settings:

OS X 10.8.3 (12D78)

and

>>> clang --version
Apple LLVM version 4.2 (clang-425.0.24) (based on LLVM 3.2svn)
Target: x86_64-apple-darwin12.3.0
Thread model: posix

and

>>> g++ --version
i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
>>> ls -al /usr/bin/g++
/usr/bin/g++ -> llvm-g++-4.2

and

>>> cc --version
Apple LLVM version 4.2 (clang-425.0.24) (based on LLVM 3.2svn)
Target: x86_64-apple-darwin12.3.0
Thread model: posix
>>> ls -al /usr/bin/cc
/usr/bin/cc -> clang

2013-03-04 11:17:40 -0600 marked best answer finding centroid of a mask

Hi, I have a mask obtained from the threshold function. I wonder if I can find its centroid with a built-in function; for now I'm doing it manually:

    float sumx=0, sumy=0;
    float num_pixel = 0;
    for(int x=0; x<difference.cols; x++) {
        for(int y=0; y<difference.rows; y++) {
            int val = difference.at<uchar>(y,x);
            if( val >= 50) {
                sumx += x;
                sumy += y;
                num_pixel++;
            }
        }
    }
    Point p(sumx/num_pixel, sumy/num_pixel);

but probably there is a better way...
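
One built-in alternative (a hedged sketch, reusing the difference matrix from the loop above and binarizing it with the same >= 50 threshold) is cv::moments:

    #include <opencv2/imgproc/imgproc.hpp>

    // Binarize with the same threshold as the manual loop, then take the
    // centroid from the first-order image moments.
    cv::Mat binary;
    cv::threshold(difference, binary, 49, 255, cv::THRESH_BINARY);

    cv::Moments m = cv::moments(binary, /*binaryImage=*/true);
    // Guard against m.m00 == 0 if the mask can be completely empty.
    cv::Point p(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));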

2013-01-30 11:27:19 -0600 asked a question osx mountain lion and qt

Hi! I have been doing OpenCV programming on Ubuntu for a little while. Now I have just switched to Mac OS X 10.8. I have installed both OpenCV and Qt with Homebrew.

The problem is that when I compile and execute an OpenCV program with highgui, it opens a window, but not a Qt window. In particular, when I'm showing some test matrices like 4x4 or 5x5 Mats with imshow: on Ubuntu I had a zoomed view, here I get a small window with only a few pixels, not zoomed. I also don't have the upper control menu:

(screenshot attached)

Well... I don't have Qt in the OpenCV highgui module.

I know that Homebrew is not a perfect package manager, so I'd like to make as few modifications as possible, and only through brew (please, no recompiling).

Any hints?

2013-01-14 06:29:49 -0600 commented answer differences in histogram equalization between equalizeHist and wikipedia example

No, minMaxLoc gives me 4 and 255, and I use cout << imgEqualized << endl;. I'm on Ubuntu 10.10, gcc version 4.4.5, OpenCV 2.4.3, and I compile with g++ `pkg-config --libs --cflags opencv` test.cpp. What could it be? What version of OpenCV do you use?