
icedecker's profile - activity

2020-12-03 02:05:34 -0600 received badge  Popular Question (source)
2017-10-16 14:38:27 -0600 received badge  Guru (source)
2017-10-16 14:38:27 -0600 received badge  Great Answer (source)
2017-07-17 16:14:47 -0600 received badge  Popular Question (source)
2016-06-26 20:16:50 -0600 received badge  Good Question (source)
2016-01-18 12:10:06 -0600 received badge  Nice Answer (source)
2015-05-18 16:10:25 -0600 received badge  Famous Question (source)
2014-12-09 13:44:44 -0600 marked best answer Multiple object detection with 2D features and homography?

Can the code shown in the tutorial about 2D features and homography (SURF_Homography.cpp) be adapted to detect multiple occurrences of the same object in an image?

I'm trying to figure out a good way to do this:

1 - I have the list of matched features

2 - When I find the first object, calculate the homography

3 - Delete the matched features inside the homography.

4 - Iterate until I don't have any matched features.

The problem is that I don't have much idea how to do step 3: how do I know whether keypoints lie inside the homography region? Does anyone have an idea how to do it, or a better algorithm?
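
One possible way to do step 3 (a rough sketch, not from the original thread; it assumes the usual names from the feature-matching tutorial: obj_corners, H, keypoints_scene and good_matches, with the scene descriptors as the train set) is to project the object corners into the scene with the homography and drop the matches whose scene keypoint falls inside that quadrilateral:

std::vector<cv::Point2f> scene_corners(4);
cv::perspectiveTransform(obj_corners, scene_corners, H); // object corners projected into the scene

std::vector<cv::DMatch> remaining;
for (size_t i = 0; i < good_matches.size(); ++i) {
    const cv::Point2f& p = keypoints_scene[good_matches[i].trainIdx].pt;
    // pointPolygonTest returns a negative value for points outside the polygon
    if (cv::pointPolygonTest(scene_corners, p, false) < 0)
        remaining.push_back(good_matches[i]); // keep only matches outside the detected object
}
good_matches = remaining; // repeat steps 2-4 with what is left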

2014-06-11 14:36:24 -0600 commented answer cv::findContours finds more than one contour

After you find the contours, you can filter them by comparing each contour's width/height ratio with the expected w/h ratio of the contours you want.
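
A minimal sketch of that kind of filtering (the expected ratio and tolerance are placeholder values, and binary is assumed to be the thresholded input image):

std::vector<std::vector<cv::Point> > contours;
cv::findContours(binary, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

const double expected_ratio = 2.0; // expected w/h of the target contour (example value)
const double tolerance = 0.3;
std::vector<std::vector<cv::Point> > filtered;
for (size_t i = 0; i < contours.size(); ++i) {
    cv::Rect box = cv::boundingRect(contours[i]);
    double ratio = (double) box.width / box.height;
    if (ratio > expected_ratio - tolerance && ratio < expected_ratio + tolerance)
        filtered.push_back(contours[i]); // keep only contours with the expected shape
}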

2014-06-11 14:32:40 -0600 commented question Template matching is not working correctly?Is there a way to solve this

If you know the expected size (or rotation) of the object in the image, try to resize (or rotate) the template image to the expected size/angle.
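
For example, a hedged sketch of pre-scaling and pre-rotating the template before calling matchTemplate (the scale factor and angle are placeholders):

cv::Mat scaled;
cv::resize(templ, scaled, cv::Size(), 0.5, 0.5); // example: the object appears at half the template size

cv::Point2f center(scaled.cols / 2.0f, scaled.rows / 2.0f);
cv::Mat rot = cv::getRotationMatrix2D(center, 15.0, 1.0); // example: rotate by 15 degrees
cv::Mat rotated;
cv::warpAffine(scaled, rotated, rot, scaled.size());

cv::Mat result;
cv::matchTemplate(image, rotated, result, CV_TM_CCOEFF_NORMED);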

2014-06-03 12:36:34 -0600 marked best answer Unresolved inclusion in OpenCV+Android tutorial

Hi,

I have followed the tutorial to run the mixed native code and Java, installed the NDK stuff, and it works nicely. But in the files in the jni directory Eclipse always shows an "Unresolved inclusion: <jni.h>" and other unresolved messages such as "Unresolved inclusion: <opencv2/core/core.hpp>".

I have searched on the net, and it says that I should add the include paths in the C/C++ properties. But the tutorial examples of OpenCV+Android are not C++ projects. The C/C++ perspective does not apply to these projects, and I can't find anywhere in Eclipse how to add the includes, even after switching to the C/C++ perspective. I have the CDT plugin installed.

How can I handle this? Thanks!

2014-05-12 02:01:21 -0600 marked best answer How to read the pixels of the three channels at the same time?

Hi, I'm trying to convert the following C++ code to Java (for an Android app). I need to sum the pixels of the three channels at the same time (and do other operations). Any suggestions?

vector<Mat> channel; // in Java: List<Mat> channel = new ArrayList<Mat>();
split(img, channel); // in Java: org.opencv.core.Core.split(img, channel);

// it_r, it_g and it_b iterate over the red, green and blue channels
MatIterator_<uchar> it_b = channel[0].begin<uchar>();
MatIterator_<uchar> it_g = channel[1].begin<uchar>();
MatIterator_<uchar> it_r = channel[2].begin<uchar>(), it_endr = channel[2].end<uchar>();
double r, g, b, tmp, sum = 0;

// sum the pixels of the three channels and keep the largest sum - how to do it in Java?
for (; it_r != it_endr; ++it_r, ++it_g, ++it_b) {
    r = (double) *it_r;
    g = (double) *it_g;
    b = (double) *it_b;

    tmp = r + g + b;
    if (tmp > sum) {
        sum = tmp;
    }
}
2014-04-17 19:59:18 -0600 received badge  Taxonomist
2014-01-30 12:09:24 -0600 answered a question Extracting the Percentage of color (Red,blue,green,yellow,orange) in an image in Opencv?

You can use the function countNonZero and divide by the total number of pixels in the image. An example of use:

vector<Mat> channels;
split(hsv_img,channels);

Mat red, blue, green;
inRange(channels[0], Scalar(0), Scalar(10), red); // red
// ... do the same for blue, green, etc only changing the Scalar values and the Mat

double image_size = hsv_img.cols*hsv_img.rows;
double red_percent = ((double) cv::countNonZero(red))/image_size;

But it might not be optimal depending on the application (for example, if you need to scan lots of images). Anyway, you can use it to compare the values.

2014-01-20 14:07:48 -0600 received badge  Good Answer (source)
2014-01-20 14:07:48 -0600 received badge  Enlightened (source)
2014-01-18 18:05:15 -0600 received badge  Notable Question (source)
2014-01-09 07:48:06 -0600 marked best answer Error in parameter of traincascade?

Hi,

I'm trying to train new detectors. With the previous OpenCV version, 2.3.1, my parameters worked nicely. I have discarded the previous detectors, so I needed to train again with the new version, OpenCV 2.4.2. But with the same parameters, I get the following error:

===== TRAINING 2-stage =====
<BEGIN
OpenCV Error: Bad argument (Can not get new positive sample. The most possible reason is 
insufficient count of samples in given vec-file.
) in get, file /home/user/opencv/apps/traincascade/imagestorage.cpp, line 159
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/user/opencv/apps/traincascade/imagestorage.cpp:159: error: (-5) Can not 
  get new positive sample. The most possible reason is insufficient count of samples in given 
  vec-file. 
  in function get

The parameters that I used:

    ./opencv_traincascade -data mix25x15 -vec mix.vec -bg negatives.txt -numStages 15 
-minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 3600 -numNeg 3045 -w 25 -h 15  
-precalcValBufSize 2048 -precalcIdxBufSize 2048 -mode ALL

But I have put the exact number of positive samples; I have looked in the vec file to be sure. I also put the correct number of negative images. I've tried using the same number of pos and neg, for example 1200 each (of course I created the corresponding vec file), but it is not working. I have also tried with 1200 pos and 3045 neg.

I'm not sure if it is something in the code or in my parameters. Any idea? Thanks!
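
Not part of the original question, but a commonly cited rule of thumb for opencv_traincascade (an assumption on my part, not an official formula) is that the vec file must hold noticeably more samples than -numPos, because every stage after the first consumes extra positives:

    vec_samples >= numPos + (numStages - 1) * (1 - minHitRate) * numPos + S

where S is the number of vec-file samples that get rejected as background along the way. If -numPos equals the total sample count in the vec file there is no margin at all, which typically produces exactly this error, so the usual workaround is to set -numPos somewhat below the vec-file count.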

2013-06-26 16:16:13 -0600 received badge  Popular Question (source)
2013-04-30 17:52:12 -0600 commented question CascadeClassifier::load function always returns false

If you are running on Windows, make sure that you are using the backslash (\) in the path. You are using the forward slash (/), which is used in Linux environments.
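
For illustration only (a hedged sketch; the cascade file name is just an example), note that each backslash has to be doubled inside a C++ string literal:

cv::CascadeClassifier classifier;
// every backslash in the Windows path is written as \\ in the string literal
bool ok = classifier.load("C:\\opencv\\data\\haarcascades\\haarcascade_frontalface_alt.xml");
if (!ok) {
    // the file was not found or could not be parsed
}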

2013-04-01 09:06:29 -0600 commented answer Keypoint Descriptor with different size on different computers

Yes, I'm using OpenCV 2.4.4 on both computers.

2013-04-01 06:38:39 -0600 commented question Haar Training hang after few stages.

I suggest you use opencv_traincascade instead of opencv_haartraining. With TBB enabled, the training will run faster thanks to multithreading.

2013-04-01 06:31:03 -0600 commented question Problems with compiling the `displayImage`-example

I think that the compiler can find the path to cv and highgui, but not the path to the opencv2 modules. Type the command pkg-config --cflags --libs opencv and see what it returns.

2013-04-01 06:27:39 -0600 commented question Keypoint Descriptor with different size on different computers

I'm running the same configuration on both computers: OpenCV 2.4.4 and Ubuntu 12.04 (both 64-bit).

2013-04-01 06:25:59 -0600 commented answer Keypoint Descriptor with different size on different computers

I've initialized SURF with these parameters: SurfFeatureDetector detector(minHessian, 4, 3, true, true);. So it is strange that it behaves differently on different computers.

2013-03-28 14:06:46 -0600 asked a question Keypoint Descriptor with different size on different computers

I'm seeing strange behavior with my keypoint descriptors. I have stored the keypoint descriptors of an object in a yml file for later use in another program. The size (cols) is 128. When I extract the keypoint descriptors of a scene at runtime to detect the object (the descriptors of the object come from the yml file), on my laptop the size of the scene descriptors is 128 and the matching runs nicely.

But on another computer, running the same code, the size of the extracted scene descriptors is 64, so I'm unable to do the matching.

How is it possible that the size of the descriptors is different on different computers with the same code? Is it possible to force a descriptor to have the desired size?
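
Not from the original thread, but one thing that controls the SURF descriptor size in OpenCV 2.4 is the extended flag (false gives 64-dimensional descriptors, true gives 128). A minimal sketch of setting it explicitly, assuming the nonfree module is available and scene_gray is the input image:

// hessianThreshold, nOctaves, nOctaveLayers, extended, upright
cv::SURF surf(400, 4, 2, true, false); // extended = true -> 128-dimensional descriptors

std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
surf(scene_gray, cv::Mat(), keypoints, descriptors);
// descriptors.cols should now be 128 regardless of the machine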

2013-03-03 16:56:32 -0600 received badge  Nice Answer (source)
2013-01-31 05:56:29 -0600 marked best answer Performance evaluation for detection

I'm trying to evaluate some cascades I have trained with traincascade.cpp. So I would like to know whether using the intersection of two rectangles (the reference rect and the detected rect) is a good metric. The criterion would be something like:

cv::Rect intersect = ref & detected; // intersection of the two rectangles
if (intersect.area() > 0.9 * ref.area())
   hit++;
else
   miss++;

Is it good enough for evaluation? I have looked at the code of opencv_performance, but did not understand well the metric used there. Is it rect intersection or something else?
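
For comparison (an assumption on my part, not necessarily what opencv_performance does), a common alternative criterion is the intersection-over-union overlap used by detection benchmarks:

cv::Rect inter = ref & detected;                            // intersection of the two rectangles
double uni = ref.area() + detected.area() - inter.area();   // area of the union
double overlap = uni > 0 ? inter.area() / uni : 0.0;
if (overlap > 0.5) // 0.5 is the threshold used by e.g. the PASCAL VOC evaluation
    hit++;
else
    miss++;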

Thanks!

2013-01-13 05:01:54 -0600 received badge  Civic Duty (source)