
mada's profile - activity

2019-11-29 10:19:33 -0600 received badge  Notable Question (source)
2017-08-30 04:54:32 -0600 received badge  Notable Question (source)
2017-02-10 06:14:31 -0600 received badge  Popular Question (source)
2016-04-13 09:58:07 -0600 received badge  Popular Question (source)
2014-09-12 08:14:42 -0600 commented question TrainCascade stuck on getting new negatives

The probability is 0.5^9, i.e. the maxFalseAlarmRate (0.5 by default) raised to the number of stages already trained.

2014-05-28 08:55:56 -0600 commented question Getting Lesser stages in haartraining.exe output

Training can finish early if the required accuracy is reached before the 10th stage. Try adding more positives/negatives. Also check opencv_traincascade; I think haartraining is obsolete.

2014-05-13 01:33:53 -0600 commented question train cascade - recommendation on better training

I suppose -w and -h should be proportional to the size of the object you are trying to recognize. If the chair's width and height are similar, you can stay with 80x80. As for the other question: to make the cascade more accurate, add more positives and negatives (with various backgrounds) if possible.

2014-05-13 01:22:33 -0600 commented question TrainCascade stuck on getting new negatives

In higher stages it is harder to select new negatives. For the 24th stage, the probability that a negative window passes all previous stages is maxFalseAlarmRate^24. Sadly, the process of selecting negatives is not parallelized, and it can take hours or days to finish.
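
For reference, the quantity in question is maxFalseAlarmRate raised to the number of stages already passed; a quick language-neutral sketch of the arithmetic (Python here only for illustration):

```python
# Sketch: expected fraction of random negative windows that pass the
# first N stages of a cascade, assuming each stage independently
# accepts a negative with probability maxFalseAlarmRate (default 0.5).

def stage_pass_fraction(max_false_alarm_rate: float, n_stages: int) -> float:
    return max_false_alarm_rate ** n_stages

def windows_to_scan(needed_negatives: int, max_false_alarm_rate: float, n_stages: int) -> float:
    # Roughly how many candidate windows must be scanned to collect
    # `needed_negatives` negatives that survive the previous stages.
    return needed_negatives / stage_pass_fraction(max_false_alarm_rate, n_stages)

print(stage_pass_fraction(0.5, 9))     # 0.001953125 (1 in 512)
print(windows_to_scan(1000, 0.5, 24))  # 16777216000.0 (~1.7e10 windows)
```

This is why the "getting new negatives" step appears stuck at high stages: the number of windows to scan grows exponentially with the stage index.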

2014-05-05 07:03:43 -0600 commented question traincascade Assertion failed

Did you use opencv_createsamples to save the positive images in a .vec file? Also, the -w (width) and -h (height) parameters of opencv_traincascade should be the same as they were in opencv_createsamples (negative windows should have the same size as the positives in the .vec file).

2014-04-28 02:29:29 -0600 commented question Cascade classifier, few questions

Yes, the false alarm rate would be very low. But more true positives would be missed, since at the beginning of every stage a few positives are dropped and new ones are selected from the .vec file.

2014-04-24 09:47:45 -0600 commented question 2.4.9 estimated release

Judging by the opencv changelog, it is already released.

2014-04-22 09:47:30 -0600 asked a question Cascade classifier, few questions

Hi, I've got a few questions about Haar cascade classifier (opencv_traincascade).

Let's say I've got 5000 positives and 10000 negatives(of various sizes) in my training set. After the training is done, detector still has some false and missed detections.

1) How will adding lots of false positives to the training set influence cascade training? From my experience, false positives can be eliminated this way, but with the drawback of missing more true positives. Is there a better approach?

2) How important is the order of negatives in the .txt file? I reckon the important ones, such as common backgrounds and frequently appearing objects, should be placed first, so they can be eliminated in the first stages... any other suggestions?

3) Is there an optimal number of positives and negatives to use for training? And the same question for the positives/negatives ratio?

Thanks!

2014-03-24 02:20:55 -0600 commented question if opencv_traincascade learning crashed

If you start the application again, it will use all the previously stored stages in .xml files and continue with training.

2014-03-19 04:48:46 -0600 commented question Silly question about opencv_traincascade.exe

Read this document first: http://docs.opencv.org/doc/user_guide/ug_traincascade.html. Add more samples, both positive and negative; numPos must be lower than the number of positive samples in the .vec file. It makes no sense to have only one stage.
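
The numPos constraint has a rule of thumb that is widely quoted on the OpenCV forums (not an official guarantee): the .vec file should hold at least numPos + (numStages - 1) * (1 - minHitRate) * numPos + S samples, where S is a slack term for positives discarded as misclassified and is unknown in advance. A sketch of the check, with S assumed to be 0:

```python
# Rule-of-thumb lower bound on the number of samples the .vec file
# must contain for opencv_traincascade (community formula, not from
# the official docs). `slack` stands for positives consumed because
# they were misclassified by earlier stages; it is unknown up front
# and treated as 0 here.

def min_vec_samples(num_pos: int, num_stages: int, min_hit_rate: float,
                    slack: int = 0) -> float:
    return num_pos + (num_stages - 1) * (1 - min_hit_rate) * num_pos + slack

# With numPos 1000, 20 stages and minHitRate 0.995, roughly 1095
# samples are needed in the .vec file.
print(min_vec_samples(1000, 20, 0.995))
```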

2013-08-14 11:11:41 -0600 asked a question Storing feature information (keypoints, descriptors)

Hi,

I have run into a problem while trying to store image features in a file. I am using the GPU version of the SURF algorithm. I have calculated keypoints and descriptors, and both are stored in gpu::GpuMat structures. More precisely, I have a vector like this:

struct featureInfo
{
    gpu::GpuMat keypoints;
    gpu::GpuMat descriptors;
    std::string frameName;
} frameData;

std::vector<featureInfo> data;

It contains keypoints/descriptors and other info for hundreds of frames. What would be the easiest (or fastest) way to store it to a file?

I guess the gpu::GpuMat data needs to be downloaded to the CPU before being stored? And if that is the case, is there a way to upload the descriptors back to the GPU without having to calculate them again?
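
OpenCV specifics aside (cv::FileStorage can serialize a cv::Mat once it has been downloaded with GpuMat::download(), and GpuMat::upload() restores it without recomputing), the general round trip is download, serialize, load, re-upload. A language-neutral sketch using Python's pickle, with plain lists standing in for the downloaded matrices:

```python
import os
import pickle
import tempfile

# Sketch of the download -> serialize -> reload round trip. Plain
# lists stand in for downloaded (CPU-side) keypoint and descriptor
# matrices; in OpenCV one would GpuMat::download() before saving and
# GpuMat::upload() after loading, with no re-detection needed.

frames = [
    {"frameName": "frame_000", "keypoints": [[1.0, 2.0]], "descriptors": [[0.1] * 4]},
    {"frameName": "frame_001", "keypoints": [[3.0, 4.0]], "descriptors": [[0.2] * 4]},
]

path = os.path.join(tempfile.mkdtemp(), "features.pkl")
with open(path, "wb") as f:
    pickle.dump(frames, f)          # store all frames in one file

with open(path, "rb") as f:
    loaded = pickle.load(f)         # restore without recomputation

print(loaded == frames)  # True
```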

Thanks!

2013-08-09 05:20:18 -0600 commented question What to do with DMatch value ?

@ximobayo explained it quite well. Using knnMatch will give you more options for discarding false matches; also check this out: http://answers.opencv.org/question/11840/false-positive-in-object-detection/

2013-08-08 07:08:02 -0600 commented question What to do with DMatch value ?

Discard some matches based on distance; it seems to be the only useful field in DMatch. Or apply some additional tests, like checking symmetry between matches (find matches for image1 in image2 and vice versa, then keep only those that are symmetrical), or try RANSAC, and so on.
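
The symmetry test mentioned here can be illustrated with plain pairs; a sketch in which (queryIdx, trainIdx) tuples stand in for cv::DMatch (real code would read those fields from the DMatch objects):

```python
# Sketch of the symmetry (cross-check) test: keep a match (i -> j)
# from image1 to image2 only if the reverse matching also pairs
# keypoint j with keypoint i. Tuples (queryIdx, trainIdx) stand in
# for cv::DMatch.

def symmetry_filter(matches_12, matches_21):
    # best match in image1 for each keypoint of image2
    reverse = {q: t for q, t in matches_21}
    return [(q, t) for q, t in matches_12 if reverse.get(t) == q]

m12 = [(0, 3), (1, 4), (2, 5)]
m21 = [(3, 0), (4, 7), (5, 2)]
print(symmetry_filter(m12, m21))  # [(0, 3), (2, 5)]
```

Match (1, 4) is dropped because keypoint 4 of image2 matched back to keypoint 7, not to keypoint 1.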

2013-06-19 02:15:05 -0600 received badge  Critic (source)
2013-06-05 05:23:22 -0600 asked a question RANSAC homography on GPU

Hello,

I couldn't find a GPU implementation of the RANSAC algorithm in OpenCV, only the CPU version (findFundamentalMat). When using low distance values, it takes a huge number of iterations to reach the desired confidence level, and the execution time increases a lot. I am using it to make a better distinction between similar images (neighboring frames of a video), and therefore lower distance values are desired.

My questions:

Will the algorithm be implemented for the GPU in the near future? Is it worth the time, and the possible speed-up, to implement it myself?

If anyone is familiar with a reliable GPU implementation of RANSAC outside of OpenCV, that could also be of use.
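
For context, the iteration blow-up described in the question follows from the standard RANSAC sample-count estimate N = log(1 - p) / log(1 - w^s), where p is the desired confidence, w the inlier ratio and s the minimal sample size; a sketch:

```python
import math

# Standard RANSAC iteration-count estimate: number of random samples
# needed so that, with confidence p, at least one sample of s points
# is all inliers, given inlier ratio w:
#   N = log(1 - p) / log(1 - w**s)

def ransac_iterations(confidence: float, inlier_ratio: float, sample_size: int) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(1 - inlier_ratio ** sample_size))

# A homography needs s = 4 point pairs; iterations explode as the
# inlier ratio drops (e.g. when a low distance threshold marks most
# correspondences as outliers).
print(ransac_iterations(0.99, 0.5, 4))  # 72
print(ransac_iterations(0.99, 0.1, 4))  # ~46,000 iterations
```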

Thanks!

2013-04-29 08:57:53 -0600 asked a question SURF parameters dependency

Hi,

I am trying to find the best SURF parameters for the image matching I am working on. For that, I need to find out what the parameters depend on.

So,

  • Hessian threshold - from what I have read, it depends on the average local contrast in an image, i.e. sharpness. Is there any other image property that influences this parameter?

  • Number of octaves - ? From my tests, the number of octaves does not change much (I used 3, 4 and 5).

  • Number of octave layers - ? Can't figure out which image property it depends on...
  • Extended - the 128-element descriptor should be better for most images...
  • Upright - used when rotation is within +/- 15°

Does anyone have experience with this? For example: blurry images -> a small number of octave layers works best... and similar correlations. Or an article that points them out? Thanks!

2013-04-16 05:18:17 -0600 asked a question Sorting keypoints and descriptors on GPU

Hello,

When calculating keypoints/descriptors with SURF on the GPU, each run (with the same parameters and image) stores them in a different order, but with the same content. Because of that, some algorithms I am using produce slightly different results each time.

What would be the easiest way to sort them after calculation? Downloading and sorting them seems expensive, since each descriptor holds 64/128x more elements than its keypoint. Is it possible to detect keypoints, download them to the CPU, sort them, and put them back on the GPU to calculate the descriptors? I know the two calculations can be done separately in the CPU version. Also, would that produce the same results each time?
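
One way to make the downstream results reproducible is to impose a fixed ordering after detection and apply the same permutation to the descriptors. A language-neutral sketch, with (x, y, response) tuples standing in for cv::KeyPoint and strings standing in for descriptor rows:

```python
# Sketch: impose a deterministic order on keypoints by sorting on a
# fixed key (here: y, then x, then response). Tuples (x, y, response)
# stand in for cv::KeyPoint; the descriptor rows are reordered with
# the same permutation so keypoint/descriptor pairs stay aligned.

def sort_keypoints(keypoints, descriptors):
    order = sorted(range(len(keypoints)),
                   key=lambda i: (keypoints[i][1], keypoints[i][0], keypoints[i][2]))
    return [keypoints[i] for i in order], [descriptors[i] for i in order]

kps = [(5.0, 2.0, 0.9), (1.0, 2.0, 0.8), (3.0, 1.0, 0.7)]
desc = ["d0", "d1", "d2"]
print(sort_keypoints(kps, desc))
```

After this, two runs that detect the same set of keypoints in different orders produce identical sequences.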

Thanks!

2013-04-12 04:12:38 -0600 commented question BFMatcher implemented differently on GPU?

Another similar question. If I run the SURF detector (on the GPU) with the same parameters and the same image twice, the keypoints I get are not exactly identical. Mostly they are, but a few keypoints found in the 1st test were not found in the 2nd one and vice versa (the number of keypoints found is the same in both tests). Why does this happen?

2013-04-08 02:46:14 -0600 commented question BFMatcher implemented differently on GPU?

So what you are saying is that it is possibly not my fault?

2013-04-04 09:31:58 -0600 asked a question BFMatcher implemented differently on GPU?

Hi,

I am using SURF and BFMatcher in my application, for which I have CPU and GPU code. I did additional calculations on the matches (calculated by BFMatcher) in the CPU and GPU versions in the same way (at least I think I did), to decide which matches to keep. I noticed that, depending on which version I use, I get slightly different results in the number of matches kept in the end.

SURF and BFMatcher have the same arguments in both versions. For additional calculations I implemented my own functions/kernels.

Is it possible that SURF or BFMatcher are implemented a bit differently in the CPU and GPU versions, which would make my results different? Or did I make some mistake while transferring the CPU code to the GPU (which I can't find)?

Thanks!

2013-03-26 10:21:24 -0600 asked a question Debugging CUDA kernels

Hi,

I've written some CUDA code of my own in addition to the OpenCV code, which I would like to debug using Parallel Nsight in Visual Studio. The CUDA kernels are in separate .cu files.

When I try to start CUDA debugging, it crashes while (or just after, I'm not sure) loading lots of modules (various .cu files). This is the error I get:

    OpenCV Error: Gpu API call (out of memory) in unknown function, file ..\.\opencv-2.4.4\modules\core\src\gpumat.cpp, line 1415

Also, a window opens: "Microsoft Visual C++ Debug Library", with "Debug error!" and "R6010 abort has been called".

Does anyone know what the issue is, and whether it is possible to debug the CUDA code? The CUDA debugger works on the CUDA samples (without OpenCV)... Is it normal for it to load lots of modules (with no symbols loaded), and is that what causes this error?

2013-03-25 06:44:25 -0600 asked a question Keeping only non-zero elements in cv::Mat

Hi,

I've got a problem trying to get rid of the zero-valued elements in a matrix. I want to remove them and keep only the non-zero elements.

Example: Mat: [1,2,3,0,0,0,4,5,6,0,0,0...] --> [1,2,3,4,5,6,...]

What would be the easiest and fastest way to do it? I have looked into SparseMat, but it does not seem to work for this. Any other solutions? If it were possible to do it directly in the GpuMat before downloading the results to cv::Mat, that would be even better.
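
What is being asked for is the classic "stream compaction" primitive (on the GPU it is usually built from a prefix sum, e.g. with Thrust's copy_if). A minimal language-neutral sketch of the desired behavior:

```python
# Sketch of the compaction asked for here: keep only the non-zero
# elements of a flat array, preserving their order. On the GPU this
# is "stream compaction", typically implemented with a prefix sum.

def compact_nonzero(values):
    return [v for v in values if v != 0]

print(compact_nonzero([1, 2, 3, 0, 0, 0, 4, 5, 6, 0, 0, 0]))  # [1, 2, 3, 4, 5, 6]
```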

Thanks!

2013-03-21 10:47:09 -0600 received badge  Scholar (source)
2013-03-21 05:36:27 -0600 commented question CUDA with OpenCV, STL transformation

Ok, so here is the code (using knnMatch):

gpu::BFMatcher_GPU matcher(NORM_L2);

matcher.knnMatch( descriptors1GPU,descriptors2GPU, matches1, 2);

Which automatically gives the results in std::vector<cv::DMatch>. Should I use knnMatchSingle then? Does it compute the results without "downloading" them to the CPU?

2013-03-21 04:45:46 -0600 received badge  Student (source)
2013-03-21 04:32:11 -0600 asked a question CUDA with OpenCV, STL transformation

Hi,

I am writing my own CUDA kernel that should operate on matches I got using BFMatcher_GPU, with knnMatch algorithm in OpenCV. Matches are stored in std::vector<std::vector<cv::DMatch>> structure.

What would be the easiest and most efficient way to use those matches in my own CUDA kernel? Is a transfer to GpuMat necessary, and how would it be done? Could the Thrust library be used somehow?
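
A common pattern before writing such a kernel is to flatten the ragged vector-of-vectors into one contiguous buffer plus per-row offsets, which can then be uploaded to the device in a single transfer. A sketch with plain numbers standing in for cv::DMatch entries:

```python
# Sketch: flatten a ragged vector-of-vectors (plain floats stand in
# for cv::DMatch entries) into one contiguous array plus row offsets,
# the usual layout before a single host-to-device copy. Row i of the
# original lives in flat[offsets[i]:offsets[i + 1]].

def flatten_ragged(rows):
    flat, offsets = [], [0]
    for row in rows:
        flat.extend(row)
        offsets.append(len(flat))
    return flat, offsets

matches = [[0.1, 0.4], [0.2, 0.5], [0.3]]
print(flatten_ragged(matches))  # ([0.1, 0.4, 0.2, 0.5, 0.3], [0, 2, 4, 5])
```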

Thanks!

2013-03-14 05:55:59 -0600 edited question OCL error - cl_khr_fp64

Hi, I've got another OpenCL problem, now using:

  • NVIDIA GTX560 Ti
  • OpenCV 2.4.4 built with OCL support

I am using surf_matcher OCL sample.
I get this kind of error:

:51:1: error: must specify '#pragma OPENCL EXTENSION cl_khr_fp64: enable' before using 'double'

F d = 0; ^  :31:11: note: instantiated from: ... etc., and it ends with:

OpenCV Error: Gpu API call (CL_BUILD_PROGRAM_FAILURE) in unknown function, file ......\modules\ocl\src\initialization.cpp, line 531

I can't change the OCL kernel file nonfree-surf.cu, which does not specify "#pragma OPENCL EXTENSION cl_khr_fp64: enable". Since I am not really experienced with OpenCL, does anyone have an idea how/where to specify that, and what the problem is?

It seems like a bug in OpenCV for this sample, because when I run the performance tests, every algorithm/sample works except SURF.