
drKzs's profile - activity

2020-04-02 03:45:23 -0600 received badge  Popular Question (source)
2018-07-31 01:16:58 -0600 received badge  Famous Question (source)
2017-08-19 02:05:41 -0600 received badge  Nice Question (source)
2017-01-25 17:24:45 -0600 received badge  Notable Question (source)
2016-09-22 10:29:41 -0600 commented question Improve ORB for rescaling ? (opencv)

Thinking about it, I'd have thought it depended more on the descriptor than on the detector?

2016-09-22 10:09:18 -0600 commented question Improve ORB for rescaling ? (opencv)

Well, your answer is clear to me: if I understand correctly, it depends on the detector, so there's nothing I can do except change detectors, right? So (still if I understand correctly), your answer is no :) Thanks

2016-09-22 08:49:09 -0600 asked a question Improve ORB for rescaling ? (opencv)

Hi,

I have to work on an image-matching application. I'm not specialized in image processing at all, but I do my best :)

I'm currently replacing SIFT with another descriptor (and detector).

I have used ORB/ORB, and the results are pretty good, except when there is some rescaling... That matches what I read about ORB, so I'm not surprised.

I am wondering if there are techniques (an algorithm, a concept, a test...) that could improve this behaviour, the way the ratio test or cross-matching improve matching?

(I'm trying BRISK, but the results I get at the moment are worse than with ORB, though I still have to play with the parameters.)

Thanks for any idea/advice :)
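One knob worth checking, sketched here under the OpenCV 2.4 C++ API with a hypothetical image file ("scene.png"): ORB builds its own internal image pyramid, and its nlevels and scaleFactor constructor parameters control how wide a scale range the detector covers. This is only an illustration of the tuning in question, not a guaranteed fix for rescaling.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Default ORB uses 8 pyramid levels with a 1.2 scale factor;
    // more levels (here 12) widen the range of scales the detector sees.
    cv::ORB orb(1000 /*nfeatures*/, 1.2f /*scaleFactor*/, 12 /*nlevels*/);

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    orb(img, cv::Mat(), keypoints, descriptors);  // detect + describe in one call
    return 0;
}
```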

2016-09-14 06:49:35 -0600 asked a question Brisk & opencv 2.4.8 (or more)

Hi,

I have decided to use binary feature matching in an application, and after reading here and there, I finally decided on ORB/ORB or BRISK/BRISK (and maybe the FREAK descriptor in a further step). I don't want to switch to OpenCV 3.x for the moment, so I temporarily eliminated AKAZE from my choices. I currently use OpenCV 2.4.8.

I read a post of end 2014 : link text

which led me to : link text

If I understand correctly (which is not certain ;), the BRISK implementation, at least for detection, isn't "stable", so if I stick to OpenCV 2.4.8 I'd better use ORB/ORB (or ORB/FREAK, if that's possible)? Note: I'm not talking about a quality comparison of the features, only about the "stability" of the OpenCV version.

So I just wonder :

  • Was it just an isolated point of view, or a "real" issue noticed by many people?

  • Did any later OpenCV 2.4.x version fix the issue?

Thanks in advance :)
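For reference, mixing the ORB detector with the FREAK descriptor is possible in the 2.4 features2d module. A minimal sketch, assuming a hypothetical image file ("scene.png"); it only illustrates the ORB/FREAK combination mentioned above:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.png", CV_LOAD_IMAGE_GRAYSCALE);

    // ORB as the detector...
    cv::OrbFeatureDetector detector(500 /*nfeatures*/);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(img, keypoints);

    // ...FREAK as the descriptor; both live in 2.4's features2d module.
    cv::FREAK extractor;
    cv::Mat descriptors;
    extractor.compute(img, keypoints, descriptors);
    return 0;
}
```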

2016-08-23 07:58:32 -0600 received badge  Popular Question (source)
2014-01-21 11:18:39 -0600 commented question parallelization with openmp or tbb

Yes, I could toggle parallelization on/off where it suits me best, once I decide which way is better :) Now I have to find the "over-parallelized" moments...

2014-01-17 04:00:11 -0600 asked a question parallelization with openmp or tbb

Hello,

I was wondering if there is a recommended parallel library to use with OpenCV (I mean, to make use of the library's parallelized code)?

I tried TBB and it works fine; I was just wondering if it would be even better with OpenMP?

I realize this sounds more like an "OpenMP vs TBB performance" question, but I just want to know if people have noticed any differences.

And a related question: is there documentation showing which OpenCV features are parallelized, or should I look at the code? I'm parallelizing my app, and I wouldn't want to overload the CPUs by calling an OpenCV-parallelized method inside my own parallelized loop. (Well, it would just use the available CPU resources, but I want to leave some cores free for other processes.) :)

But maybe the question I should answer is "is it more efficient to use a parallelized library and parallelize your own code at the same time?"... Maybe it's better to choose one way or the other and not mix them? What do you think?

Thanx :)
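One relevant control here, shown as a small sketch: OpenCV exposes cv::setNumThreads / cv::getNumThreads, which cap the library's internal parallelism (whatever backend it was built with, e.g. TBB or OpenMP) so it doesn't compete with an application-level parallel loop.

```cpp
#include <opencv2/core/core.hpp>
#include <cstdio>

int main() {
    // Limit OpenCV's internal worker pool so the library's own
    // parallel regions leave cores free for the application's threads.
    cv::setNumThreads(2);
    std::printf("OpenCV worker threads: %d\n", cv::getNumThreads());
    return 0;
}
```

Passing 0 or 1 effectively disables OpenCV's internal parallel regions, which is one way to keep all parallelism at the application level instead of mixing the two.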

2013-11-27 03:41:07 -0600 commented question cascade classifier memory

Yes, I'll have a look at the code; in the meantime I'll delete my models each time. I'll post if I find a clue in the code. Thank you all for your answers :)

2013-11-26 09:08:35 -0600 commented question cascade classifier memory

What I posted is the only code (except for the path values)... I just load 1000 models (in this sample it's always the same model loaded 1000 times, which is easier to do ;) and loop over them to run detection on the same image. I'm just wondering why I lose so much memory, since I don't keep any data or results, as you can see in the code.

2013-11-26 08:47:12 -0600 commented question cascade classifier memory

I moved the code out of my methods. It's not beautiful code, nor the most efficient; it's just a sample to show how I noticed the problem. (Once again, I struggle with formatting; maybe I shouldn't post code in a comment?)

    vector<cv::CascadeClassifier*> myModels;
    for (int i = 0; i < 1000; i++) {
        myModels.push_back(new cv::CascadeClassifier());
        if (!myModels.back()->load(modelPath)) {
            cerr << "ERROR when loading model" << endl;
            exit(1);
        }
    }

    for (uint i = 0; i < myModels.size(); i++) {
        cv::Mat image = cv::imread(inputImagePath);
        std::vector<cv::Rect> objects;
        cv::Size size;
        size.width = 0;
        size.height = 0;
        myModels[i]->detectMultiScale(image, objects, 1.1, 2, 0 | cv::CASCADE_SCALE_IMAGE, size);
        // delete(myModels[i]); // if uncommented, memory is OK
    }

2013-11-26 07:28:40 -0600 commented question cascade classifier memory

Well, being sure would be very presumptuous :) What is sure is that I don't use the Rect objects at the moment; I only count them. They are stored in a container declared inside a "myDetect" method, so the container is destroyed at the end of the method, which I checked. It's not a container of references, so logically I don't need to clean up any objects... Another sure thing: when I comment out the call to detectMultiScale within myDetect, it's OK (of course there's no computation then, but it was just to see). I can test with different images; that's a good idea.

2013-11-25 11:45:50 -0600 commented question cascade classifier memory

I had already tried the options --leak-check=full --track-origins=yes, because I was first looking for a memory leak in our own code, but Valgrind reported no errors.

I have done it again with the option you suggested, --show-reachable=yes, with 360 models. The result is:

    LEAK SUMMARY:
       definitely lost: 0 bytes in 0 blocks
       indirectly lost: 0 bytes in 0 blocks
       possibly lost: 0 bytes in 0 blocks
       still reachable: 55,629 bytes in 978 blocks
       suppressed: 0 bytes in 0 blocks

And the heap summary is:

    in use at exit: 55,629 bytes in 978 blocks
    total heap usage: 289,714 allocs, 288,736 frees, 1,163,377,807 bytes allocated

I'll keep looking into my code to be sure.

2013-11-25 10:41:56 -0600 received badge  Supporter (source)
2013-11-25 10:30:50 -0600 commented question cascade classifier memory

You mean I'd better delete my CascadeClassifier object after using it, and reload it if necessary? (Sorry if I misunderstand too.)

My purpose was to keep my 1000 (well, not really that many; it's just an example) CascadeClassifiers in memory in order to avoid reloading them each time I need them. So I was wondering whether the detectMultiScale method "only" does computation and returns results, or whether it keeps data inside the CascadeClassifier object (which would explain why my memory usage grows).

2013-11-25 09:33:24 -0600 received badge  Student (source)
2013-11-25 09:05:24 -0600 asked a question cascade classifier memory

Hello,

I have a simple, stupid question :)

We're using a CascadeClassifier to load LBP models (I say "we" because I don't write the core code, but it impacts me ;)). We use the detectMultiScale method.

I made some tests to check load, CPU and memory usage, etc. on our application, and I noticed something. Let's imagine we have a lot of models (1000 or more :D):

I can load them and keep them in a list/vector, whatever; memory is OK.

Then, with a sample image, I call the detectMultiScale method for each classifier: the memory grows very, very quickly. But if I delete each classifier in the loop after using it, the memory stays OK.

So I was wondering: does the CascadeClassifier keep data in its instance after detectMultiScale? The problem could well be in our own code (oh yes!), but since I can't find anything there, I'm asking about the library too...

Thanx a lot :)
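The workaround described above (delete each classifier after use) can also be written with a stack-allocated classifier whose buffers are released every iteration. A sketch with hypothetical file names ("sample.png", "lbp_model.xml"); it trades memory for model-reload time, and is only an illustration of the pattern, not a diagnosis of where the memory goes:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>
#include <vector>

int main() {
    cv::Mat image = cv::imread("sample.png");
    // Hypothetical: the same model path repeated, as in the original sample.
    std::vector<std::string> modelPaths(1000, "lbp_model.xml");

    for (size_t i = 0; i < modelPaths.size(); i++) {
        // Stack-allocated: everything the classifier holds is freed at the
        // end of each iteration, matching the "delete each loop" workaround.
        cv::CascadeClassifier model;
        if (!model.load(modelPaths[i]))
            return 1;

        std::vector<cv::Rect> objects;
        model.detectMultiScale(image, objects, 1.1, 2,
                               0 | cv::CASCADE_SCALE_IMAGE, cv::Size());
    }
    return 0;
}
```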

2013-09-25 11:22:10 -0600 received badge  Editor (source)
2013-09-25 11:20:57 -0600 commented answer CvMat imread alpha

Oops, I made a mistake: I wanted to go from CV_LOAD_IMAGE_UNCHANGED to CV_LOAD_IMAGE_COLOR... (I've edited my post.)

So I guess I need to convert it to a 3-channel RGB image with the convertTo method... What I am wondering is which type to use, since I don't know which type is used when calling imread({myFile}, CV_LOAD_IMAGE_COLOR)... So you seem to be saying it's CV_8UC3.

2013-09-25 09:55:51 -0600 asked a question CvMat imread alpha

Hello,

I was wondering how to obtain,

  • from a cv::Mat obtained by imread({myFile}, CV_LOAD_IMAGE_UNCHANGED),
  • a cv::Mat identical to the one obtained by imread({myFile}, CV_LOAD_IMAGE_COLOR).

If I understand correctly, I need to remove the alpha channel from my cv::Mat? So do I have to remove one channel of the matrix by iterating over the columns and rows?

(I should point out, if it wasn't obvious, that I don't know anything about image processing ;))

Thanx a lot :)
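No per-pixel iteration is needed for this: cv::cvtColor can drop the alpha channel in one call. A sketch with a hypothetical file name ("with_alpha.png"), using the 2.4-era constants from the question:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main() {
    // CV_LOAD_IMAGE_UNCHANGED keeps the alpha channel of e.g. a BGRA PNG.
    cv::Mat bgra = cv::imread("with_alpha.png", CV_LOAD_IMAGE_UNCHANGED);

    cv::Mat bgr;
    if (bgra.channels() == 4)
        cv::cvtColor(bgra, bgr, CV_BGRA2BGR);  // drop alpha, keep 8-bit BGR
    else
        bgr = bgra;  // already 3-channel (or grayscale)
    return 0;
}
```

For a plain 8-bit image with alpha, the result should match loading with CV_LOAD_IMAGE_COLOR, which yields CV_8UC3.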

2013-09-25 09:40:49 -0600 commented answer cvMat data size

ok, thanx :)

2013-09-24 11:15:27 -0600 asked a question cvMat data size

Hello,

Sorry if the question has already been asked, but I'm not sure of the simple answer: how do I determine the size of the cv::Mat data field?

I've read this page : link text

So rows*step is enough; no need to take elemSize or depth into account?

Thanx a lot :)