
patrickmarais's profile - activity

2017-08-23 04:24:20 -0600 commented question Random forest - how to force parallel training?

I was afraid that was the case :-/ It also seems that the parallelisation they had before applied only to the best-split determination, not to building each tree in parallel (although split computation is probably the most expensive part). I wonder why it was removed. I'm very disappointed.

2017-08-22 10:53:05 -0600 answered a question OpenCV 3.1 Random Forest - rtrees

They have an example in the source code which shows how to set things up. From the OpenCV git repo:

opencv/samples/cpp/letter_recog.cpp

I used this as a basis for my random forest implementation. You need to encapsulate your training data in the TrainData class and use method calls to set each parameter. I didn't use the 2.x versions, but that seems to be the big change?
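
For reference, here is a minimal sketch of that setup. The parameter values and the tiny synthetic data set are placeholders of my own, not taken from the sample; adjust them for your problem:

    #include <iostream>
    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    using namespace cv;
    using namespace cv::ml;

    int main()
    {
        // Tiny synthetic training set: 4 samples, 2 features, 2 classes.
        float s[] = { 0,0,  0,1,  1,0,  1,1 };
        int   r[] = { 0, 0, 1, 1 };
        Mat samples(4, 2, CV_32F, s);    // one row per sample
        Mat responses(4, 1, CV_32S, r);  // integer labels -> classification

        // Encapsulate the data in a TrainData object.
        Ptr<TrainData> tdata = TrainData::create(samples, ROW_SAMPLE, responses);

        // In 3.x the old CvRTParams struct is gone; each parameter has a setter.
        Ptr<RTrees> rtrees = RTrees::create();
        rtrees->setMaxDepth(10);
        rtrees->setMinSampleCount(2);
        rtrees->setActiveVarCount(0);  // 0 -> sqrt(number of features)
        rtrees->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 0)); // 100 trees

        rtrees->train(tdata);

        // Predict a single CV_32F row.
        Mat query = (Mat_<float>(1, 2) << 1, 1);
        std::cout << "predicted label: " << rtrees->predict(query) << std::endl;
        return 0;
    }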

2017-08-22 10:33:40 -0600 asked a question Random forest - how to force parallel training?

Hi,

In the version 2.4 documentation (http://docs.opencv.org/2.4/modules/ml... ) for the CvRTrees::train() method, it is stated that TBB is used for (multicore) acceleration. Has this been dropped? I am using v3.1.0, and although I built OpenCV with WITH_TBB=ON (and OpenMP support, just in case) and I link against libtbb, the train method still runs on a single core. Note that I am not using cmake for my build: I explicitly link against opencv_core and opencv_ml, and the code does what it should with these. I also added libtbb, as noted above, but to no avail.
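
For what it's worth, a minimal check along these lines should confirm what the library was actually built with (getBuildInformation() and getNumThreads() are standard core functions):

    #include <iostream>
    #include <opencv2/core.hpp>

    int main()
    {
        // Dumps the CMake configuration the library was built with,
        // including whether TBB was enabled.
        std::cout << cv::getBuildInformation() << std::endl;

        // Number of threads OpenCV's own parallel loops would use.
        std::cout << "getNumThreads(): " << cv::getNumThreads() << std::endl;
        return 0;
    }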

I have checked libopencv_core.so and libopencv_ml.so - libtbb is referenced in both .so files. I have the current up-to-date version of TBB (installed via apt).

I had a look at the random forest train method and I see no reference to any kind of parallelisation construct (e.g. parallel_for)... but why would this be removed? RF is embarrassingly parallel and seems a perfect candidate.
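
In the meantime, the workaround I am considering is to train several smaller forests myself via cv::parallel_for_ and vote across them at prediction time. This is just a sketch, not anything the library provides: ParallelForestTrain is my own helper, and I am assuming (without having verified it) that RTrees::train is safe to call concurrently as long as each thread owns its own RTrees and TrainData instances:

    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    using namespace cv;
    using namespace cv::ml;

    // Hypothetical helper: trains one forest per loop index. Each worker
    // builds its own TrainData view, so no state is shared across threads
    // (assuming RTrees::train itself is re-entrant for separate instances).
    class ParallelForestTrain : public ParallelLoopBody
    {
    public:
        ParallelForestTrain(std::vector<Ptr<RTrees> >& forests,
                            const Mat& samples, const Mat& responses)
            : forests_(forests), samples_(samples), responses_(responses) {}

        void operator()(const Range& range) const override
        {
            for (int i = range.start; i < range.end; ++i)
            {
                Ptr<TrainData> td = TrainData::create(samples_, ROW_SAMPLE, responses_);
                forests_[i]->train(td);
            }
        }

    private:
        std::vector<Ptr<RTrees> >& forests_;
        const Mat& samples_;
        const Mat& responses_;
    };

    void trainInParallel(const Mat& samples, const Mat& responses, int totalTrees)
    {
        int K = getNumThreads();  // one sub-forest per available thread
        std::vector<Ptr<RTrees> > forests(K);
        for (int i = 0; i < K; ++i)
        {
            forests[i] = RTrees::create();
            // Split the tree budget so the total ensemble size stays the same.
            forests[i]->setTermCriteria(
                TermCriteria(TermCriteria::MAX_ITER, totalTrees / K, 0));
        }
        parallel_for_(Range(0, K), ParallelForestTrain(forests, samples, responses));
        // At prediction time, predict with each forest and majority-vote.
    }

Since each tree bootstraps its own sample anyway, K forests of N/K trees voting together should behave much like one forest of N trees - but I'd still much prefer a proper fix in the library itself.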

If there is something I am missing, please help... I really can't get by with only a single core for my huge data sets.