
Random forest - how to force parallel training?


In the version 2.4 documentation for the CvRTrees::train() method it is stated that TBB is used for (multicore) acceleration. Has this been dropped? I am using v3.1.0, and although I built OpenCV with WITH_TBB=ON (and OpenMP support, just in case) and I link against libtbb, the train method still runs on a single core. Note that I am not using CMake for my build: I explicitly link against opencv_core and opencv_ml, and the code does what it should with these. I also added libtbb, as noted above, but to no avail.

I have checked, and libtbb is referenced in both .so's. I have the current up-to-date version of TBB (installed via apt).

I had a look at the random forest train method and I see no reference to any kind of parallelisation construct (e.g. parallel_for). But why would this have been removed? RF is embarrassingly parallel and seems a perfect candidate.

If there is something I am missing, please help... I really can't get by with only a single core for my huge data sets.