traincascade parallelize grabbing of negative windows

asked 2016-03-20 08:42:17 -0500

can

Hello, I just wanted to ask: is there a way to parallelize the process of grabbing negative windows at the start of each stage in the traincascade application? That is the task that takes up a lot of time when you are training the deeper stages.

Thanks.


Comments

It is one of the most common problems around, so if you try to tackle it, feel free to keep me in the loop, because I am more than interested in a working version of this. I might have a thesis student next year trying to tackle this with OpenMP optimizations. What you could do is separate the negative data, and grab partial sets of negatives from those separated negative data sets in a parallel way. You just need to make sure that none of the algorithms reads the exact same data at the same time.
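The partitioning idea above could be sketched roughly like this (these helper names are hypothetical, not part of the actual traincascade code): give each worker a disjoint subset of the negative images and a share of the total window quota, so no two workers ever touch the same file at the same time.

```cpp
#include <cstddef>
#include <vector>

// Split the negative image indices into nThreads disjoint chunks, so that
// no two workers ever read the same image at the same time.
std::vector<std::vector<size_t>> partitionNegatives(size_t numImages,
                                                    size_t nThreads) {
    std::vector<std::vector<size_t>> chunks(nThreads);
    for (size_t i = 0; i < numImages; ++i)
        chunks[i % nThreads].push_back(i);  // round-robin keeps chunks balanced
    return chunks;
}

// Ask each worker for numNeg / nThreads windows; the remainder goes to the
// first worker so the grand total still equals numNeg.
std::vector<size_t> windowsPerThread(size_t numNeg, size_t nThreads) {
    std::vector<size_t> quota(nThreads, numNeg / nThreads);
    quota[0] += numNeg % nThreads;
    return quota;
}
```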

StevenPuttemans ( 2016-03-22 09:14:28 -0500 )

I am indeed thinking of tackling it, but right now I have a main project going on, so I need to find some free time to get this going. OpenMP was the first thing that came to mind. On first look I immediately thought that each thread needs to find numNeg/numThreads negative windows, but how could they read the same data if you divide the negative data set?

can ( 2016-03-22 10:47:21 -0500 )

They cannot. But as far as I grasp the sequential pipeline, each newly grabbed negative is checked in two ways. First, a check ensures that the negative is still classified as positive by the previous stages (which works perfectly fine in parallel, since the previous stages are fixed). Secondly, a check is performed to ensure that two subsequent negatives are different enough (so that they do not bring the same redundant info to the training), and this check is not independent across parallel processes. So we should check how this influences the actual training outcome and performance.
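The two checks described above could be sketched as follows. The callbacks are hypothetical stand-ins for the real traincascade internals: the fixed-stages check is read-only and safe to run in parallel, while the "different enough" novelty check touches shared state and has to be serialized, e.g. with an OpenMP critical section (without `-fopenmp` the pragmas are ignored and the loop simply runs serially).

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Grab up to numNeg negatives from a pool of candidate windows.
//  - passesCascade: rejection by the fixed previous stages (thread-safe).
//  - isNovel: "different enough from already accepted windows"; it reads
//    shared state, so it runs inside a critical section.
std::vector<uint64_t> grabNegatives(
        const std::vector<uint64_t>& candidates, size_t numNeg,
        const std::function<bool(uint64_t)>& passesCascade,
        const std::function<bool(uint64_t)>& isNovel) {
    std::vector<uint64_t> accepted;
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(candidates.size()); ++i) {
        if (!passesCascade(candidates[i]))  // safe in parallel: stages are fixed
            continue;
        #pragma omp critical                // novelty check mutates shared state
        {
            if (accepted.size() < numNeg && isNovel(candidates[i]))
                accepted.push_back(candidates[i]);
        }
    }
    return accepted;
}
```

The critical section is the bottleneck Steven alludes to: the expensive cascade evaluation parallelizes cleanly, but serializing the novelty check may change which windows get accepted compared to the sequential run, which is why the training outcome should be re-validated.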

StevenPuttemans ( 2016-03-23 05:49:14 -0500 )

It should improve performance, since it gets really annoying to train the deeper stages when you are aiming for a good classifier, and by good I mean an overall false alarm rate of around 4-5 negatives classified as positives out of 7-8 million negatives. Another thing I have been thinking about lately: I guess the traincascade application itself has a failsafe to prevent running out of negative samples in the middle of training. My guess is that the program randomly rotates or changes the brightness or contrast of the negative images to make the data set last longer. What do you think about this? It should be taken into account as well.
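As far as I understand the sequential grabber, the "failsafe" is simpler than transforming images: it slides a window across each negative image at varying positions and scales, and wraps back to the first image when the set is exhausted, so it never actually runs out. A minimal sketch of such a cyclic sampler (a simplified stand-in, not the actual traincascade negative reader):

```cpp
#include <cstddef>
#include <utility>

// Walks window positions inside each negative image and wraps around to the
// first image when the set is exhausted, so the supply never runs dry.
struct CyclicNegSampler {
    size_t numImages;
    size_t windowsPerImage;
    size_t img = 0;
    size_t win = 0;

    // Returns (imageIndex, windowIndex) and advances, wrapping around.
    std::pair<size_t, size_t> next() {
        std::pair<size_t, size_t> out{img, win};
        if (++win == windowsPerImage) {
            win = 0;
            img = (img + 1) % numImages;
        }
        return out;
    }
};
```

If that reading is right, a parallel version would need each worker's sampler to wrap only within its own partition, which ties back to the data-separation idea discussed above.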

can ( 2016-03-23 06:41:58 -0500 )