If you can process a 360x60 image in 5ms, a 1280x800 image should take roughly 47 times as long, since (1280 x 800) / (360 x 60) ≈ 47: about 235ms. I guess you had a typo and meant around 200ms; other factors (like JPEG decompression) can also affect the runtime.

You outline 5 steps, and I see a problem with the order of the operations (I'm assuming you use color images as input, since you are converting with CV_BGR2GRAY).

Step number 3, cvCvtColor(), should be your first step: dilating/eroding (steps 1 & 2) usually doesn't make much sense on non-binary images, and working on 1-channel images is much faster than working on 3-channel ones (see the sketch below).
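
A minimal sketch of the reordered pipeline, using the old C API that your function names suggest; the file name, kernel parameters and header paths are assumptions, not taken from your code:

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main(void)
    {
        /* Placeholder input; in practice this is whatever frame you already have. */
        IplImage *src  = cvLoadImage("input.jpg", CV_LOAD_IMAGE_COLOR);
        IplImage *gray = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);

        /* Convert first, so every later step works on a single channel. */
        cvCvtColor(src, gray, CV_BGR2GRAY);

        /* Erode/dilate now touch a third of the data they would on the BGR image. */
        cvErode(gray, gray, NULL, 1);
        cvDilate(gray, gray, NULL, 1);

        /* ... thresholding / further processing on gray ... */

        cvReleaseImage(&gray);
        cvReleaseImage(&src);
        return 0;
    }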

Step 4 uses cvSmooth(), and depending on the method (CV_MEDIAN, for instance) it can really tax your system: avoid it if you can.
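
If you do need some smoothing, a box or Gaussian filter is usually much cheaper than a median filter. Continuing the sketch above (the 3x3 kernel size is an assumption):

    IplImage *smoothed = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);

    /* CV_MEDIAN sorts a pixel neighborhood for every output pixel and is slow:
       cvSmooth(gray, smoothed, CV_MEDIAN, 3, 0, 0, 0); */

    /* A plain 3x3 box filter (or CV_GAUSSIAN) is typically much cheaper. */
    cvSmooth(gray, smoothed, CV_BLUR, 3, 3, 0, 0);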

Now, regarding optimization: a 1280x800 color image takes about 3MB of memory (1280 x 800 x 3 channels), and whatever you do will be constrained by the memory access speed of your system. Operations are faster when all the data involved is held in the CPU cache. It would be better to design your algorithm so that you can divide the image into several horizontal strips, where each strip's data (input plus all intermediate images) fits in the cache, and apply all of your processing steps to each strip separately (see the sketch below).
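
A rough sketch of the strip idea, again with the C API and reusing the gray image from above; the strip count is an assumption you would tune to your cache size, and neighborhood operations (erode/dilate/smooth) would need a few rows of overlap between strips to avoid seams:

    #define N_STRIPS 8   /* tune so one strip plus its temporaries fits in cache */

    CvSize size = cvGetSize(gray);
    int strip_h = size.height / N_STRIPS;

    for (int i = 0; i < N_STRIPS; i++)
    {
        /* CvMat header over a horizontal band of the existing data; no copy is made. */
        CvMat strip;
        int y = i * strip_h;
        int h = (i == N_STRIPS - 1) ? size.height - y : strip_h;
        cvGetSubRect(gray, &strip, cvRect(0, y, size.width, h));

        /* Run the whole per-strip pipeline while the band is still in cache. */
        cvErode(&strip, &strip, NULL, 1);
        cvDilate(&strip, &strip, NULL, 1);
        /* ... smoothing, thresholding, etc. on &strip ... */
    }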

You could also use a few threads to process the strips in parallel. Keep in mind that you still want most of the data to stay in the CPU cache, so with more threads you may need more (and smaller) strips so that everything still fits.
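
For example, the strip loop above parallelizes almost directly with OpenMP (this assumes each strip is processed independently and that you compile with OpenMP enabled, e.g. -fopenmp with gcc):

    /* Each iteration works on its own horizontal band, so the loop is trivially parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N_STRIPS; i++)
    {
        CvMat strip;
        int y = i * strip_h;
        int h = (i == N_STRIPS - 1) ? size.height - y : strip_h;
        cvGetSubRect(gray, &strip, cvRect(0, y, size.width, h));

        cvErode(&strip, &strip, NULL, 1);
        cvDilate(&strip, &strip, NULL, 1);
        /* ... rest of the per-strip pipeline ... */
    }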

Cheers.