What's the best way to speed up filtering a very large image with multiprocessing?

asked 2015-09-24 21:13:32 -0600

ChaoyuanYeh

I'd like to perform Gaussian filtering on a very large image (80,000 x 60,000) in memory using multiple cores. After some reading, I concluded that there are two possible routes: 1) use the multiprocessing module and split the filtering task across multiple workers, or 2) since NumPy supports OpenMP, rely on the parallelism provided by an OpenMP-enabled NumPy build. (A rough sketch of route 1 is included after the questions below.)

  1. Which way is better?

If option 2 is better, then

  1. Should I build NumPy with OpenMP, OpenCV with OpenMP, or both?
  2. How do I actually use it in a Python script?
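
To make route 1 concrete, here is roughly what I have in mind: split the image into horizontal strips with enough overlap to cover the kernel radius, blur each strip in a worker process, then crop and stitch the results. The kernel size, sigma, worker count, and file name below are placeholders, not fixed requirements.

    import numpy as np
    import cv2
    from multiprocessing import Pool

    KSIZE = 31          # Gaussian kernel size (placeholder)
    SIGMA = 5.0         # Gaussian sigma (placeholder)
    PAD = KSIZE // 2    # overlap so strip borders match a single-pass blur

    def blur_strip(strip):
        # Filter one padded strip in a worker process.
        return cv2.GaussianBlur(strip, (KSIZE, KSIZE), SIGMA)

    def parallel_gaussian(img, workers=8):
        h = img.shape[0]
        edges = np.linspace(0, h, workers + 1, dtype=int)
        # Each strip carries PAD extra rows on both sides (clamped to the image),
        # so interior pixels see the full kernel support.
        strips = [img[max(e0 - PAD, 0):min(e1 + PAD, h)]
                  for e0, e1 in zip(edges[:-1], edges[1:])]
        # Note: each strip is pickled and copied to its worker, which costs
        # time and memory at this image size; shared memory would avoid that.
        with Pool(workers) as pool:
            blurred = pool.map(blur_strip, strips)
        # Crop the padding off each result and stack the strips back together.
        pieces = []
        for (e0, e1), b in zip(zip(edges[:-1], edges[1:]), blurred):
            top = e0 - max(e0 - PAD, 0)
            pieces.append(b[top:top + (e1 - e0)])
        return np.vstack(pieces)

    if __name__ == '__main__':
        img = cv2.imread('huge.tif', cv2.IMREAD_GRAYSCALE)  # hypothetical file
        result = parallel_gaussian(img, workers=8)

Is this a reasonable approach, or would an OpenMP-enabled build make it unnecessary?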