OpenCV Python GPU support? Or faster variance convolution?

Hi,

I was wondering: do the current OpenCV Python bindings have GPU support yet?
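
The closest thing I'm aware of in the Python bindings is the transparent OpenCL path (the T-API). A minimal sketch of checking for it, assuming an OpenCV 3.x+ build compiled with OpenCL enabled (a stock install may well report False):

    import cv2

    # Query transparent-OpenCL (T-API) support. This assumes an OpenCV
    # build compiled with OpenCL; many prebuilt packages are not.
    print(cv2.ocl.haveOpenCL())   # True if an OpenCL runtime/device was found
    cv2.ocl.setUseOpenCL(True)    # opt in to OpenCL-backed code paths
    print(cv2.ocl.useOpenCL())    # confirm it is actually enabled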

Or is there a faster way to calculate convolved (windowed) variance? This is what I'm doing at the moment:

    import os
    import cv2
    import numpy
    from PIL import Image

    MaxFrom3DArray = numpy.amax(imgArray, axis=0)    # where imgArray is a 3D array
    Back2ImMax = Image.fromarray(MaxFrom3DArray, 'P')
    Back2ImMax.save(os.path.join(MaxFromMulti, filename), "TIFF")

    ForVariance = cv2.imread(os.path.join(MaxFromMulti, filename), cv2.IMREAD_UNCHANGED)

    wlen = 40

    def winVar(img, wlen):
        # Windowed variance via E[x^2] - (E[x])^2 over a wlen x wlen box.
        img = img.astype(numpy.float32)  # avoid uint8 wrap-around in img*img
        wmean, wsqrmean = (cv2.boxFilter(x, -1, (wlen, wlen),
                                         borderType=cv2.BORDER_REFLECT)
                           for x in (img, img * img))
        return wsqrmean - wmean * wmean

    windowVar = winVar(ForVariance, wlen)
    numpy.set_printoptions(threshold=numpy.inf)  # threshold='nan' breaks on newer numpy
    print(windowVar)
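
If GPU offload would help here, a minimal sketch of the same calculation through the T-API (again assuming an OpenCL-enabled OpenCV 3.x+ build; winVarUMat is just my illustrative name): wrapping the image in cv2.UMat lets the same boxFilter calls dispatch to OpenCL, and .get() pulls the result back into numpy:

    import cv2
    import numpy

    def winVarUMat(img, wlen):
        # Same E[x^2] - (E[x])^2 trick on a cv2.UMat, so boxFilter can
        # run via OpenCL when a device is available. UMat has no * operator,
        # hence cv2.multiply / cv2.subtract instead of numpy arithmetic.
        u = cv2.UMat(img.astype(numpy.float32))
        wmean = cv2.boxFilter(u, -1, (wlen, wlen),
                              borderType=cv2.BORDER_REFLECT)
        wsqrmean = cv2.boxFilter(cv2.multiply(u, u), -1, (wlen, wlen),
                                 borderType=cv2.BORDER_REFLECT)
        return cv2.subtract(wsqrmean, cv2.multiply(wmean, wmean)).get()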

This takes hours in pure Python, and still takes ages with Python multiprocessing or multi-threading, with all CPU cores maxed out. The same computation takes a fraction of a second, and hardly any CPU, when written serially in C#. Doesn't something seem a bit off about that?

Thanks in advance, TWP