Increasing performance of vision algorithm [closed]
I have code that has to analyse roughly (640 * 480 * 2000 * 1000 * 200) values (image data: colour and depth) from an image in a single process, because of which performance is lacking. Since I'm new to multithreading (I was sleeping during that class), can multithreading increase the performance to the point where it runs smoothly?
You really have 400 million data points for every pixel? I find that hard to believe.
What data do you actually have, and what are you trying to do? Some things are easy to parallelize, and others are very difficult or impossible.
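To illustrate the distinction: per-pixel analysis with no dependencies between pixels is "embarrassingly parallel" and can be split across workers. A minimal sketch, assuming the workload can be divided by image row (all names here are illustrative, not the asker's code; note that for pure-Python CPU-bound work the GIL limits threads, so NumPy/C++ kernels or a process pool are usually needed for a real speed-up):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 640, 480

def analyse_row(y):
    # Placeholder per-pixel analysis for one image row; the real
    # depth/colour computation would go here.
    return sum((x * y) % 7 for x in range(WIDTH))

def analyse_image_serial():
    # Baseline: one row at a time in a single thread.
    return [analyse_row(y) for y in range(HEIGHT)]

def analyse_image_parallel(workers=4):
    # Rows are independent, so they can be mapped across a worker pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyse_row, range(HEIGHT)))
```

Both variants produce identical results; whether the parallel one is faster depends on how much of the per-row work releases the GIL (or on switching to `ProcessPoolExecutor`).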
Did you just tell us that you were sleeping during the class that covered concurrency?
I am using a Kinect sensor, reading the camera/depth/points topics in ROS to do computer vision tasks (currently 3D only). Here 640 * 480 is the total number of voxels/pixels, and 20000, 2000, 200... are the numbers of pixels to be analysed in each loop for each pixel. But I got a better solution for this problem from another forum: using a map to build a large multidimensional array in which only occupied parts hold a value and everything else is 0. By analysing each pixel and the corresponding pattern with respect to that pixel, the code detects a 3D object if the pixel is part of the object's surface. The work could be reduced further by using segmentation and separating differently coloured objects in the image. I can't share how the pattern matching works, but since I have found a better method I'll close this question.
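The "map where only occupied parts hold a value" idea is essentially a sparse voxel grid. A hedged sketch of that data structure (the class and method names are made up for illustration; a missing key stands for 0, so empty space costs nothing to store or scan):

```python
class SparseVoxelGrid:
    """Sparse 3D grid: only occupied voxels are stored in a dict."""

    def __init__(self):
        # (x, y, z) -> value; absent keys are implicitly 0 (empty).
        self._cells = {}

    def set(self, x, y, z, value):
        # Storing 0 removes the voxel, keeping the map sparse.
        if value:
            self._cells[(x, y, z)] = value
        else:
            self._cells.pop((x, y, z), None)

    def get(self, x, y, z):
        # Unoccupied voxels read back as 0 without being stored.
        return self._cells.get((x, y, z), 0)

    def occupied(self):
        # Iterate only over occupied voxels instead of the full
        # 640 * 480 grid, which is where the saving comes from.
        return self._cells.items()
```

Pattern matching can then loop over `occupied()` rather than every voxel, so the cost scales with the number of surface points instead of the grid size.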