Increasing performance of vision algorithm [closed]

asked 2016-09-18 02:53:56 -0600

dineshlama

updated 2016-09-18 02:54:56 -0600

I have code that has to analyse roughly (640 * 480 * 2000 * 1000 * 200) variables (image data, colour and depth) from an image in a single process, because of which performance is lacking. Since I'm new to multithreading (I was sleeping during that class period): by using multithreading, can I increase the performance to a level where it will run smoothly?
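Not code from the question, but a minimal sketch of the kind of multithreading that usually helps with independent per-pixel work: splitting the image into bands of rows with OpenCV's cv::parallel_for_. The lambda overload assumes a reasonably recent OpenCV built with C++11, and analysePixel is a hypothetical stand-in for the actual per-pixel analysis, which is not shown in the question.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>

// Hypothetical stand-in for the expensive per-pixel analysis.
static float analysePixel(const cv::Mat& colour, const cv::Mat& depth, int r, int c) {
    return depth.at<float>(r, c) * 0.5f + colour.at<cv::Vec3b>(r, c)[0];
}

void analyseFrame(const cv::Mat& colour, const cv::Mat& depth, cv::Mat& out) {
    out.create(colour.rows, colour.cols, CV_32F);
    // Each worker thread processes a contiguous band of rows; the rows
    // are independent here, so no locking is needed.
    cv::parallel_for_(cv::Range(0, colour.rows), [&](const cv::Range& range) {
        for (int r = range.start; r < range.end; ++r)
            for (int c = 0; c < colour.cols; ++c)
                out.at<float>(r, c) = analysePixel(colour, depth, r, c);
    });
}
```

This only pays off when the per-pixel work really is independent; if the inner loop shares state across pixels, the algorithm has to be restructured before threads will help.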


Closed for the following reason: duplicate question, by dineshlama
Close date: 2016-12-04 08:40:25.149234

Comments


You really have 400 Million data points for every pixel? I find that hard to believe.

What data do you actually have, and what are you trying to do? Some things are easy to parallelize, and others are very difficult or impossible.

Tetragramm ( 2016-09-18 09:28:33 -0600 )

Did you just tell us that you were sleeping during the class that covered concurrency?

Der Luftmensch ( 2016-09-18 19:10:41 -0600 )

I am using a Kinect sensor with the camera/depth/points topics in ROS to do the computer vision tasks (currently 3D only). Here 640*480 is the total number of voxels/pixels, and 20000, 2000, 200, ... are the numbers of pixels to be analysed in each loop for each pixel. I got a better solution for this problem from another forum: I am using a map and creating a large multidimensional array where only the occupied parts hold values and everything else is 0. By analysing each pixel and the corresponding pattern with respect to that pixel, the code will detect a 3D object if the pixel is part of the object's surface. The work could be reduced further by using segmentation and separating differently coloured objects in the image. I can't share how the pattern matching works, but since I have found a better method I'll close this question.

dineshlama ( 2016-12-04 08:40:11 -0600 )
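Not the poster's actual code, but a minimal sketch of the sparse-storage idea described in the last comment: keep only occupied voxels in a hash map so that empty space costs nothing. The Voxel struct and the key packing below are hypothetical stand-ins, not anything taken from the original code.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical per-voxel payload: colour plus measured depth.
struct Voxel {
    uint8_t r, g, b;
    float depth;
};

// Pack integer grid coordinates into one 64-bit key so that only
// occupied voxels need to be stored; any missing key is implicitly
// empty ("0"), matching the sparse map described above.
static inline uint64_t voxelKey(uint32_t x, uint32_t y, uint32_t z) {
    return (uint64_t(x) << 40) | (uint64_t(y) << 20) | uint64_t(z);
}

int main() {
    std::unordered_map<uint64_t, Voxel> grid;

    // Insert only the voxels that are actually occupied.
    grid[voxelKey(120, 240, 15)] = {255, 0, 0, 1.25f};

    // Lookups on empty space are cheap: the key is simply absent.
    bool occupied = grid.find(voxelKey(0, 0, 0)) != grid.end();
    (void)occupied;
    return 0;
}
```

With a layout like this, the pattern matching only visits keys that are actually present instead of scanning a dense 640*480*N array, which is what makes the per-pixel loops tractable.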