Image segmentation in real time

asked 2018-12-13 23:35:19 -0600 by vineetjai

updated 2018-12-14 03:31:23 -0600

I am masking every frame of a real-time video on an Nvidia GPU using the dnn module. I load the weights with net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath) and compute the masks with net.forward(), but it only gives about 1 fps for masking with a single thread.
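Roughly, my loop looks like the sketch below. The file names and the output layer names ("detection_out_final", "detection_masks") are placeholders assuming a Mask R-CNN-style COCO graph, not necessarily my exact setup:

    import cv2

    weightsPath = "frozen_inference_graph.pb"          # placeholder file names
    configPath = "mask_rcnn_inception_v2_coco.pbtxt"   # placeholder file names

    net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # swapRB converts OpenCV's BGR frame to RGB for the TensorFlow graph
        blob = cv2.dnn.blobFromImage(frame, swapRB=True, crop=False)
        net.setInput(blob)
        # output layer names assumed for a Mask R-CNN-style model
        boxes, masks = net.forward(["detection_out_final", "detection_masks"])
        # ... draw the masks on `frame` ...
        cv2.imshow("masks", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()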

I have two questions:

  1. As far as I know, Nvidia GPU support is not present in the dnn module. Is it possible that more fps will be achievable in the future once Nvidia support is added?
  2. I need at least 10 fps (I can drop some frames of the real-time video). I tried threading by initializing a net in every new thread, but that is much slower than 1 fps. I know this is because I spawn a new thread per frame and initialize the net inside it (the net cannot be shared between threads, and I cannot restart a thread that has finished processing), and the initialization takes far longer than the inference itself. I also cannot pass a batch of frames to a thread, because I want to display the masks in real time (and even that would not fix the initialization problem, since I would still have to initialize the net in every new thread). Any ideas on how to make this more efficient? A sketch of the kind of setup I am aiming at is shown after this list.
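Here is a minimal sketch of a single persistent worker thread, assuming the same placeholder weights/config names and output layer names as in the sketch above: the net is created once inside the worker, frames are fed through a one-slot queue, and stale frames are dropped so the display stays real-time.

    import queue
    import threading
    import cv2

    weightsPath = "frozen_inference_graph.pb"          # placeholder file names
    configPath = "mask_rcnn_inception_v2_coco.pbtxt"   # placeholder file names

    frames = queue.Queue(maxsize=1)    # newest frame waiting for the worker
    results = queue.Queue(maxsize=1)   # newest result waiting for display

    def worker():
        # the net is initialized once, not per frame
        net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath)
        while True:
            frame = frames.get()
            if frame is None:          # sentinel to stop the worker
                break
            blob = cv2.dnn.blobFromImage(frame, swapRB=True, crop=False)
            net.setInput(blob)
            out = net.forward(["detection_out_final", "detection_masks"])
            if results.full():
                try:
                    results.get_nowait()   # discard the stale result
                except queue.Empty:
                    pass
            results.put((frame, out))

    threading.Thread(target=worker, daemon=True).start()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if not frames.full():          # drop the frame if the worker is busy
            frames.put(frame)
        if not results.empty():        # show the latest finished result
            shown, out = results.get()
            cv2.imshow("masks", shown)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    frames.put(None)
    cap.release()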

I am importing the weights from an already trained TensorFlow model into the dnn module of OpenCV 4.0-dev. Segmentation is performed on the COCO dataset, which has around 80 classes.

I can add more information whenever required.


Comments

@vineetjai, could you tell us at least which type of network is used and what problem is being solved? On what kind of objects is the segmentation performed?

dkurt (2018-12-14 03:09:10 -0600)