Retrained TensorFlow MobileNet-SSD using the dnn module

asked 2018-01-12 09:52:17 -0600

XenonHawk

updated 2018-01-12 12:12:32 -0600

Hello guys!

I retrained MobileNet-SSD using the TensorFlow Object Detection API, and am now trying to load the frozen inference graph with the dnn module function:

net = cv.dnn.readNetFromTensorflow(prototxt, weights)

where I use https://github.com/opencv/opencv_extr... as the 'pbtxt', with num_classes set to my retrained number of classes, and the frozen graph as the 'weights'.

However, the output is just a bunch of random boxes. Inference is correct in pure TensorFlow, but that is too slow for my application.
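For context, the SSD detection head in the dnn module returns a blob of shape [1, 1, N, 7], where each row is [imageId, classId, confidence, left, top, right, bottom] with coordinates normalized to [0, 1]; decoding this layout incorrectly can also produce random-looking boxes. A sketch with synthetic data (pure NumPy, no model required; the threshold value is my own choice):

```python
import numpy as np

def decode_detections(detections, frame_w, frame_h, conf_threshold=0.5):
    """Turn the dnn SSD output blob [1, 1, N, 7] into pixel-space boxes.

    Each row is [imageId, classId, confidence, left, top, right, bottom],
    with box coordinates normalized to [0, 1].
    """
    boxes = []
    for det in detections[0, 0]:
        confidence = float(det[2])
        if confidence < conf_threshold:
            continue
        class_id = int(det[1])
        x1 = int(det[3] * frame_w)
        y1 = int(det[4] * frame_h)
        x2 = int(det[5] * frame_w)
        y2 = int(det[6] * frame_h)
        boxes.append((class_id, confidence, (x1, y1, x2, y2)))
    return boxes

# Synthetic output: one confident detection, one below the threshold.
detections = np.array([[[[0, 1, 0.75, 0.25, 0.25, 0.5, 0.5],
                         [0, 2, 0.25, 0.0, 0.0, 1.0, 1.0]]]],
                      dtype=np.float32)
print(decode_detections(detections, 300, 300))
# -> [(1, 0.75, (75, 75, 150, 150))]
```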

My theory is that there is a mismatch between the graph definitions used by the TensorFlow Object Detection API and those expected by the OpenCV dnn module, but I would love to hear whether anyone has experience with this problem or suggestions on how to solve it.

I am using OpenCV 3.4.0 with the contrib modules.

The problem seems to be similar to this.

Kind regards, XenonHawk


Comments

can you show how you set up your input blob? (maybe it's something as simple as bgr <--> rgb)

berak ( 2018-01-12 10:21:36 -0600 )

My code is just a slight modification of https://github.com/opencv/opencv/blob...

so my blob setup is:

blob = cv.dnn.blobFromImage(frame, inScaleFactor, (inWidth, inHeight), (meanVal, meanVal, meanVal), swapRB)
net.setInput(blob)
detections = net.forward()

XenonHawk ( 2018-01-12 10:30:17 -0600 )

I might be entirely on the wrong track, but what happens if you swap bgr -> rgb (either with the flag, or your own preprocessing)?

(the TensorFlow version of it seems to use rgb, see here)

berak ( 2018-01-12 10:42:05 -0600 )

Thank you for the suggestion, but unfortunately it did not solve my problem.

My problem is similar to the one described here

XenonHawk ( 2018-01-12 12:11:25 -0600 )