DNN using multiple images works with TensorFlow models but fails with Darknet models

asked 2019-10-13 03:52:03 -0600

Shay Weissman

OpenCV => 3.4.5
Operating System / Platform => Windows 7, Windows 10
Compiler => Microsoft VS2019, C++

Detailed description

I am working on license plate detection. I have two models: an SSD MobileNet (TensorFlow) and a Darknet tiny YOLO v3. Both work fine with OpenCV inference when a single image is passed to blobFromImages. When I add a second image to the vector of matrices, the TensorFlow model's postprocessing still works, but the Darknet model's fails. In the sample postprocessing code for Darknet models, the results depend on outputs[i].rows and outputs[i].cols; with two images, the returned outputs[i].rows and outputs[i].cols are -1. If this is expected, how do I extract the results from the output matrices? With the TensorFlow model the output matrix rows and cols are always -1, but extracting the results does not depend on them.
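For reference, below is a minimal sketch of how I would try to unpack a 3-D output blob (rows and cols are -1 for any cv::Mat with more than two dimensions, so the shape has to come from out.dims and out.size[]). It assumes each output blob has shape [batchSize, numDetections, 5 + numClasses]; I have not confirmed that layout, and the function name is only for illustration.

    // Sketch: reading batched Darknet/YOLO outputs whose Mats have more than 2 dims.
    // ASSUMPTION: each output blob is [batchSize, numDetections, 5 + numClasses];
    // verify with out.dims and out.size[] before relying on this layout.
    #include <opencv2/core.hpp>
    #include <opencv2/dnn.hpp>
    #include <vector>

    void postprocessBatch(const std::vector<cv::Mat>& outs, int batchSize,
                          float confThreshold)
    {
        // Per-image results, to be fed into NMS afterwards as in the single-image sample.
        std::vector<std::vector<cv::Rect2f>> boxes(batchSize);
        std::vector<std::vector<float>>      confidences(batchSize);
        std::vector<std::vector<int>>        classIds(batchSize);

        for (const cv::Mat& out : outs)
        {
            CV_Assert(out.dims == 3 && out.isContinuous()); // rows/cols are -1 here
            const int numDet   = out.size[1];
            const int numAttrs = out.size[2];  // 4 box values + objectness + class scores

            for (int b = 0; b < batchSize; ++b)
            {
                // 2-D header (numDet x numAttrs) over image b's slice of the blob
                cv::Mat slice(numDet, numAttrs, CV_32F, (void*)out.ptr<float>(b));

                for (int i = 0; i < slice.rows; ++i)
                {
                    const float* det = slice.ptr<float>(i);
                    cv::Mat scores = slice.row(i).colRange(5, numAttrs);
                    cv::Point classIdPoint;
                    double confidence;
                    cv::minMaxLoc(scores, nullptr, &confidence, nullptr, &classIdPoint);
                    if (confidence > confThreshold)
                    {
                        // det[0..3] are center-x, center-y, width, height relative to
                        // the network input, as in the standard YOLO sample code.
                        float cx = det[0], cy = det[1], w = det[2], h = det[3];
                        boxes[b].push_back(cv::Rect2f(cx - w / 2, cy - h / 2, w, h));
                        confidences[b].push_back((float)confidence);
                        classIds[b].push_back(classIdPoint.x);
                    }
                }
            }
        }
        // ... for each image b: scale boxes[b] to that image's size and apply
        // cv::dnn::NMSBoxes, exactly as in the single-image sample ...
    }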


Comments

3.4.5 -- please update it.

berak ( 2019-10-13 04:04:02 -0600 )

I checked with OpenCV 3.4.7; it is the same.

Shay Weissman ( 2019-10-13 04:37:45 -0600 )

I think that Darknet models do not work well with multiple images and that there is a bug here. Can anyone confirm?

Shay Weissman ( 2019-10-16 02:34:43 -0600 )