How to increase the processing speed of YOLO running on QEMUx86-64

asked 2020-02-18 03:52:36 -0600

aquarian-source

Hello all. I am running YOLOv3 on QEMUx86-64 using the opencv::dnn module. At the moment it takes 5 minutes, even for a very small model. How can I accelerate this application?


2 answers

answered 2020-02-19 14:58:29 -0600

Consider using the MobileNet version of YOLO. This contributor provides a good brief on the MobileNet architecture, and you can find the paper here. A simple Google search yields a lot of readily available options. According to this benchmark, it has a 217 millisecond inference time on an i5. Keep in mind that what you gain in speed, you lose in accuracy.
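The speedup comes from MobileNet's depthwise separable convolutions. As a rough illustration of the cost model from the paper (the layer sizes below are made up for the example, and the helper names are mine):

```python
# Multiply-accumulate counts for one conv layer, following the MobileNet
# paper's cost model. Layer sizes below are illustrative only.
def standard_conv_macs(k, m, n, f):
    """k x k kernel, m input channels, n output channels, f x f feature map."""
    return k * k * m * n * f * f

def depthwise_separable_macs(k, m, n, f):
    """Depthwise k x k conv followed by a 1 x 1 pointwise conv."""
    return k * k * m * f * f + m * n * f * f

std = standard_conv_macs(3, 32, 64, 112)
sep = depthwise_separable_macs(3, 32, 64, 112)
# The ratio works out to exactly 1/n + 1/k^2 -- here roughly an 8x saving.
print(std, sep, round(sep / std, 3))
```

That per-layer saving, repeated across the whole network, is where the 217 ms figure comes from.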


Comments


That is great. It is definitely worth a try on my embedded device. Just one question: does this also use the opencv::dnn module?

aquarian-source ( 2020-02-19 16:11:02 -0600 )

Yes, it does. The DNN module just loads the model weights and performs the respective operations. MobileNets still perform convolutions, subsampling, etc., and these are operations already supported by the DNN module. The setup should still be the same regardless, but refer to this tutorial for guidance.

eshirima ( 2020-02-20 06:50:14 -0600 )
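The setup described above can be sketched as follows, assuming Darknet-format .cfg/.weights files; the function name and paths are placeholders, not from the tutorial:

```python
def detect(config_path, weights_path, frame, size=320):
    """Load a Darknet-format model with cv2.dnn and run one forward pass.

    config_path/weights_path are placeholders -- point them at your own
    .cfg and .weights files. `frame` is a BGR image as a numpy array.
    """
    import cv2  # imported here so the sketch can be read without OpenCV installed

    net = cv2.dnn.readNetFromDarknet(config_path, weights_path)
    # Normalize to [0, 1], resize, and swap BGR -> RGB as YOLO expects.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(size, size), swapRB=True, crop=False)
    net.setInput(blob)
    # YOLO has several output layers; forward through all of them.
    return net.forward(net.getUnconnectedOutLayersNames())
```

Only the model files change between YOLOv3 and a MobileNet variant; the loading and inference calls stay the same.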
answered 2020-02-18 12:54:46 -0600

holger

updated 2020-02-18 12:56:43 -0600

Well, if you are in the lucky situation that you have an NVIDIA GPU available, you could compile the OpenCV dnn module with CUDA support. Or you could use YOLO directly: this repo https://github.com/AlexeyAB/darknet provides .dll/.so builds, so you can include it in your app.

Or you could have a strong machine on the network and have your client communicate with that server, which does the heavy lifting for the client.

Or you could try converting the YOLO model and running it on special hardware like an NVIDIA Jetson, Google Coral, or Intel Movidius.
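If you do go the CUDA route from the first suggestion, you need an OpenCV build (4.2 or newer) compiled with CUDA support; after that, moving the dnn module onto the GPU is just a backend/target selection. A minimal sketch, with a hypothetical helper name:

```python
def select_backend(net, use_cuda=False):
    """Pick a cv2.dnn backend/target for an already-loaded net.

    Requires an OpenCV build with CUDA support for the CUDA path;
    otherwise dnn falls back to the default CPU path.
    """
    import cv2  # imported here so the sketch can be read without OpenCV installed

    if use_cuda:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
    else:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    return net
```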


Comments


Or you could run the MobileNet version of YOLOv3.

eshirima ( 2020-02-19 14:24:13 -0600 )

Hmm, this will convert the model to an optimized form, I assume? Never tried this; thanks for the hint. I have read about people doing this, but I was always afraid/lazy.

holger ( 2020-02-19 14:31:49 -0600 )

No NVIDIA support for now, but I can try these. How about the OpenCL framework? And how would I use it in an opencv::dnn-based application?

aquarian-source ( 2020-02-19 16:15:26 -0600 )

I never had any success with OpenCL, unfortunately, so I cannot say anything about it. I wasted too much time on this stuff ^^

holger ( 2020-02-20 03:48:22 -0600 )
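For reference on the OpenCL question: asking cv2.dnn to try OpenCL does not require a rebuild, since it is just a target selection on the stock backend. A hedged sketch (the helper name is mine), with no promises about actual speedup on any given driver:

```python
def try_opencl(net):
    """Ask cv2.dnn to run an already-loaded net via OpenCL kernels.

    DNN_TARGET_OPENCL uses OpenCV's built-in OpenCL path on the default
    backend; if no usable OpenCL device is found, inference falls back
    to the CPU. Gains vary a lot by GPU and driver, as noted above.
    """
    import cv2  # imported here so the sketch can be read without OpenCV installed

    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)
    return net
```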

Question Tools

1 follower

Stats

Asked: 2020-02-18 03:52:36 -0600

Seen: 433 times

Last updated: Feb 19 '20