Does OpenCV::DNN support 16 bit float arithmetic?

Recently I used dnn::shrinkCaffeModel to convert a Caffe network to half-precision floating point. When I ran the converted network, the forward pass took about the same time as the original network (in fact it was a little slower).
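
For reference, this is roughly how I converted and timed the two models. The file names and the 224x224 input size are placeholders, not my actual network:

    #include <opencv2/core.hpp>
    #include <opencv2/dnn.hpp>
    #include <iostream>

    int main()
    {
        // Placeholder file names; substitute your own prototxt/caffemodel.
        const std::string proto   = "model.prototxt";
        const std::string model   = "model.caffemodel";
        const std::string model16 = "model_fp16.caffemodel";

        // Convert the stored weights to half precision.
        cv::dnn::shrinkCaffeModel(model, model16);

        // Load both networks; the prototxt is shared.
        cv::dnn::Net net32 = cv::dnn::readNetFromCaffe(proto, model);
        cv::dnn::Net net16 = cv::dnn::readNetFromCaffe(proto, model16);

        // Dummy input; adjust to the shape your network expects.
        cv::Mat img(224, 224, CV_8UC3, cv::Scalar(127, 127, 127));
        cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224));

        cv::dnn::Net* nets[] = { &net32, &net16 };
        cv::TickMeter tm;
        for (cv::dnn::Net* net : nets)
        {
            net->setInput(blob);
            net->forward();              // warm-up pass
            tm.reset();
            tm.start();
            for (int i = 0; i < 10; ++i)
                net->forward();
            tm.stop();
            std::cout << "avg forward: " << tm.getTimeMilli() / 10 << " ms\n";
        }
        return 0;
    }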

I expected the new model to be faster, but it was not. Can anyone explain this?

My guess is that the DNN module does not support 16-bit floating-point arithmetic (it only converts weights between 16-bit and 32-bit), or that my laptop does not support it, but I'm not sure.
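
In case it is relevant: my assumption is that half precision would only affect arithmetic if an FP16-capable target is selected, and that on the default CPU target the shrunk weights are just expanded back to 32-bit. This is the kind of thing I would try, assuming DNN_TARGET_OPENCL_FP16 is the intended way to request it (enum names taken from opencv2/dnn.hpp in recent OpenCV versions):

    #include <opencv2/dnn.hpp>

    int main()
    {
        cv::dnn::Net net =
            cv::dnn::readNetFromCaffe("model.prototxt", "model_fp16.caffemodel");
        // Assumption: only an FP16-capable target (e.g. OpenCL FP16) actually
        // computes in half precision; otherwise weights are widened to float.
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL_FP16);
        return 0;
    }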