Does OpenCV::DNN support 16-bit float arithmetic?
Recently I used dnn::shrinkCaffeModel to convert a Caffe network to half-precision floating point. When I ran the new network, the forward-pass time was about the same as (in fact slightly slower than) the original network's. A minimal sketch of what I did is below.
I expected the new model to be faster, but it was not. Can anyone explain this?
My guess is that either DNN does not support 16-bit floating-point arithmetic (it only converts between 16-bit and 32-bit), or my laptop does not support it, but I'm not sure.
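Roughly what I'm doing (a minimal sketch; the file names and input size are placeholders, not my actual model):

```cpp
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/dnn.hpp>

int main()
{
    // Convert the fp32 Caffe weights to fp16 (file names are placeholders).
    cv::dnn::shrinkCaffeModel("model.caffemodel", "model_fp16.caffemodel");

    // Load the shrunk weights with the original deploy prototxt.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt",
                                                 "model_fp16.caffemodel");

    // Dummy input; adjust the size to whatever the network expects.
    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(127));
    net.setInput(cv::dnn::blobFromImage(img));

    net.forward();  // warm-up run (includes one-time initialization)

    // Time several forward passes and report the mean.
    cv::TickMeter tm;
    for (int i = 0; i < 10; ++i)
    {
        tm.start();
        net.forward();
        tm.stop();
    }
    std::cout << "mean forward time: "
              << tm.getTimeMilli() / tm.getCounter() << " ms" << std::endl;
    return 0;
}
```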
@Rbt, yes. OpenCV can import models with fp16 weights (Caffe/TensorFlow) or uint8 weights (TensorFlow), but they are converted to fp32 at import time. All of the computations are done in single-precision floats (fp32). There is a PR with FP16 support for Intel's GPUs: https://github.com/opencv/opencv/pull.... Read more in OpenCV's evolution proposal: https://github.com/opencv/opencv/issu....
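To illustrate: a sketch of how half-precision inference would be requested once that PR is merged (DNN_TARGET_OPENCL_FP16 is the target the PR introduces; it is not available in current releases, and today every target computes in fp32):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // fp16 weights are widened to fp32 at import time, so currently all
    // arithmetic runs in single precision regardless of the model file.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt",
                                                 "model_fp16.caffemodel");

    // Once the FP16 PR lands, half precision on Intel GPUs would be
    // requested explicitly (needs OpenCL; otherwise OpenCV falls back
    // to the fp32 path):
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL_FP16);
    return 0;
}
```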
Whoops, I accidentally deleted @Rbt's response "Thanks, this answered my question", so I'm reposting it here.
@dkurt, how about reposting your comment as an answer?
@opalmirror, sure. Done!
Thanks! @Rbt, if you can mark the answer as correct, I think we're all set. :)