Hi, @Jaykob! Unfortunately, I haven't experimented with DNN on mobile platforms, but I can tell you that TensorFlow (with the default Eigen computational backend) on CPU is definitely faster. For example, Inception-5h takes 17.9ms in TF versus 19.58ms in DNN. On the other hand, you might want to run your model on the GPU (using OpenCL, if possible) to save power, for instance. DNN is going to get some OpenCL backends (libdnn and Halide); I don't know whether TensorFlow supports OpenCL. We'd like to know which layers you still need, so we can extend the TensorFlow importer as quickly as possible. Please keep in touch about your decision. Maybe it will be the first time I run a model on a mobile device, to help you =)
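
In case it helps, here is a minimal sketch of how a timing comparison like the one above can be reproduced with OpenCV's Python DNN API. The graph file name, image path, input size, and mean values below are assumptions based on the usual Inception-5h setup, so adjust them for your own model:

    # Minimal sketch: time forward passes of a frozen TF graph in OpenCV DNN.
    # Assumed: 'tensorflow_inception_graph.pb' (Inception-5h), 224x224 input,
    # mean subtraction of 117 per channel. Adjust for your model.
    import time
    import cv2

    net = cv2.dnn.readNetFromTensorflow('tensorflow_inception_graph.pb')

    img = cv2.imread('example.jpg')
    blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224), (117, 117, 117))
    net.setInput(blob)

    net.forward()  # warm-up run, excluded from timing

    start = time.time()
    runs = 100
    for _ in range(runs):
        net.forward()
    print('mean forward time: %.2f ms' % ((time.time() - start) / runs * 1000))

A similar loop around TensorFlow's own session.run() on the same graph would give the TF-side number for comparison.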