so, the answer is: YES, (latest) opencv CAN read your tensorflow model.
using the sample tf importer:
./tf_inception -i=space_shuttle.jpg
Output blob shape 1 x 1008 x 56467584 x 0
Inference time, ms: 356.165
Best class: #234 'space shuttle'
Probability: 99.9972%
if it fails for you, you'll probably have to build from the github master src. remember, this is bleeding edge, and the 3.3 release might be missing some parts.
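(in case you want to do the same from your own code instead of the sample, here's a minimal sketch. the graph filename, the 224x224 input size, the mean of 117 and the "input" / "softmax2" blob names are assumptions taken from the usual inception5h setup -- adjust them to whatever your graph expects:)

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    using namespace cv;

    // load the frozen tensorflow graph (filename assumed from the inception5h download)
    dnn::Net net = dnn::readNetFromTensorflow("tensorflow_inception_graph.pb");

    // build a 1x3x224x224 input blob, subtracting the usual mean of 117
    Mat img = imread("space_shuttle.jpg");
    Mat blob = dnn::blobFromImage(img, 1.0, Size(224, 224), Scalar(117, 117, 117), true);

    net.setInput(blob, "input");        // input blob name (assumed)
    Mat prob = net.forward("softmax2"); // the last layer in the list below

    // pick the class with the highest probability
    Point classId;
    double confidence;
    minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classId);
    std::cout << "Best class: #" << classId.x << ", probability: "
              << confidence * 100 << "%" << std::endl;
    return 0;
}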
we can also add some lines to see the internal structure (tf models unfortunately do not come with a human-readable prototxt):
vector<String> lnames = net.getLayerNames();
for (auto n : lnames) {
    cerr << n << endl;
}
as you can see from the list below, the "inception" layers are modelled as a sequence of "ordinary" conv, relu and pool layers, so almost any inception version should work (as long as it does not add unknown layers). there's also a small sketch after the list that prints each layer's type next to its name:
conv2d0_pre_relu/conv
conv2d0
maxpool0
localresponsenorm0
conv2d1_pre_relu/conv
conv2d1
conv2d2_pre_relu/conv
conv2d2
localresponsenorm1
maxpool1
mixed3a_1x1_pre_relu/conv
mixed3a_1x1
mixed3a_3x3_bottleneck_pre_relu/conv
mixed3a_3x3_bottleneck
mixed3a_3x3_pre_relu/conv
mixed3a_3x3
mixed3a_5x5_bottleneck_pre_relu/conv
mixed3a_5x5_bottleneck
mixed3a_5x5_pre_relu/conv
mixed3a_5x5
mixed3a_pool
mixed3a_pool_reduce_pre_relu/conv
mixed3a_pool_reduce
mixed3a
mixed3b_1x1_pre_relu/conv
mixed3b_1x1
mixed3b_3x3_bottleneck_pre_relu/conv
mixed3b_3x3_bottleneck
mixed3b_3x3_pre_relu/conv
mixed3b_3x3
mixed3b_5x5_bottleneck_pre_relu/conv
mixed3b_5x5_bottleneck
mixed3b_5x5_pre_relu/conv
mixed3b_5x5
mixed3b_pool
mixed3b_pool_reduce_pre_relu/conv
mixed3b_pool_reduce
mixed3b
maxpool4
mixed4a_1x1_pre_relu/conv
mixed4a_1x1
mixed4a_3x3_bottleneck_pre_relu/conv
mixed4a_3x3_bottleneck
mixed4a_3x3_pre_relu/conv
mixed4a_3x3
mixed4a_5x5_bottleneck_pre_relu/conv
mixed4a_5x5_bottleneck
mixed4a_5x5_pre_relu/conv
mixed4a_5x5
mixed4a_pool
mixed4a_pool_reduce_pre_relu/conv
mixed4a_pool_reduce
mixed4a
mixed4b_1x1_pre_relu/conv
mixed4b_1x1
mixed4b_3x3_bottleneck_pre_relu/conv
mixed4b_3x3_bottleneck
mixed4b_3x3_pre_relu/conv
mixed4b_3x3
mixed4b_5x5_bottleneck_pre_relu/conv
mixed4b_5x5_bottleneck
mixed4b_5x5_pre_relu/conv
mixed4b_5x5
mixed4b_pool
mixed4b_pool_reduce_pre_relu/conv
mixed4b_pool_reduce
mixed4b
mixed4c_1x1_pre_relu/conv
mixed4c_1x1
mixed4c_3x3_bottleneck_pre_relu/conv
mixed4c_3x3_bottleneck
mixed4c_3x3_pre_relu/conv
mixed4c_3x3
mixed4c_5x5_bottleneck_pre_relu/conv
mixed4c_5x5_bottleneck
mixed4c_5x5_pre_relu/conv
mixed4c_5x5
mixed4c_pool
mixed4c_pool_reduce_pre_relu/conv
mixed4c_pool_reduce
mixed4c
mixed4d_1x1_pre_relu/conv
mixed4d_1x1
mixed4d_3x3_bottleneck_pre_relu/conv
mixed4d_3x3_bottleneck
mixed4d_3x3_pre_relu/conv
mixed4d_3x3
mixed4d_5x5_bottleneck_pre_relu/conv
mixed4d_5x5_bottleneck
mixed4d_5x5_pre_relu/conv
mixed4d_5x5
mixed4d_pool
mixed4d_pool_reduce_pre_relu/conv
mixed4d_pool_reduce
mixed4d
mixed4e_1x1_pre_relu/conv
mixed4e_1x1
mixed4e_3x3_bottleneck_pre_relu/conv
mixed4e_3x3_bottleneck
mixed4e_3x3_pre_relu/conv
mixed4e_3x3
mixed4e_5x5_bottleneck_pre_relu/conv
mixed4e_5x5_bottleneck
mixed4e_5x5_pre_relu/conv
mixed4e_5x5
mixed4e_pool
mixed4e_pool_reduce_pre_relu/conv
mixed4e_pool_reduce
mixed4e
maxpool10
mixed5a_1x1_pre_relu/conv
mixed5a_1x1
mixed5a_3x3_bottleneck_pre_relu/conv
mixed5a_3x3_bottleneck
mixed5a_3x3_pre_relu/conv
mixed5a_3x3
mixed5a_5x5_bottleneck_pre_relu/conv
mixed5a_5x5_bottleneck
mixed5a_5x5_pre_relu/conv
mixed5a_5x5
mixed5a_pool
mixed5a_pool_reduce_pre_relu/conv
mixed5a_pool_reduce
mixed5a
mixed5b_1x1_pre_relu/conv
mixed5b_1x1
mixed5b_3x3_bottleneck_pre_relu/conv
mixed5b_3x3_bottleneck
mixed5b_3x3_pre_relu/conv
mixed5b_3x3
mixed5b_5x5_bottleneck_pre_relu/conv
mixed5b_5x5_bottleneck
mixed5b_5x5_pre_relu/conv
mixed5b_5x5
mixed5b_pool
mixed5b_pool_reduce_pre_relu/conv
mixed5b_pool_reduce
mixed5b
avgpool0
head0_pool
head0_bottleneck_pre_relu/conv
head0_bottleneck
head0_bottleneck/reshape
nn0_pre_relu/matmul
nn0
nn0/reshape
softmax0_pre_activation/matmul
softmax0
head1_pool
head1_bottleneck_pre_relu/conv
head1_bottleneck
head1_bottleneck/reshape
nn1_pre_relu/matmul
nn1
nn1/reshape
softmax1_pre_activation/matmul
softmax1
avgpool0/reshape
softmax2_pre_activation/matmul
softmax2
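and here's the promised sketch that also prints each layer's type (same context as the snippet above; it assumes Net::getLayer() can look a layer up by its name and that dnn::Layer exposes a public type string, as recent dnn versions do):

// print "<layer name> : <layer type>" for every layer in the net,
// so you can verify that only conv / relu / pool / lrn / concat etc. show up
vector<String> lnames = net.getLayerNames();
for (size_t i = 0; i < lnames.size(); i++) {
    Ptr<dnn::Layer> layer = net.getLayer(lnames[i]); // look the layer up by its name
    cerr << lnames[i] << " : " << layer->type << endl;
}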
I don't know which version of TensorFlow is used in OpenCV 3.3.0, but I can say it is not version 1.3.0, because its release date is 28 days ago (September 12 - 28 = August 15), while OpenCV 3.3 came out on August 4.
opencv is not using tensorflow at all.
(you can still load models trained with tf into opencv's dnn module, so the question is more like: does the dnn module have all the layers you need, and is the importer able to read them?)
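(a quick way to check both points for your own .pb is to just try the import and look at what the importer says -- a minimal sketch, where "my_retrained_inception.pb" is only a placeholder for your file:)

#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    try {
        // the tf importer will complain here about layers it cannot parse
        cv::dnn::Net net = cv::dnn::readNetFromTensorflow("my_retrained_inception.pb");
        std::cout << (net.empty() ? "import failed (empty net)" : "import ok") << std::endl;
    } catch (const cv::Exception &e) {
        std::cerr << "dnn import error: " << e.what() << std::endl;
    }
    return 0;
}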
Apparently there are several versions of the Inception model too. Which version of Inception works best on OpenCV 3.3.0? (I don't know which Inception version the OpenCV for Unity plugin uses, but it wraps OpenCV 3.3.0.)
there is no support for unity from opencv at all, we cannot help you with that.
(and, imho, that way you're only adding useless complexity. rather try a simple c++ or python program, and see what errors you get there.)
I am not asking for any Unity support. Unity is not the issue here. Which version of Inception does OpenCV 3.3.0 run (i.e. is capable of running, based on having the required layers)? Is it 1, 2 or 3? Easy question, not in any way connected to Unity. As for running image detection inside a game engine, it opens up exciting new opportunities. And please note that my re-trained Inception network works in TensorFlow 1.3.0 (there is no error in it). There is nothing wrong with it; it just probably wants something OpenCV cannot yet deliver. Now I also know that OpenCV 3.3.0 can run some version of TensorFlow Inception (here is a zip of the .pb and .txt files even, https://storage.googleapis.com/downlo...), but which one is it?
k.k., apologies for being overly harsh on the unity thing. (and it's a good idea to add your files!)
we can simply check your model.
So this git file was submitted on Jun 26, 2017: https://github.com/opencv/opencv/blob... This is probably the file whose Inception version I need. Would hate to train a network with the wrong version. :-D
just for the record - which inception version is it?
@berak, Inception-5h might also be known as GoogLeNet v2, but I'm not entirely sure about that. Recently we had some experience with various TensorFlow graph manipulations to make them work in DNN (e.g. the MobileNet classification model). So I think we could run other networks from the Inception family, but it'll require some time. Besides, MobileNet seems to be more efficient.
@dkurt, ah, thanks a lot. (i originally thought the link above was the "self-trained" net, but it isn't.)
we should probably call on you for help much earlier in cases like this, i guess ;)