What version of tensorflow does OpenCV 3.3.0 use ?

asked 2017-09-13 08:06:32 -0600

I wonder, what version of tensorflow does OpenCV 3.3.0 use ?

I ask this because I just trained a model (based on Inception) inside a Docker container using TensorFlow 1.3.0. When I then tried to test it with the TensorFlow for Unity plugin (which wraps/uses OpenCV 3.3.0), the Unity3D editor simply crashed.

I simply swapped my new .pb and .txt files in place of the ones in the working Inception-model sample code, and Unity3D crashed when I ran it. :-D

So is there perhaps a version mismatch between the TF version OpenCV 3.3.0 supports and TensorFlow 1.3? And what is the correct version of TF to use for training models for OpenCV 3.3.0?


Comments

I don't know which version of TensorFlow is used in OpenCV 3.3.0, but I can say it is not 1.3.0: TensorFlow 1.3.0 was released about 28 days ago (September 12 minus 28 days = August 15), while OpenCV 3.3 came out on August 4, so it predates that release.

LBerger (2017-09-13 08:21:10 -0600)

opencv is not using tensorflow at all.

(you can still load models trained with tf into opencv's dnn module, so the question is more like: does the dnn module have all the layers you need, and is the importer able to read them).

berak (2017-09-13 10:21:22 -0600)

Apparently there are several versions of the Inception model, too. Which version of Inception works best with OpenCV 3.3.0? (I don't know which Inception version the OpenCV for Unity plugin uses, but it wraps OpenCV 3.3.0.)

ableRex358 (2017-09-14 01:54:33 -0600)

there is no support for unity from opencv at all; we cannot help you with that.

(and, imho, you're only adding useless complexity that way. rather try a simple c++ or python program and see what errors you get there)

berak (2017-09-14 02:07:12 -0600)

I am not asking for any Unity support; Unity is not the issue here. Which version of Inception does OpenCV 3.3.0 run (that is, which is it capable of running, based on having the required layers)? Is it 1, 2 or 3? An easy question, not in any way connected to Unity. As for running image detection inside a game engine, it opens up exciting new opportunities. And please note that my re-trained Inception network works in TensorFlow 1.3.0 (there is no error in it). There is nothing wrong with it; it just probably wants something OpenCV cannot yet deliver. Now I also know that OpenCV 3.3.0 can run some version of TensorFlow Inception (here is even a zip of the .pb and .txt files: https://storage.googleapis.com/downlo...), but which one is it?

ableRex358 (2017-09-14 06:42:58 -0600)

ok, apologies for being overly harsh on the unity thing. (and a good idea to add your files!)

we can simply check your model

berak (2017-09-14 06:51:55 -0600)

So this git file was submitted on Jun 26, 2017: https://github.com/opencv/opencv/blob... This is probably the file whose Inception version I need. I would hate to train a network with the wrong version. :-D

ableRex358 (2017-09-14 07:07:20 -0600)

just for the record, which inception version is it?

berak (2017-09-14 07:20:39 -0600)

@berak, Inception-5h might also be known as GoogLeNet v2, but I'm not entirely sure about that. Recently we had some experience manipulating different TensorFlow graphs to make them work in DNN (e.g. the MobileNet classification model). So I think we could run other networks from the Inception family, but it'll require some time. Besides, MobileNet seems to be more efficient.

dkurt (2017-09-14 07:57:15 -0600)

@dkurt, ah, thanks a lot. (i originally thought the link above was the "self-trained" net, but it isn't)

we should probably call you to help much earlier in cases like this, i guess ;)

berak (2017-09-14 08:29:44 -0600)

1 answer


answered 2017-09-14 07:12:56 -0600 by berak

updated 2017-09-14 08:11:23 -0600

so, the answer is: YES, (latest) opencv CAN read your tensorflow model.

using the sample tf importer:

./tf_inception -i=space_shuttle.jpg
Output blob shape 1 x 1008 x 56467584 x 0
Inference time, ms: 356.165
Best class: #234 'space shuttle'
Probability: 99.9972%

if it fails for you, you'll probably have to build from the github master sources; remember, this is bleeding edge, and the 3.3 release might be missing some parts.

we can also add some lines to see the internal structure (tf models unfortunately do not come with a human-readable prototxt):

// assumes: using namespace cv; using namespace std;
vector<String> lnames = net.getLayerNames();
for (const auto &n : lnames) {
    cerr << n << endl;
}

as you can see, the "inception" layers are modelled as a sequence of "ordinary" conv, relu, and pool layers, so almost any inception version should work (as long as it does not add unknown layers):

conv2d0_pre_relu/conv
conv2d0
maxpool0
localresponsenorm0
conv2d1_pre_relu/conv
conv2d1
conv2d2_pre_relu/conv
conv2d2
localresponsenorm1
maxpool1
mixed3a_1x1_pre_relu/conv
mixed3a_1x1
mixed3a_3x3_bottleneck_pre_relu/conv
mixed3a_3x3_bottleneck
mixed3a_3x3_pre_relu/conv
mixed3a_3x3
mixed3a_5x5_bottleneck_pre_relu/conv
mixed3a_5x5_bottleneck
mixed3a_5x5_pre_relu/conv
mixed3a_5x5
mixed3a_pool
mixed3a_pool_reduce_pre_relu/conv
mixed3a_pool_reduce
mixed3a
mixed3b_1x1_pre_relu/conv
mixed3b_1x1
mixed3b_3x3_bottleneck_pre_relu/conv
mixed3b_3x3_bottleneck
mixed3b_3x3_pre_relu/conv
mixed3b_3x3
mixed3b_5x5_bottleneck_pre_relu/conv
mixed3b_5x5_bottleneck
mixed3b_5x5_pre_relu/conv
mixed3b_5x5
mixed3b_pool
mixed3b_pool_reduce_pre_relu/conv
mixed3b_pool_reduce
mixed3b
maxpool4
mixed4a_1x1_pre_relu/conv
mixed4a_1x1
mixed4a_3x3_bottleneck_pre_relu/conv
mixed4a_3x3_bottleneck
mixed4a_3x3_pre_relu/conv
mixed4a_3x3
mixed4a_5x5_bottleneck_pre_relu/conv
mixed4a_5x5_bottleneck
mixed4a_5x5_pre_relu/conv
mixed4a_5x5
mixed4a_pool
mixed4a_pool_reduce_pre_relu/conv
mixed4a_pool_reduce
mixed4a
mixed4b_1x1_pre_relu/conv
mixed4b_1x1
mixed4b_3x3_bottleneck_pre_relu/conv
mixed4b_3x3_bottleneck
mixed4b_3x3_pre_relu/conv
mixed4b_3x3
mixed4b_5x5_bottleneck_pre_relu/conv
mixed4b_5x5_bottleneck
mixed4b_5x5_pre_relu/conv
mixed4b_5x5
mixed4b_pool
mixed4b_pool_reduce_pre_relu/conv
mixed4b_pool_reduce
mixed4b
mixed4c_1x1_pre_relu/conv
mixed4c_1x1
mixed4c_3x3_bottleneck_pre_relu/conv
mixed4c_3x3_bottleneck
mixed4c_3x3_pre_relu/conv
mixed4c_3x3
mixed4c_5x5_bottleneck_pre_relu/conv
mixed4c_5x5_bottleneck
mixed4c_5x5_pre_relu/conv
mixed4c_5x5
mixed4c_pool
mixed4c_pool_reduce_pre_relu/conv
mixed4c_pool_reduce
mixed4c
mixed4d_1x1_pre_relu/conv
mixed4d_1x1
mixed4d_3x3_bottleneck_pre_relu/conv
mixed4d_3x3_bottleneck
mixed4d_3x3_pre_relu/conv
mixed4d_3x3
mixed4d_5x5_bottleneck_pre_relu/conv
mixed4d_5x5_bottleneck
mixed4d_5x5_pre_relu/conv
mixed4d_5x5
mixed4d_pool
mixed4d_pool_reduce_pre_relu/conv
mixed4d_pool_reduce
mixed4d
mixed4e_1x1_pre_relu/conv
mixed4e_1x1
mixed4e_3x3_bottleneck_pre_relu/conv
mixed4e_3x3_bottleneck
mixed4e_3x3_pre_relu/conv
mixed4e_3x3
mixed4e_5x5_bottleneck_pre_relu/conv
mixed4e_5x5_bottleneck
mixed4e_5x5_pre_relu/conv
mixed4e_5x5
mixed4e_pool
mixed4e_pool_reduce_pre_relu/conv
mixed4e_pool_reduce
mixed4e
maxpool10
mixed5a_1x1_pre_relu/conv
mixed5a_1x1
mixed5a_3x3_bottleneck_pre_relu/conv
mixed5a_3x3_bottleneck
mixed5a_3x3_pre_relu/conv
mixed5a_3x3
mixed5a_5x5_bottleneck_pre_relu/conv
mixed5a_5x5_bottleneck
mixed5a_5x5_pre_relu/conv
mixed5a_5x5
mixed5a_pool
mixed5a_pool_reduce_pre_relu/conv
mixed5a_pool_reduce
mixed5a
mixed5b_1x1_pre_relu/conv
mixed5b_1x1
mixed5b_3x3_bottleneck_pre_relu/conv
mixed5b_3x3_bottleneck
mixed5b_3x3_pre_relu/conv
mixed5b_3x3
mixed5b_5x5_bottleneck_pre_relu/conv
mixed5b_5x5_bottleneck
mixed5b_5x5_pre_relu/conv
mixed5b_5x5
mixed5b_pool
mixed5b_pool_reduce_pre_relu/conv
mixed5b_pool_reduce
mixed5b
avgpool0
head0_pool
head0_bottleneck_pre_relu/conv
head0_bottleneck
head0_bottleneck/reshape
nn0_pre_relu/matmul
nn0
nn0/reshape
softmax0_pre_activation/matmul
softmax0
head1_pool
head1_bottleneck_pre_relu/conv
head1_bottleneck
head1_bottleneck/reshape
nn1_pre_relu/matmul
nn1
nn1/reshape
softmax1_pre_activation/matmul
softmax1
avgpool0/reshape
softmax2_pre_activation/matmul
softmax2

Comments


Thanks! I eventually found some more info on the Google-released Inception-5h model. This model seems to be commonly and widely used in mobile-platform demos. Apparently, however, its only original documentation was the name of the zip file it came in (https://github.com/Hvass-Labs/TensorF...). This model runs fast (fractions of a second) on mobile platforms, while other, later Inception versions can apparently take up to 8 seconds. So my effort to run my latest Inception files (v3, I think) on mobile would at best have resulted in a very laggy demo. Instead, I should try to load this Inception-5h file for retraining (but only when recognizing similar objects) and run the retrained result. Sorry for getting frustrated in the process, and thanks for the help. ;-)

ableRex358 (2017-09-15 00:41:44 -0600)

But out of scientific and coding curiosity, here is the Inception v3 model (the latest example, pulled from git a couple of days ago) that I was trying to load into OpenCV: https://drive.google.com/file/d/0B4ZT...

ableRex358 (2017-09-15 01:32:17 -0600)

there's actually much work being done currently to get smaller/faster networks running on mobile in opencv, like SSD nets and squeezenet. you also want to watch the outcome of this year's GSoC projects related to that.

berak (2017-09-15 01:43:43 -0600)