
How to load a Keras model built with the TensorFlow backend in OpenCV

asked 2018-09-06 01:54:34 -0600 by TripleS

updated 2018-09-06 03:03:49 -0600 by berak

Disclaimer: I posted the same question here and on Stack Overflow.

I'm trying to deploy a model from Keras to OpenCV C++.

I trained a simple CNN on the MNIST dataset (my example is a modified Keras example). After training, I extracted the TensorFlow graph from the Keras backend and saved both the model and the graph.

# grab the TensorFlow session that backs the Keras model
tensorFlowSession = K.get_session()
# save the model in SavedModel format and dump the graph definition as text
tf.saved_model.simple_save(tensorFlowSession, newpath + "/TensorFlow", inputs={"x": x}, outputs={"y": y})
tf.train.write_graph(tensorFlowSession.graph_def, newpath + "/TensorFlow", "trainGraph_def.pbtxt")

Then I tried to load the saved model using OpenCV in Python (I started with OpenCV in Python, but I get a similar error with OpenCV in C++).

net = cv.dnn.readNet(newpath + '/TensorFlow/' + 'saved_model.pb', newpath + '/TensorFlow/' + 'trainGraph.pbtxt')

The problem is that OpenCV fails to load the TensorFlow graph; I get this error:

[libprotobuf ERROR /io/opencv/3rdparty/protobuf/src/google/protobuf/wire_format_lite.cc:629] String field 'tensorflow.FunctionDef.Node.ret' contains invalid UTF-8 data when parsing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.

Saving a TensorFlow graph and loading it with OpenCV should be rather straightforward, so what am I missing here? My code is attached below. Any help would be appreciated.

'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import tensorflow as tf
import datetime
import cv2 as cv
from pathlib import Path
import numpy as np
from os import listdir
from os.path import isfile, join
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

batch_size = 128
num_classes = 10
epochs = 1

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

Comments

I don't know, I really just want to help, not troll.

But why would you train a model with Keras and evaluate it with OpenCV? I mean, you already have Keras (with potential GPU support) in place. I personally think this is a "bad" idea - I wouldn't spend any time on this.

Well, you could try reading https://www.tensorflow.org/guide/save... and see whether you spot any patterns related to the exception you get from OpenCV. The error message from OpenCV is reasonably informative here.

holger ( 2018-09-06 02:09:41 -0600 )

I want to load the trained model using OpenCV in C++. My goal is to migrate from Python to a purely C++ platform.

TripleS ( 2018-09-06 04:29:18 -0600 )

I really would avoid this - you will probably end up with a big monolith (not always bad). In the beginning I wanted to use OpenCV as a cross-DNN platform too. In the end my setup was very complicated (a lot of dependencies) and it didn't perform well (OpenCL was no good for me - slow prediction times).

Instead, I am using a microservice architecture. That means I have a main app (Spring Boot) which gets predictions via REST from my Python app (Flask with Keras on top of TensorFlow). I can only recommend that approach, as it isolates things and you get no stupid side effects. "Separation of concerns."

holger ( 2018-09-06 06:37:36 -0600 )
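For anyone wondering what such a split looks like in practice, here is a minimal sketch of the Python side (Flask serving a trained Keras model over REST). The route, port, payload format and the mnist_cnn.h5 file name are made up for illustration; adapt them to your own setup.

import numpy as np
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)
# hypothetical model file, e.g. produced earlier with model.save("mnist_cnn.h5")
model = load_model("mnist_cnn.h5")

@app.route("/predict", methods=["POST"])
def predict():
    # expect a JSON body like {"image": [[...], ...]} holding a 28x28 grayscale image
    data = request.get_json(force=True)
    pixels = np.array(data["image"], dtype="float32").reshape(1, 28, 28, 1) / 255.0
    probs = model.predict(pixels)[0]
    return jsonify({"digit": int(np.argmax(probs)),
                    "probabilities": probs.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The main application (C++, Java, whatever) then just POSTs an image to /predict and parses the JSON response, so none of the DNN stack has to be linked into it.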

1 answer


answered 2018-09-06 05:38:06 -0600 by kbarni

updated 2018-09-06 05:39:46 -0600

Keras->TensorFlow->OpenCV conversion is still shaky. The Keras->TensorFlow conversion is not very optimal, so it adds lots of layers that OpenCV has difficulty understanding (especially the Flatten operation).

To create a network that OpenCV can understand, you first need to freeze the exported TensorFlow graph and optimize it for inference. See this issue for code describing how to do it correctly.

The link above also gives some hints on eliminating the problems introduced by the Flatten layer.
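For illustration, here is a rough sketch of the freeze-and-optimize step (untested; the node names, the output file name, and the placement of set_learning_phase are assumptions to adapt to your own graph; the linked issue has the authoritative version):

import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_util
from tensorflow.python.tools import optimize_for_inference_lib

# ideally set this before building/loading the model so dropout etc. are wired for inference
K.set_learning_phase(0)

sess = K.get_session()
input_name = model.input.op.name    # e.g. 'conv2d_1_input' (check your graph)
output_name = model.output.op.name  # e.g. 'dense_2/Softmax'

# bake the trained weights into the graph as constants ("freezing")
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [output_name])

# strip training-only ops and fold what can be folded for inference
optimized = optimize_for_inference_lib.optimize_for_inference(
    frozen, [input_name], [output_name], tf.float32.as_datatype_enum)

# write a binary .pb that OpenCV's dnn module can read
tf.train.write_graph(optimized, newpath + "/TensorFlow",
                     "frozen_graph.pb", as_text=False)

With a frozen, optimized binary graph you should then be able to load it directly with net = cv.dnn.readNetFromTensorflow(newpath + '/TensorFlow/frozen_graph.pb'), without a separate .pbtxt.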


Comments

Thank you!

TripleS ( 2018-09-06 05:46:05 -0600 )

Yeah, adding more layers adds complexity - not surprising.

holger ( 2018-09-06 06:46:05 -0600 )

It isn't really about adding computational complexity. It's more like adding a constant layer as input instead of defining a constant value as a parameter...

kbarni ( 2018-09-06 06:57:01 -0600 )

I am referring more to the "Keras->TensorFlow->OpenCV conversion is still shaky" problem. The more frameworks you add and the more you need to convert between them, the higher the complexity and the possibility that something goes wrong.

holger ( 2018-09-06 07:01:42 -0600 )

I agree. Especially since all these converters/importers are not officially maintained (i.e. by the TensorFlow developers).

Unfortunately there is no simple way of using DNNs in C++. The DNN libraries are almost exclusively Python, and the OpenCV DNN module is the best way to use them in C++.

Low-level DNN libraries also tend to be overly complicated (TF, Caffe), so a higher-level solution like Keras is often preferable, especially for a simple application.

Even if Python is simpler, in many cases you must use C++: if you want to extend existing software already written in C++, or if you need to add a module/plugin to existing software through a C++ API. Also, some industrial cameras only ship C++ SDKs.

kbarni ( 2018-09-06 07:36:15 -0600 )

There is a solution to these problems - it's called microservices, and I already suggested this architecture :-). Have small, specialized programs communicating with each other over a message bus (REST / AMQP / etc.).

Actually, I am using this right now in my Java app. Machine learning with Java is even more complex (another layer added to the problem) than with C++, where you at least have some bindings and/or framework support. So at some point I just decided to separate things, and it's working very well now.

Also - Keras already sits on top of various DNN frameworks. No need to add another layer on top of that, IMHO.

holger ( 2018-09-06 07:47:09 -0600 )
