
Thanks to the feedback from @dkurt and https://github.com/keras-team/keras/issues/6775, which explains the Keras learning_phase, I got the frozen DenseNet121 graph loading in OpenCV DNN: you have to set the learning phase to 0 (inference) before loading the model! I still wasn't able to get optimize_for_inference to work because of the FusedBatchNorm nodes, though (see the end of this post).

Load the model with set_learning_phase(0):

import numpy as np
from keras import applications
from keras import backend as K
import tensorflow as tf

K.set_learning_phase(0)  # 0 = inference; must be set before the model is built
model = applications.densenet.DenseNet121(input_shape=(224, 224, 3), weights='imagenet', include_top=True)
sess = K.get_session()

print(model.input, model.outputs)
## Tensor("input_1:0", shape=(?, 224, 224, 3), dtype=float32) [<tf.Tensor 'fc1000/Softmax:0' shape=(?, 1000) dtype=float32>]
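The node names that are hard-coded in the freezing step below can also be read off the model directly; a minimal sketch, assuming the default Keras layer names printed above:

# derive the graph node names from the Keras tensors ('input_1:0' -> 'input_1')
input_node_name = model.input.name.split(':')[0]
output_node_name = model.outputs[0].name.split(':')[0]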

Freeze it:

from tensorflow.python.tools import freeze_graph
from tensorflow.python.tools import optimize_for_inference_lib

MODEL_PATH = 'out'
MODEL_NAME = 'test'
input_node_name = 'input_1'
output_node_name = 'fc1000/Softmax'
# notebook shell command: remove any previous output directory
!rm -rf {MODEL_PATH}/

# dump the graph definition (binary and text) plus a checkpoint holding the weights
tf.train.write_graph(sess.graph_def, MODEL_PATH, f'{MODEL_NAME}_graph.pb', as_text=False)
tf.train.write_graph(sess.graph_def, MODEL_PATH, f'{MODEL_NAME}_graph.pbtxt')
tf.train.Saver().save(sess, f'{MODEL_PATH}/{MODEL_NAME}.chkp')

# merge the graph definition and the checkpoint into a single frozen .pb
freeze_graph.freeze_graph(f'{MODEL_PATH}/{MODEL_NAME}_graph.pbtxt',
                          None, False,
                          f'{MODEL_PATH}/{MODEL_NAME}.chkp',
                          output_node_name,
                          "save/restore_all",
                          "save/Const:0",
                          f'{MODEL_PATH}/frozen_{MODEL_NAME}.pb',
                          True, "")

Then load it with OpenCV DNN:

import cv2 as cv
net = cv.dnn.readNetFromTensorflow(f'{MODEL_PATH}/frozen_{MODEL_NAME}.pb')

# Smoke test (OpenCV DNN expects an NCHW blob)
inp = np.ones([1, 3, 224, 224]).astype(np.float32)
net.setInput(inp)
dnn_out = net.forward()
print(dnn_out.shape, dnn_out[0,:5])
## (1, 1000) [2.0760612e-04 2.6876197e-04 5.9680151e-05 5.5908626e-05 1.4762023e-04]
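
As a quick sanity check (not strictly necessary), the DNN output can be compared against the original Keras prediction for the same all-ones input; a minimal sketch, assuming NHWC input for Keras:

keras_out = model.predict(np.ones([1, 224, 224, 3], dtype=np.float32))
print(np.max(np.abs(keras_out - dnn_out)))  # should be close to 0 if the export worked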

As said above, I still wasn't able to get optimize_for_inference to work because of the FusedBatchNorm nodes. The optimization step prints:

WARNING:tensorflow:Didn't find expected Conv2D input to 'conv2_block1_0_bn/FusedBatchNorm_1'

and loading the optimized graph in OpenCV then fails with:

opencv-4.0.0/modules/dnn/src/tensorflow/tf_importer.cpp:497: error: (-2:Unspecified error) Input layer not found: conv2_block1_1_bn/FusedBatchNorm_1 in function 'connect'

So please let me know if you know a solution for that. Thanks!