Revision history

Revision 1 (initial version)

Turn is_training of batchnorm (TensorFlow) to False

net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
net = tf.contrib.layers.batch_norm(net, is_training = True)
net = tf.nn.relu(net)
net = tf.reshape(net, [-1, 64 * 7 * 7]) # flatten to [batch, 64 * 7 * 7] before the dense layer
net = tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')

#......
#after training

saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')

How can I turn is_training of the batchnorm to False after I have saved the model?

I tried search keywords like "tensorflow batchnorm turn off training" and "tensorflow change state", but could not find out how to do it.

Later revision

Export tensorflow graph with batchnorm to opencv dnn


    net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
    training = tf.placeholder(tf.bool, name = 'training')
    net = tf.contrib.layers.batch_norm(net, is_training = training)
    net = tf.nn.relu(net)
    net = tf.reshape(net, [-1, 64 * 7 * 7]) # flatten to [batch, 64 * 7 * 7] before the dense layer
    net = tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')


#......
#after training, save the graph and weights

sess.run(loss, feed_dict={features : train_imgs, x : real_delta, training : False})
saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')

After that, I freeze the graph, then optimize and transform it:

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph_final.pb --input_checkpoint=reshape_final.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/optimize_for_inference.py --input frozen_graph.pb --output opt_graph.pb --frozen_graph True --input_names input --output_names regression_output/BiasAdd

~/Qt/3rdLibs/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=opt_graph.pb --out_graph=fused_graph.pb --inputs=input --outputs=regression_output/BiasAdd --transforms="fold_constants fold_batch_norms fold_old_batch_norms sort_by_execution_order"

I get an error message after I execute transform_graph:

"You must feed a value for placeholder tensor 'training' with dtype bool"

The dnn module cannot load the model if is_training is True; I have to change it back to False and save the model again.
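
For context, "change it back to False and save again" amounts to something like the following: rebuild the same graph with is_training = False, restore the trained weights from the checkpoint, and write the graph and weights out again. This is only a minimal sketch; the build_net helper, the input shape and the class_num value are placeholders standing in for the real training script.

import tensorflow as tf

class_num = 8  # hypothetical value; use the one from the real training script

def build_net(features, is_training):
    # same layers as shown above, with is_training passed in
    net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
    net = tf.contrib.layers.batch_norm(net, is_training = is_training)
    net = tf.nn.relu(net)
    net = tf.reshape(net, [-1, 64 * 7 * 7])
    return tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')

tf.reset_default_graph()
features = tf.placeholder(tf.float32, [None, 14, 14, 1], name = 'input')  # shape is a guess
net = build_net(features, is_training = False)

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, 'reshape_final.ckpt')     # reuse the trained weights
    saver.save(sess, 'reshape_final_infer.ckpt')  # save an inference-only copy
    tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final_infer.pb')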

Edit:

I could avoid the transform_graph error by changing the placeholder to a Variable (everything else stays the same), presumably because a Variable can be folded into a constant when the graph is frozen, while a placeholder always has to be fed.

From

training = tf.placeholder(tf.bool, name='training')

To

training = tf.Variable(False, name='training', trainable=False)
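
One way to check that the freeze step really folded this flag is to look for the 'training' node in the frozen graph and confirm it is now a constant. A small sketch, assuming frozen_graph.pb is the file produced by the freeze_graph.py command above:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# after freezing, the 'training' Variable should show up as a Const node
for node in graph_def.node:
    if node.name == 'training':
        print(node.name, node.op)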

But this time, when I load the model with the OpenCV dnn module,

std::string const model("/home/ramsus/Qt/blogCodes2/deep_homography/cnn/tensorflow/fused_graph.pb");

dnn::Net net = dnn::readNetFromTensorflow(model);
if(net.empty()){
    std::cerr<<"Can't load network by using the model file:"<<std::endl;
    std::cerr<<model<<std::endl;
    throw std::runtime_error("net is empty");
}

it throws this error message:

BatchNorm/moments/mean:Mean(conv2d/convolution)(BatchNorm/moments/mean/reduction_indices) keep_dims:[ ] Tidx:[ ] T:0
OpenCV Error: Unspecified error (Unknown layer type Mean in op BatchNorm/moments/mean) in populateNet, file /home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 1077
/home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1077: error: (-2) Unknown layer type Mean in op BatchNorm/moments/mean in function populateNet

Latest revision

Export tensorflow graph with batchnorm to opencv dnn


First, I describe a net with batchnorm:

    net = tf.layers.conv2d(inputs = features, filters = 64, kernel_size = [3, 3], strides = (2, 2), padding = 'same')
    training = tf.Variable(False, name = 'training')
    net = tf.contrib.layers.batch_norm(net, is_training = training)
    net = tf.nn.relu(net)
    net = tf.reshape(net, [-1, 64 * 7 * 7]) # flatten to [batch, 64 * 7 * 7] before the dense layer
    net = tf.layers.dense(inputs = net, units = class_num, kernel_initializer = tf.contrib.layers.xavier_initializer(), name = 'regression_output')


#......
#after training, save the graph and weights

sess.run(loss, feed_dict={features : train_imgs, x : real_delta, training : False})
saver = tf.train.Saver()
saver.save(sess, 'reshape_final.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), "", 'graph_final.pb')

After that, I freeze the graph, then optimize and transform it (a programmatic alternative to the freeze step is sketched after these commands):

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/freeze_graph.py --input_graph=graph_final.pb --input_checkpoint=reshape_final.ckpt --output_graph=frozen_graph.pb --output_node_names=regression_output/BiasAdd

python3 ~/.keras2/lib/python3.5/site-packages/tensorflow/python/tools/optimize_for_inference.py --input frozen_graph.pb --output opt_graph.pb --frozen_graph True --input_names input --output_names regression_output/BiasAdd

~/Qt/3rdLibs/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=opt_graph.pb --out_graph=fused_graph.pb --inputs=input --outputs=regression_output/BiasAdd --transforms="fold_constants fold_batch_norms fold_old_batch_norms sort_by_execution_order"
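
For reference, the freeze step can also be done from inside the training script instead of calling freeze_graph.py. A minimal sketch, assuming sess is the training session from above and that the output node really is regression_output/BiasAdd (taken from the commands above):

import tensorflow as tf
from tensorflow.python.framework import graph_util

output_nodes = ['regression_output/BiasAdd']

# convert every Variable (weights, moving averages, the 'training' flag)
# into constants and keep only the nodes needed to compute the outputs
frozen = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_nodes)

with tf.gfile.GFile('frozen_graph.pb', 'wb') as f:
    f.write(frozen.SerializeToString())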

Then I load the model with the OpenCV dnn module:

#include <opencv2/dnn.hpp>
#include <iostream>
#include <stdexcept>

using namespace cv;

int main()
{
    std::string const model("/home/ramsus/Qt/blogCodes2/deep_homography/cnn/tensorflow/fused_graph.pb");
    dnn::Net net = dnn::readNetFromTensorflow(model);
    if(net.empty()){
        std::cerr<<"Can't load network by using the model file:"<<std::endl;
        std::cerr<<model<<std::endl;
        throw std::runtime_error("net is empty");
    }
}

it throws this error message:

BatchNorm/moments/mean:Mean(conv2d/convolution)(BatchNorm/moments/mean/reduction_indices) keep_dims:[ ] Tidx:[ ] T:0
OpenCV Error: Unspecified error (Unknown layer type Mean in op BatchNorm/moments/mean) in populateNet, file /home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 1077
/home/ramsus/Qt/3rdLibs/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1077: error: (-2) Unknown layer type Mean in op BatchNorm/moments/mean in function populateNet
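
The Mean op in this error comes from the BatchNorm/moments subgraph, i.e. the batch-statistics (training-time) path of the batch norm, which apparently survived fold_batch_norms. A quick way I can check what is left in the exported graph is to list the suspicious nodes in fused_graph.pb (a minimal sketch; the file name is the one produced by the transform_graph command above):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('fused_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# list batch-norm statistics nodes and Mean ops; if any 'moments' nodes
# are still here, the graph still contains the training-time path that
# the OpenCV importer rejects
for node in graph_def.node:
    if 'moments' in node.name or node.op == 'Mean':
        print(node.op, node.name)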