Trouble opening a model with cv.dnn.readNetFromTensorflow() that was created in Keras and converted to a TensorFlow .pb file

Environment:

Windows 10
Python 3.6
TensorFlow: 1.6.0
Keras: 2.2.4
OpenCV: 3.4
IDE: PyCharm Community Edition 2018.1

Description: I created a model from VGG16 and added my own layers on top for my classification problem. I saved the model and its weights, converted the model to .pb, and could run prediction using sess.run. However, I need to load it and run prediction through OpenCV (since I ultimately need to run on .NET using the Emgu CV wrapper). But I am unable to open the model using cv.dnn.readNetFromTensorflow(); it fails with:

"Process finished with exit code -1073741819 (0xC0000005)"

To debug, I used another pre-trained VGG16 model and tried to open it, following the steps given in this article faithfully: I removed the shape, stack, prod, and strided_slice nodes and connected the Reshape node directly to the pooling layer. I got an error there as well. These were the warning and error messages when running the pre-trained VGG16 model:

[libprotobuf WARNING C:\projects\opencv-python\opencv\3rdparty\protobuf\src\google\protobuf\io\coded_stream.cc:605] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING C:\projects\opencv-python\opencv\3rdparty\protobuf\src\google\protobuf\io\coded_stream.cc:82] The total number of bytes read was 553443385

Process finished with exit code -1073741819 (0xC0000005)

It could not open the model, and it did not throw any exception either.
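Since the crash happens in OpenCV's native code, Python never gets a chance to raise an exception, which would explain why nothing is caught. Here is a minimal sketch (the file names are just the ones from this question; they are assumptions) that decodes the exit code and probes the load in a child process so the parent script survives the crash:

```python
import subprocess
import sys

# PyCharm prints the signed 32-bit view of the Windows NTSTATUS value;
# converting it back shows it is 0xC0000005 (STATUS_ACCESS_VIOLATION).
code = -1073741819
print(hex(code & 0xFFFFFFFF))  # 0xc0000005

def try_load_net(pb_path, pbtxt_path):
    """Attempt the dnn load in a child interpreter; return its exit code.

    A native access violation kills only the child, so the parent can
    inspect the return code instead of crashing itself.
    """
    snippet = (
        "import cv2 as cv; "
        "cv.dnn.readNetFromTensorflow({!r}, {!r})".format(pb_path, pbtxt_path)
    )
    return subprocess.run([sys.executable, "-c", snippet]).returncode

rc = try_load_net("opt_modelTmp.pb", "opt_modelTmp.pbtxt")
if rc == 0:
    print("net loaded cleanly")
else:
    print("child process exited with code", rc)
```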

Here is the code to generate the .pb file from Keras (borrowed from the same article):

from keras import applications
from keras import backend as K
import cv2 as cv
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

model = applications.VGG16(input_shape=(224, 224, 3), weights='imagenet', include_top=True)
print("output=", model.outputs)
print("input=", model.inputs)

K.set_learning_phase(0)

pred_node_names = [None]
pred = [None]
for i in range(1):
    pred_node_names[i] = "output_node" + str(i)
    pred[i] = tf.identity(model.outputs[i], name=pred_node_names[i])

sess = K.get_session()
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(),
                                                           pred_node_names)
graph_io.write_graph(constant_graph, ".", "modelTmp.pb", as_text=False)

# Read the graph back to verify it and write a TensorBoard log.
with tf.gfile.FastGFile('modelTmp.pb', "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    # Restore session
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')
    tf.summary.FileWriter('logs1', graph_def)

Then I ran optimize_for_inference.py, creating a new output model, opt_modelTmp.pb.

Then I created an opt_modelTmp.pbtxt file, removed the shape, strided_slice, prod, and stack nodes, and connected the Reshape node properly. I still got the error when opening the .pb and .pbtxt files using:

cvNet = cv.dnn.readNetFromTensorflow("model.pb", "model.pbtxt")
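To double-check that the graph surgery actually removed the problematic ops, one can scan the edited .pbtxt for their "op:" lines. This is only a text-level sketch, not a protobuf parse; note that tf.stack usually appears in the graph as op "Pack", and the suspect-op list is an assumption based on the nodes named above:

```python
import re

# Op types the surgery is supposed to have removed (tf.stack -> "Pack").
SUSPECT_OPS = ("Shape", "Pack", "Prod", "StridedSlice")

def remaining_suspect_ops(pbtxt_text):
    """Return the suspect op types still present in a GraphDef pbtxt dump."""
    found = set(re.findall(r'op:\s*"(\w+)"', pbtxt_text))
    return sorted(found & set(SUSPECT_OPS))

# Tiny synthetic example of the pbtxt node format:
sample = ('node { name: "flatten/Shape" op: "Shape" }\n'
          'node { name: "flatten/Reshape" op: "Reshape" }\n')
print(remaining_suspect_ops(sample))  # ['Shape']
```

If the list is empty for opt_modelTmp.pbtxt, the leftover-node theory can be ruled out.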

What is wrong? Is this a memory issue? Should I raise the limit via SetTotalBytesLimit()? Am I doing something wrong? Thanks in advance for your help.
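For what it's worth, the libprotobuf message is only a warning: the hard parse limit it cites is 2^31 - 1 bytes, and the graph here is well under it, so the limit alone should not abort the load. A quick stdlib-only sanity check (the helper and path are assumptions for illustration):

```python
import os

PROTOBUF_LIMIT = 2**31 - 1  # 2147483647, the cap cited in the warning

def fits_protobuf_limit(path):
    """True if the serialized graph is small enough for protobuf to parse."""
    return os.path.getsize(path) <= PROTOBUF_LIMIT

# The warning reports 553443385 bytes read (~528 MB), well under the cap:
print(553443385 <= PROTOBUF_LIMIT)  # True
```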

