System information (version)
- OpenCV => 4.1.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Qt Creator
Detailed description
I've trained a custom TensorFlow model and I can run predictions with it inside my training framework (tensorpack) without any issues. Now I want to deploy the model to OpenCV to use it in my main project. As far as I know, the following steps have to be taken to successfully load the model into OpenCV:
- Freeze the graph and generate a frozenGraph.pb, optionally running `optimize_for_inference` // DONE
- Generate a .pbtxt file with `tf.train.write_graph(frozenGraph, as_text=True)` // DONE
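Stripped of the tensorpack plumbing (my full export script is further below), I understand these two steps to boil down to roughly the following. The toy graph and node names here are only placeholders to illustrate the calls:

```
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.tools import optimize_for_inference_lib

# Toy graph standing in for the real model (node names are placeholders).
x = tf.placeholder(tf.float32, [None, 4], name='input')
w = tf.Variable(tf.ones([4, 2]), name='w')
y = tf.identity(tf.matmul(x, w), name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Step 1: freeze variables into constants and (optionally) prune for inference
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output'])
    frozen = optimize_for_inference_lib.optimize_for_inference(
        frozen, ['input'], ['output'], [tf.float32.as_datatype_enum])
    with tf.gfile.GFile('frozenGraph.pb', 'wb') as f:
        f.write(frozen.SerializeToString())
    # Step 2: dump the same graph in text form
    tf.train.write_graph(frozen, '.', 'frozenGraph.pbtxt', as_text=True)
```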
However, I cannot load my model into OpenCV. I've tried different approaches and always get an exception; which exception I get depends on the following factors:
A. `Net ub6net = readNetFromTensorflow(model)` // only the .pb file
- A.1 - `optimize_for_inference` disabled: "Input node not found" exception
- A.2 - `optimize_for_inference` enabled: `nodesMapIt != nodesMap.end()` in function `sortByExecutionOrder`

B. `Net ub6net = readNetFromTensorflow(model, graph)` // .pb and .pbtxt
- B.1 - `optimize_for_inference` disabled: assertion failed in function `addConstNodes`
- B.2 - `optimize_for_inference` enabled: assertion failed in function `addConstNodes`
When I read the network from both the .pb and the .pbtxt file, it does not seem to make a difference whether I use `optimize_for_inference`; I get the same exception in both cases.
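To see which node names OpenCV's importer actually gets to work with (and which node might trigger the "input node not found" case), the frozen graph can be inspected with a few lines of TensorFlow; the path is a placeholder:

```
import tensorflow as tf

# List every node name and op type in the frozen graph (path is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile('Path/ub6_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)
```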
My code to generate the .pb and .pbtxt files:
```
# File: export.py
import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.platform import gfile
from tensorflow.python.tools import optimize_for_inference_lib

from ..compat import is_tfv2, tfv1
from ..input_source import PlaceholderInput
from ..tfutils.common import get_tensors_by_names, get_tf_version_tuple
from ..tfutils.tower import PredictTowerContext
from ..utils import logger

__all__ = ['ModelExporter']


class ModelExporter(object):
    """Export models for inference."""

    def __init__(self, config):
        """Initialise the export process.

        Args:
            config (PredictConfig): the config to use.
                The graph will be built with the tower function defined by this `PredictConfig`.
                Then the input / output names will be used to export models for inference.
        """
        super(ModelExporter, self).__init__()
        self.config = config

    def export_compact(self, filename, optimize=True, toco_compatible=False):
        """Create a self-contained inference-only graph and write final graph (in pb format) to disk.

        Args:
            filename (str): path to the output graph
            optimize (bool): whether to use TensorFlow's `optimize_for_inference`
                to prune and optimize the graph. This does not work on all types of graphs.
            toco_compatible (bool): See TensorFlow's
                `optimize_for_inference
                <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py>`_
                for details. Only available after TF 1.8.
        """
        if toco_compatible:
            assert optimize, "toco_compatible is only effective when optimize=True!"
        self.graph = self.config._maybe_create_graph()
        with self.graph.as_default():
            input = PlaceholderInput()
            input.setup(self.config.input_signature)
            with PredictTowerContext(''):
                self.config.tower_func(*input.get_input_tensors())

            input_tensors = get_tensors_by_names(self.config.input_names)
            output_tensors = get_tensors_by_names(self.config.output_names)

            self.config.session_init._setup_graph()
            # we cannot use "self.config.session_creator.create_session()" here since it finalizes the graph
            sess = tfv1.Session(config=tfv1.ConfigProto(allow_soft_placement=True))
            self.config.session_init._run_init(sess)

            dtypes = [n.dtype for n in input_tensors]

            # freeze variables to constants
            frozen_graph_def = graph_util.convert_variables_to_constants(
                sess,
                self.graph.as_graph_def(),
                [n.name[:-2] for n in output_tensors],
                variable_names_whitelist=None,
                variable_names_blacklist=None)

            # prune unused nodes from graph
            if optimize:
                toco_args = () if get_tf_version_tuple() < (1, 8) else (toco_compatible, )
                frozen_graph_def = optimize_for_inference_lib.optimize_for_inference(
                    frozen_graph_def,
                    [n.name[:-2] for n in input_tensors],
                    [n.name[:-2] for n in output_tensors],
                    [dtype.as_datatype_enum for dtype in dtypes],
                    *toco_args)

            with gfile.FastGFile(filename, "wb") as f:
                f.write(frozen_graph_def.SerializeToString())
                logger.info("Output graph written to {}.".format(filename))

            tf.train.write_graph(frozen_graph_def, filename, 'ub6_graph.pbtxt', as_text=True)
```
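For completeness, a minimal sketch of how I drive `export_compact`; the model class, node names and checkpoint path are placeholders for my actual training setup:

```
from tensorpack.predict import PredictConfig
from tensorpack.tfutils.sessinit import get_model_loader
from tensorpack.tfutils.export import ModelExporter

# MyModel, the node names and the checkpoint path are placeholders.
pred_config = PredictConfig(
    model=MyModel(),
    input_names=['input'],
    output_names=['output'],
    session_init=get_model_loader('train_log/checkpoint'))

ModelExporter(pred_config).export_compact('Path/ub6_model.pb', optimize=True)
```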
Steps to reproduce
My model: https://www.file-up.org/5hk0vv2l0924
My OpenCV code:
```
#include <opencv2/dnn.hpp>

using namespace cv;
using namespace cv::dnn;

String model = "Path/ub6_model.pb";
String graph = "Path/ub6_graph.pbtxt";

Net ub6net = cv::dnn::readNetFromTensorflow(model, graph);
// Net ub6net = cv::dnn::readNetFromTensorflow(model);
ub6net.setPreferableBackend(DNN_BACKEND_OPENCV);
ub6net.setPreferableTarget(DNN_TARGET_CPU);
```
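For quicker iteration on the export flags, the same load can also be triggered through the Python bindings, which go through the same importer (same placeholder paths as above):

```
import cv2

# Same importer as the C++ code above; paths are placeholders.
net = cv2.dnn.readNetFromTensorflow('Path/ub6_model.pb', 'Path/ub6_graph.pbtxt')
# net = cv2.dnn.readNetFromTensorflow('Path/ub6_model.pb')  # .pb only
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
```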