Hi, @berak, DNN doesn't support TensorFlow's flatten op because it computes the Shape of its input at runtime and then does a reshape. There are several ways to make the graph simpler for DNN:

1. Use a reshape op instead, computing the input's shape outside the graph:

# Flattened size from the static shape, computed in Python rather than in the graph.
total = int(np.prod(inp.shape[1:]))
flattened = tf.reshape(inp, [-1, total])
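The size computation above can be checked without TensorFlow. A minimal sketch, where the NHWC shape (1, 7, 7, 64) is an invented example rather than the question's actual tensor:

```python
import math

# Hypothetical static shape of the tensor feeding Flatten (NHWC).
shape = (1, 7, 7, 64)

# Product of all dimensions except the batch one, as in the snippet above.
total = math.prod(shape[1:])
print(total)  # 7 * 7 * 64 = 3136

# Reshaping to [-1, total] keeps the batch dimension and flattens the rest,
# so the total element count must be unchanged.
assert shape[0] * total == math.prod(shape)
```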

2. The preferable way, because it solves importing both the flatten and dropout ops:

2.1. Freeze and optimize the graph as you did.

2.2. Call the following script to create a text graph representation:

import tensorflow as tf

# Read the graph.
with tf.gfile.FastGFile('face_opt.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Remove Const nodes.
# Walk backwards so deletions don't shift the indices still to be visited.
for i in reversed(range(len(graph_def.node))):
    if graph_def.node[i].op == 'Const':
        del graph_def.node[i]
        continue  # the node is gone; don't touch its attrs
    for attr in ['T', 'data_format', 'Tshape', 'N', 'Tidx', 'Tdim',
                 'use_cudnn_on_gpu', 'Index', 'Tperm', 'is_training',
                 'Tpaddings']:
        if attr in graph_def.node[i].attr:
            del graph_def.node[i].attr[attr]

# Save as text.
tf.train.write_graph(graph_def, "", "text_graph.pbtxt", as_text=True)
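The reversed iteration in that script matters: deleting while walking forward would skip the node that shifts into the freed index. A minimal sketch of the same pattern on plain dicts (the ops and attrs here are invented stand-ins for `graph_def.node`):

```python
# Stand-in for graph_def.node: a list of dicts with 'op' and 'attr' keys.
nodes = [
    {'op': 'Const',   'attr': {'T': 1, 'value': 2}},
    {'op': 'Conv2D',  'attr': {'T': 1, 'use_cudnn_on_gpu': True, 'strides': [1]}},
    {'op': 'Const',   'attr': {'T': 1}},
    {'op': 'Reshape', 'attr': {'Tshape': 3}},
]

UNSUPPORTED_ATTRS = ['T', 'Tshape', 'use_cudnn_on_gpu']

# Walk backwards so deletions don't shift the indices still to be visited.
for i in reversed(range(len(nodes))):
    if nodes[i]['op'] == 'Const':
        del nodes[i]
        continue
    for attr in UNSUPPORTED_ATTRS:
        nodes[i]['attr'].pop(attr, None)

print([n['op'] for n in nodes])  # ['Conv2D', 'Reshape']
```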

2.3. Replace the subgraph of nodes ConvNet/Flatten/Shape, ConvNet/Flatten/Slice, ConvNet/Flatten/Slice_1, ConvNet/Flatten/Prod, ConvNet/Flatten/ExpandDim, ConvNet/Flatten/concat and ConvNet/Flatten/Reshape with the following single node:

node {
  name: "ConvNet/Flatten/Reshape"
  op: "Flatten"
  input: "ConvNet/max_pooling2d_2/MaxPool"
}

2.4. Remove the subgraph from ConvNet/dropout/dropout/Shape to ConvNet/dropout/dropout/mul (both inclusive). Replace ConvNet/dense_2/MatMul's input ConvNet/dropout/dropout/mul with ConvNet/dense/BiasAdd.
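Step 2.4 is a hand edit to text_graph.pbtxt, but the rewiring logic can be sketched programmatically. A toy illustration, where the node names follow the question's graph but the name-to-inputs dict is an invented stand-in for the pbtxt structure:

```python
# Toy stand-in for the text graph: node name -> list of input node names.
graph = {
    'ConvNet/dense/BiasAdd': [],
    'ConvNet/dropout/dropout/Shape': ['ConvNet/dense/BiasAdd'],
    'ConvNet/dropout/dropout/mul': ['ConvNet/dropout/dropout/Shape'],
    'ConvNet/dense_2/MatMul': ['ConvNet/dropout/dropout/mul'],
}

# Remove the dropout subgraph (both ends inclusive).
for name in ['ConvNet/dropout/dropout/Shape', 'ConvNet/dropout/dropout/mul']:
    del graph[name]

# Rewire MatMul's input to bypass the removed nodes.
graph['ConvNet/dense_2/MatMul'] = [
    'ConvNet/dense/BiasAdd' if inp == 'ConvNet/dropout/dropout/mul' else inp
    for inp in graph['ConvNet/dense_2/MatMul']
]

print(graph['ConvNet/dense_2/MatMul'])  # ['ConvNet/dense/BiasAdd']
```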

Then pass both the binary graph and the text one during import: https://docs.opencv.org/master/d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d .
