
Inception V3 retrained model incorrect predictions after using optimize_for_inference and graph_transform tools

Ladies and gents, I have a model generated using the TensorFlow for Poets tutorial. Both my binary and 3-class models exhibit the same behavior.

The frozen graph performs as expected prior to transformation, predicting with up to 90% accuracy. But as soon as I transform it for use with the OpenCV dnn module, the predictions are way off, usually favoring one class over the others.
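For what it's worth, the "favoring one class" symptom is typical of input reaching the network unnormalized. This is only a toy numpy illustration of the scaling effect, not the actual model: if the logits grow roughly in proportion to the input scale, a ~128x too-large input saturates the softmax onto a single class.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Toy logits a classifier might produce on properly normalized input:
logits = np.array([0.4, -0.1, 0.2])
print(softmax(logits))          # fairly balanced, ~[0.41, 0.25, 0.34]

# If the input (and hence, roughly, the logits) is ~128x too large,
# the softmax collapses onto one class:
print(softmax(logits * 128.0))  # ~[1.0, 0.0, 0.0]
```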

I used the transforms described here: http://answers.opencv.org/question/175699/readnetfromtensorflow-fails-on-retrained-nn/

and here https://www.tensorflow.org/mobile/prepare_models

Below is the output of summarize_graph after each transformation, in case you see something unusual:

original

No inputs spotted.
No variables spotted.
Found 1 possible outputs: (name=final_result, op=Softmax)
Found 21826166 (21.83M) const parameters, 0 (0) variable parameters, and 99 control_edges

Op types used: 489 Const, 101 Identity, 99 CheckNumerics, 94 Relu, 94 BatchNormWithGlobalNormalization, 94 Conv2D, 11 Concat, 9 AvgPool, 5 MaxPool, 1 DecodeJpeg, 1 ExpandDims, 1 Cast, 1 MatMul, 1 Mul, 1 PlaceholderWithDefault, 1 Add, 1 Reshape, 1 ResizeBilinear, 1 Softmax, 1 Sub

After optimize_for_inference

Found 1 possible inputs: (name=DecodeJpeg/contents, type=float(1), shape=None)
No variables spotted.
Found 1 possible outputs: (name=final_result, op=Softmax)
Found 21774517 (21.77M) const parameters, 0 (0) variable parameters, and 0 control_edges

Op types used: 206 Const, 94 BiasAdd, 94 Conv2D, 94 Relu, 11 Concat, 9 AvgPool, 5 MaxPool, 1 Sub, 1 Add, 1 Softmax, 1 ResizeBilinear, 1 Reshape, 1 PlaceholderWithDefault, 1 Placeholder, 1 Mul, 1 MatMul, 1 ExpandDims, 1 DecodeJpeg, 1 Cast

transform_graph with strip_unused

Found 1 possible inputs: (name=Mul, type=float(1), shape=[1,299,299,3])
No variables spotted.
Found 1 possible outputs: (name=final_result, op=Softmax)

Found 21826160 (21.83M) const parameters, 0 (0) variable parameters, and 99 control_edges
Op types used: 484 Const, 101 Identity, 99 CheckNumerics, 94 BatchNormWithGlobalNormalization, 94 Conv2D, 94 Relu, 11 Concat, 9 AvgPool, 5 MaxPool, 1 Add, 1 MatMul, 1 Placeholder, 1 PlaceholderWithDefault, 1 Reshape, 1 Softmax
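One thing I notice in that last summary: after strip_unused the graph's input is the Mul node, so the DecodeJpeg/resize/mean/std preprocessing the original graph performed is gone and has to be reproduced outside the graph. A minimal numpy sketch of what I believe the expected preprocessing is, assuming the retrain script's defaults of input_mean=128 and input_std=128 (these values are an assumption, not confirmed for my graphs):

```python
import numpy as np

def preprocess(img_bgr):
    """Turn an HxWx3 uint8 BGR image (as from cv2.imread) into the
    1x3x299x299 float blob the transformed graph expects:
    NCHW layout, RGB channel order, pixels mapped via (x - 128) / 128."""
    img = img_bgr[..., ::-1].astype(np.float32)      # BGR -> RGB
    # (a real pipeline would resize to 299x299 here, e.g. with cv2.resize)
    img = (img - 128.0) / 128.0                      # replaces the graph's Sub/Mul ops
    return img.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW, add batch dim

# Dummy all-white 299x299 image to show the resulting range:
dummy = np.full((299, 299, 3), 255, dtype=np.uint8)
blob = preprocess(dummy)
print(blob.shape)  # (1, 3, 299, 299)
print(blob.max())  # 0.9921875, i.e. (255 - 128) / 128
```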

I'm wondering what the cause could be, and how to maintain the frozen graph's prediction accuracy when deploying the transformed graph. Thank you.