OpenCV dnn - Squeeze & Excitation Module freezing

Problem

Hello, I am having the following issue. I am trying to freeze EfficientNet, taken from this repo, and use the resulting protobuf file with the OpenCV dnn module.

The model is a simple classification network. Right after the feature extractor specified in the link, I simply add an extra dense layer and a classification layer with num_classes outputs.

I have frozen several networks and used them with the dnn module before, so I am aware of the usual issues that can arise and have already tried the corresponding fixes.

Example code in Python:

import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow('frozen.pb')
# dummy 224x224 BGR image (scale to 0-255 before casting, otherwise it is all zeros)
x_cv = (np.random.random((224, 224, 3)) * 255).astype(np.uint8)
blob = cv2.dnn.blobFromImage(x_cv, 1.0, (224, 224), (0, 0, 0))
net.setInput(blob)
# expected shape: (1, num_classes)
print(net.forward().shape)

However, life is not that easy. This is the error I get:

OpenCV(4.1.1) /io/opencv/modules/dnn/src/layers/eltwise_layer.cpp:116: error: (-215:Assertion failed) inputs[0] == inputs[i] in function 'getMemoryShapes'

Eventually I found out that the problem is caused by the Squeeze-and-Excitation module (SE module for short). If I disable the flag and remove those modules from the network, the forward pass works.

In my opinion, it is the data flow of the SE module that OpenCV cannot handle: global average pooling, followed by 1x1 convolutions, and finally an element-wise multiplication with the feature maps.
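From what I can tell, OpenCV's eltwise layer requires all of its inputs to have identical shapes, while the SE multiplication relies on broadcasting a per-channel tensor over the full feature maps. A small NumPy sketch of the mismatch (the shapes are hypothetical, just for illustration):

```python
import numpy as np

# Feature maps coming out of a conv block: (N, C, H, W)
features = np.ones((1, 16, 56, 56), dtype=np.float32)

# SE branch output after global average pooling + 1x1 convs: (N, C, 1, 1)
scale = np.full((1, 16, 1, 1), 0.5, dtype=np.float32)

# NumPy (and TensorFlow) broadcast the multiplication just fine...
out = features * scale
print(out.shape)  # (1, 16, 56, 56)

# ...but OpenCV's eltwise layer asserts that all input shapes match exactly,
# which is what inputs[0] == inputs[i] checks:
print(features.shape == scale.shape)  # False -> the assertion fails
```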

Some remarks:

I had this issue with SE modules in the past, with the difference that the model was using more basic operations such as:

  • reduce_mean across the features instead of average pooling

  • fully connected layers instead of 1x1 convolutions, since they result in the same number of features.
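For what it's worth, the two substitutions above are mathematically equivalent to the original operations, which a quick NumPy check confirms (the shapes here are chosen arbitrarily):

```python
import numpy as np

x = np.random.rand(1, 7, 7, 16).astype(np.float32)  # NHWC feature map

# reduce_mean across the spatial dimensions...
squeezed_mean = x.mean(axis=(1, 2))                 # shape (1, 16)

# ...is numerically identical to global average pooling over the whole map
squeezed_pool = x.reshape(1, -1, 16).mean(axis=1)
print(np.allclose(squeezed_mean, squeezed_pool))    # True

# Likewise, a 1x1 convolution applied to a 1x1 spatial map is just a
# matrix multiply, i.e. a fully connected layer:
w = np.random.rand(16, 8).astype(np.float32)        # (in_ch, out_ch)
fc = squeezed_mean @ w                              # dense layer
conv1x1 = np.einsum('nhwc,co->nhwo',
                    squeezed_mean.reshape(1, 1, 1, 16), w)
print(np.allclose(fc, conv1x1.reshape(1, 8)))       # True
```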

Additionally:

  • I used the optimize_for_inference_lib tool with no success.

  • Running net.getUnconnectedOutLayersNames() gave the following result:

    ['block1a_se_squeeze/Mean/flatten', 'block2a_se_squeeze/Mean/flatten', 'block2b_se_squeeze/Mean/flatten', 'block3a_se_squeeze/Mean/flatten', 'block3b_se_squeeze/Mean/flatten', 'block4a_se_squeeze/Mean/flatten', 'block4b_se_squeeze/Mean/flatten', 'block4c_se_squeeze/Mean/flatten', 'block5a_se_squeeze/Mean/flatten', 'block5b_se_squeeze/Mean/flatten', 'block5c_se_squeeze/Mean/flatten', 'block6a_se_squeeze/Mean/flatten', 'block6b_se_squeeze/Mean/flatten', 'block6c_se_squeeze/Mean/flatten', 'block6d_se_squeeze/Mean/flatten', 'block7a_se_squeeze/Mean/flatten', 'avg_pool/Mean/flatten', 'dense_1/Softmax']

Follow up

I would like to know whether there is a way to make the SE module work, either by:

  • reshaping the network in a compatible manner
  • transforming the graph and additionally supplying a .pbtxt file
  • registering the module as a custom layer and overriding getMemoryShapes()
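For the third option, OpenCV's Python bindings provide cv2.dnn_registerLayer, where the registered class implements getMemoryShapes() and forward(). As a rough sketch, the forward logic such a custom layer would need is just a broadcasted per-channel multiply (pure NumPy; the function name and shapes are my own, for illustration):

```python
import numpy as np

def se_forward(features, scale):
    """Element-wise scaling a custom SE multiply layer would perform.

    features: (N, C, H, W) feature maps
    scale:    (N, C, 1, 1) per-channel weights from the excitation branch
    """
    # Broadcast the per-channel scale over the spatial dimensions ourselves,
    # instead of relying on OpenCV's eltwise layer (which rejects the shapes).
    return features * scale

features = np.ones((1, 4, 3, 3), dtype=np.float32)
scale = np.arange(4, dtype=np.float32).reshape(1, 4, 1, 1)
out = se_forward(features, scale)
print(out.shape)        # (1, 4, 3, 3)
print(out[0, 2, 0, 0])  # 2.0 (channel 2 scaled by 2)
```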

Of course, the easiest solution would be preferable :)

Platform/Environment

  • Ubuntu 16.04/18.04

  • OpenCV 4.1.1.26 (pip install)

  • Python 3.5

  • TensorFlow 1.13 with Keras 2.2.4-tf

Thank you in advance
