OpenCV dnn import: dropout layer error after fine-tuning Keras VGG16
Hi,
A few days ago I asked a question about importing a pretrained Keras VGG16 model into OpenCV dnn [1].
Now I have fine-tuned the VGG16 for my own application by dropping the existing ImageNet head and adding a new head to the model. The code below shows how it's done:
from keras.applications import VGG16
from keras.layers import Dense, Dropout, Flatten
from keras.models import Model

# VGG16 convolutional base pretrained on ImageNet, without its classifier head
baseModel = VGG16(input_shape=(224, 224, 3), weights='imagenet', include_top=False)

# New classification head for 5 classes
headModel = baseModel.output
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(256, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(5, activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)
Subsequently, I train the new model on my own data and export it in the same way as in the answer to my previous question. However, when I try to read the net into OpenCV, it raises the following import error:
cv2.error: C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:1487: error: (-2) Unknown layer type PlaceholderWithDefault in op dropout_1/keras_learning_phase in function cv::dnn::experimental_dnn_v3::`anonymous-namespace'::TFImporter::populateNet
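For completeness, the load call itself is the standard readNetFromTensorflow one; the file names below are placeholders for my frozen graph and the generated text graph:
import cv2

# Placeholder file names for the frozen graph exported from Keras and the text graph
net = cv2.dnn.readNetFromTensorflow('model.pb', 'model.pbtxt')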
I've read on GitHub that there is a solution for including dropout layers (https://github.com/opencv/opencv/pull...). Do you have any suggestions on how to implement this with Keras? Or am I just making things difficult for myself by using Keras on top of TensorFlow?
I have one more question: do you ever plan to implement a readNetFromKeras(...) that takes a config.json and a weights.h5?
Edit:
.pbtxt file before the changes (so the flatten and dropout nodes are still included):
... some stuff before ...
node {
  name: "flatten/Reshape"
  op: "Reshape"
  input: "block5_pool/MaxPool"
  input: "flatten/stack"
}
node {
  name: "dense_1/MatMul"
  op: "MatMul"
  input: "flatten/Reshape"
  input: "dense_1/kernel"
  attr {
    key: "transpose_a"
    value {
      b: false
    }
  }
  attr {
    key: "transpose_b"
    value {
      b: false
    }
  }
}
node {
  name: "dense_1/BiasAdd"
  op: "BiasAdd"
  input: "dense_1/MatMul"
  input: "dense_1/bias"
}
node {
  name: "dense_1/Relu"
  op: "Relu"
  input: "dense_1/BiasAdd"
}
node {
  name: "dropout_1/keras_learning_phase"
  op: "PlaceholderWithDefault"
  input: "dropout_1/keras_learning_phase/input"
  attr {
    key: "dtype"
    value {
      type: DT_BOOL
    }
  }
  attr {
    key: "shape"
    value {
      shape {
      }
    }
  }
}
node {
  name: "dropout_1/cond/Switch"
  op: "Switch"
  input: "dropout_1/keras_learning_phase"
  input: "dropout_1/keras_learning_phase"
}
node {
  name: "dropout_1/cond/mul/Switch"
  op: "Switch"
  input: "dense_1/Relu"
  input: "dropout_1/keras_learning_phase"
  attr {
    key: "_class"
    value {
      list {
        s: "loc:@dense_1/Relu"
      }
    }
  }
}
node {
  name: "dropout_1/cond/mul"
  op: "Mul"
  input: "dropout_1/cond/mul/Switch:1"
  input: "dropout_1/cond/mul/y"
}
node {
  name: "dropout_1/cond/dropout/Shape"
  op: "Shape"
  input: "dropout_1/cond/mul"
  attr {
    key: "out_type"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "dropout_1/cond/dropout/random_uniform/RandomUniform"
  op: "RandomUniform"
  input: "dropout_1/cond/dropout/Shape"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "seed"
    value {
      i: 87654321
    }
  }
  attr {
    key: "seed2"
    value {
      i: 7788661
    }
  }
}
node {
  name: "dropout_1/cond/dropout/random_uniform/sub"
  op: "Sub"
  input: "dropout_1/cond/dropout/random_uniform/max"
  input: "dropout_1/cond/dropout/random_uniform/min"
}
node {
  name: "dropout_1/cond/dropout/random_uniform/mul"
  op: "Mul"
  input: "dropout_1/cond/dropout/random_uniform/RandomUniform"
  input: "dropout_1/cond/dropout/random_uniform/sub"
}
node {
  name: "dropout_1/cond/dropout/random_uniform"
  op: "Add"
  input: "dropout_1/cond/dropout/random_uniform/mul"
  input: "dropout_1/cond/dropout/random_uniform/min"
}
node {
  name: "dropout_1/cond/dropout/add"
  op: "Add"
  input: "dropout_1/cond/dropout/keep_prob"
  input: "dropout_1 ...
Here is a link to the .pb and .pbtxt files used: https://drive.google.com/file/d/120G3...
I have already edited the flatten nodes in that .pbtxt file.
@MennoK, the dropout layer actually does nothing during the testing phase. You can just exclude all the unused nodes in the same way as we did for the Flatten node. Create a text graph for this model and keep only the used nodes.
Hey, I already tried this (if I remember correctly, you mentioned it in another GitHub issue). I removed every node whose name starts with "dropout_1/" and, wherever a dropout_1 node was listed as an input in the .pbtxt file, pointed that input at the node directly above it instead.
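In script form, that edit amounts to something like the following sketch (file names are assumed; it drops every dropout_1/* node and rewires any input that referenced the dropout subgraph back to dense_1/Relu):
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2

# Load the generated text graph (placeholder file name)
graph_def = graph_pb2.GraphDef()
with open('model.pbtxt') as f:
    text_format.Merge(f.read(), graph_def)

# Keep everything except the dropout subgraph
kept = [n for n in graph_def.node if not n.name.startswith('dropout_1/')]
for node in kept:
    for i, inp in enumerate(node.input):
        # Consumers of the dropout output are rewired to the ReLU that fed it
        if inp.startswith('dropout_1/'):
            node.input[i] = 'dense_1/Relu'

# Write the stripped text graph for cv2.dnn.readNetFromTensorflow
stripped = graph_pb2.GraphDef()
stripped.node.extend(kept)
with open('model_no_dropout.pbtxt', 'w') as f:
    f.write(text_format.MessageToString(stripped))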
After this, I'm able to read the model into OpenCV. However, the test results in OpenCV are completely different from the results in Keras itself.
@dkurt I added a snippet of the .pbtxt file before and after the changes.
@MennoK, have you checked that TensorFlow produces similar results for the same inputs? I mean, the dropout branch is disabled. Please show how you run the model and how you pass the input tensor and the testing-phase flag.
I use Keras on top of TensorFlow, so I test the results using Keras, which is simply model.predict(image) for each image in the test set. This also means I have to do an extra step, namely converting the Keras model (json and h5) to TensorFlow's .pb format.
I do this as follows:
Code to convert Keras to TensorFlow:
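Roughly, it is the usual TF 1.x freeze-graph recipe; the sketch below uses placeholder file names (config.json, weights.h5, model.pb) and is not the verbatim script:
from keras import backend as K
from keras.models import model_from_json
from tensorflow.python.framework import graph_util, graph_io

# Rebuild the fine-tuned model from its config and weights (placeholder file names)
with open('config.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')

# Fold the variables of the live Keras/TensorFlow session into constants
sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [model.output.op.name])

# Serialize the frozen graph to a binary .pb that cv2.dnn can read
graph_io.write_graph(frozen, '.', 'model.pb', as_text=False)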
After this step, I followed your instructions.
@MennoK, I mean: you said that the results differ between OpenCV and Keras, but are you sure that you receive the expected results in Keras in the first place? It's about the dropout. Can you share how you compare the results from Keras and OpenCV?
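For example, a one-image check along these lines would show whether the raw outputs match (just a sketch: file names are placeholders and the preprocessing is the plain VGG16 mean subtraction, which has to mirror whatever the Keras pipeline really does):
import cv2
import numpy as np
from keras.applications.vgg16 import preprocess_input
from keras.models import model_from_json

img = cv2.imread('test.jpg')  # BGR image, placeholder file name

# OpenCV dnn: resize to 224x224 and subtract the BGR ImageNet means
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(224, 224),
                             mean=(103.939, 116.779, 123.68), swapRB=False, crop=False)
net = cv2.dnn.readNetFromTensorflow('model.pb', 'model_no_dropout.pbtxt')
net.setInput(blob)
out_cv = net.forward()

# Keras: same image in RGB order, preprocessed with the matching VGG16 routine
with open('config.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')
rgb = cv2.cvtColor(cv2.resize(img, (224, 224)), cv2.COLOR_BGR2RGB).astype(np.float32)
out_keras = model.predict(preprocess_input(rgb[np.newaxis]))

# The two 1x5 probability vectors should be (nearly) identical
print(np.abs(out_cv - out_keras).max())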
@dkurt I'm not sure I understand what you mean. I can't share all the code, but I'm sure I receive the expected results in Keras. However, a confusion matrix from OpenCV shows: