net.Forward outputs differ from the keras outputs [closed]
I have an issue with classification when using net.Forward. First, I trained my model in Keras with TensorFlow as the backend, then saved it as a .pb file. I can import it successfully in OpenCV with ReadNetFromTensorflow in C# (I'm using the OpenCvSharp NuGet package). The problem is that my predictions are not the same as the ones I get in Keras; the accuracy differs a lot. Here is the link for the .pb file. By the way, I was originally using a Flatten layer, but since it is not supported by OpenCV (I was getting an error while loading the model), I replaced it with a Reshape layer. I don't know what else to try. Any help? Here is the code I used for training:
import tensorflow as tf
import keras as K

model = K.models.Sequential()
model.add(K.layers.Conv2D(64,kernel_size=(3,3),strides=(1, 1),input_shape=input_shape, padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.Conv2D(64,kernel_size=(3,3),strides=(1, 1), padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.MaxPooling2D(pool_size=(2,2)))
model.add(K.layers.Conv2D(128,kernel_size=(3,3),strides=(1, 1), padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.Conv2D(128,kernel_size=(3,3),strides=(1, 1), padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.MaxPooling2D(pool_size=(2,2)))
model.add(K.layers.Conv2D(256,kernel_size=(3,3),strides=(1, 1), padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.Conv2D(256,kernel_size=(3,3),strides=(1, 1), padding='same'))
model.add(K.layers.BatchNormalization())
model.add(K.layers.Activation('relu'))
model.add(K.layers.AveragePooling2D(pool_size=(2,2)))
# From Flatten to Reshape (Flatten is not supported by the OpenCV importer)
_, h, w, c = model.output_shape
model.add(K.layers.Permute([1, 2, 3]))  # identity permute: tags the data layout as NHWC for OpenCV
model.add(K.layers.Reshape((h * w * c,)))
model.add(K.layers.Dense(num_classes, activation='softmax', name = "output_node"))
sgd = K.optimizers.SGD(lr=lr, decay=decay, momentum=0.9, nesterov=False)
model.compile(loss=K.losses.categorical_crossentropy,
              optimizer=sgd,  # pass the configured optimizer instance, not the string 'sgd'
              metrics=['accuracy'])
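For what it's worth, the Permute + Reshape replacement should be numerically identical to Flatten for a channels-last model: Keras Flatten just reshapes an NHWC tensor to (batch, h*w*c) in row-major order, and the Permute is an identity that only tags the layout for OpenCV. A small NumPy sketch (standalone, no Keras required) illustrating the equivalence:

```python
import numpy as np

# A dummy NHWC activation map: batch=1, h=2, w=3, c=4
x = np.arange(24, dtype=np.float32).reshape(1, 2, 3, 4)

# What Keras Flatten does for channels_last data: a row-major reshape
flattened = x.reshape(1, -1)

# What the Permute([1, 2, 3]) + Reshape((h*w*c,)) pair does:
# the permute is the identity permutation, so the reshape
# produces exactly the same element ordering
permuted = np.transpose(x, (0, 1, 2, 3))  # identity permutation
reshaped = permuted.reshape(1, 2 * 3 * 4)

assert np.array_equal(flattened, reshaped)
```

So the swap itself should not change the network's outputs; if the numbers differ, the cause is more likely elsewhere (e.g. preprocessing).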
Then I use the following code to obtain the .pb file:
sess = K.backend.get_session()
# Freeze the trained variables into constants and serialize the inference graph
constant_graph = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ['output_node/Softmax'])
tf.train.write_graph(constant_graph, "", "graph.pb", as_text=False)
Then, in C#, I use the following code to run inference:
var model = System.IO.Path.Combine(Location, Model);
var net = CvDnn.ReadNetFromTensorflow(model);
var image = System.IO.Path.Combine(Location, sampleImage);
var frame = Cv2.ImRead(image);
var blob = CvDnn.BlobFromImage(frame, 1.0 / 255.0, new OpenCvSharp.Size(32, 15), new Scalar(0, 0, 0), true, false);
net.SetInput(blob);
// get the output layer names
var outNames = net.GetUnconnectedOutLayersNames();
// create Mats for the output layers
var outs = outNames.Select(_ => new Mat()).ToArray();
using (var predictions = net.Forward(outNames[0]))
{
    PrintMat ...
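As a debugging aid: when Keras and OpenCV DNN disagree, the most common culprit is a preprocessing mismatch rather than a broken conversion. The BlobFromImage call above resizes to Size(32, 15) (note OpenCV's Size is width x height), scales by 1/255, swaps B and R (swapRB=true), and emits an NCHW blob; all of that has to match exactly what the Keras model saw at training time. A minimal NumPy sketch of what that call produces from an already-resized BGR frame (hypothetical helper; mean subtraction and cropping omitted, as they are zero/false above):

```python
import numpy as np

def blob_from_image(img_bgr, scale=1.0 / 255.0, swap_rb=True):
    """Mimic cv2.dnn.blobFromImage for an already-resized HWC uint8 image:
    optional B<->R swap, scaling, and HWC -> NCHW reordering."""
    img = img_bgr.astype(np.float32)
    if swap_rb:                    # BGR -> RGB, as swapRB=true does
        img = img[:, :, ::-1]
    img *= scale                   # same as the 1/255 scalefactor
    return np.transpose(img, (2, 0, 1))[None, ...]  # HWC -> NCHW, add batch dim

# A tiny fake 15x32 (height x width) BGR "image"
frame = np.random.randint(0, 256, size=(15, 32, 3), dtype=np.uint8)
blob = blob_from_image(frame)
print(blob.shape)  # (1, 3, 15, 32): the layout net.SetInput receives
```

If the Keras pipeline read RGB images (e.g. via PIL) and divided by 255, the settings above are consistent; if it read BGR or normalized differently, every prediction will be skewed even though the graph converted correctly.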