Feeding image input to OpenCV DNN using cv2.dnn.blobFromImage()

I'm using OpenCV 4.3.0. I have a working TensorFlow Python implementation and I'm trying to port it to OpenCV DNN.

I'm using the https://github.com/mpatacchiola/deepgaze network, and the layer details are here.

I've frozen the model, and OpenCV DNN loads it successfully.
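
For reference, a minimal sketch of the freezing step with the TF1 API; the checkpoint paths and the output node name below are placeholders, not the exact names from my graph:

        import tensorflow as tf

        # Placeholder checkpoint paths and output node name
        with tf.Session() as sess:
            saver = tf.train.import_meta_graph("model.meta")
            saver.restore(sess, "model")
            # Bake the variables into constants so the graph is self-contained
            frozen = tf.graph_util.convert_variables_to_constants(
                sess, sess.graph_def, ["cnn_pitch_output"])
            with tf.gfile.GFile("model.pb", "wb") as f:
                f.write(frozen.SerializeToString())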

My TensorFlow Python implementation looks like this:

        import cv2
        import numpy as np

        image = cv2.imread("1.jpg")
        image_resized = cv2.resize(image, (64, 64), interpolation=cv2.INTER_AREA)
        image_normalized = np.add(image_resized, -127)  # center the input around zero
        feed_dict = {self.tf_pitch_input_vector: image_normalized}
        out = self._sess.run([self.cnn_pitch_output], feed_dict=feed_dict)

At the beginning of my network there is a reshape layer that looks like this:

        X = tf.reshape(data, shape=[-1, 64, 64, 3])

The image is fed in through the feed_dict, reshaped by this first layer as shown above, and the network proceeds from there. This TensorFlow Python version works well.
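
To make the shapes concrete, the reshape turns the single normalized HWC image into a batch of one in NHWC order; a NumPy-only sketch of the same shape change:

        import numpy as np

        image_normalized = np.zeros((64, 64, 3), dtype=np.float32)  # HWC, as fed above
        X = image_normalized.reshape(-1, 64, 64, 3)                 # batch of one, NHWC
        print(X.shape)  # (1, 64, 64, 3)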

My OpenCV DNN implementation looks like this:

        image = cv2.imread("1.jpg")
        net = cv2.dnn.readNetFromTensorflow("model.pb")
        resized = cv2.resize(image, (64, 64), interpolation=cv2.INTER_AREA)
        # scalefactor=1, size=(64, 64), mean=-127, no channel swap, no crop
        input_blob = cv2.dnn.blobFromImage(resized, 1, (64, 64), -127, swapRB=False, crop=False)
        print("blob: shape {}".format(input_blob.shape))
        input_blob = input_blob.reshape(-1, 64, 64, 3)  # NCHW -> NHWC, to match the reshape layer
        print("blob: new shape {}".format(input_blob.shape))
        net.setInput(input_blob)
        out = net.forward()

The shapes printed by the code above look like this:

        blob: shape (1, 3, 64, 64)
        blob: new shape (1, 64, 64, 3)

Problem: the network output does not match between TensorFlow Python and OpenCV DNN, which suggests that the data being fed to OpenCV DNN is different. I'm sure I'm doing something wrong either in blobFromImage() or after it.
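
A minimal sketch of how the two preprocessed arrays can be compared directly (the transpose back to HWC is only for the element-wise comparison; note that blobFromImage subtracts its mean argument before scaling):

        import cv2
        import numpy as np

        image = cv2.imread("1.jpg")
        resized = cv2.resize(image, (64, 64), interpolation=cv2.INTER_AREA)

        # Manual preprocessing, as in the TensorFlow version (HWC, BGR)
        manual = resized.astype(np.float32) - 127

        # blobFromImage subtracts the mean before scaling, so the sign
        # convention differs from np.add(resized, -127) above
        blob = cv2.dnn.blobFromImage(resized, 1, (64, 64), -127, swapRB=False, crop=False)

        # Bring the blob back to HWC for an element-wise comparison
        blob_hwc = blob[0].transpose(1, 2, 0)
        print(np.abs(manual - blob_hwc).max())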

Could someone let me know what I'm missing here?

Thanks in advance!