Dimension error with inference using TensorFlow exported model

asked 2017-10-20 11:12:22 -0500

jingyibo123

updated 2017-10-24 04:34:43 -0500

Hello everyone,

I'm currently trying to import a TensorFlow trained model (.pb file) with the dnn module, but I'm stuck at inference.

The TensorFlow model is a basic MNIST CNN model (here), with some minor changes (removing the random parts).

X = tf.placeholder(tf.float32, [None, 784])
X_img = tf.reshape(X, [-1, 28, 28, 1])
Y = tf.placeholder(tf.float32, [None, 10])

W1 = tf.Variable(tf.ones([3, 3, 1, 32])*0.005)
L1 = tf.nn.conv2d(X_img, W1, strides=[1, 1, 1, 1], padding='SAME')
...

The model is exported after freeze_graph and optimize_for_inference

The only input is the 28*28 gray image X = tf.placeholder(tf.float32, [None, 784]). The following Python OpenCV code works perfectly:

img = cv2.imread(path)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray_float = np.array(1.0 - gray / 255.0, dtype = np.float32)
input = gray_float.flatten()
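To make the expected input explicit, here is a minimal numpy-only sketch of the same preprocessing (a dummy array stands in for the image loaded with cv2.imread/cvtColor):

```python
import numpy as np

# Dummy stand-in for the 28x28 grayscale image from cv2.imread + cvtColor
gray = np.zeros((28, 28), dtype=np.uint8)

# Invert and scale to [0, 1], as in the snippet above
gray_float = np.array(1.0 - gray / 255.0, dtype=np.float32)

# The placeholder expects a flat vector of 784 float32 values
flat = gray_float.flatten()
print(flat.shape)  # (784,)
print(flat.dtype)  # float32
```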

However in C++:

Net net = readNetFromTensorflow("../../resource/save1/model2.pb");

Mat img = imread("2.png");
Mat gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
gray = gray.reshape(1, 1);
bitwise_not(gray, gray);
gray.convertTo(gray, CV_32F);
gray = gray / 255.0;

net.setInput(gray, "Placeholder_1");  // gray is (1, 28*28, CV_32F)

I get this error on the setInput line:

OpenCV Error: Assertion failed (inpCn % ngroups == 0 && outCn % ngroups == 0) in getMemoryShapes, file C:\Users\E507067\Downloads\openCV\sources\modules\dnn\src\layers\convolution_layer.cpp, line 190

The error seems to be related to the shape of the input gray, but I can't get it to work. Does anyone have an idea how to solve this? Thanks for any help.


Comments

@jingyibo123, please check the value of gray.dims (it should be 2) and the values of gray.cols and gray.rows: they should be -1, while gray.size[0] and gray.size[1] should be 1 and 784 correspondingly. If not, do a reshape using https://docs.opencv.org/master/d3/d63... passing a vector with shape values (1, 784). The problem is interpretation: canonical dimensions are batch x channels x height x width. After reshape it has 1 row and 784 columns, but in fact it's a batch of one vector with 784 values.

dkurt (2017-10-20 14:21:34 -0500)
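The batch x channels x height x width interpretation described above can be sketched in numpy (shape values only; in OpenCV itself the equivalent would be a multi-dimensional cv::Mat::reshape or building the input with cv::dnn::blobFromImage):

```python
import numpy as np

# A flattened 28x28 grayscale image as the C++ code produces it:
# 1 row, 784 columns
flat = np.zeros((1, 784), dtype=np.float32)

# What the dnn module canonically expects:
# batch x channels x height x width
blob = flat.reshape(1, 1, 28, 28)
print(blob.shape)  # (1, 1, 28, 28)
```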

@dkurt, thanks for your detailed response.

After reshape(1, 1), Mat gray has dims = 2, rows = 1, cols = 784, gray.size[0] = 1, gray.size[1] = 784. If I change it to reshape(1, {1, 784}), Mat gray is no different and the same error occurs.

Passing a new empty Mat gray(1, 28*28, CV_32F); doesn't work either.

jingyibo123 (2017-10-23 02:36:07 -0500)

@jingyibo123, I have the same error with an MNIST model trained on TensorFlow, but my error occurs when forward() is executed.

Young Min Shin (2017-10-23 04:19:09 -0500)

@Young Min Shin , could you give more detail, if possible, on how you imported the test image and called setInput? Thanks a lot.

jingyibo123 (2017-10-23 10:30:16 -0500)