
layer tf.shape forbidden in opencv?

asked 2019-04-09 08:55:07 -0600

LBerger

updated 2019-04-09 09:15:47 -0600

Hi,

I get this error:

cv2.error: OpenCV(4.1.0-pre) G:\Lib\opencv\modules\dnn\src\dnn.cpp:524: error: (-2:Unspecified error) Can't create layer "mydataShape" of type "Shape" in function 'cv::dnn::dnn4_v20190122::LayerData::getLayerInstance'

In my graph I have this:

batch_size = tf.shape(trainDataNode,name="mydataShape")[0]

I read this https://github.com/opencv/opencv/issu... and http://answers.opencv.org/question/17...

Does this mean that tf.shape cannot be used in OpenCV?
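For reference, here is roughly how the graph is frozen and loaded (a minimal sketch, not my exact script; frozen.pb is a placeholder name):

import tensorflow as tf
import cv2

# Freeze the trained graph into a .pb file (TF 1.x API). 'noeudPrediction'
# is the output node defined in the model code below.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['noeudPrediction'])
    with tf.gfile.GFile('frozen.pb', 'wb') as f:
        f.write(graph_def.SerializeToString())

# Loading the frozen graph with OpenCV; this is the call that raises the
# "Can't create layer ... of type Shape" error when the standalone
# "mydataShape" node is present.
net = cv2.dnn.readNetFromTensorflow('frozen.pb')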

@dkurt

My code (sorry, it is 200 lines) is a fork of https://github.com/matroid/dlwithtf. When NOSHAPE is True I can use the .pb, but when it is False I cannot.

def model(data,train=False):
    # convolution layer: tf.nn.conv2d
    conv1 = tf.nn.conv2d(trainDataNode,conv1Filtres,strides=[1, 1, 1, 1],padding='SAME',name = 'nConv2d1')
    # add the bias with tf.nn.bias_add
    conv1plusbiais = tf.nn.bias_add(conv1, conv1Biais,name = 'nC1plusB1')
    # ReLU layer for non-linearity.
    relu1 = tf.nn.relu(conv1plusbiais)
    # Max pooling layer: takes the maximum over 1x2x2x1 windows with a 1x2x2x1 stride.
    # The result is a tensor roughly half the size of the input tensor.
    # With 'SAME' padding, incomplete windows are padded.
    groupement1 = tf.nn.max_pool(relu1,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name = 'maxPool1')
    conv2 = tf.nn.conv2d(groupement1,conv2Filtres,strides=[1, 1, 1, 1],padding='SAME')
    relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2Biais))
    groupement2 = tf.nn.max_pool(relu2,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME',name = 'maxPool2')
    # Reshape the groupement2 output for the fully connected layers.
    tailleGroupement2 = groupement2.get_shape().as_list()
    reshape = tf.reshape(groupement2,[tailleLots, tailleGroupement2[1] * tailleGroupement2[2] * tailleGroupement2[3]], name = 'reshape1')
    # Fully connected layer; the addition broadcasts the biases.
    hidden1= tf.nn.relu(tf.matmul(reshape, poidsRN1) + biaisRN1,name='hiddenRN1')
    hidden1d = tf.layers.dropout(hidden1, 0.5, seed=SEEDINIT,name = 'nDrop')
    hidden2 = tf.matmul(hidden1d, poidsRN2,name='matmulRN2') + biaisRN2
    # Predictions for the current training minibatch.
    train_prediction = tf.nn.softmax(hidden2,name = 'noeudPrediction')

    if train:
        loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=trainLabelsNode, logits=hidden2),name = 'noeudPerte')

        # L2 regularization for the fully connected parameters.
        regularizers = (tf.nn.l2_loss(poidsRN1)+ tf.nn.l2_loss(biaisRN1)+tf.nn.l2_loss(poidsRN2)+ tf.nn.l2_loss(biaisRN2))
        # Add the regularization term to the loss.
        loss += 5e-4 * regularizers

        # Optimizer: set up a variable that's incremented once per batch and
        # controls the learning rate decay.
        batch = tf.Variable(0, dtype=tf.float32)
        # Decay once per epoch, using an exponential schedule starting at 0.01.
        learning_rate = tf.train.exponential_decay(
            0.01,                # Base learning rate.
            batch * BATCH_SIZE,  # Current index into the dataset.
            trainSize,          # Decay step.
            0.95,                # Decay rate.
            staircase=True, name = 'noeudTaux')
        # Use simple momentum for the optimization.
        optimizer = tf.train.MomentumOptimizer(learning_rate,0.9).minimize(loss,global_step=batch,name = 'noeudOptimiseur')
        return optimizer, loss, learning_rate
    return train_prediction, None, None


volume = 'f:/'
# Loading the data ...
(more)

Comments

In regular cases the Shape node is used in subgraphs. For example, a Flatten operation in TensorFlow may look like a subgraph which takes the Shape of the input node, multiplies the dimensions, and then applies a Reshape. There is a list of common patterns which we can fuse into a single layer: https://github.com/opencv/opencv/blob....

Also, how is batch_size used further on?

dkurt ( 2019-04-09 09:04:12 -0600 )
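To illustrate the pattern described in the comment above (a sketch, not code from the thread; x and its dimensions are made up): in TF 1.x, a flatten built from tf.shape leaves a Shape subgraph in the exported GraphDef, while a constant -1 produces a single Reshape node:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 7, 7, 64])

# Dynamic flatten: tf.shape puts a Shape (plus Pack) subgraph into the
# exported GraphDef, which an importer must recognize and fuse.
n = tf.shape(x)[0]
flat_dynamic = tf.reshape(x, [n, 7 * 7 * 64])

# Constant flatten: a single Reshape node; -1 lets the batch dimension
# be inferred at run time.
flat_static = tf.reshape(x, [-1, 7 * 7 * 64])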

1 answer


answered 2019-04-09 09:28:08 -0600

dkurt

May I ask you to try replacing

reshape = tf.reshape(groupement2,[tailleLots, tailleGroupement2[1] * tailleGroupement2[2] * tailleGroupement2[3]], name = 'reshape1')

with

reshape = tf.reshape(groupement2,[-1, tailleGroupement2[1] * tailleGroupement2[2] * tailleGroupement2[3]], name = 'reshape1')

?
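(Presumably this works because tailleLots is derived from tf.shape when NOSHAPE is False, so the target shape of the Reshape drags a Shape/Pack subgraph into the .pb. With -1 the target shape is a plain constant, TensorFlow emits a single Reshape node, and the batch dimension is inferred at run time.)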


Comments

Thanks, it works now.

LBerger ( 2019-04-09 13:09:17 -0600 )
