OpenCV 3.4 DNN forward with a custom, pre-trained Caffe model

asked 2018-06-05 07:55:58 -0600

SEbert

updated 2018-06-05 08:43:08 -0600

Hi!

I get the following error:

C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\dnn.cpp:287: error: (-215) inputs.size() == requiredOutputs in function cv::dnn::experimental_dnn_v3::DataLayer::getMemoryShapes

when I run the following C++ code:

cv::String sCaffeModelPath("net.caffemodel");
cv::String sCaffeProtoPath("net_deploy.prototxt");
cv::String sImageAlive("image.bmp");

// load the Caffe model and the image as a single-channel (grayscale) Mat
cv::dnn::Net net = cv::dnn::readNetFromCaffe(sCaffeProtoPath, sCaffeModelPath);
cv::Mat img = cv::imread(sImageAlive, cv::IMREAD_GRAYSCALE);

// build a 4D NCHW input blob and run inference
cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0f, cv::Size(100, 100));
net.setInput(inputBlob);
cv::Mat prob = net.forward();

The error occurs at the forward() call.

The prototxt starts with this:

layer {
  name: "data"
  type: "MemoryData"
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: 9
    channels: 1
    height: 95
    width: 95
  }
  transform_param {
    crop_size: 95
  }
}

The last layer is:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "InnerProduct1"
  top: "prob"
}

The input image is a grayscale image of size 100x100. How do I have to build the inputBlob? Do I need to resize the image to 95x95?
What does the error mean?

Any help would be appreciated very much.

EDIT:

Actually, I could solve my initial problem by changing the first layer of my prototxt to:

input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

#layer {
#  name: "data"
#  type: "MemoryData"
#  top: "data"
#  top: "label"
#  memory_data_param {
#    batch_size: 9
#    channels: 1
#    height: 95
#    width: 95
#  }
#  transform_param {
#    crop_size: 95
#  }
#}

and resizing the input image:

cv::resize(img, img, cv::Size(95, 95));
cv::Mat inputBlob = cv::dnn::blobFromImage(img,1.0f, cv::Size(95,95));
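
For reference, here is a minimal end-to-end sketch of the corrected pipeline (it assumes the prototxt already uses the input/input_dim form above; the file names are the ones from the question). As far as I can tell, blobFromImage already resizes the image to the given cv::Size, so the explicit cv::resize should be redundant:

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    // load the network and the grayscale image (1 channel, to match input_dim: 1)
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("net_deploy.prototxt", "net.caffemodel");
    cv::Mat img = cv::imread("image.bmp", cv::IMREAD_GRAYSCALE);
    if (img.empty()) { std::cerr << "could not read image" << std::endl; return 1; }

    // blobFromImage resizes to 95x95 itself and yields a 1x1x95x95 float blob (NCHW)
    cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0, cv::Size(95, 95));

    net.setInput(inputBlob);      // single input, bound to the "data" blob
    cv::Mat prob = net.forward(); // output of the final "prob" softmax layer

    std::cout << "output: " << prob.rows << "x" << prob.cols << std::endl;
    return 0;
}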

Unfortunately, the next error occurs:

C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\dnn.cpp:257: error: (-2) Can't create layer "BN1" of type "BN" in function cv::dnn::experimental_dnn_v3::LayerData::getLayerInstance

The beginning of my prototxt looks like this:

input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

layer {
  name: "Convolution1"
  type: "Convolution"
  bottom: "data"
  top: "Convolution1"
  convolution_param {
    num_output: 32
    bias_term: false
    pad: 3
    kernel_size: 7
    stride: 2
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "BN1"
  type: "BN"
  bottom: "Convolution1"
  top: "BN1"
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  #bn_param {
  #  scale_filler {
  #    type: "constant"
  #    value: 1.0
  #  }
  #  shift_filler {
  #    type: "constant"
  #    value: 0.0
  #  }
  #  var_eps: 1e-10
  #  moving_average: true
  #  decay: 0.95
  #}
}
layer {
  name: "ReLU1"
  type: "ReLU"
  bottom: "BN1"
  top: "BN1"
}

I had to comment out the bn_param block. I am wondering whether the type "BN" is supported at all, or only "BatchNorm" (just replacing BN with BatchNorm does not work...).
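
One possible route (an untested sketch, not a verified fix) would be to register "BN" as a custom layer before loading the net, which OpenCV's dnn module allows via cv::dnn::LayerFactory. The sketch below assumes, based on the two param blocks above, that the layer carries exactly two learned blobs, a per-channel scale and a per-channel shift, so that inference reduces to y = scale * x + shift. The real BN layer of the Caffe fork that trained the model may behave differently (e.g. keep running mean/variance blobs), so this would have to be checked against that fork:

#include <opencv2/dnn.hpp>

// Custom "BN" layer sketch for OpenCV 3.4.x.
// ASSUMPTION: blobs[0] = per-channel scale, blobs[1] = per-channel shift.
class BNLayer : public cv::dnn::Layer
{
public:
    BNLayer(const cv::dnn::LayerParams &params) : cv::dnn::Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new BNLayer(params));
    }

    // the output shape equals the input shape
    virtual bool getMemoryShapes(const std::vector<cv::dnn::MatShape> &inputs,
                                 const int requiredOutputs,
                                 std::vector<cv::dnn::MatShape> &outputs,
                                 std::vector<cv::dnn::MatShape> &internals) const
    {
        outputs = inputs;
        return false;
    }

    virtual void forward(std::vector<cv::Mat*> &inputs,
                         std::vector<cv::Mat> &outputs,
                         std::vector<cv::Mat> &internals)
    {
        const cv::Mat &src = *inputs[0];            // NCHW float blob
        cv::Mat &dst = outputs[0];
        const float *scale = blobs[0].ptr<float>(); // learned blobs loaded from the .caffemodel
        const float *shift = blobs[1].ptr<float>();
        const int planeSize = src.size[2] * src.size[3];
        for (int n = 0; n < src.size[0]; ++n)
            for (int c = 0; c < src.size[1]; ++c)
            {
                const float *in = src.ptr<float>(n, c);
                float *out = dst.ptr<float>(n, c);
                for (int i = 0; i < planeSize; ++i)
                    out[i] = scale[c] * in[i] + shift[c];
            }
    }
};

The layer would then be registered once, before the model is loaded, so the importer can resolve the type "BN":

cv::dnn::LayerFactory::registerLayer("BN", BNLayer::create);
cv::dnn::Net net = cv::dnn::readNetFromCaffe(sCaffeProtoPath, sCaffeModelPath);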

Any suggestions?


Comments

do I need to resize the image to 95x95

I think so. Try:

cv::Mat inputBlob = cv::dnn::blobFromImage(img,1.0f, cv::Size(95,95));
berak ( 2018-06-05 08:34:11 -0600 )

I have the same problem... Have you found a solution?

Traceback (most recent call last):
File "Object_detection_image.py", line 29, in <module>
    cvOut = cvNet.forward()
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\dnn\src\dnn.cpp:565: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'cv::dnn::experimental_dnn_34_v7::DataLayer::getMemoryShapes'
soufiane sabiri ( 2019-03-11 07:28:22 -0600 )

@soufiane sabiri -- please do not post answers here if you only have a question or a comment, thank you.

berak ( 2019-03-11 07:51:52 -0600 )

Hey,

my solution was not to use the MemoryData layer. Replace the first data layer in your prototxt with:

input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

(you need to adjust the numbers: the first is the batch size, the second the number of image channels, the third the height, the fourth the width)
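
For illustration, a short sketch (reusing the file names from the question above) of how the blob fed to setInput() then has to match those four numbers:

cv::dnn::Net net = cv::dnn::readNetFromCaffe("net_deploy.prototxt", "net.caffemodel");
cv::Mat img = cv::imread("image.bmp", cv::IMREAD_GRAYSCALE);        // 1 channel
cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(95, 95));  // -> 1x1x95x95 blob
net.setInput(blob, "data");   // name matches input: "data" in the prototxt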

SEbert ( 2019-03-12 02:55:45 -0600 )