OpenCV 3.4 DNN forward() with a custom pre-trained Caffe model
Hi!
I get the following error:
C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\dnn.cpp:287: error: (-215) inputs.size() == requiredOutputs in function cv::dnn::experimental_dnn_v3::DataLayer::getMemoryShapes
when I run the following C++ code:
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

cv::String sCaffeModelPath("net.caffemodel");
cv::String sCaffeProtoPath("net_deploy.prototxt");
cv::String sImageAlive("image.bmp");

// load the network and read the input image as single-channel
cv::dnn::Net net = cv::dnn::readNetFromCaffe(sCaffeProtoPath, sCaffeModelPath);
cv::Mat img = cv::imread(sImageAlive, cv::IMREAD_GRAYSCALE);

// convert the image to a 4D NCHW blob and run the network
cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0f, cv::Size(100, 100));
net.setInput(inputBlob);
cv::Mat prob = net.forward();
The error occurs at the forward() call.
The prototxt starts with this:
layer {
  name: "data"
  type: "MemoryData"
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: 9
    channels: 1
    height: 95
    width: 95
  }
  transform_param {
    crop_size: 95
  }
}
The last layer is:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "InnerProduct1"
  top: "prob"
}
The input image is a grayscale image of size 100x100.
How do I have to use inputBlob? Do I need to resize the image to 95x95?
What does the error mean?
Any help would be very much appreciated.
EDIT:
I was actually able to solve my initial problem by changing the first layer of my prototxt to:
input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

#layer {
#  name: "data"
#  type: "MemoryData"
#  top: "data"
#  top: "label"
#  memory_data_param {
#    batch_size: 9
#    channels: 1
#    height: 95
#    width: 95
#  }
#  transform_param {
#    crop_size: 95
#  }
#}
and by resizing the input image:

cv::resize(img, img, cv::Size(95, 95));
cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0f, cv::Size(95, 95));
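Note that blobFromImage can also do the resizing itself: in OpenCV 3.4 its crop parameter defaults to true, which resizes while preserving the aspect ratio and then center-crops, so the explicit cv::resize is only needed if you want a plain stretch. A stretch-only variant as a sketch (swapRB and crop passed explicitly):

// let blobFromImage stretch the 100x100 image to 95x95 directly;
// swapRB=false (single channel) and crop=false (no center crop)
cv::Mat inputBlob = cv::dnn::blobFromImage(img, 1.0, cv::Size(95, 95),
                                           cv::Scalar(), false, false);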
Unfortunately, the next error occurs:
C:\build\master_winpack-build-win64-vc15\opencv\modules\dnn\src\dnn.cpp:257: error: (-2) Can't create layer "BN1" of type "BN" in function cv::dnn::experimental_dnn_v3::LayerData::getLayerInstance
The beginning of my prototxt now looks like this:
input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

layer {
  name: "Convolution1"
  type: "Convolution"
  bottom: "data"
  top: "Convolution1"
  convolution_param {
    num_output: 32
    bias_term: false
    pad: 3
    kernel_size: 7
    stride: 2
    weight_filler {
      type: "msra"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "BN1"
  type: "BN"
  bottom: "Convolution1"
  top: "BN1"
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0.0
  }
  #bn_param {
  #  scale_filler {
  #    type: "constant"
  #    value: 1.0
  #  }
  #  shift_filler {
  #    type: "constant"
  #    value: 0.0
  #  }
  #  var_eps: 1e-10
  #  moving_average: true
  #  decay: 0.95
  #}
}
layer {
  name: "ReLU1"
  type: "ReLU"
  bottom: "BN1"
  top: "BN1"
}
I had to comment out the bn_param block. I am wondering whether the type "BN" is supported at all, or only "BatchNorm" (simply replacing BN with BatchNorm does not work...).
Any suggestions?
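One direction I am considering: the importer throws "Can't create layer" for any type that is not registered in OpenCV's layer factory, so it might be possible to register "BN" as a custom layer. Below is a minimal sketch, assuming the "BN" layer stores the per-channel scale and shift as its two learned blobs (as in the SegNet-style Caffe fork); the class name BNLayer and the helper registerCustomLayers() are made up:

#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.hpp>  // cv::dnn::LayerFactory

// hypothetical stand-in for the Caffe "BN" layer: y = scale * x + shift, per channel
class BNLayer : public cv::dnn::Layer
{
public:
    BNLayer(const cv::dnn::LayerParams &params) : cv::dnn::Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new BNLayer(params));
    }

    virtual bool getMemoryShapes(const std::vector<std::vector<int> > &inputs,
                                 const int,
                                 std::vector<std::vector<int> > &outputs,
                                 std::vector<std::vector<int> > &) const
    {
        outputs = inputs;  // BN preserves the input shape
        return false;
    }

    virtual void forward(std::vector<cv::Mat*> &inputs, std::vector<cv::Mat> &outputs,
                         std::vector<cv::Mat> &)
    {
        const cv::Mat &src = *inputs[0];
        cv::Mat &dst = outputs[0];
        const float *scale = blobs[0].ptr<float>();  // learned per-channel scale
        const float *shift = blobs[1].ptr<float>();  // learned per-channel shift
        const int area = src.size[2] * src.size[3];  // H*W
        for (int n = 0; n < src.size[0]; ++n)
            for (int c = 0; c < src.size[1]; ++c)
            {
                const float *in = src.ptr<float>(n, c);
                float *out = dst.ptr<float>(n, c);
                for (int i = 0; i < area; ++i)
                    out[i] = scale[c] * in[i] + shift[c];
            }
    }
};

void registerCustomLayers()
{
    // must run before readNetFromCaffe() so the importer can instantiate "BN"
    cv::dnn::LayerFactory::registerLayer("BN", BNLayer::create);
}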
I have the same problem... Have you found a solution?
Hey,
my solution was not to use the MemoryData layer. As far as I can tell, MemoryData declares two tops ("data" and "label") while net.setInput() supplies only one blob, which is what the inputs.size() == requiredOutputs check complains about. Replace the first data layer in your prototxt with

input: "data"
input_dim: 1
input_dim: 1
input_dim: 95
input_dim: 95

(you need to adjust the numbers: the first is the batch size, the second the number of image channels, the third the height, the fourth the width).
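For completeness, a minimal end-to-end sketch of the working pipeline (file names taken from the question above):

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("net_deploy.prototxt", "net.caffemodel");
    cv::Mat img = cv::imread("image.bmp", cv::IMREAD_GRAYSCALE);
    cv::resize(img, img, cv::Size(95, 95));  // match the input_dim values in the prototxt

    // NCHW blob of shape 1x1x95x95, as declared by the input fields
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(95, 95));
    net.setInput(blob);
    cv::Mat prob = net.forward("prob");  // output of the final Softmax layer
    return 0;
}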