ShuffleNet uses two network layers that are new to Caffe, namely ShuffleChannel and ConvolutionDepthwise.

asked 2018-07-08 06:00:27 -0600 by huangcl

updated 2018-07-09 07:00:01 -0600 by berak

```
layer {
  name: "shuffle3"
  type: "ShuffleChannel"
  bottom: "resx3_conv1"
  top: "shuffle3"
  shuffle_channel_param { group: 3 }
}
layer {
  name: "resx2_conv2"
  type: "ConvolutionDepthwise"
  bottom: "shuffle2"
  top: "resx2_conv2"
  convolution_param {
    num_output: 60
    kernel_size: 3
    stride: 1
    pad: 1
    bias_term: false
    weight_filler { type: "msra" }
  }
}
```
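For context, here is a minimal NumPy sketch of what a channel-shuffle layer with `group: 3` computes (an illustrative model of the operation only, not OpenCV's or Caffe's actual implementation):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups: reshape (N, C, H, W) to
    (N, g, C/g, H, W), swap the two channel axes, flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap group axis and per-group channel axis
    return x.reshape(n, c, h, w)

# With 6 channels and 3 groups, channel order 0..5 becomes 0,2,4,1,3,5
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 3).ravel())   # [0 2 4 1 3 5]
```

A useful property: shuffling with `g` groups is undone by shuffling with `C/g` groups, so the operation is a pure permutation of channels and carries no weights.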

At present, although OpenCV 3.4.2 already has a ShuffleChannel layer, the ShuffleNet forward pass crashes. I added caffe-mobilenet's ConvolutionDepthwise layer to OpenCV 3.4.2 and recompiled, but the MobileNet forward pass crashes as well. Neither crash happens in the call to cv::dnn::readNetFromCaffe(); they occur when net.forward() initializes the network after loading, and there is no diagnostic output in the VS output window. The complete shufflenet_deploy.prototxt file is as follows:

```
name: "shufflenet"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 24 pad: 1 kernel_size: 3 stride: 2 bias_term: false weight_filler { type: "msra" } } }
layer { name: "conv1_bn" type: "BatchNorm" bottom: "conv1" top: "conv1" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } }
layer { name: "conv1_scale" bottom: "conv1" top: "conv1" type: "Scale" scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } }
layer { name: "conv1_relu" type: "ReLU" bottom: "conv1" top: "conv1" }
layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "resx1_match_conv" type: "Pooling" bottom: "pool1" top: "resx1_match_conv" pooling_param { pool: AVE kernel_size: 3 stride: 2 } }
layer { name: "resx1_conv1" type: "Convolution" bottom: "pool1" top: "resx1_conv1" convolution_param { num_output: 54 kernel_size: 1 stride: 1 pad: 0 bias_term: false weight_filler { type: "msra" } } }
layer { name: "resx1_conv1_bn" type: "BatchNorm" bottom: "resx1_conv1" top: "resx1_conv1" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } }
layer { name: "resx1_conv1_scale" bottom: "resx1_conv1" top: "resx1_conv1" type: "Scale" scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } }
layer { name: "resx1_conv1_relu" type: "ReLU" bottom: "resx1_conv1" top: "resx1_conv1" }
layer { name: "resx1_conv2" type: "ConvolutionDepthwise" bottom: "resx1_conv1" top: "resx1_conv2" convolution_param { num_output: 54 kernel_size: 3 stride: 2 pad: 1 bias_term: false weight_filler { type: "msra" } } }
layer { name: "resx1_conv2_bn" type: "BatchNorm" bottom: "resx1_conv2" top: "resx1_conv2" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } }
layer { name: "resx1_conv2_scale" bottom: "resx1_conv2" top: "resx1_conv2" type: "Scale" scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } }
layer { name: "resx1_conv3" type: "Convolution" bottom: "resx1_conv2" top: "resx1_conv3" convolution_param { num_output: 216 kernel_size: 1 stride: 1 pad: 0 group: 3 bias_term: false weight_filler { type: "msra" } } }
layer { name: "resx1_conv3_bn" type: "BatchNorm" bottom: "resx1_conv3" top: "resx1_conv3" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } }
layer { name: "resx1_conv3_scale" bottom: "resx1_conv3" top: "resx1_conv3" type: "Scale" scale_param { filler { value: 1 } bias_term: true bias_filler { value: 0 } } }
layer { name: "resx1_concat" type: "Concat" bottom: "resx1_match_conv" bottom: "resx1_conv3" top: "resx1_concat" }
layer { name: "resx1_concat_relu" type: "ReLU" bottom: "resx1_concat" top: "resx1_concat" }
layer { name: "resx2_conv1" type: "Convolution" bottom: "resx1_concat" top: "resx2_conv1" convolution_param { num_output: 60 kernel_size: 1 stride: 1 pad: 0 group: 3 bias_term: false weight_filler { type: "msra" } } }
layer { name: "resx2_conv1_bn" type: "BatchNorm" bottom: "resx2_conv1" top: "resx2_conv1" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult ... (more)
```


Comments

congratulations, you broke this website's text editor ;(

do you have an (external) link to the pb / pbtxt files, so we can try this ?

berak (2018-07-08 06:28:29 -0600)

@huangcl, Depthwise convolution in Caffe is implemented by specifying num_output and group parameters of Convolution layer using the same value. I've never seen a separate ConvolutionDepthwise before. Can you add a reference please?

dkurt (2018-07-09 07:27:39 -0600)
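To illustrate dkurt's point above: depthwise convolution applies exactly one filter per input channel, which is what a standard grouped convolution degenerates to when group equals num_output. A minimal NumPy sketch of the operation (illustrative only; it ignores Caffe details such as bias, dilation, and channel multipliers):

```python
import numpy as np

def depthwise_conv2d(x, w, stride=1, pad=1):
    """x: (C, H, W) input; w: (C, kH, kW), one kernel per channel."""
    c, h, wd = x.shape
    kh, kw = w.shape[1:]
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    oh = (h + 2 * pad - kh) // stride + 1
    ow = (wd + 2 * pad - kw) // stride + 1
    out = np.zeros((c, oh, ow))
    for ch in range(c):            # each channel sees only its own kernel
        for i in range(oh):
            for j in range(ow):
                patch = xp[ch, i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[ch, i, j] = (patch * w[ch]).sum()
    return out

# Sanity check: a 3x3 kernel with only the center set to 1 is the identity
x = np.random.rand(2, 4, 4)
w = np.zeros((2, 3, 3)); w[:, 1, 1] = 1.0
assert np.allclose(depthwise_conv2d(x, w), x)
```

Because channels never mix, the weight tensor has shape (C, 1, kH, kW) rather than (C_out, C_in, kH, kW), which is why it is so much cheaper than a regular convolution.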

i think it's this one, and the idea of using it came from here

@dkurt, so, if the Convolution layer can already handle it, should we just replace ConvolutionDepthwise with Convolution in the prototxt? (i tried that, using data from here; it runs, but predicts wrong things)

another thing i noticed: the ShuffleChannel layer loses its name & type string on the way from the importer into the Dnn class (layer->name and layer->type both come up blank). unrelated, and no biggie, just saying.

berak (2018-07-10 03:04:11 -0600)

@berak, this model predicts class id 281 (tabby, tabby cat) with a score of 11.1932 for the following image: https://drive.google.com/file/d/1MUvX... (see a script here). I just replaced all the ConvolutionDepthwise layers with Convolution (note that OpenCV can deduce the group number from the shape of the convolution kernel, but you'd better add it explicitly for Caffe).

dkurt (2018-07-10 03:34:53 -0600)
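For reference, the replacement dkurt describes would turn the resx2_conv2 layer from the question into a plain Convolution with an explicit group equal to num_output (a sketch based on the parameters shown in the question; verify against your trained weights):

```
layer {
  name: "resx2_conv2"
  type: "Convolution"          # was: ConvolutionDepthwise
  bottom: "shuffle2"
  top: "resx2_conv2"
  convolution_param {
    num_output: 60
    group: 60                  # group == num_output makes it depthwise
    kernel_size: 3
    stride: 1
    pad: 1
    bias_term: false
    weight_filler { type: "msra" }
  }
}
```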

ah, alright. i had swapRB and crop wrong, also the wrong txt labels ;) thanks again !

berak (2018-07-10 03:47:25 -0600)

@berak, @dkurt, thank you very much for your help. link text Here are the shufflenet and mobilenet models that I have trained.

huangcl (2018-07-11 09:10:21 -0600)

Here is the implementation of ConvolutionDepthwiseLayer (caffe-mobilenet): link text

huangcl (2018-07-11 09:41:09 -0600)

@berak, @dkurt, thank you very much for your help. After adding ConvolutionDepthwise to OpenCV 3.4.2, ShuffleNet and MobileNet run correctly. I had forgotten to register the ConvolutionDepthwise layer in init.cpp before. But there is another question: is there only a type declaration for the BaseConvolutionLayer, with no concrete implementation? I only see concrete implementations for ConvolutionLayer and DeconvolutionLayer.

huangcl (2018-07-12 03:07:15 -0600)

@huangcl The ConvolutionDepthwise layer is translated into OpenCV's Convolution layer. It does the correct thing, so don't worry, but it is slightly slower than an optimized implementation would be. There is discussion about optimizing it in this ticket: https://github.com/opencv/opencv/issu...

tjwmmd2 (2019-10-18 11:14:36 -0600)