Errors while executing the forward step with a Torch model

asked 2018-03-27 02:14:05 -0600 by BigMao Chen

updated 2018-03-27 02:16:40 -0600 by berak

I have trained a Torch model and it works well in Torch/Lua. After I import it into OpenCV and execute the forward step, the following error occurs: "Incorrect size of input array (Inconsistent shape for ConcatLayer) in cv::dnn::ConcatLayerImpl::getMemoryShapes", file dnn/src/layers/concat_layer.cpp, line 94. All the code can be found here: fan2.lua is the file that creates the Torch model and main.cpp is the file that loads the model and executes the forward step.
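For context, a minimal sketch of the load-and-forward pattern in question (the model file name, input image, and preprocessing below are assumptions for illustration, not the actual main.cpp):

    // Minimal sketch: load a serialized Torch model and run one forward pass.
    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        // "fan2.t7" is an assumed file name for the serialized model.
        cv::dnn::Net net = cv::dnn::readNetFromTorch("fan2.t7");
        cv::Mat img = cv::imread("input.jpg");
        // The model takes a 1 x 3 x 256 x 256 blob (see the comments below).
        cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 255, cv::Size(256, 256));
        net.setInput(blob);
        cv::Mat out = net.forward(); // the ConcatLayer error is raised here
        return 0;
    }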


Comments

@BigMao Chen, please attach a serialized model so that people can reproduce the error easily without a Torch installation.

dkurt ( 2018-03-27 03:15:39 -0600 )

@dkurt Sorry, I had some trouble uploading my model; here is the model, and I have updated the custom layer information (that you provided) on GitHub. Thank you!

BigMao Chen ( 2018-03-27 03:59:09 -0600 )

@dkurt Hi, sorry to bother you again. I found that this error appears when executing the forward step in a ConcatLayer whose inputs are 4 blobs (1 x 256 x 64 x 64, 1 x 128 x 32 x 32, 1 x 64 x 32 x 32, 1 x 64 x 32 x 32) while its output is a single blob (1 x 256 x 64 x 64). Could you please give me some advice about why this happens? I don't understand how the ConcatLayer works in OpenCV. The input of my model is 1 x 3 x 256 x 256, and my network splits the data into at most two flows before each ConcatLayer, so why does the ConcatLayer above have 4 inputs? Thanks for your patience.
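For reference, a simplified, hypothetical re-creation of the failing shape check with exactly these inputs (a sketch of the logic in ConcatLayerImpl::getMemoryShapes, not the actual OpenCV code):

    #include <iostream>
    #include <vector>

    int main()
    {
        // Input shapes reported by the failing ConcatLayer (N x C x H x W).
        std::vector<std::vector<int> > inputs = {
            {1, 256, 64, 64}, {1, 128, 32, 32}, {1, 64, 32, 32}, {1, 64, 32, 32}
        };
        const int cAxis = 1; // concatenation along the channel axis
        std::vector<int> outShape = inputs[0];
        for (std::size_t i = 1; i < inputs.size(); ++i)
            for (int axis = 0; axis < 4; ++axis)
            {
                if (axis == cAxis)
                    outShape[axis] += inputs[i][axis]; // channel counts add up
                else if (inputs[i][axis] != outShape[axis])
                {
                    // Mirrors the CV_Error: 32x32 spatial sizes cannot be
                    // concatenated with 64x64 ones along the channel axis.
                    std::cout << "Inconsistent shape at input " << i
                              << ", axis " << axis << std::endl;
                    return 1;
                }
            }
        return 0;
    }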

BigMao Chen ( 2018-03-29 09:33:26 -0600 )

@BigMao Chen, That's OK, don't worry. The Concat layer concatenates multidimensional blobs into one. For example, you can concatenate several images by columns into a single row: the resulting image has a number of columns equal to the sum of the images' widths. If the images have different heights, one could pad with zeros up to the maximum height. The Concat layer does the same, but for 3- or 4-dimensional blobs.
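For intuition, a minimal 2-D sketch of this idea using cv::hconcat (note that, like OpenCV's ConcatLayer, cv::hconcat requires the non-concatenated dimension to match and does not zero-pad):

    // Concatenate two same-height images by columns: 64x64 + 64x32 -> 64x96.
    #include <opencv2/core.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat a(64, 64, CV_32F, cv::Scalar(1)); // 64 rows x 64 cols
        cv::Mat b(64, 32, CV_32F, cv::Scalar(2)); // 64 rows x 32 cols
        cv::Mat out;
        cv::hconcat(std::vector<cv::Mat>{a, b}, out);
        std::cout << out.rows << " x " << out.cols << std::endl; // 64 x 96
        return 0;
    }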

dkurt ( 2018-03-29 10:14:10 -0600 )

@dkurt Does that mean I can fix the error above by adding a zero-padding step myself in the ConcatLayerImpl class, so that the 3rd and 4th dimensions of the inputs become equal to those of the output, and this check

    // every axis except the concatenation axis must match the output shape
    if (curAxis != cAxis && outputs[0][curAxis] != curShape[curAxis])
        CV_Error(Error::StsBadSize, "Inconsistent shape for ConcatLayer");

will never execute?

BigMao Chen ( 2018-03-29 20:09:05 -0600 )

@BigMao Chen, I think this is a bug, because only nn.DepthConcat adds zero padding. So we need to reproduce your experiment carefully to figure out whether the problem is in the importer or in the custom layer you created. In any case, the PR with custom layers is not merged yet, so for now this issue has lower priority than ones affecting the current master branch.

dkurt ( 2018-03-30 01:39:53 -0600 )

@BigMao Chen, I found that the problem is in the import of the CAddTable / JoinTable layers: they are connected to every unconnected blob, which is the wrong strategy in the case of embedded residual connections. We're going to fix it. Thanks!

dkurt ( 2018-03-30 07:50:02 -0600 )

@BigMao Chen, could you please test the changes from the pull request https://github.com/opencv/opencv/pull... ?

dkurt ( 2018-03-31 03:24:05 -0600 )

Oh, sorry to reply to you so late. I will test it as soon as possible, thanks!

BigMao Chen ( 2018-04-01 20:50:17 -0600 )

@dkurt I have checked that the forward function now works correctly. But I haven't finished the code that transforms the forward output into the final result, so it will take me some time to check whether the result produced by OpenCV matches the one produced by Torch. Thanks for your help again! I will report the test results once I have finished.

BigMao Chen ( 2018-04-02 01:27:48 -0600 )