
Revision history

revision 1 (initial version)

Concatenate failing check for #axes?

For quick background, I'm taking an AlexNet-like network and passing an image through it. At the FC7 layer, I flatten and then concatenate with hand-crafted features of dimension 20. I was able to create a training.prototxt and train the network without issues, and now I am trying to deploy it. Here is the error I get when I run:

concat_layer.cpp:38] Check failed: num_axes == bottom[i]->num_axes() (2 vs. 4) All inputs must have the same #axes.
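
If I'm reading the check right, it's just comparing the number of axes on the two bottoms of the Concat layer. Here's a minimal sketch of what I think is happening at deploy time (the flattened length 4096 is made up; only the axis counts matter):

#include <cstdio>
#include <vector>
#include "caffe/blob.hpp"

int main() {
  // flatten6: Flatten collapses (N, C, H, W) down to 2 axes
  caffe::Blob<float> flat(std::vector<int>{1, 4096});
  // XFeat, as declared by my four input_dim lines: 4 axes
  caffe::Blob<float> feat(std::vector<int>{1, 1, 1, 20});
  // Prints "2 vs 4" -- the same numbers as in the failed check
  std::printf("%d vs %d\n", flat.num_axes(), feat.num_axes());
  return 0;
}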

For reference, here is the C++ code; it compiles successfully, but on running, returns the error above.

#define CPU_ONLY
#include <cstring>
#include <cstdlib>
#include <vector>

#include <string>
#include <iostream>
#include <stdio.h>
#include "caffe/caffe.hpp"
#include "caffe/util/io.hpp"
#include "caffe/blob.hpp"

using namespace caffe;
using namespace std;

int main(int argc, char** argv) {
  // Build the net from the deploy definition, then load the trained weights.
  boost::shared_ptr<Net<float> > net;
  net.reset(new Net<float>("/path/to/deploy.prototxt", caffe::TEST));
  net->CopyTrainedLayersFrom("/path/to/weights.caffemodel");
  return 0;
}

Lastly, here are the relevant parts of my deploy.prototxt.

name: "Network"
input: "X"
input_dim: 1
input_dim: 1
input_dim: 96
input_dim: 96
input: "XFeat"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 20
# ... layers applying conv on X here ...
# Drop2
layer {
    name: "drop2"
    type: "Dropout"
    bottom: "norm2"
    top: "drop2"
    dropout_param {
        dropout_ratio: 0.5
    }
}

# Flatten to concatenate with features
layer {
    name: "flatten6"
    type: "Flatten"
    bottom: "drop2"
    top: "flatten6"
}
# Concatenate with XFeat
layer {
    name: "concat6"
    type: "Concat"
    bottom: "flatten6"
    bottom: "XFeat"
    top: "concat6"
    concat_param {
        axis: 1
    }
}

Is there something wrong with what I'm doing? It still trained successfully, so I figured this is the way to do it. I've also tried changing the input_dims for XFeat to [1, 20], but that still fails (albeit at a different spot with a different message). Is there an underlying issue with the caffemodel and the deploy.prototxt somehow giving conflicting sizes?
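
For reference, the two-axis variant I tried for XFeat was declared like this (still using the legacy input_dim syntax, which I believe expects four dims per input, and may itself explain the different failure):

input: "XFeat"
input_dim: 1
input_dim: 20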

revision 2

Concatenate failing check for #axes?

Using OpenCV, I am trying to classify an image. The network takes in an image and hand-crafted features of dimension 20 and late-fuses them at the fully-connected layers before classification. I know the image-only classification pipeline works properly, because I was able to take a trained network and apply it to just images.

As a dummy example (identifiers anonymized), this doesn't seem to run:

// Assumes #include <opencv2/core.hpp>, #include <opencv2/dnn.hpp>,
// using namespace cv; and a dnn::Net net loaded beforehand
// (see the importer sketch below).
const int n = 96;             // image side length (96 in my actual network)
Size size(n, n);
Mat1f imageBlobMat(size);
// Code to fill values for imageBlobMat here (preprocessing the image)

Size sizeFeatures(20, 1);     // 20 hand-crafted features, as a 1x20 row
Mat1f featBlobMat(sizeFeatures);
// Code to normalize these features here

dnn::Blob imageBlob = dnn::Blob(imageBlobMat);
net.setBlob(".image", imageBlob);
dnn::Blob featBlob = dnn::Blob(featBlobMat);
net.setBlob(".feat", featBlob);
net.forward();
dnn::Blob prob = net.getBlob("prob");
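
(The net itself is created earlier from the deploy prototxt and the caffemodel; roughly like this, if I'm remembering the contrib dnn importer API right -- the paths are placeholders:)

Ptr<dnn::Importer> importer = dnn::createCaffeImporter("/path/to/deploy.prototxt", "/path/to/weights.caffemodel");
dnn::Net net;
importer->populateNet(net);
importer.release();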

The prototxt file is below. I want to reiterate that I trained with this same structure, modifying only the tops and bottoms to create the training.prototxt and deploy.prototxt, so I know that these layers work together.


Here are the relevant parts of the caffe model:

name: "Network"
"NetworkName"
input: "X"
input_dim: 1
input_dim: 1
input_dim: 96
input_dim: 96
"image"
input_dim: 1
input_dim: 1
input_dim: n
input_dim: n
input: "XFeat"
"feat"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 20
# ... conv layers on the image input here ...

layer {
    name: "drop2"
    type: "Dropout"
    bottom: "norm2"
    top: "drop2"
    dropout_param {
        dropout_ratio: 0.5
    }
}

# Flatten to concatenate with features
layer {
    name: "flatten"
    type: "Flatten"
    bottom: "drop"
    top: "flatten"
}
# Concatenate with feat
layer {
    name: "concat"
    type: "Concat"
    bottom: "flatten"
    bottom: "feat"
    top: "concat"
    concat_param {
        axis: 1
    }
}

I am getting this error:

OpenCV Error: Assertion failed (curShape.dims() == refShape.dims() && inputs[i]->type() == refType) in allocate, file /opt/opencv_contrib/modules/dnn/src/layers/concat_layer.cpp, line 69
terminate called after throwing an instance of 'cv::Exception'
  what():  /opt/opencv_contrib/modules/dnn/src/layers/concat_layer.cpp:69: error: (-215) curShape.dims() == refShape.dims() && inputs[i]->type() == refType in function allocate

It seems related to the size of the inputs, so I've tried modifying the shape of featBlobMat to be 1x20 and 20x1, and changed the input_dims in every way I could think of: [1 1 1 20], [1 1 20 1], [1 20], [20 1]. Nothing seems to work. Any pointers?
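
For example, the 20x1 variant of featBlobMat just swapped the Size arguments (cv::Size is width x height, so this gives 20 rows and 1 column):

Size sizeFeatures(1, 20);   // width 1, height 20 -> a 20x1 column
Mat1f featBlobMat(sizeFeatures);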

EDIT: Accidentally posted something meant for the Caffe question board... Oops! Revised.