hi, @sjhalayka.

1) How do you know that squeezenet has 67 layers, and how do you print out the properties of the last 10 layers?

// includes / namespaces assumed by this snippet:
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
using namespace cv;
using namespace cv::dnn;
using namespace std;

// 1st, we have to load the darn thing:
std::string modelTxt = "c:/data/mdl/squeezenet/deploy.prototxt";
std::string modelBin = "c:/data/mdl/squeezenet/squeezenet_v1.1.caffemodel";
Net net = dnn::readNetFromCaffe(modelTxt, modelBin);

// it's a long story, but the network won't know its (final) size unless you give it an input image
std::string imageFile = "c:/data/img/cat.jpg";  // any test image will do (path here is just an example)
Mat img = imread(imageFile);
Mat inputBlob = blobFromImage(img, 1.0, Size(227,227), Scalar(), false); 
// ^ yep, you have to *know* the size it was originally trained on (227x227 for squeezenet). magic here !

// this is more or less the "standard processing"
net.setInput(inputBlob);                    // Set the network input.
Mat prob = net.forward("prob");             // Compute output.

// now we can iterate over the internal layers and print out their properties:
MatShape ms1 { inputBlob.size[0], inputBlob.size[1], inputBlob.size[2], inputBlob.size[3] };
size_t nlayers = net.getLayerNames().size() + 1;        // plus one for the hidden input layer (id 0), which getLayerNames() does not list
for (size_t i=0; i<nlayers; i++) {
    Ptr<Layer> lyr = net.getLayer((unsigned)i);
    std::vector<MatShape> in,out;
    net.getLayerShapes(ms1, (int)i, in, out);
    cout << format("%-38s %-13s ", (i==0?"data":lyr->name.c_str()), (i==0?"Input":lyr->type.c_str()));
    for (auto j:in)  cout << "i" << Mat(j).t() << "  "; // input(s) size
    for (auto j:out) cout << "o" << Mat(j).t() << "  "; // output(s) size
    for (auto b:lyr->blobs) {                           // the trained parameters, e.g. weights and biases
        cout << "b[" << b.size[0];
        for (int d=1; d<b.dims; d++) cout << ", " << b.size[d];
        cout << "]  ";
    }
    cout << endl;
}
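
btw, the 67 should just be nlayers from that loop, so printing it (or counting the printed lines) answers the first part of your question. and once you have prob, you usually also want the winning class. a minimal sketch using minMaxLoc (the reshape flattens the output blob to a single row, whatever its shape):

// find the class with the highest probability:
Point classIdPoint;
double confidence;
minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
int classId = classIdPoint.x;
cout << "best class id: " << classId << ", confidence: " << confidence << endl;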

2) There is code that states:

Mat_<int> layers(4, 1);
layers << 1000, 400, 100, 2;

yea, pure guesswork here (we only know that there are 1000 inputs and 2 outputs). from experience, it seems better to have 2 hidden layers, so we can break it down gradually, 1000 -> 400 -> 100 -> 2, instead of going from 1000 -> 2 directly.
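
for context, here's a minimal sketch of how such a layers Mat typically gets plugged into OpenCV's ml::ANN_MLP (the activation / training settings below are my own assumptions, not part of the original code):

#include <opencv2/ml.hpp>

Mat_<int> layers(4, 1);
layers << 1000, 400, 100, 2;

Ptr<ml::ANN_MLP> nn = ml::ANN_MLP::create();
nn->setLayerSizes(layers);                          // 1000 in, 2 hidden layers, 2 out
nn->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM);
nn->setTrainMethod(ml::ANN_MLP::BACKPROP, 0.0001);
nn->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 300, 0.0001));

// trainData: one row of 1000 floats per sample; trainLabels: one row of 2 floats per sample
// nn->train(trainData, ml::ROW_SAMPLE, trainLabels);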