DNN questions

I'm trying to understand the code found in berak's answer to a previously asked question: http://answers.opencv.org/question/191359/ml-svm-k-nn-image-recognition-examples-in-c/?answer=191837#post-id-191837

Thanks again to berak for sharing this code with us. The way berak chops off the DNN's last few layers and attaches a standard MLP ANN in their place is like surgery. Nice work.
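
For context, here's roughly how I picture that feature-extraction step, based on my reading of the linked answer (the file names, input size, and mean values are my guesses, not necessarily what berak used):

#include <opencv2/dnn.hpp>
#include <opencv2/core.hpp>

// run SqueezeNet only up to pool10 and use its 1000-dim output as a feature vector
cv::Mat extractFeature(cv::dnn::Net &net, const cv::Mat &bgrImage)
{
    // SqueezeNet v1.1 expects a 227x227 BGR input; the mean values here are a guess
    cv::Mat blob = cv::dnn::blobFromImage(bgrImage, 1.0, cv::Size(227, 227),
                                          cv::Scalar(104, 117, 123), false, false);
    net.setInput(blob);

    // stop at pool10 instead of the final prob layer
    cv::Mat feature = net.forward("pool10");   // shape 1x1000x1x1
    return feature.reshape(1, 1).clone();      // flatten to a 1x1000 row for the MLP
}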

I hope that someone can answer a few questions that I have about the code:

1) How do you know that SqueezeNet has 67 layers, and how do you print out the properties of the last 10 layers?
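
In case it helps to show what I mean, this is my rough guess at how one might list the layers and print the last ten (I haven't verified it against the same OpenCV version berak used):

#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // placeholder file names for the SqueezeNet caffe model from the linked answer
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("squeezenet_v1.1.prototxt",
                                                 "squeezenet_v1.1.caffemodel");

    std::vector<cv::String> names = net.getLayerNames();
    std::cout << names.size() << " layers" << std::endl;

    // print name and type of the last 10 layers
    size_t start = names.size() > 10 ? names.size() - 10 : 0;
    for (size_t i = start; i < names.size(); i++)
    {
        cv::Ptr<cv::dnn::Layer> layer = net.getLayer(names[i]);
        std::cout << i << "  " << layer->name << "  " << layer->type << std::endl;
    }
    return 0;
}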

2) There is code that states:

Mat_<int> layers(4, 1);
layers << 1000, 400, 100, 2; // the SqueezeNet pool10 layer has 1000 neurons

So 1000 is the number of neurons in the first hidden layer, which attaches to the pool10 layer? Or does this 1000-neuron layer replace pool10?

And 2 is the number of one-hot encoded outputs (two neurons, one per class), right?

How does one decide on 400 and 100 for the other hidden layers? Is there a rule of thumb?
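
For reference, here's how I currently read the surrounding training setup that uses this Mat; the activation function, train method, and termination values below are placeholders from my own experiments, not necessarily berak's exact settings:

#include <opencv2/ml.hpp>

cv::Ptr<cv::ml::ANN_MLP> makeMlp()
{
    cv::Mat_<int> layers(4, 1);
    layers << 1000, 400, 100, 2; // 1000 pool10 features in, two hidden layers, 2 output classes

    cv::Ptr<cv::ml::ANN_MLP> nn = cv::ml::ANN_MLP::create();
    nn->setLayerSizes(layers);
    nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
    nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.0001);
    nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                                         300, 0.0001));
    return nn;
}

// training would then be something like:
// nn->train(features, cv::ml::ROW_SAMPLE, oneHotResponses);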