your network will need one input per feature (so, 60 x 74 = 4440)
you also need 1 output per class, so 5.
the hidden layer is where the information about your training data is stored, so a smaller one will yield less accuracy, a larger one will take more time (and memory).
so, [4440, 2000, 5] is the network layout, so far ...
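setting up such a network with opencv's ml module might look roughly like this (a sketch only, not tested here; the activation and backprop parameter values are placeholders you would have to tune):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ml.ANN_MLP;

public class AnnSetup {
    public static void main(String[] args) {
        // the opencv native library must be on java.library.path
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // layer sizes: 4440 inputs, 2000 hidden, 5 outputs
        Mat layerSizes = new Mat(3, 1, CvType.CV_32S);
        layerSizes.put(0, 0, 4440, 2000, 5);

        ANN_MLP ann = ANN_MLP.create();
        ann.setLayerSizes(layerSizes);
        // ANN_MLP needs the symmetric sigmoid activation; 1.0/1.0 are placeholder params
        ann.setActivationFunction(ANN_MLP.SIGMOID_SYM, 1.0, 1.0);
        ann.setTrainMethod(ANN_MLP.BACKPROP, 0.1, 0.1);
    }
}
```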
then, an ann uses "one-hot-encoded" responses (for the last network layer): instead of a single label number, like with knn, you need a row of n classes, and set a 1 where your class id is, and the others to 0, like:
[0,0,0,1,0] // 3
[1,0,0,0,0] // 0
from java, maybe:

int numImages = 10;        // example
int numImgPerClass = 5;
int numClasses = 5;
// one row per image, one column per class, all zero initially
Mat labels = new Mat(numImages, numClasses, CvType.CV_32F, Scalar.all(0));
int id = 2;                // example: first 5 images belong to class 2
labels.submat(0, numImgPerClass, id, id + 1).setTo(new Scalar(1));
id = 0;                    // example: next 5 images belong to class 0
labels.submat(numImgPerClass, 2 * numImgPerClass, id, id + 1).setTo(new Scalar(1));
System.out.println(labels.dump());
[0, 0, 1, 0, 0; // image1
0, 0, 1, 0, 0; // image2
0, 0, 1, 0, 0; // ...
0, 0, 1, 0, 0;
0, 0, 1, 0, 0;
1, 0, 0, 0, 0;
1, 0, 0, 0, 0;
1, 0, 0, 0, 0;
1, 0, 0, 0, 0;
1, 0, 0, 0, 0]
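the same one-hot layout can also be built in plain java arrays (no opencv), just to make the indexing explicit; `oneHot` is a made-up helper name here:

```java
class OneHotDemo {
    // one row per image, one column per class, 1 at the image's class id, 0 elsewhere
    static float[][] oneHot(int[] classIds, int numClasses) {
        float[][] labels = new float[classIds.length][numClasses];
        for (int i = 0; i < classIds.length; i++) {
            labels[i][classIds[i]] = 1f;
        }
        return labels;
    }

    public static void main(String[] args) {
        // 5 images of class 2, then 5 images of class 0: same matrix as the dump above
        float[][] labels = oneHot(new int[]{2, 2, 2, 2, 2, 0, 0, 0, 0, 0}, 5);
        for (float[] row : labels) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}
```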