1 | initial version |
In fact, there are two different examples in the link you give: the first one works directly on the handwritten images, the second one uses precalculated features. As mentioned clearly in the document, in the first example the digit images are flattened from 20x20 resolution down to 400-dimensional row vectors for training, which means there is no feature extraction step; each image is represented by its raw intensity values. In the second example, you should follow the links given to see how the features are generated. Additionally, the underlying training algorithm in that example is SVM, not kNN.
2 | No.2 Revision |
In fact, there are two different examples in the link you give: the first one works directly on the handwritten images, the second one uses precalculated features. As mentioned clearly in the document, in the first example the digit images are flattened from 20x20 resolution down to 400-dimensional row vectors for training, which means there is no real feature extraction step; each image is represented by its raw intensity values (this technique is the same as in the original eigenfaces method). In the second example, you should follow the links given to see how the features are generated. Additionally, the underlying training algorithm in that example is SVM, not kNN.
3 | No.3 Revision |
In fact, there are two different examples in the link you give: the first one works directly on the handwritten images, the second one uses precalculated features. As mentioned clearly in the document, in the first example the digit images are flattened from 20x20 resolution down to 400-dimensional row vectors for training; that means there is no real feature extraction step, each image is represented by its raw intensity values (this technique is the same as in the original eigenfaces method). In the second example, you should follow the links given to see how the features are generated. Additionally, the underlying training algorithm in that example is SVM, not kNN.
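To make the first example's idea concrete, here is a minimal NumPy sketch of flattening 20x20 images into 400-dimensional raw-intensity vectors and classifying with nearest neighbours. The random data and the `knn_classify` helper are illustrative stand-ins, not the actual sample code (the real sample builds its training set by splitting OpenCV's digits.png and uses `cv2.ml.KNearest`):

```python
import numpy as np

# Toy stand-in data: 10 grayscale "digit images" of 20x20 pixels, one per label.
# In the real sample, the images come from splitting digits.png into cells.
rng = np.random.default_rng(0)
train_imgs = rng.integers(0, 256, (10, 20, 20)).astype(np.float32)
train_labels = np.arange(10)

# Flatten each 20x20 image into a 400-dimensional row vector of raw
# pixel intensities -- no feature extraction step at all.
train = train_imgs.reshape(-1, 400)  # shape (10, 400)

def knn_classify(img, k=1):
    """Majority vote among the k nearest raw-pixel vectors (hypothetical helper)."""
    q = img.reshape(1, 400).astype(np.float32)
    dists = np.sum((train - q) ** 2, axis=1)  # squared L2 distance to each sample
    nearest = np.argsort(dists)[:k]
    votes = train_labels[nearest]
    return int(np.bincount(votes).argmax())

# A training image is its own nearest neighbour, so it maps to its own label.
print(knn_classify(train_imgs[3]))  # -> 3
```

The point is only that the "features" fed to kNN here are the pixel values themselves, exactly as in the eigenfaces-style raw-vector representation mentioned above.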