well, again, you need float descriptors / data, and since you're trying to find the statistically most relevant features, you need a "representative set" (aka "a lot of data", from many images):

// step 1, offline
Mat a_lot_of_features = ....                   // (say 10000 rows, 128 cols, FLOAT!!!)
PCA pca(a_lot_of_features, Mat(), PCA::DATA_AS_ROW, 64); // keep e.g. the 64 strongest eigenvecs
cout << pca.eigenvectors.size() << endl;       // [128 x 64]
// keep the pca object around, use it to reduce the image_features

// step 2, online
Mat image_features(100, 128, CV_32F);          // e.g. 100 SURF features from an image
Mat projected = pca.project(image_features);
cout << projected.size() << endl;              // [64 x 100] num_eigenvecs x feature count
// use projected instead of the original image_features

note, it will reduce the size of each feature vector (128 -> 64), not their count!
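
for completeness, a minimal sketch of how such a "representative set" could be gathered, assuming the opencv_contrib SURF (with extended=true, so descriptors are 128-dim like above); the paths list and the helper name are just illustration:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF lives in opencv_contrib
#include <string>
#include <vector>
using namespace cv;

// stack SURF descriptors from many images into one big float Mat
Mat gatherFeatures(const std::vector<std::string>& paths)  // hypothetical helper
{
    Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create();
    surf->setExtended(true);                   // 128-dim descriptors, as above
    Mat a_lot_of_features;                     // ends up N rows x 128 cols, CV_32F
    for (const auto& p : paths)
    {
        Mat img = imread(p, IMREAD_GRAYSCALE);
        if (img.empty()) continue;             // skip unreadable files
        std::vector<KeyPoint> kp;
        Mat desc;
        surf->detectAndCompute(img, noArray(), kp, desc); // desc is CV_32F already
        a_lot_of_features.push_back(desc);     // append rows
    }
    return a_lot_of_features;
}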
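if you'd rather not hard-code 64, there is also a PCA constructor that takes a retained-variance fraction instead of a component count; a variant sketch, where 0.95 is just an example value:

// alternative: keep however many components retain e.g. 95% of the variance
PCA pca(a_lot_of_features, Mat(), PCA::DATA_AS_ROW, 0.95); // double -> retainedVariance overload
cout << pca.eigenvectors.rows << " eigenvecs kept" << endl;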
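"keep the pca object around" can also mean across program runs; with OpenCV 3.x and later, the PCA can be (de)serialized through FileStorage. a sketch (the filename is just an example):

// offline: save the trained PCA
{
    FileStorage fs("pca.yml", FileStorage::WRITE);  // example filename
    pca.write(fs);
}

// online, maybe in another run: load it back and project
PCA pca2;
{
    FileStorage fs("pca.yml", FileStorage::READ);
    pca2.read(fs.root());
}
Mat projected = pca2.project(image_features);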