AdaBoost feature selection from a large pool
Hi, I want to train AdaBoost with 10^9 features, but they cannot all fit in memory at once, so I proceeded with this approach:
for (i = 1 to 1000)
{
    - load 10^6 features from the 10^9 pool
    - select the best 1 feature and store it
}
But when I test the resulting strong classifier on the validation set, its results (correct detections vs. false positives) are worse than with this approach:

- load 10^6 features from the 10^9 pool
- select the best 1000 features and store them

I think the first approach is more exhaustive than the second, yet the second performs better on the validation set. Can anyone tell me anything about this anomaly?
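To make the comparison concrete, here is a toy Python sketch of how I read the two schemes, using scikit-learn's AdaBoostClassifier (which boosts decision stumps by default) as a stand-in for my weak learner; the sizes and the random X_pool/y data are placeholders, not my real setup:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_samples, n_chunks, chunk_size = 500, 20, 50      # toy sizes instead of 10^9 / 10^6
X_pool = rng.normal(size=(n_samples, n_chunks * chunk_size))
y = rng.integers(0, 2, size=n_samples)

# Approach 1: pick the single best feature from each chunk, independently.
selected_1 = []
for c in range(n_chunks):
    cols = slice(c * chunk_size, (c + 1) * chunk_size)
    one_round = AdaBoostClassifier(n_estimators=1).fit(X_pool[:, cols], y)
    best_local = one_round.estimators_[0].tree_.feature[0]   # feature the stump splits on
    selected_1.append(c * chunk_size + int(best_local))      # map back to a pool index

# Approach 2: run many boosting rounds on a single chunk, so every new feature
# is chosen against the sample weights produced by the previously chosen ones.
many_rounds = AdaBoostClassifier(n_estimators=n_chunks).fit(X_pool[:, :chunk_size], y)
selected_2 = sorted({int(est.tree_.feature[0]) for est in many_rounds.estimators_})

print("approach 1 picks:", selected_1)
print("approach 2 picks:", selected_2)

Written this way, the only structural difference between the two loops is whether the AdaBoost sample weights carry over from one selected feature to the next (approach 2) or are reset to uniform for every pick (approach 1).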
Do you have 10^6 observations of features? How long is your single feature vector?
I want to create a ranking of the most important features for pedestrian detection. The pool is very large because I use Haar-like features, so I can generate a huge pool of them; the length of the feature descriptor vector is 10^6 bytes.
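In case it matters, this is roughly how I stream the pool from disk in chunks; the file name, the uint8 dtype and the sample count are placeholders for my real setup, and the matrix is stored feature-major so that each chunk of 10^6 features is one contiguous block of the file:

import numpy as np

n_features = 10**9      # total pool size
n_samples  = 1_000      # number of training windows (placeholder value)
chunk_size = 10**6      # features loaded per iteration, as in the loop above

# Feature-major layout: row f holds feature f's response on every sample, so
# one chunk of features is a contiguous stretch of the file and reads sequentially.
pool = np.memmap("haar_responses.u8", dtype=np.uint8, mode="r",
                 shape=(n_features, n_samples))

for start in range(0, n_features, chunk_size):
    chunk = np.asarray(pool[start:start + chunk_size])   # ~chunk_size * n_samples bytes in RAM
    X_chunk = chunk.T                                     # samples x features, as the learner expects
    # ... select the best feature(s) from X_chunk and record their pool indices (start + local) ...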