Hi, I want to train AdaBoost with 10^9 features, but they cannot all fit in memory at once, so I proceeded with this approach:
for (i = 1 to 1000)
{
    load 10^6 features from the 10^9 pool
    run one boosting round, select the single best feature, and store it
}
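To make the first approach concrete, here is a minimal sketch in Python/NumPy. It is only an illustration under my assumptions: an in-memory array stands in for the on-disk 10^9-feature pool, "load a batch" is just random column selection, and the weak learners are decision stumps (the helper names `best_stump` and `adaboost_streaming` are mine, not from any library):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_stump(X, y, w):
    """Exhaustively pick the feature/threshold/polarity with the
    lowest weighted error on labels y in {-1, +1}."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_streaming(X_all, y, rounds, batch_size):
    """Approach 1: a *fresh* random batch of features per boosting round."""
    n, D = X_all.shape
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        idx = rng.choice(D, size=batch_size, replace=False)  # new batch each round
        Xb = X_all[:, idx]                                   # stands in for a disk load
        j, thr, pol, err = best_stump(Xb, y, w)
        err = max(err, 1e-12)                                # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (Xb[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)                       # standard AdaBoost reweighting
        w /= w.sum()
        model.append((idx[j], thr, pol, alpha))              # global feature index
    return model
```

The key point is that the sample weights `w` carry over between rounds, but each round searches a different random feature subset.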
But when I test the resulting strong classifier on the validation set, the results (correct detections vs. false positives) are worse than with this approach:
load 10^6 features from the 10^9 pool
run 1000 boosting rounds, select the best 1000 features, and store them
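For contrast, this is how I understand the second approach: one batch is loaded once, and all boosting rounds run over that same fixed subset. Again a sketch under my own assumptions (in-memory array as the pool, threshold-at-zero decision stumps to keep it short; `adaboost_fixed_batch` is a made-up name):

```python
import numpy as np

rng = np.random.default_rng(1)

def adaboost_fixed_batch(X_all, y, rounds, batch_size):
    """Approach 2: load ONE random batch of features, then run all
    boosting rounds over that same batch."""
    n, D = X_all.shape
    idx = rng.choice(D, size=batch_size, replace=False)  # single load, reused every round
    Xb = X_all[:, idx]
    preds = np.where(Xb >= 0, 1, -1)                     # stump predictions, polarity +1
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        # weighted error of every stump at once; flip polarity if error > 0.5
        errs = ((preds != y[:, None]) * w[:, None]).sum(axis=0)
        pols = np.where(errs > 0.5, -1, 1)
        errs = np.where(errs > 0.5, 1 - errs, errs)
        j = int(np.argmin(errs))
        err = max(errs[j], 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pols[j] * preds[:, j]
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((idx[j], pols[j], alpha))
    return model
```

So the second approach sees far fewer candidate features in total (10^6 vs. up to 10^9), but every round's selection is made against weights shaped by the same candidate pool.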
I would have thought the first approach is more exhaustive than the second, yet the second performs better on the validation set. Can someone explain this anomaly?