kNN implementation and required input/output

asked 2017-04-04 15:36:11 -0600

Yasser Akram

I found some ready-made code, but I have trouble mapping my problem (which I had previously implemented in [weka]) onto it, in particular how to supply my data to the algorithm. I have four numeric fields: x, y, speed, factor. [x] and [y] are the coordinates of the object, [speed] is its speed, and [factor] is the initial speed divided by the current speed. I need to build a kNN model so that I can feed it [x, y, speed] and get a predicted [factor] back.

Here is a sample of my data (taken from about 50k rows):

x, y, speed, factor
414, 369, 250.004761, 1.1
418, 360, 225.004285, 1.222222221
423, 352, 225.004285, 1.222222221
427, 344, 200.003809, 1.374999998
431, 336, 200.003809, 1.374999998
435, 329, 200.003809, 1.374999998
438, 322, 175.003333, 1.571428568
441, 315, 175.003333, 1.571428568
444, 309, 150.002856, 1.83333334
448, 303, 175.003333, 1.571428568
451, 297, 150.002856, 1.83333334
454, 291, 150.002856, 1.83333334
457, 285, 150.002856, 1.83333334
460, 280, 125.00238, 2.200000008
463, 275, 125.00238, 2.200000008
466, 270, 125.00238, 2.200000008
469, 265, 125.00238, 2.200000008
472, 260, 125.00238, 2.200000008
474, 256, 100.001904, 2.75000001
477, 251, 125.00238, 2.200000008

Below is the code I found. I can't figure out how to map my table onto it, nor how to pass an input like (450, 300, 100.002834) in order to get the predicted/calculated [factor]:

// Be sure to change number_of_... to fit your data!
// Each training sample is one row of CV_32F features; rows get appended in the loading code.
Mat matTrainFeatures(0, number_of_train_elements, CV_32F);   // feature rows of the training set
Mat matSample(0, number_of_sample_elements, CV_32F);         // the query sample(s), one per row

Mat matTrainLabels(0, number_of_train_elements, CV_32F);     // one response value per training row
Mat matSampleLabels(0, number_of_sample_elements, CV_32F);   // declared but not used below

Mat matResults(0, 0, CV_32F);                                // findNearest() fills this with the predictions

// etcetera: code for loading data into the Mat variables suppressed

Ptr<TrainData> trainingData;
Ptr<KNearest> kclassifier = KNearest::create();

trainingData = TrainData::create(matTrainFeatures,
    SampleTypes::ROW_SAMPLE, matTrainLabels);

kclassifier->setIsClassifier(true);
kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
kclassifier->setDefaultK(1);

kclassifier->train(trainingData);
kclassifier->findNearest(matSample, kclassifier->getDefaultK(), matResults);

// Just checking the settings
cout << "Training data: " << endl
    << "getNSamples\t" << trainingData->getNSamples() << endl
    << "getSamples\n" << trainingData->getSamples() << endl
    << endl;

cout << "Classifier: " << endl
    << "kclassifier->getDefaultK(): " << kclassifier->getDefaultK() << endl
    << "kclassifier->getIsClassifier(): " << kclassifier->getIsClassifier() << endl
    << "kclassifier->getAlgorithmType(): " << kclassifier->getAlgorithmType() << endl
    << endl;

// confirming sample order
cout << "matSample: " << endl
    << matSample << endl
    << endl;

// displaying the results
cout << "matResults: " << endl
    << matResults << endl
    << endl;

// etcetera: rest of main function suppressed

I would appreciate any explanation of the declared [Mat]s in the code, as they seem confusing to me.
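For concreteness, this is how I currently imagine the mapping would look. It is only a sketch: the three hard-coded training rows and the query (450, 300, 100.002834) come from my table above, and everything else is my guess at how the Mats are meant to be filled, not working code I have verified.

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

using namespace cv;
using namespace cv::ml;
using namespace std;

int main()
{
    // Training features: one row per sample, columns = x, y, speed (CV_32F).
    // In practice this would hold all ~50k rows, not just three.
    Mat matTrainFeatures = (Mat_<float>(3, 3) <<
        414.f, 369.f, 250.004761f,
        418.f, 360.f, 225.004285f,
        448.f, 303.f, 175.003333f);

    // Training responses: one factor per row, in the same row order as the features.
    Mat matTrainLabels = (Mat_<float>(3, 1) <<
        1.1f,
        1.222222221f,
        1.571428568f);

    // The query I want a prediction for: x = 450, y = 300, speed = 100.002834.
    Mat matSample = (Mat_<float>(1, 3) << 450.f, 300.f, 100.002834f);

    Ptr<TrainData> trainingData = TrainData::create(
        matTrainFeatures, ROW_SAMPLE, matTrainLabels);

    Ptr<KNearest> knn = KNearest::create();
    knn->setIsClassifier(true);              // or false? see the regression comment below
    knn->setAlgorithmType(KNearest::BRUTE_FORCE);
    knn->setDefaultK(1);
    knn->train(trainingData);

    Mat matResults;
    knn->findNearest(matSample, knn->getDefaultK(), matResults);

    // matResults should be a 1x1 Mat holding the predicted factor for the query row.
    cout << "predicted factor: " << matResults.at<float>(0, 0) << endl;
    return 0;
}

Is this roughly the intended use of matTrainFeatures, matTrainLabels, matSample and matResults?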


Comments

Why do you want to use kNN for this problem? (It looks more like a regression problem than a classification problem to me.)

berak ( 2017-04-06 09:47:20 -0600 )

I'm not exactly sure whether it can be solved with regression; let me know your thoughts.

Yasser Akram ( 2017-06-12 05:30:08 -0600 )
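Update: if KNearest can be run in regression mode as berak suggests, I guess the only changes to the sketch in my question would be the ones below. This is just my reading of the docs, which say that in regression mode findNearest returns the mean of the neighbours' responses rather than a majority vote; I have not tested it.

    knn->setIsClassifier(false);   // regression mode: average the neighbours' responses
    knn->setDefaultK(5);           // use a few neighbours so the averaging actually smooths something

    Mat matResults;
    knn->findNearest(matSample, knn->getDefaultK(), matResults);
    // matResults.at<float>(0, 0) should now be the mean factor of the 5 nearest (x, y, speed) rows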