Machine learning: after training, how exactly does it get a prediction?

asked 2015-04-01 14:55:39 -0600

tjohnsen

So after you have trained a machine learning algorithm, with its layers, nodes, and weights, how exactly does it go about getting a prediction for an input vector? I am using a multilayer perceptron (neural network).

From what I currently understand, you start with the input vector you want a prediction for. You send it to your hidden layer(s), where each node computes the sum of the products of each input value and its corresponding weight (found in training), adds its bias term, and then runs that total through the same activation function used in training. You repeat this for each hidden layer, then do the same for the output layer. Each node in the output layer then gives you your prediction(s). Is this correct?
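To be concrete, here is a minimal sketch of my understanding, written as plain C++ rather than actual OpenCV code; the layer sizes, weights, biases, and the tanh-style activation are made-up placeholders, not values from a trained cvANN_MLP:

    #include <cmath>
    #include <iostream>
    #include <vector>

    // One dense layer: output[j] = activation( bias[j] + sum_i input[i] * weight[j][i] )
    std::vector<double> forwardLayer(const std::vector<double>& input,
                                     const std::vector<std::vector<double>>& weights,
                                     const std::vector<double>& biases)
    {
        std::vector<double> output(biases.size());
        for (size_t j = 0; j < biases.size(); ++j)
        {
            double sum = biases[j];                    // start from the node's bias
            for (size_t i = 0; i < input.size(); ++i)
                sum += input[i] * weights[j][i];       // weighted sum of the inputs
            output[j] = std::tanh(sum);                // placeholder for the trained activation
        }
        return output;
    }

    int main()
    {
        // Made-up numbers for a 2-input, 2-hidden-node, 1-output network.
        std::vector<double> x = {0.5, -1.0};
        std::vector<std::vector<double>> w1 = {{0.1, 0.2}, {-0.3, 0.4}};
        std::vector<double> b1 = {0.05, -0.05};
        std::vector<std::vector<double>> w2 = {{0.7, -0.6}};
        std::vector<double> b2 = {0.1};

        std::vector<double> hidden = forwardLayer(x, w1, b1);       // hidden layer
        std::vector<double> out    = forwardLayer(hidden, w2, b2);  // output layer = prediction
        std::cout << "prediction: " << out[0] << std::endl;
        return 0;
    }

Is that roughly what happens inside predict, just with the weights and activation parameters that came out of training?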

I got confused when using OpenCV to do this, because the documentation for the predict function says:

"If you are using the default cvANN_MLP::SIGMOID_SYM activation function with the default parameter values fparam1=0 and fparam2=0 then the function used is y = 1.7159tanh(2/3 * x), so the output will range from [-1.7159, 1.7159], instead of [0,1]." However, when training it is also stated in the documentation that SIGMOID_SYM uses the activation function: " f(x)= beta(1-e^{-alpha x})/(1+e^{-alpha x} ) " Where alpha and beta are user defined variables.

So I'm not quite sure what this means. Where does the tanh function come into play? Can anyone clear this up, please? Thanks for your time!
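To show what I mean, here is a small numeric check I put together. The values alpha = 4/3 and beta = 1.7159 are just my guess at how the two formulas might line up, not something I found in the docs:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // My guess: the docs' f(x) = beta*(1 - e^(-alpha*x)) / (1 + e^(-alpha*x))
        // might equal 1.7159 * tanh(2/3 * x) when beta = 1.7159 and alpha = 4/3.
        const double alpha = 4.0 / 3.0;   // guessed value, not from the docs
        const double beta  = 1.7159;      // guessed value, not from the docs

        for (double x = -3.0; x <= 3.0; x += 1.0)
        {
            double sigmoidSym = beta * (1.0 - std::exp(-alpha * x))
                                     / (1.0 + std::exp(-alpha * x));
            double tanhForm   = 1.7159 * std::tanh((2.0 / 3.0) * x);
            std::printf("x = %+.1f  sigmoid_sym = %+.6f  tanh form = %+.6f\n",
                        x, sigmoidSym, tanhForm);
        }
        return 0;
    }

When I run this, the two columns come out identical, so I suspect the tanh form is just the symmetric sigmoid with particular alpha and beta values, but I would like confirmation of which formula predict actually applies.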

Since this is a general question and not code-specific, I haven't posted any of my actual project code; the snippets above are just sketches of my understanding.


Comments

There should be a predict function...

thdrksdfthmn ( 2015-04-02 02:13:30 -0600 )