ANN MLP layer size choice discussion

asked 2014-12-30 18:11:00 -0600 by MRDaniel

Hello,

What are the prevailing thoughts on choosing layer sizes for an ANN MLP in OpenCV? Training on large data sets can take time, so only a limited number of nets can be trained and tested at any one time. Essentially, we want to optimize or tune the neural network architecture toward a state of optimal predictions.

Here are a few thoughts on how this might be approached.

Geometric Progression?

A simple iteration over the number of layers and the number of neurons per layer. This would more than likely lead to rectangular architectures; the layers could possibly be tapered on each subsequent layer.
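
Something along these lines is what I have in mind. This is only a rough sketch, assuming the OpenCV 3.x cv::ml::ANN_MLP API; the samples and responses are synthetic placeholders, and the width/depth ranges are arbitrary. The held-out error from calcError() is the score used to compare architectures.

```cpp
// Sketch only: grid search over ANN_MLP hidden-layer sizes.
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
#include <limits>
#include <vector>

using namespace cv;
using namespace cv::ml;

int main()
{
    // Synthetic placeholder data: 200 samples, 10 features, 1 output.
    // Replace with your own samples/responses (CV_32F, one row per sample).
    Mat samples(200, 10, CV_32F), responses(200, 1, CV_32F);
    randu(samples, Scalar::all(-1), Scalar::all(1));
    for (int i = 0; i < samples.rows; ++i)
        responses.at<float>(i, 0) = samples.at<float>(i, 0) > 0 ? 1.0f : -1.0f;

    Ptr<TrainData> data = TrainData::create(samples, ROW_SAMPLE, responses);
    data->setTrainTestSplitRatio(0.8, true);   // hold out 20% for testing

    double bestError = std::numeric_limits<double>::max();
    std::vector<int> bestLayers;

    // Hidden-layer widths in geometric progression: 8, 16, 32, 64, 128.
    for (int width = 8; width <= 128; width *= 2)
    {
        for (int depth = 1; depth <= 3; ++depth)
        {
            // Layer sizes: input layer, 'depth' hidden layers, output layer.
            std::vector<int> layers;
            layers.push_back(samples.cols);
            layers.insert(layers.end(), depth, width);
            layers.push_back(responses.cols);

            Ptr<ANN_MLP> mlp = ANN_MLP::create();
            mlp->setLayerSizes(Mat(layers));
            mlp->setActivationFunction(ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
            mlp->setTrainMethod(ANN_MLP::BACKPROP, 0.1, 0.1);
            mlp->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS,
                                              300, 1e-4));
            mlp->train(data);

            // calcError(test = true) evaluates on the held-out split.
            double err = mlp->calcError(data, true, noArray());
            std::cout << "width=" << width << " depth=" << depth
                      << " test error=" << err << std::endl;

            if (err < bestError) { bestError = err; bestLayers = layers; }
        }
    }
    return 0;
}
```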

Genetic algorithm?

A genetic algorithm that selects for performance, with the 'genes' being the random layer/neuron configurations and the mutations being changes to the number of neurons and layers. A series of random values is used to change the configurations of the more successful MLPs. This wouldn't work well for online learning on big problems.
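
A rough sketch of the search loop I'm imagining, under the same assumptions as above (OpenCV 3.x cv::ml::ANN_MLP, synthetic placeholder data): the genome is just the vector of hidden-layer sizes, mutation perturbs a width or adds/drops a layer, and fitness is the held-out error of a net trained with that genome. The population size and mutation scheme here are arbitrary.

```cpp
// Sketch only: a tiny genetic search over ANN_MLP hidden-layer configurations.
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
#include <algorithm>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

using namespace cv;
using namespace cv::ml;

// Train one net with the given hidden-layer sizes and return its error on
// the held-out test split.
static double trainAndScore(const std::vector<int>& hidden,
                            const Ptr<TrainData>& data)
{
    std::vector<int> layers;
    layers.push_back(data->getNVars());                    // input layer
    layers.insert(layers.end(), hidden.begin(), hidden.end());
    layers.push_back(data->getResponses().cols);           // output layer

    Ptr<ANN_MLP> mlp = ANN_MLP::create();
    mlp->setLayerSizes(Mat(layers));
    mlp->setActivationFunction(ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
    mlp->setTrainMethod(ANN_MLP::BACKPROP, 0.1, 0.1);
    mlp->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS,
                                      300, 1e-4));
    mlp->train(data);
    return mlp->calcError(data, true, noArray());
}

int main()
{
    // Synthetic placeholder data, as in the previous sketch.
    Mat samples(200, 10, CV_32F), responses(200, 1, CV_32F);
    randu(samples, Scalar::all(-1), Scalar::all(1));
    for (int i = 0; i < samples.rows; ++i)
        responses.at<float>(i, 0) = samples.at<float>(i, 0) > 0 ? 1.0f : -1.0f;
    Ptr<TrainData> data = TrainData::create(samples, ROW_SAMPLE, responses);
    data->setTrainTestSplitRatio(0.8, true);

    std::mt19937 rng(42);
    // Mutation: resize a random hidden layer, or add/drop a layer.
    auto mutate = [&rng](std::vector<int> g) {
        std::uniform_int_distribution<int> coin(0, 2);
        switch (coin(rng)) {
        case 0:  g[rng() % g.size()] += int(rng() % 9) - 4; break;
        case 1:  if (g.size() < 4) g.push_back(8);          break;
        default: if (g.size() > 1) g.pop_back();            break;
        }
        for (int& n : g) n = std::max(n, 2);   // keep every layer at least 2 wide
        return g;
    };

    // Initial population of hidden-layer configurations ('genes').
    std::vector<std::vector<int>> pop = { {8}, {16}, {32}, {16, 8} };

    for (int gen = 0; gen < 5; ++gen)
    {
        // Score each genome once, then sort by held-out error (lower is fitter).
        std::vector<std::pair<double, std::vector<int>>> scored;
        for (const auto& g : pop)
            scored.push_back({trainAndScore(g, data), g});
        std::sort(scored.begin(), scored.end());

        std::cout << "generation " << gen << " best error " << scored[0].first
                  << " hidden layers:";
        for (int n : scored[0].second) std::cout << ' ' << n;
        std::cout << std::endl;

        // Selection: keep the best two, refill the population with mutants.
        pop = { scored[0].second, scored[1].second,
                mutate(scored[0].second), mutate(scored[1].second) };
    }
    return 0;
}
```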

Substrate neural networks?

A set of starting neural nets that are tuned to similar, more generalized problems and are then updated with data from the current problem. Assume data normalization. The weight matrix would be an interesting issue here.
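
As far as I can tell, OpenCV at least supports updating an existing net in place: StatModel::train accepts the ANN_MLP::UPDATE_WEIGHTS flag, so a net saved from a similar, more general problem could be loaded and nudged with the new data instead of starting from random weights. A minimal sketch, again assuming the OpenCV 3.x API; the filenames and data are placeholders, and the new data must match the saved net's input/output layer sizes.

```cpp
// Sketch only: start from a net trained on a similar problem and update it
// with the current data, rather than training from scratch.
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

int main()
{
    // Placeholder data for the *current* problem; must match the saved net's
    // input/output layer sizes.
    Mat samples(200, 10, CV_32F), responses(200, 1, CV_32F);
    randu(samples, Scalar::all(-1), Scalar::all(1));
    randu(responses, Scalar::all(-1), Scalar::all(1));
    Ptr<TrainData> data = TrainData::create(samples, ROW_SAMPLE, responses);

    // "pretrained_mlp.xml" is a placeholder: a net previously trained on a
    // similar, more general problem and written out with mlp->save(...).
    Ptr<ANN_MLP> mlp = Algorithm::load<ANN_MLP>("pretrained_mlp.xml");

    // UPDATE_WEIGHTS keeps the existing weight matrix as the starting point
    // instead of re-initialising it randomly.
    mlp->train(data, ANN_MLP::UPDATE_WEIGHTS);

    mlp->save("tuned_mlp.xml");
    return 0;
}
```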

Gradient descent?

How would you construct a gradient descent method over a variable number of variables?

These are just some thoughts at the moment whilst devising a strategy.

Also, are the bias neurons handled automatically given the layer size matrix? i.e. do we have to account for them ourselves?
