in reply to Re^2: NNflex problems (win32)
in thread NNflex problems (win32)

To evaluate the learning progress of a network, you need to know a little more about how neural nets find solutions.

Imagine the space of all possible solutions (from perfect to awful) as a 3D landscape. The altitude is the error value; the map coordinates are the adjustable values inside the NN sim (chiefly the weights). A real network has far more than two of those, but three dimensions are easier to picture. Training the neural net is something like a marble rolling across this terrain, tending to roll downhill. Sometimes the marble will stop at the bottom of a sinkhole on a mesa, nowhere near the global minimum error value.
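
To make the picture concrete, here is a minimal plain-Perl sketch (a toy, not the NNFlex API) of a marble rolling down a made-up one-dimensional landscape with two valleys. Plain downhill rolling settles in whichever valley the starting point happens to drain into:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # A made-up 1-D "error landscape" with two valleys: a shallow
  # sinkhole near x = 1 and the true bottom near x = -1.
  sub error { my $x = shift; ($x * $x - 1)**2 + 0.3 * $x }

  # Numerical slope of the landscape at x.
  sub slope {
      my $x = shift;
      my $h = 1e-5;
      (error($x + $h) - error($x - $h)) / (2 * $h);
  }

  my $x    = 2.0;     # drop the marble on the right-hand slope
  my $rate = 0.01;    # step size ("learning rate")

  $x -= $rate * slope($x) for 1 .. 1000;    # roll downhill

  printf "marble stopped at x = %.3f, error = %.3f\n", $x, error($x);
  # Stops near x = 0.96 (the sinkhole) even though x = -1 is lower.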

The odd random kick may knock the search out of a local minimum toward a better solution. The size of the kick can be reduced over time, so that later kicks tend to be smaller (this is the idea behind simulated annealing). Multiple training sessions may be run, and the "mean training time" to a given error limit computed. (Neural nets also benefit from having noisy connections, even after training is complete.)
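
Continuing the toy sketch above (same made-up error() and slope() subs), a decaying random kick looks something like this:

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub error { my $x = shift; ($x * $x - 1)**2 + 0.3 * $x }
  sub slope { my $x = shift; my $h = 1e-5;
              (error($x + $h) - error($x - $h)) / (2 * $h) }

  my $x    = 2.0;
  my $rate = 0.01;
  my $kick = 1.5;                # initial kick size

  for my $step (1 .. 5000) {
      $x -= $rate * slope($x);           # the usual downhill step
      if ($step % 100 == 0) {            # the odd random kick...
          $x += $kick * (2 * rand() - 1);
          $kick *= 0.9;                  # ...smaller as time goes on
      }
  }
  printf "settled at x = %.3f, error = %.3f\n", $x, error($x);

Run it a few times: sometimes the kicks carry the marble over the ridge into the deeper valley near x = -1, sometimes it stays in the sinkhole. That uncertainty is exactly why averaging over multiple sessions is worthwhile.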

A single training run may fail to meet the error spec, even if run forever.
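
Here is the multiple-sessions idea sketched on the same toy landscape: each run starts from a random spot, gets a fixed step budget, and we report how many met the spec and the mean training time for those that did. The error limit of -0.25 is arbitrary but chosen so that only the deep valley can satisfy it; runs that drain into the sinkhole never succeed, no matter how long they roll.

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub error { my $x = shift; ($x * $x - 1)**2 + 0.3 * $x }
  sub slope { my $x = shift; my $h = 1e-5;
              (error($x + $h) - error($x - $h)) / (2 * $h) }

  my $spec     = -0.25;    # the error limit we want to reach
  my $budget   = 2000;     # give up after this many steps
  my $sessions = 20;
  my @times;               # steps-to-spec for the successful runs

  for my $run (1 .. $sessions) {
      my $x = 4 * rand() - 2;            # random start in [-2, 2]
      for my $step (1 .. $budget) {
          $x -= 0.01 * slope($x);
          if (error($x) < $spec) { push @times, $step; last }
      }
  }

  my $sum = 0;
  $sum += $_ for @times;
  printf "%d of %d runs met the spec\n", scalar @times, $sessions;
  printf "mean training time: %.0f steps\n", $sum / @times if @times;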

Some problems also have a fractal or chaotic solution space -- slight changes to the starting conditions can drastically alter the solution found.
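
You can get a feel for that even in the toy landscape by making the marble heavy -- plain downhill rolling plus momentum, so it overshoots and sloshes back and forth before settling. (Momentum is my own embellishment of the marble metaphor, not anything from NNFlex.) Which valley it finally comes to rest in then depends delicately on exactly where it was dropped:

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub error { my $x = shift; ($x * $x - 1)**2 + 0.3 * $x }
  sub slope { my $x = shift; my $h = 1e-5;
              (error($x + $h) - error($x - $h)) / (2 * $h) }

  # Drop a heavy marble from a row of nearby starting points
  # and see where each one settles.
  for my $i (0 .. 10) {
      my ($x, $v) = (1.50 + 0.01 * $i, 0);
      for (1 .. 5000) {
          $v = 0.95 * $v - 0.01 * slope($x);   # lots of momentum,
          $x += $v;                            # light friction
      }
      printf "start %.2f -> settles near %+.2f\n", 1.50 + 0.01 * $i, $x;
  }

Starting points only 0.01 apart will typically end up split between the two valleys, in no obvious pattern.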

-QM
--
Quantum Mechanics: The dreams stuff is made of