in reply to RFC: AI::NeuralNet::Simple

The docs say you implement backpropagation, a method that typically involves a parameter commonly called the learning rate. In my experience, tuning it is important for the performance of the learning algorithm: if the rate is too small, the network converges very slowly; if it is too high, training diverges.
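
To make this concrete, here's a minimal sketch of the weight update at the core of gradient descent; the variable names and values are made up for illustration and aren't taken from your module:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Made-up values for a single training step, purely illustrative.
    my @weights   = (0.5, -0.3, 0.8);
    my @gradients = (0.1, -0.2, 0.05);    # dE/dw for each weight

    my $rate = 0.2;                       # the learning rate

    for my $i (0 .. $#weights) {
        # Each weight steps against its error gradient, scaled by $rate.
        # Too small a $rate: tiny steps, very slow convergence.
        # Too large: steps overshoot the minimum and training diverges.
        $weights[$i] -= $rate * $gradients[$i];
    }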

According to the docs, there's no way to control the learning rate, so how do you deal with this?

I do like the idea; it's good to have something simple to play around with. I should even have a Perl implementation of this lying around somewhere, which I whipped together for a quick test in a project I once did. It would have been nice if your module had been around at the time.

Just my two cents, -gjb-

Re: Re: RFC: AI::NeuralNet::Simple
by Ovid (Cardinal) on Oct 31, 2003 at 22:18 UTC

    D'oh! You're right. I've hardcoded the learning rate in the C, but it's trivial to expose that to the Perl.
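
    Off the top of my head, the Perl side might end up looking something like this (the learn_rate method name here is just a sketch, not a final interface):

        use AI::NeuralNet::Simple;

        # 2 input, 1 hidden, 2 output nodes.
        my $net = AI::NeuralNet::Simple->new( 2, 1, 2 );

        # Hypothetical accessor; the name and default rate aren't settled.
        $net->learn_rate(0.2);

        $net->train( [ 1, 1 ], [ 0, 1 ] );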

    Cheers,
    Ovid

    New address of my CGI Course.