in reply to Re^2: NNflex problems (win32)
in thread NNflex problems (win32)

A step in the right direction:

By tinkering a little with your code, I've got it learning the data set at least some of the time, as follows:

Epoch = 40094 error = 0.0188258375344725
Epoch = 40095 error = 0.0188239027473993
1.99620990178657
999.999223878912
2.99174991748112
3.99621594182089
39.9963246624386
99.9965058634682
100.085765949921
199.996807865184
299.9971098669
19.9962642620954
29.9962944622671
20.0141162793859

(Compare these outputs with the target values in the dataset in the code below.)

What have I changed? Here's the code as it stands at the moment:

use AI::NNFlex::Backprop;
use AI::NNFlex::Dataset;

my $network = AI::NNFlex::Backprop->new(
    learningrate    => .00000001,
    fahlmanconstant => 0,
    momentum        => 0.4,
    bias            => 1);

# 2 input nodes, 2 hidden nodes, 1 output node, all linear
$network->add_layer(
    nodes              => 2,
    activationfunction => "linear");

$network->add_layer(
    nodes              => 2,
    activationfunction => "linear");

$network->add_layer(
    nodes              => 1,
    activationfunction => "linear");

$network->init();

# Taken from Mesh ex_add.pl
my $dataset = AI::NNFlex::Dataset->new([
    [ 1, 1 ],     [ 2 ],
    [ 500, 500 ], [ 1000 ],
    [ 1, 2 ],     [ 3 ],
    [ 2, 2 ],     [ 4 ],
    [ 20, 20 ],   [ 40 ],
    [ 50, 50 ],   [ 100 ],
    [ 60, 40 ],   [ 100 ],
    [ 100, 100 ], [ 200 ],
    [ 150, 150 ], [ 300 ],
    [ 10, 10 ],   [ 20 ],
    [ 15, 15 ],   [ 30 ],
    [ 12, 8 ],    [ 20 ],
]);

my $err = 10;
# Stop after 40096 epochs -- don't want to wait more than that
for ( my $i = 0; ($err > 0.001) && ($i < 40096); $i++ ) {
    $err = $dataset->learn($network);
    print "Epoch = $i error = $err\n";
}

foreach (@{$dataset->run($network)}) {
    foreach (@$_) { print $_ }
    print "\n";
}

# foreach my $a ( 1..10 ) {
#     foreach my $b ( 1..10 ) {
#         my($ans)   = $a+$b;
#         my($nnans) = @{$network->run([$a,$b])};
#         print "[$a] [$b] ans=$ans but nnans=$nnans\n" unless $ans == $nnans;
#     }
# }
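As an aside, the bare number dump at the end is hard to read against the expected sums. Here's a minimal sketch of a friendlier check, built on the same $dataset->run($network) call; the @targets list is just the target column of the dataset copied by hand, not something pulled out of the Dataset object.

# Print each network output next to the sum it should have learned.
# The target values are hand-copied from the dataset above.
my @targets = (2, 1000, 3, 4, 40, 100, 100, 200, 300, 20, 30, 20);

my $outputs = $dataset->run($network);
for my $i (0 .. $#targets) {
    my ($prediction) = @{ $outputs->[$i] };
    printf "target %-6s prediction %s\n", $targets[$i], $prediction;
}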

The alterations are:

I'll carry on looking at this - I've never really used this code for non-binary-encoded data, so there will almost certainly be improvements to be made. NeuralNet-Mesh learns this data set very quickly, so there may be something I can derive from looking at that code. But at least you can now derive and save a weight set that will do additions (although you might have to interrupt and restart a few times to get a good, quick run - see the sketch below for one way to automate that).
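Here's a rough sketch of automating the interrupt-and-restart cycle: re-initialise the network and retrain, keeping the first run that converges. It assumes that calling init() again re-randomises the weights, and it uses dump_state to save the weight set, which is how I recall the NNFlex interface working - check the module docs if that call isn't quite right. It reuses the $network and $dataset objects built above.

# Re-initialise and retrain until a run converges, instead of
# interrupting and restarting by hand.
my $max_epochs = 40096;
my $target_err = 0.001;

for my $attempt (1 .. 5) {
    $network->init();    # fresh weights for each attempt
    my $err = 10;
    for ( my $i = 0; ($err > $target_err) && ($i < $max_epochs); $i++ ) {
        $err = $dataset->learn($network);
    }
    print "Attempt $attempt finished with error $err\n";
    if ($err <= $target_err) {
        # dump_state is how I recall NNFlex saving a weight set -
        # treat this call as a placeholder if the docs differ.
        $network->dump_state(filename => 'addition.wts', activations => 1);
        last;
    }
}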

It's likely that it can be improved by:

I'll post again on this thread if I find anything really useful.

Update: Gah! The fahlman constant is applied by default. I've amended the code above to set fahlmanconstant to 0, and that seems to work better.

--------------------------------------------------------------

g0n, backpropagated monk