in reply to backpropagation accuracy issue

As a quick test I added use bignum; (see bignum) to the test script and obtained essentially the same erroneous values to greater precision, so I'd guess the errors are unlikely to be due to rounding or accumulated floating-point error. That implies the algorithm has a bug somewhere. How did you obtain the "correct" values? Can you provide a reference to a public online resource that describes the algorithm?
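For reference, the whole test was just adding the pragma at the top of the script; the arithmetic below is only my own illustration of the effect:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use bignum;   # transparently promotes numeric literals and arithmetic
              # to Math::BigFloat arbitrary precision for the rest of
              # the file -- no other changes to the script needed

# With bignum in effect, ordinary arithmetic no longer rounds to a
# 53-bit double; division keeps 40 significant digits by default:
my $third = 1 / 3;
print "$third\n";
```

If the wrong values survive this change unchanged (as they did here), the problem is in the algorithm, not the arithmetic.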

True laziness is hard work

Replies are listed 'Best First'.
Re^2: backpropagation accuracy issue
by perlchris (Initiate) on Feb 19, 2011 at 03:11 UTC

    The correct values were provided and are posted here

    The algorithm is described in Tom M. Mitchell's "Machine Learning" (p. 98). There are PDF slides here; see slides 88-92, on pages 4 and 5 respectively.

    It's unfortunate that it's not a rounding error. I will comb through more closely and hopefully find where the problem is. If you have any suggestions they would certainly be appreciated. Thank you for your help so far.
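    For reference, the update step from Mitchell's Table 4.2 (sigmoid units, stochastic gradient descent) looks roughly like this in Perl. This is a sketch with my own illustrative names, not my actual script:

```perl
use strict;
use warnings;

my $eta = 0.05;   # learning rate (illustrative value)

# One backward pass for a single training example, following
# Mitchell's "Machine Learning" Table 4.2.  Weight matrices are
# arrays of arrays -- one row per unit -- which is exactly where a
# scalar or flat array would silently collapse per-unit updates.
sub backprop_step {
    my ($inputs, $targets, $w_hidden, $w_output,
        $hidden_out, $output_out) = @_;

    # output unit k: delta_k = o_k (1 - o_k) (t_k - o_k)
    my @delta_out;
    for my $k (0 .. $#$output_out) {
        my $o = $output_out->[$k];
        $delta_out[$k] = $o * (1 - $o) * ($targets->[$k] - $o);
    }

    # hidden unit h: delta_h = o_h (1 - o_h) sum_k w_kh delta_k
    my @delta_hid;
    for my $h (0 .. $#$hidden_out) {
        my $o   = $hidden_out->[$h];
        my $sum = 0;
        $sum += $w_output->[$_][$h] * $delta_out[$_] for 0 .. $#delta_out;
        $delta_hid[$h] = $o * (1 - $o) * $sum;
    }

    # every weight gets its own update: w_ji += eta * delta_j * x_ji
    for my $k (0 .. $#delta_out) {
        $w_output->[$k][$_] += $eta * $delta_out[$k] * $hidden_out->[$_]
            for 0 .. $#$hidden_out;
    }
    for my $h (0 .. $#delta_hid) {
        $w_hidden->[$h][$_] += $eta * $delta_hid[$h] * $inputs->[$_]
            for 0 .. $#$inputs;
    }
    return (\@delta_out, \@delta_hid);
}

# Tiny made-up example: 2 inputs, 2 hidden units, 1 output unit.
my @w_hid = ([0.3, 0.4], [0.5, 0.6]);
my @w_out = ([0.1, 0.2]);
my ($d_out, $d_hid) =
    backprop_step([1, 0], [1], \@w_hid, \@w_out, [0.5, 0.5], [0.6]);
printf "output delta %.4f, hidden deltas %.4f %.4f\n",
    $d_out->[0], @$d_hid;   # 0.0960, 0.0024 0.0048
```

    Note that each hidden unit gets its own delta, so the hidden weights should diverge after the first update unless they start identical and see identical inputs.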

      @hiddenweight after one epoch:
      $VAR1 = [ '0.596062038434604', '0.596062038434604', '0.596062038434604' ];

      After two:

      $VAR1 = [ '0.592154093746109', '0.592154093746109', '0.592154093746109' ];

      Shouldn't they be different? (Just guessing here.)

        Yes, I thought so too, but it wasn't until going back over the output you asked for that I realized that some of the variables that are scalars need to be arrays, and a few that are arrays need to be arrays of arrays. So I am going to make those additional fixes, fix the bugs they ultimately cause ;), and see where I am at.

        Again thank you so much for your help.

        I will post an update as soon as possible
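        To illustrate the kind of shape bug I mean (illustrative values and names, not my real code):

```perl
use strict;
use warnings;

# The buggy shape: one scalar shared by every hidden unit, so all
# three "weights" are always identical, exactly as in my dump above.
my $hiddenweight = 0.5;
my @delta = (0.01, -0.03, 0.02);     # made-up per-unit deltas

$hiddenweight -= $_ for @delta;      # all updates collapse into one sum
my @buggy = ($hiddenweight) x 3;     # three identical copies

# The fixed shape: an array, updated element by element.
my @hiddenweight = (0.5, 0.5, 0.5);
$hiddenweight[$_] -= $delta[$_] for 0 .. $#delta;
printf "%.2f %.2f %.2f\n", @hiddenweight;   # 0.49 0.53 0.48 -- they diverge
```

        The same reasoning applies one level up: a flat array of weights where an array of arrays is needed collapses the per-connection updates the same way.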

      Doesn't even start the same.

      Him:

      ***** Epoch 1 ***** Maximum RMSE: 0.5435466682137927 Average RMSE: 0.4999991292217466
      You:
      ***** Epoch 1 ***** Maximum RMSE: 0.574001103043358 Average RMSE: 0.50006970432383
Re^2: backpropagation accuracy issue
by ikegami (Patriarch) on Feb 19, 2011 at 03:11 UTC
    Further support that this is not a floating-point issue: The max is correct to one digit and the average is correct to 4 digits, but doubles have about 17 digits (53 bits) of precision.
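    A quick way to see that limit (a rough sketch):

```perl
use strict;
use warnings;

# An IEEE-754 double carries a 53-bit mantissa, roughly 15-17
# significant decimal digits.  Accumulated rounding over a run of
# updates erodes the last few digits, not digits one through four.
printf "%.17g\n", 0.1 + 0.2;   # 0.30000000000000004 -- error in digit 17
printf "%.17g\n", 2**53;       # 9007199254740992
printf "%.17g\n", 2**53 + 1;   # also 9007199254740992 -- the +1 is lost
```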

      If I could get both to 6 digits (or perhaps more) it'd be acceptable. The program that produced the output I'm testing against is not written in Perl, though; I believe it was written in Java. I don't know whether numeric casts are implemented significantly differently across languages.

      We were allowed to write in any language and I thought Perl would be a good way to do it since that's what I've been reading on as of late.

      Again thank you for the help and suggestions. They are very appreciated. I will continue to look through and see if I can identify the problem with my implementation.