thealienz1 has asked for the wisdom of the Perl Monks concerning the following question:
Recently I have been working on a project for my neural network class. I came across the many posts on this site and decided to use AI::NNFlex::Backprop to implement the network because of the control it offers. But I wouldn't be asking for help if it had worked in the 1..Infinity times I have tried to make it work.
The network consists of 3 inputs and 2 outputs. The 3 inputs are radian values (generated randomly) and the 2 outputs are y1 = sin(x1 + x2 + x3) and y2 = cos(x1 - x2 - x3). This data can be seen in the dataset $train_set. I am trying to train the network to learn the sin and cos functions over a reasonable domain.
use strict;
use AI::NNFlex::Backprop;
use AI::NNFlex::Dataset;
use Data::Dumper;
my $n = 0.4;
my $num_epochs = 100;
my $network = AI::NNFlex::Backprop->new(
    learningrate => 0.1,
    bias         => 1,
    momentum     => 0.6,
);
$network->add_layer(nodes=>3,activationfunction=>'sigmoid');
$network->add_layer(nodes=>3,activationfunction=>'sigmoid');
$network->add_layer(nodes=>2,activationfunction=>'sigmoid');
$network->add_layer(nodes=>3,activationfunction=>'sigmoid');
$network->add_layer(nodes=>2,activationfunction=>'sigmoid');
$network->init();
my $train_set = AI::NNFlex::Dataset->new([
[5.5340,2.6382,4.2414],[-0.1521,0.2233],
[3.9251,4.1849,4.2922],[-0.1634,-0.1596],
[2.5816,2.6874,0.2919],[-0.6612,0.9220],
[6.2540,5.3605,1.0425],[0.0905,0.9889],
[1.8847,5.9803,1.9172],[-0.3500,0.9637],
[3.8622,2.3186,3.8525],[-0.5717,-0.6729],
[4.0471,0.5837,2.9998],[0.9751,0.8944],
[0.1537,1.6062,2.7142],[-0.9717,-0.5190],
[0.9319,5.4831,2.1002],[0.7892,0.9329],
[5.9391,4.3719,2.0253],[-0.2280,0.8969],
[3.8589,3.6751,2.4908],[-0.5647,-0.6714],
[5.4657,0.6344,4.7407],[-0.9880,0.9959],
[5.0981,2.1070,4.0408],[-0.9688,0.4978],
[5.8363,4.2850,3.2053],[0.6890,-0.0830],
[4.0277,1.2286,3.6781],[0.4710,0.6379],
[0.9807,6.1761,3.6123],[-0.9744,-0.8156],
[0.4826,5.8266,3.1986],[-0.0830,-0.6355],
[5.6870,2.4516,5.0457],[0.5794,-0.2372],
[1.4678,4.5695,1.8276],[0.9999,0.2152],
[6.1370,5.9400,5.7317],[-0.8629,0.7327],
[1.9392,0.4360,0.6980],[0.0682,0.6930],
[1.8824,0.6383,3.1576],[-0.5687,-0.3360],
[3.9325,5.2540,6.0191],[0.4814,0.4910],
[5.5595,3.1260,1.3106],[-0.5408,0.4331],
[3.8381,0.6247,0.4827],[-0.9730,-0.9168],
[4.7271,2.3532,1.3232],[0.8527,0.4970],
[4.4938,0.8037,4.3722],[-0.2426,0.7762],
[2.4121,2.9037,2.5048],[0.9994,-0.9895],
[1.2994,1.8898,3.2962],[0.2009,-0.7351],
[2.5134,1.6276,2.3341],[0.1908,0.1222],
[0.4626,4.4195,1.7705],[0.3611,0.8495],
[4.7609,4.2142,4.7134],[0.9011,-0.5189],
[3.3172,2.6081,0.8349],[0.4591,0.9921],
[2.8259,2.0912,1.1687],[-0.1960,0.9073],
[1.7875,2.6646,4.4482],[0.5008,0.5753],
[1.8996,2.8124,1.6491],[0.0778,-0.8366],
[1.9332,2.5489,0.5224],[-0.9576,0.4193],
[2.3862,4.7004,3.3423],[-0.8437,0.8100],
[0.3561,6.2829,1.3254],[0.9939,0.5661],
[0.7836,0.6747,1.2805],[0.3920,0.3887],
[5.3112,3.1028,1.5374],[-0.5026,0.7832],
[0.4225,5.5171,4.3372],[-0.7527,-1.0000],
[4.2945,3.1031,3.6578],[-0.9982,-0.7806],
[1.6475,1.3222,0.7899],[-0.5794,0.8940],
[2.6780,0.2432,2.5458],[-0.7286,0.9939],
[3.4686,1.6998,6.1486],[-0.9488,-0.3264],
[1.4655,1.0675,2.4023],[-0.9753,-0.4201],
[1.5529,1.2072,2.3019],[-0.9395,-0.3759],
[0.9813,2.1893,2.4928],[-0.5809,-0.8477],
[6.0636,4.0863,0.9625],[-0.9932,0.5278],
[5.6665,0.1528,3.9506],[-0.3384,0.0077],
[3.8016,5.0398,2.4865],[-0.9453,-0.8348],
[3.4299,2.2151,3.4295],[0.3431,-0.6003],
[5.6385,2.9992,2.5394],[-0.9836,0.9950],
[4.5908,2.2741,0.6373],[0.9388,-0.1083],
[3.2202,5.6121,1.0274],[-0.4213,-0.9617],
[3.3479,3.1709,2.5166],[0.3797,-0.6953],
[3.0010,4.5502,2.1642],[-0.2866,-0.8410],
[0.8316,3.0995,4.6960],[0.7158,0.7771],
[4.7967,5.0464,6.1339],[-0.2659,0.9950],
[0.4729,4.8273,1.9162],[0.8035,0.9999],
[0.0993,5.4944,5.8210],[-0.9134,0.2187],
[3.7814,5.7043,4.5935],[0.9983,0.9729],
[4.8448,2.6848,0.0043],[0.9492,-0.5522],
[2.7279,5.8683,1.4343],[-0.5694,-0.1372],
[5.8139,4.9424,1.3369],[-0.4557,0.8936],
[3.5784,0.2923,3.6134],[0.9323,0.9469],
[6.1181,0.7732,4.3905],[-0.9593,0.5780],
[0.4081,2.2920,4.2574],[0.6243,0.9900],
[5.9607,4.8815,1.9736],[0.2469,0.6259],
[3.3337,1.7212,0.0541],[-0.9224,0.0124],
[1.7599,3.7055,5.2455],[-0.9597,0.6154],
[0.4852,0.1804,2.4029],[0.0729,-0.5032],
[3.1002,4.0584,1.1857],[0.8822,-0.5422],
[3.3059,3.7911,4.0935],[-0.9810,-0.1333],
[0.5245,3.6482,5.8982],[-0.6021,-0.9200],
[1.5618,6.2071,1.9522],[-0.2920,0.9510],
[1.1077,0.7994,5.6519],[0.9568,0.5901],
[2.6820,4.5110,2.6853],[-0.4382,-0.1968],
[3.4817,4.5394,3.6957],[-0.7510,0.0410],
[6.0996,2.6882,0.2202],[0.4048,-0.9988],
[5.1756,2.8179,0.2649],[0.9193,-0.4986],
[4.1947,2.9655,4.0844],[-0.9691,-0.9593],
[6.1281,1.9555,4.5045],[0.0217,0.9454],
[3.1109,3.4934,1.9679],[0.7531,-0.7030],
[0.4649,0.2924,3.1929],[-0.7233,-0.9927],
[3.0673,1.5909,2.8188],[0.9298,0.2264],
[3.6315,5.8460,3.3502],[0.2583,0.7528],
[4.8192,5.8070,1.6455],[-0.2904,-0.8736],
[0.3948,3.2503,2.0376],[-0.5651,0.1797],
[6.0185,4.9840,3.6957],[0.8467,-0.8868],
[5.2400,1.8391,0.1164],[0.7909,-0.9898],
[4.4132,4.7653,4.4866],[0.8907,0.1260],
[1.1368,1.6150,4.5218],[0.8363,0.2837],
[1.2945,3.0269,4.1392],[0.8216,0.9165],
[2.1304,1.6573,2.3976],[-0.0977,-0.3464],
[1.6617,5.9765,1.6080],[0.1777,0.9358],
[3.8018,6.1612,4.4297],[0.9675,0.8748],
[1.0944,1.6855,0.0794],[0.2786,0.7836],
[0.1884,3.1365,0.5390],[-0.6611,-0.9409],
[2.0268,2.5542,1.7887],[0.0865,-0.6782],
[5.0352,0.0222,3.7997],[0.5377,0.3499],
[4.2030,2.6134,0.3378],[0.7650,0.3136],
[3.0253,0.9904,0.0346],[-0.7887,-0.4165],
[1.8271,1.6109,5.8611],[0.1253,0.8031],
[5.3684,2.0890,0.9048],[0.8736,-0.7200],
[3.6329,5.3425,2.8755],[-0.6560,-0.1270],
[4.9383,2.3600,0.3676],[0.9823,-0.5972],
[3.7106,0.6559,2.0575],[0.1403,0.5427],
[4.5009,5.0899,4.7424],[0.9809,0.5801],
[2.2240,3.4914,2.6341],[0.8797,-0.7249],
[0.4496,2.9264,0.0610],[-0.2911,-0.8232],
[1.9339,2.0413,5.2684],[0.1801,0.6158],
[0.0672,4.7777,5.0418],[-0.4456,-0.9469],
[3.3425,2.5392,6.2484],[-0.4226,0.6688],
[0.1650,3.0103,3.4485],[0.3341,0.9999],
[0.7814,4.1311,2.4528],[0.8830,0.8867],
[0.8563,2.1011,5.8930],[0.5434,0.6566],
[2.5677,1.4636,1.7530],[-0.4785,0.7968],
[6.2694,5.8874,4.7573],[-0.9343,-0.3307],
[5.7473,5.7283,4.6135],[-0.3720,-0.1177],
[1.0047,3.0440,5.7552],[-0.3700,0.0594],
[3.2436,4.7168,2.6994],[-0.9441,-0.5140],
[5.5943,1.6110,3.4079],[-0.9278,0.8390],
[5.5595,4.5983,4.3449],[0.9339,-0.9708],
[2.9657,4.5754,0.4729],[0.9872,-0.4897],
[1.2764,3.5884,0.6863],[-0.6684,-0.9898],
[2.5378,6.0620,5.8285],[0.9579,-0.9974],
[6.1402,4.3040,4.2097],[0.8694,-0.7192],
[5.1266,3.0453,3.1896],[-0.9338,0.4462],
[0.9140,0.1484,4.1952],[-0.8550,-0.9588],
[1.9633,0.0985,2.6169],[-0.9994,0.7302],
[3.2475,4.4928,2.0868],[-0.3915,-0.9819],
[0.2560,1.7329,2.7378],[-0.9999,-0.4774],
[4.2695,2.1248,2.8712],[0.1587,0.7475],
[4.8536,5.5924,1.6403],[-0.4618,-0.7231],
[5.5306,2.9369,4.7678],[0.6202,-0.5673],
[3.2925,5.2756,5.1452],[0.9115,0.6636],
[2.3980,5.4450,5.8478],[0.9020,-0.8628],
[5.2609,1.3351,5.4096],[-0.5318,0.0868],
[5.3450,3.2210,0.8752],[-0.0164,0.3164],
[2.8786,4.1886,6.2505],[0.6827,0.2893],
[4.6595,3.1193,1.6553],[-0.0094,0.9934],
[5.2902,0.3591,5.0356],[-0.9521,0.9945],
[2.4589,5.5967,1.5014],[-0.1319,-0.0731],
[1.1989,2.9440,4.2920],[0.8359,0.9699],
[1.4007,4.3817,5.7958],[-0.8350,-0.7973],
[2.8374,1.2346,0.4524],[-0.9824,0.4081],
[5.1153,0.8750,1.3218],[0.8567,-0.9752],
[1.2237,0.3685,4.9964],[0.3006,-0.5407],
[3.8471,2.3281,0.4555],[0.3406,0.4858],
[2.7967,1.3191,4.5017],[0.7224,-0.9931],
[5.2220,2.8499,4.7082],[0.2121,-0.6927],
[2.8660,2.2244,4.7338],[-0.3889,-0.5812],
[4.3218,0.3358,4.0157],[0.6827,0.9996],
[4.1818,3.5398,4.8490],[0.0042,-0.4842],
[1.2552,2.7262,5.8577],[-0.4025,0.5015],
[3.7468,0.6937,5.5425],[-0.5297,-0.7947],
[4.6458,0.4160,5.6699],[-0.9654,0.1304],
[3.8282,4.2185,3.8997],[-0.5810,-0.4100],
[4.2166,6.2005,0.3450],[-0.9729,-0.6875],
[4.1739,2.0613,3.3268],[-0.1367,0.3490],
[5.8329,5.7748,2.2769],[0.9683,-0.6036],
[2.3621,1.3956,2.3874],[-0.1376,0.1494],
[4.0352,0.2616,2.6201],[0.5922,0.4053],
[3.6630,2.5063,6.0631],[-0.3278,0.1928],
[2.8453,0.4987,2.3738],[-0.5357,0.9996],
[5.3606,0.4756,1.7312],[0.9592,-0.9999],
[2.0618,2.4722,4.6249],[0.2628,0.3174],
[5.2130,1.4297,3.1636],[-0.3723,0.8140],
[4.9403,5.7189,2.6434],[0.6715,-0.9609],
[1.1577,1.3957,0.2038],[0.3750,0.9040],
[4.9633,6.0459,5.8599],[-0.9172,0.7904],
[1.5738,0.6150,2.3907],[-0.9912,0.1385],
[1.0474,5.8627,1.5949],[0.7955,0.9919],
[3.4741,1.3534,3.6510],[0.8113,0.0405],
[4.9391,1.4557,3.7931],[-0.6912,0.9524],
[2.6547,4.1940,2.9256],[-0.3424,-0.2449],
[0.3649,1.6665,3.6264],[-0.5855,0.2139],
[5.2070,1.8115,0.6447],[0.9818,-0.9246],
[0.0335,2.9267,0.0999],[0.0815,-0.9890],
[4.3960,2.9445,1.2468],[0.7430,0.9791],
[3.0136,1.0945,4.5197],[0.7152,-0.8572],
[3.3508,5.2102,5.6917],[0.9933,0.2983],
[4.7758,1.6340,5.1324],[-0.8543,-0.4075],
[5.1793,5.2910,0.4546],[-0.9975,0.8439],
[0.9531,4.7461,2.4497],[0.9568,0.9992],
[1.3712,5.5541,2.7578],[-0.2554,0.7915],
[6.2456,0.1370,3.3101],[-0.2647,-0.9417],
[3.8731,3.8342,2.6510],[-0.8037,-0.8631],
[0.3462,0.9295,5.2808],[0.2700,0.9135],
[4.1782,5.5745,0.1970],[-0.5011,-0.0226],
[2.6387,2.8487,5.4376],[-0.9975,0.8047],
[5.7901,4.0368,1.1864],[-0.9998,0.8435],
[4.4426,5.5650,3.7703],[0.9361,0.1793],
[1.4368,2.9068,4.2683],[0.7263,0.8552],
[3.4556,0.4719,5.9739],[-0.4587,-0.9886],
[2.6051,0.1948,1.1812],[-0.7443,0.3351],
[4.1460,4.0784,1.2412],[-0.0408,0.3869],
[1.3771,0.8185,1.9272],[-0.8312,0.2009]
]);
my $epoch = 1;
my $err = 1;
while ($err > 0.001 && $epoch < 100) {
    $err = $train_set->learn($network);
    #$outputsRef = $train_set->run($network);
    print "Error: $err\n";
    $epoch++;
}
foreach (@{ $train_set->run($network) }) {
    foreach (@$_) { print $_ }
    print "\n";
}
However, I have hit a snag. From what I understand of the module, the error should decrease with each training epoch. Unfortunately, this is not the case: the error increases over time.
$ perl test1.pl
Error: 37268.9380637208
Error: 36366.1126525255
Error: 36397.7874751512
Error: 36401.0039794455
Error: 36401.2560517885
Error: 36401.2580844506
Error: 36401.2576249638
Error: 36401.2576128607
Error: 36401.2576127725
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Error: 36401.2576127723
Any insight into why this might happen? I think it has something to do with the fact that my y1 and y2 values are always below one, so that somewhere in the module the values are being rounded. Usually the neural networks presented in the examples have binary output values. Does that always have to be the case?
Mess around with the code and see what you think.
Thanks for any comments...
Re: AI::NNFlex::Backprop error not decreasing
by g0n (Priest) on Mar 21, 2005 at 09:57 UTC
Hi,
Your problem is probably partly my fault. The 'sigmoid' activation function uses a formula that I haven't worked out how to differentiate yet, so there is no corresponding sigmoid_slope function to return the slope of the error curve. I should really have taken that activation function out - apologies for the oversight.
I would suggest you use the tanh activation function instead. I'll correct the module & documentation for the next release.
Could some kind monk tell me the 1st order derivative of this function:
(1+exp(-$value))**-1
so I can correct the code?
You've also got several layers defined. While there is no theoretical reason why you shouldn't (and I wrote the code with that in mind), it is more usual to use 3 layers and adjust the number of nodes in the hidden layer to reflect the number of values you need the network to learn.
Update: Oops, didn't spot the question at the bottom. Theoretically there is no reason why you shouldn't have analogue values learned by the network, although again it's unusual, and you'll lose a bit of precision on the output.
I must admit I've never tried implementing an analogue net with AI::NNFlex::Backprop though, so I can't guarantee it will work.
While analogue nets are possible, it's an unusual approach, and takes a good deal of thinking about. Backprop nets are what my tutor likes to call a 'universal approximator'. Given the precision and size of your data set, my feeling is that trying to teach a backprop net this kind of data in this form is likely to fail - the output values will always be too approximate, so the error slope will never have a true 'solution'.
The fact that the module didn't fail when unable to find a slope function suggests that you are using 0.2. This bug is fixed in 0.21, which is a lot faster as well, so you might want to get that version from CPAN.
Thanks frodo72, that seems to do the job. I'll put that in the code for the next release, although I'd still recommend the OP uses tanh, as it seems to be more effective with this implementation of backprop.
Just a slight correction to frodo72's derivative:
-1 * exp(-$value) * ((1 + exp(-$value)) ** -2)
Update: nevermind, forgot a negative that canceled. =\
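For completeness, the corrected derivative simplifies to the usual logistic-slope form (standard calculus, not stated explicitly in the thread):

```latex
s(x) = \left(1 + e^{-x}\right)^{-1}
\quad\Longrightarrow\quad
s'(x) = e^{-x}\left(1 + e^{-x}\right)^{-2} = s(x)\,\bigl(1 - s(x)\bigr)
```

The last form is handy in code, since the slope can be computed directly from the node's already-computed activation.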
I have some experience with what you are referring to as an analog network. In fact I've only ever used analog output, so digital-output networks are the exception for me. Actually, there is little difference between the two other than the interpretation of the values the network produces. The ability of the network to approximate your analog training sample is going to be dependent on the suitability of the underlying network's multi-dimensional nonlinear polynomial (of sorts) to approximate the function. Since a sinusoidal function can be pretty well approximated (close to the origin) by a low-order Taylor series expansion, I would expect a suitably designed and trained NN to perform nearly as well. A look at the number of free parameters in such a series expansion would give you a good hint at the size of network you would need (my guess is not very large).
From looking at your training data and the error over training iterations, I'd say that what you see doesn't look very odd. You can see that the error does indeed decrease from the first training iteration; it then goes to a low point and levels out at a slightly higher value. This behavior is expected for a network whose number of weights and number of training samples are roughly the same order of magnitude. It shows a tendency for the network to become overtrained: for the training samples to become hardwired into the network's weights. To fix this you would either have to add many more training samples, or reduce the number of layers in your network. To me the network you've chosen looks too complex for the task at hand, and therefore much more likely to become overtrained. Try a single hidden layer of 4-5 nodes.
Another way to avoid overtraining is to partition your sample data into two sets: train on one set, and after each epoch test the error on the other set. You should aim for a minimized error on the second data set.
use strict;
use AI::NNFlex::Backprop;
use AI::NNFlex::Dataset;
use Data::Dumper;
my $n = 0.4;
my $num_epochs = 100;
my $network = AI::NNFlex::Backprop->new(
    learningrate => 0.9,
    bias         => 1,
);
$network->add_layer(nodes=>3,activationfunction=>'tanh');
#$network->add_layer(nodes=>3,activationfunction=>'tanh');
#$network->add_layer(nodes=>2,activationfunction=>'tanh');
#$network->add_layer(nodes=>3,activationfunction=>'tanh');
$network->add_layer(nodes=>5,activationfunction=>'tanh');
$network->add_layer(nodes=>2,activationfunction=>'sigmoid');
$network->init();
my $test_set = AI::NNFlex::Dataset->new([
[6.28318,1.570795,0], [1,0],
[6.28318,1.570795,1.570795], [0,-1],
[6.28318,1.570795,3.14159], [-1,0],
[6.28318,1.570795,4.712385], [0,1],
[6.28318,1.570795,6.28318], [1,0],
[6.28318,1.570795,7.853975], [0,-1],
[6.28318,3.14159,0], [0,-1],
[6.28318,3.14159,1.570795], [-1,0],
[6.28318,3.14159,3.14159], [0,1],
[6.28318,3.14159,4.712385], [1,0],
[6.28318,3.14159,6.28318], [0,-1],
[6.28318,3.14159,7.853975], [-1,0],
[6.28318,4.712385,0], [-1,0],
[6.28318,4.712385,1.570795], [0,1],
[6.28318,4.712385,3.14159], [1,0],
[6.28318,4.712385,4.712385], [0,-1],
[6.28318,4.712385,6.28318], [-1,0],
[6.28318,4.712385,7.853975], [0,1],
[6.28318,6.28318,0], [0,1],
[6.28318,6.28318,1.570795], [1,0],
[6.28318,6.28318,3.14159], [0,-1],
[6.28318,6.28318,4.712385], [-1,0],
[6.28318,6.28318,6.28318], [0,1],
[6.28318,6.28318,7.853975], [1,0],
[6.28318,7.853975,0], [1,0],
[6.28318,7.853975,1.570795], [0,-1],
[6.28318,7.853975,3.14159], [-1,0],
[6.28318,7.853975,4.712385], [0,1],
[6.28318,7.853975,6.28318], [1,0],
[6.28318,7.853975,7.853975], [0,-1],
[7.853975,0,0], [1,0],
[7.853975,0,1.570795], [0,-1],
[7.853975,0,3.14159], [-1,0],
[7.853975,0,4.712385], [0,1],
[7.853975,0,6.28318], [1,0],
[7.853975,0,7.853975], [0,-1],
[7.853975,1.570795,0], [0,-1],
[7.853975,1.570795,1.570795], [-1,0],
[7.853975,1.570795,3.14159], [0,1],
[7.853975,1.570795,4.712385], [1,0],
[7.853975,1.570795,6.28318], [0,-1],
[7.853975,1.570795,7.853975], [-1,0],
[7.853975,3.14159,0], [-1,0],
[7.853975,3.14159,1.570795], [0,1],
[7.853975,3.14159,3.14159], [1,0],
[7.853975,3.14159,4.712385], [0,-1],
[7.853975,3.14159,6.28318], [-1,0],
[7.853975,3.14159,7.853975], [0,1],
[7.853975,4.712385,0], [0,1],
[7.853975,4.712385,1.570795], [1,0],
[7.853975,4.712385,3.14159], [0,-1],
[7.853975,4.712385,4.712385], [-1,0],
[7.853975,4.712385,6.28318], [0,1],
[7.853975,4.712385,7.853975], [1,0],
[7.853975,6.28318,0], [1,0],
[7.853975,6.28318,1.570795], [0,-1],
[7.853975,6.28318,3.14159], [-1,0],
[7.853975,6.28318,4.712385], [0,1],
[7.853975,6.28318,6.28318], [1,0],
[7.853975,6.28318,7.853975], [0,-1],
[7.853975,7.853975,0], [0,-1],
[7.853975,7.853975,1.570795], [-1,0],
[7.853975,7.853975,3.14159], [0,1],
[7.853975,7.853975,4.712385], [1,0],
[7.853975,7.853975,6.28318], [0,-1],
[7.853975,7.853975,7.853975], [-1,0]
]);
my $train_set = AI::NNFlex::Dataset->new([
[0,0,0], [0,1],
[0,0,1.570795], [1,0],
[0,0,3.14159], [0,-1],
[0,0,4.712385], [-1,0],
[0,0,6.28318], [0,1],
[0,0,7.853975], [1,0],
[0,1.570795,0], [1,0],
[0,1.570795,1.570795], [0,-1],
[0,1.570795,3.14159], [-1,0],
[0,1.570795,4.712385], [0,1],
[0,1.570795,6.28318], [1,0],
[0,1.570795,7.853975], [0,-1],
[0,3.14159,0], [0,-1],
[0,3.14159,1.570795], [-1,0],
[0,3.14159,3.14159], [0,1],
[0,3.14159,4.712385], [1,0],
[0,3.14159,6.28318], [0,-1],
[0,3.14159,7.853975], [-1,0],
[0,4.712385,0], [-1,0],
[0,4.712385,1.570795], [0,1],
[0,4.712385,3.14159], [1,0],
[0,4.712385,4.712385], [0,-1],
[0,4.712385,6.28318], [-1,0],
[0,4.712385,7.853975], [0,1],
[0,6.28318,0], [0,1],
[0,6.28318,1.570795], [1,0],
[0,6.28318,3.14159], [0,-1],
[0,6.28318,4.712385], [-1,0],
[0,6.28318,6.28318], [0,1],
[0,6.28318,7.853975], [1,0],
[0,7.853975,0], [1,0],
[0,7.853975,1.570795], [0,-1],
[0,7.853975,3.14159], [-1,0],
[0,7.853975,4.712385], [0,1],
[0,7.853975,6.28318], [1,0],
[0,7.853975,7.853975], [0,-1],
[1.570795,0,0], [1,0],
[1.570795,0,1.570795], [0,-1],
[1.570795,0,3.14159], [-1,0],
[1.570795,0,4.712385], [0,1],
[1.570795,0,6.28318], [1,0],
[1.570795,0,7.853975], [0,-1],
[1.570795,1.570795,0], [0,-1],
[1.570795,1.570795,1.570795], [-1,0],
[1.570795,1.570795,3.14159], [0,1],
[1.570795,1.570795,4.712385], [1,0],
[1.570795,1.570795,6.28318], [0,-1],
[1.570795,1.570795,7.853975], [-1,0],
[1.570795,3.14159,0], [-1,0],
[1.570795,3.14159,1.570795], [0,1],
[1.570795,3.14159,3.14159], [1,0],
[1.570795,3.14159,4.712385], [0,-1],
[1.570795,3.14159,6.28318], [-1,0],
[1.570795,3.14159,7.853975], [0,1],
[1.570795,4.712385,0], [0,1],
[1.570795,4.712385,1.570795], [1,0],
[1.570795,4.712385,3.14159], [0,-1],
[1.570795,4.712385,4.712385], [-1,0],
[1.570795,4.712385,6.28318], [0,1],
[1.570795,4.712385,7.853975], [1,0],
[1.570795,6.28318,0], [1,0],
[1.570795,6.28318,1.570795], [0,-1],
[1.570795,6.28318,3.14159], [-1,0],
[1.570795,6.28318,4.712385], [0,1],
[1.570795,6.28318,6.28318], [1,0],
[1.570795,6.28318,7.853975], [0,-1],
[1.570795,7.853975,0], [0,-1],
[1.570795,7.853975,1.570795], [-1,0],
[1.570795,7.853975,3.14159], [0,1],
[1.570795,7.853975,4.712385], [1,0],
[1.570795,7.853975,6.28318], [0,-1],
[1.570795,7.853975,7.853975], [-1,0],
[3.14159,0,0], [0,-1],
[3.14159,0,1.570795], [-1,0],
[3.14159,0,3.14159], [0,1],
[3.14159,0,4.712385], [1,0],
[3.14159,0,6.28318], [0,-1],
[3.14159,0,7.853975], [-1,0],
[3.14159,1.570795,0], [-1,0],
[3.14159,1.570795,1.570795], [0,1],
[3.14159,1.570795,3.14159], [1,0],
[3.14159,1.570795,4.712385], [0,-1],
[3.14159,1.570795,6.28318], [-1,0],
[3.14159,1.570795,7.853975], [0,1],
[3.14159,3.14159,0], [0,1],
[3.14159,3.14159,1.570795], [1,0],
[3.14159,3.14159,3.14159], [0,-1],
[3.14159,3.14159,4.712385], [-1,0],
[3.14159,3.14159,6.28318], [0,1],
[3.14159,3.14159,7.853975], [1,0],
[3.14159,4.712385,0], [1,0],
[3.14159,4.712385,1.570795], [0,-1],
[3.14159,4.712385,3.14159], [-1,0],
[3.14159,4.712385,4.712385], [0,1],
[3.14159,4.712385,6.28318], [1,0],
[3.14159,4.712385,7.853975], [0,-1],
[3.14159,6.28318,0], [0,-1],
[3.14159,6.28318,1.570795], [-1,0],
[3.14159,6.28318,3.14159], [0,1],
[3.14159,6.28318,4.712385], [1,0],
[3.14159,6.28318,6.28318], [0,-1],
[3.14159,6.28318,7.853975], [-1,0],
[3.14159,7.853975,0], [-1,0],
[3.14159,7.853975,1.570795], [0,1],
[3.14159,7.853975,3.14159], [1,0],
[3.14159,7.853975,4.712385], [0,-1],
[3.14159,7.853975,6.28318], [-1,0],
[3.14159,7.853975,7.853975], [0,1],
[4.712385,0,0], [-1,0],
[4.712385,0,1.570795], [0,1],
[4.712385,0,3.14159], [1,0],
[4.712385,0,4.712385], [0,-1],
[4.712385,0,6.28318], [-1,0],
[4.712385,0,7.853975], [0,1],
[4.712385,1.570795,0], [0,1],
[4.712385,1.570795,1.570795], [1,0],
[4.712385,1.570795,3.14159], [0,-1],
[4.712385,1.570795,4.712385], [-1,0],
[4.712385,1.570795,6.28318], [0,1],
[4.712385,1.570795,7.853975], [1,0],
[4.712385,3.14159,0], [1,0],
[4.712385,3.14159,1.570795], [0,-1],
[4.712385,3.14159,3.14159], [-1,0],
[4.712385,3.14159,4.712385], [0,1],
[4.712385,3.14159,6.28318], [1,0],
[4.712385,3.14159,7.853975], [0,-1],
[4.712385,4.712385,0], [0,-1],
[4.712385,4.712385,1.570795], [-1,0],
[4.712385,4.712385,3.14159], [0,1],
[4.712385,4.712385,4.712385], [1,0],
[4.712385,4.712385,6.28318], [0,-1],
[4.712385,4.712385,7.853975], [-1,0],
[4.712385,6.28318,0], [-1,0],
[4.712385,6.28318,1.570795], [0,1],
[4.712385,6.28318,3.14159], [1,0],
[4.712385,6.28318,4.712385], [0,-1],
[4.712385,6.28318,6.28318], [-1,0],
[4.712385,6.28318,7.853975], [0,1],
[4.712385,7.853975,0], [0,1],
[4.712385,7.853975,1.570795], [1,0],
[4.712385,7.853975,3.14159], [0,-1],
[4.712385,7.853975,4.712385], [-1,0],
[4.712385,7.853975,6.28318], [0,1],
[4.712385,7.853975,7.853975], [1,0],
[6.28318,0,0], [0,1],
[6.28318,0,1.570795], [1,0],
[6.28318,0,3.14159], [0,-1],
[6.28318,0,4.712385], [-1,0],
[6.28318,0,6.28318], [0,1],
[6.28318,0,7.853975], [1,0]
]);
my $epoch = 1;
my $err = 1;
while ($err > 0.001 && $epoch < 100) {
    $err = $train_set->learn($network);
    my $outputsRef = $test_set->run($network);
    print Dumper($outputsRef);
    print "Error: $err\n";
    $epoch++;
}
Running the test set through the network gives the following output.
$ perl test1.pl
$VAR1 = [
[
'2.22776546277668e-07',
'0.011408329955622'
],
[
'2.22776546277668e-07',
'0.011408329955622'
],
[
'2.22776546277668e-07',
'0.011408329955622'
],
[
'2.22776546277668e-07',
'0.011408329955622'
],
[
'2.22776546277668e-07',
'0.011408329955622'
],
[
'2.22776546277668e-07',
'0.011408329955622'
],
....
....
Am I handling the output of the network correctly? The module documentation says run() "runs the dataset through the network and returns a reference to an array of output patterns." I guess I am not handling the array reference correctly.
Thanks for all the help.
I appreciate all your feedback on this programming project. I just wanted to let you know that in the end I simplified the solution: I created a separate neural network for each output I was looking for. It seems to me that a single network complex enough to recognize two functions would be larger than I needed or wanted, and the calculations for weights, epochs, etc. increased so much when I made the network bigger. Creating separate networks and training them individually actually cut down on time.
I came to this conclusion when I finally gave in and started using the Matlab Neural Network Toolbox that my school has, where I could view the output and graph it easily. The network needed would have been too huge for my purposes.
When I have the time, I would actually like to print out your code and really try to understand your implementation. This is something I have become interested in since I started taking a class on the subject. It's something I would like to learn further and perhaps do my master's project (not thesis) on.
Thank you again for your help and the excellent code you have written.
Regards, JT Archie
Re: AI::NNFlex::Backprop error not decreasing
by Anonymous Monk on Mar 21, 2005 at 21:25 UTC
You shouldn't use sigmoid in the hidden layers; sigmoid should be used only in the output layer. Use tanh in the hidden layers, and only one or two of them, not that many! As for the inputs of a NN, it is recommended to use values between 0 and 1, so your decimal values are not wrong!
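A sketch of the topology this advice suggests (untested; note that a sigmoid output layer only produces values in (0,1), so targets of -1 would need rescaling, e.g. mapping [-1,1] onto [0,1]):

```perl
use strict;
use warnings;
use AI::NNFlex::Backprop;

my $network = AI::NNFlex::Backprop->new(
    learningrate => 0.1,
    bias         => 1,
);
$network->add_layer(nodes => 3, activationfunction => 'tanh');    # input
$network->add_layer(nodes => 5, activationfunction => 'tanh');    # single hidden layer
$network->add_layer(nodes => 2, activationfunction => 'sigmoid'); # output layer only
$network->init();
```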