in reply to Re: Re: Re: Testing Inline::C Modules
in thread Testing Inline::C Modules

I understand the one node for the output layer, but I can't use a linear activation function for a backprop network, because backprop uses the derivative of the activation function to propagate the error back through the network. Sample error propagation code:

for (out = 0; out < network.size.output; out++) {
    network.error.output[out] = (network.neuron.target[out] - network.neuron.output[out])
                              * sigmoid_derivative(network.neuron.output[out]);
}

That fails because the derivative of a linear function will be 1.0, thus not allowing the network to learn from errors. Am I missing something basic here?

Cheers,
Ovid

New address of my CGI Course.

Replies are listed 'Best First'.
Re: Re: Re: Re: Re: Testing Inline::C Modules
by chance (Beadle) on Feb 11, 2004 at 20:10 UTC
    Am I missing something basic here?

    Hmmm ... I think so, though I'm not sure what exactly, so I'll just make some random statements and hope they help.

    1) If the derivative is one, the derivative is one. Backpropagation still works; everything's cool.

    2) Your code is good if its intent is to back prop a layer that is known to be sigmoid. If you want to back prop a layer that is known to be linear, you can just drop the derivative term altogether. You'll have to have some kind of layer-labelling scheme or something to decide which loop to call. (Ditto for feeding forward.)