in reply to Re: Re: Testing Inline::C Modules
in thread Testing Inline::C Modules

What I really would like to understand is how I can map multiple inputs to a single output.

hidden-layer sigmoids are fine for nonlinearizing things. but if you want your output to be one real number, your output layer should have only one node, with linear activation.

Replies are listed 'Best First'.
Re: Re: Re: Re: Testing Inline::C Modules
by Ovid (Cardinal) on Feb 11, 2004 at 19:23 UTC

    I understand the one node for the output layer, but I can't use a linear activation function for a backprop network because that uses the derivative of the activation function for propagating the error back through the network. Sample error propagation code:

    for (out = 0; out < network.size.output; out++) {
        network.error.output[out] =
            (network.neuron.target[out] - network.neuron.output[out])
            * sigmoid_derivative(network.neuron.output[out]);
    }

    That fails because the derivative of a linear function will be 1.0, thus not allowing the network to learn from errors. Am I missing something basic here?

    Cheers,
    Ovid

    New address of my CGI Course.

      Am I missing something basic here?

      hmmm ... I think so, not sure what exactly. so I'll just make some random statements and hope they help.

      1) if the derivative is one, the derivative is one. back propagation still works. everything's cool.

      2) your code is good if its intent is to back prop a layer that is known to be sigmoid. if you want to back prop a layer that is known to be linear, you can just drop the derivative term altogether. you'll have to have some kind of layer-labelling scheme or something to decide which loop to call (ditto for the feed-forward pass).