I push virtually everything down to the C level. The only reason Perl is there is to allow Perl programmers to have an easy API. I wrote this module to teach myself how neural networks work since the existing CPAN modules that I saw assumed that you already knew what you were doing. It's sort of a "training" module for both myself and the programmers who might use it.
To be quite frank, while I roughly understand what's going on, I still have a lot of work to do. For example, I use a sigmoid activation function, and I don't quite understand how that differs from linear (useless for a backprop network like the one I use) or tanh activation functions. Also, this NN only seems useful for a "winner-take-all" strategy, so you must have a fixed number of possible outputs. I'm hoping to figure out how to adapt it to be more flexible, but I'm playing with deep magic that I don't understand terribly well.
What I really would like to understand is how I can map multiple inputs to a single output. In other words, let's say a particular product grosses $X million in direct sales. Given the possible variables (cost, advertising budget, demographics of buyers, etc.), project rentals of said product (I'm being deliberately vague). With a winner-take-all strategy, I can't do that. I'm wondering if it's something as simple as choosing an appropriate activation function for the output layer?
Cheers,
Ovid
New address of my CGI Course.
In reply to Re: Re: Testing Inline::C Modules by Ovid
in thread Testing Inline::C Modules by Ovid