I'm working on a pet project that feeds lots of data into a neural net. How fast the net trains might be (positively!) affected by applying a transformative function to the inputs first. What I'm trying to do is "spread" the values apart so that the neural network can discriminate between them more easily while training. To my uneducated eye this looks like a "kernel" method (forcing values into higher dimensions for better discrimination). One of the common functions I found is tanh; the Wolfram site gives a definition of tanh in terms of e (exp), so I wrote my own tanh for experimentation. Currently I'm not using tanh, though: I'm just squaring the calculated values before they are fed to the neural network.
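In case it helps, here's roughly what I mean, sketched in Python (just an illustration, not my actual code; `my_tanh` and the sample values are made up):

```python
import math

# tanh written out from the exponential definition (as on the Wolfram site):
# tanh(x) = (e^x - e^-x) / (e^x + e^-x)
def my_tanh(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

raw = [-2.0, -0.5, 0.0, 0.5, 2.0]
squared  = [x * x for x in raw]        # what I do now: square the values
squashed = [my_tanh(x) for x in raw]   # the alternative I'm considering
```

One thing I've noticed: squaring maps -2 and 2 to the same value (the sign is lost), while tanh is monotonic and keeps the sign but squashes everything into (-1, 1). I'm not sure which property matters more for training.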
What other functions might work well as transformative functions?
TIA