in reply to Re(*): Neural Nets and Verbal SQL
in thread Testing Inline::C Modules

Your example sounds more like an expert-systems kind of deal to me.

From what I follow, neural nets map a given input to an expected output based on some function, but there is pretty much no way a neural net could respond to the personal details of Jim. Unless, of course, you are talking about writing a neural net to understand natural language, which is another animal entirely. A very cool animal, but one with spines, pointy teeth, and a bad disposition.

You're right that "its role is to guess" based on prior experience. Dead on. A genetic algorithm, on the other hand, seems to understand not the answer to a few questions but the concept of "between these two, which is better?", and it goes from there. Now, the real question is: which is the better choice for natural-language analysis? :)

Probably neither, right off; first some sort of model needs to be constructed, almost like those evil sentence diagrams I had to do back in English class. Man, I hated those.
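(Aside: that pairwise "which is better" idea is easy to sketch. Here's a toy version in Python rather than Perl, and everything in it -- the bitstring genome, the count-the-ones fitness, the parameters -- is made up purely for illustration.)

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def fitness(bits):
    # Toy fitness: count of 1s. Stands in for "how good is this candidate?"
    return sum(bits)

def tournament(pop):
    # The GA's core judgment: between these two, which is better?
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def evolve(pop_size=40, length=20, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(pop), tournament(pop)
            cut = random.randrange(1, length)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.05:          # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
```

Note that the GA never needed a model of the problem, only a way to compare two candidates -- which is exactly why the "first build a model" step above is the hard part for language.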

Most neural-net work I see being done (from my exposure to the field) is in photo analysis, i.e., "is this a stop sign?". The problem there is identifying what is a reasonable input -- percentage of red is not the best input, and writing an "eccentricity from standard octagon" detector might be complete heck to implement.

What I'm shooting at, I guess, is: what criteria are used to determine which inputs are quality, and how do you map the problem space into reasonable inputs? For any arbitrary problem, finding the right inputs is key to the neural net, since you already know the output values you are using to train it.

This is wonderfully off-topic, but I love it. Good to see there is an A.I. following in Perl as well as Lisp. (Lisp is fun, but it's like coding in Brain[] sometimes).

Re: Re: Re(*): Neural Nets and Verbal SQL
by Ovid (Cardinal) on Feb 11, 2004 at 19:05 UTC

    Parsing English wasn't my intention. I was just generalizing as an example of how neural nets can take incorrect or incomplete information and still make a guess. Their robustness is one of their best features -- so long as the caveats are understood.

    For a clearer (i.e., programmatic) example of what I meant, take a look at the game AI example of the neural net module. There, you tell the NN its health, weapons, and the number of enemies it sees, and it will suggest an appropriate course of action. While the number of inputs is relatively few, it would be trivial to extend that example to cover a broader range of inputs and still get good answers even if your inputs are not complete. Right now, the number of inputs is probably too few to really demonstrate that.
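    The shape of that example can be sketched from scratch in a few dozen lines. This is Python, not the Perl module's interface, and the training data -- (health, weapons, enemies) triples mapped to action names -- is invented for illustration:

```python
import math
import random

random.seed(0)

# Invented training data in the spirit of the game-AI example:
# inputs are (health, weapons, enemies), each scaled to 0..1.
ACTIONS = ["attack", "run", "wander", "hide"]
DATA = [
    ((0.9, 0.9, 0.2), "attack"),
    ((0.3, 0.9, 0.8), "run"),
    ((0.8, 0.1, 0.1), "wander"),
    ((0.1, 0.2, 0.9), "hide"),
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID, N_OUT = 3, 8, len(ACTIONS)
w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [[random.uniform(-1, 1) for _ in range(N_HID)] for _ in range(N_OUT)]
b2 = [0.0] * N_OUT

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
         for row, b in zip(w2, b2)]
    return h, o

def train(epochs=5000, lr=1.0):
    for _ in range(epochs):
        for x, action in DATA:
            target = [1.0 if a == action else 0.0 for a in ACTIONS]
            h, o = forward(x)
            # Output-layer deltas (squared error, sigmoid activation).
            do = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(N_OUT)]
            # Hidden-layer deltas, backpropagated through w2.
            dh = [h[j] * (1 - h[j]) * sum(do[k] * w2[k][j] for k in range(N_OUT))
                  for j in range(N_HID)]
            for k in range(N_OUT):
                for j in range(N_HID):
                    w2[k][j] -= lr * do[k] * h[j]
                b2[k] -= lr * do[k]
            for j in range(N_HID):
                for i in range(N_IN):
                    w1[j][i] -= lr * dh[j] * x[i]
                b1[j] -= lr * dh[j]

def suggest(x):
    # Pick whichever action the net scores highest -- even for inputs
    # it never saw during training.
    _, o = forward(x)
    return ACTIONS[o.index(max(o))]

train()
```

    The robustness point falls out of `suggest`: a slightly perturbed or never-seen input still lands nearest one of the trained responses instead of producing an error.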

    As for genetic algorithms, they would be useless applied directly to this sort of problem, but once I get around to exposing the error rate in the network, one could use genetic algorithms to design a neural network that generates relatively accurate answers (much faster than designing one by hand), using a lower error rate as the measure of fitness.
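    The "lower error rate as fitness" idea, in miniature (again a Python sketch, not the module's API): here the whole "network" is one sigmoid neuron learning AND, and the GA's only signal is the network's error.

```python
import math
import random

random.seed(1)

# Truth table for AND -- the toy task the evolved neuron must fit.
CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def out(genome, x):
    # A one-neuron "network": two weights and a bias.
    w1, w2, b = genome
    return 1.0 / (1.0 + math.exp(-(w1 * x[0] + w2 * x[1] + b)))

def error(genome):
    # The exposed error rate: summed squared error over all cases.
    return sum((out(genome, x) - t) ** 2 for x, t in CASES)

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=error)          # lower error == fitter
    parents = pop[:10]           # keep the 10 best (elitism)
    pop = parents + [
        [g + random.gauss(0, 0.3) for g in random.choice(parents)]
        for _ in range(20)       # fill out the population with mutants
    ]
best = min(pop, key=error)
```

    The GA never looks inside the network; it just breeds whatever scores a lower error, which is exactly the fitness measure described above.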

    Cheers,
    Ovid

    New address of my CGI Course.

      Good stuff. I think I know where my next area of research is going to be for a while. I was screwing around with some game development in my spare time, and I finally realized (as the UNIX type I am) that writing GUIs bores me and I want to get back to my math/engineering origins.

      A fine example, too. I still think it's a little scary when the inputs are not necessarily mathematical (i.e., "octagon edge detectors" as input to the "stop sign detector" problem), but I need to tackle the essential mathematical problems first to get a better understanding of the fundamentals. After all, I know how edge detectors are written, and finding octagons could be a neural net that feeds ANOTHER neural net, used to determine whether the octagon is a stop sign.

      Cool on so many levels. Thanks again. Now I have motivation! (Umh, flyingmoose, you need to get back to work...)