in reply to Re: Re: Re: Testing Inline::C Modules
in thread Testing Inline::C Modules

I've wanted to play with neural nets seriously since they were omitted from my A.I. class in college, and Russell and Norvig (still shelved, mostly unread), while canonical, isn't exactly crystal clear on the subject.

While R&N is an excellent general textbook, I'd hesitate to call it canonical.

If you're interested I'd recommend taking a look at Rumelhart & McClelland's classic two-volume set "Parallel Distributed Processing: Explorations in the Microstructure of Cognition". While it was first published back in the mid-eighties, it's still a great overview of the basics.

Masters' "Practical Neural Network Recipes in C++" is a good practical introduction.

Update: You may find the AI Depot a source of useful info and links too.

There's a good quote somewhere, entirely irrelevant and mostly forgotten, that went something like this: "The two worst ways to solve a problem are neural networks and genetic algorithms". It's not really an insult, but more a statement of the A.I. pathology: the former must know when it has found the answer, and the latter must already know the solution and work out how to get there from the problem.

This is a slightly unfair characterisation. GAs progress by having a way of comparing solutions. Knowing whether one thing is better or worse than another is very different from knowing what the ideal solution is. The same applies to neural nets.
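To make that concrete, here's a minimal toy sketch (all names and parameters are my own invention, not from any library) of a GA maximising the number of 1s in a bit string. Notice that the algorithm only ever *compares* two candidates to decide which is better; it never checks whether it has reached the ideal answer, and runs for a fixed number of generations instead.

```python
import random

def onemax(bits):
    # Fitness: count of 1-bits. Ranking candidates by this value needs
    # no knowledge that the "ideal" string is all ones.
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: a single better/worse comparison.
            a, b = rng.sample(pop, 2)
            return a if onemax(a) >= onemax(b) else b
        nxt = []
        while len(nxt) < pop_size:
            mum, dad = pick(), pick()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = mum[:cut] + dad[cut:]
            if rng.random() < 0.1:               # occasional point mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax)

best = evolve()
```

The only problem-specific knowledge is the fitness comparison; swap `onemax` for any other scoring function and the rest of the machinery is unchanged.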

The AI "pathology" that I have come across is people ignoring slightly less cool solutions. For example, given equal amounts of time, you'll often find that GAs will lose out to other stochastic methods like simulated annealing.

I can't find it, but there was a neural net somewhere used to reproduce circuits humans had already invented. The designs left some extra resistors in strange places, and in many cases, scientists weren't exactly sure what they did. It is possible, in the future, to see more of this kind of work -- many problems exist where we can define the inputs and the outputs, but can't invent the middle layer.

You may be thinking of Adrian Thompson's work (whose robots occasionally prevented me getting to my office back when I worked at Sussex Uni ;-) He evolved hardware solutions by applying evolutionary algorithms to field-programmable gate arrays (FPGAs).

This is from memory - so I may be getting the details wrong.

As I recall, he had evolved a chip to recognise different tones. When he looked at the solution on the chip, it turned out that some of the cells were connected to neither input nor output, but when he disconnected them the chip suddenly stopped recognising the tones.

It turned out that the solution was taking advantage of capacitive/inductive effects between the connected and disconnected bits. It was hard to investigate because the FPGA only gave digital output, so it was impossible to directly measure the analogue values that were causing the effect. The solution didn't even work if you set up the same configuration on a different chip.

I think they were going to build some specialised hardware to take analogue measurements - I can't remember, or never knew, whether they got anywhere.

Fascinating stuff.