Uhm... I must admit that maybe I just gave up on them too easily (I specialize in different things). I was shown the (mathematical) structure of them during just one lecture, and what I think appeals to you - the simplicity of the network - was, for me, off-putting. I simply lacked respect for something that is no more complicated than a set of linear equations. I'm sure you know that... uhm... that's kind of embarrassing to say, but mathematicians sometimes tend to ignore things that look too simple, which is of course wrong... But I understand that I shouldn't criticize the whole range of NN applications, so maybe I should look through it again a few times...

Think about what a modern computer is. The Turing machine, which is extremely simple, can do the same as any modern computer. I will even dare to say that the simple linear equation of the perceptron is more complex than the Turing machine's rules, since multiplication is a complex beast.
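To make the point concrete, here is roughly how small that "complex beast" is. This is just an illustrative sketch of a single perceptron; the weight values are made up for the example and don't come from the thread:

```python
def perceptron(inputs, weights, bias):
    # A weighted sum plus a threshold: the entire "rule set" of a perceptron.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total >= 0 else 0

# Hand-picked weights (illustrative only) that make the unit compute
# logical AND: it fires only when both inputs are 1.
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], AND_WEIGHTS, AND_BIAS))
```

One line of arithmetic, one comparison; everything else comes from wiring many such units together.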

Looking at the few rules that define a Turing machine, you cannot extrapolate to marvels like a 3D editor, video games, or voice-recognition programs...

Bear in mind that a neural network is like hardware, with the software being the topology of the network and the values of the input coefficients.
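A small sketch of that hardware/software analogy, with made-up values: the same fixed threshold unit (the "hardware") runs different "programs" simply by loading different weights.

```python
def unit(x1, x2, w1, w2, bias):
    # Fixed "hardware": a two-input threshold unit.
    return 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0

# The "software" is just the coefficients we load into that unit.
programs = {
    "AND":    (1.0, 1.0, -1.5),
    "OR":     (1.0, 1.0, -0.5),
    "NOT-x1": (-1.0, 0.0, 0.5),
}

for name, (w1, w2, b) in programs.items():
    table = [unit(x1, x2, w1, w2, b) for x1 in (0, 1) for x2 in (0, 1)]
    print(name, table)
```

Same circuit, three different behaviors: only the coefficients changed.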

The possible applications of neural nets are as broad as those of computers. I know there are limits to what a Turing machine can calculate, as mathematicians have shown; however, the same limits apply to the human mind as well!

I just think that NNs cannot solve any problem that cannot also be solved by, call them, 'classical' algorithms. No revolution there.

As long as we are talking about systems based on finite quantities of discrete values, all of them can be implemented on a digital computer or a Turing machine.

The revolution, or the advance, if you prefer, is in the nature of the algorithm in question.

A neural network is one of the (very few) formal systems we have developed which can "learn".

We can use a neural network, or an equivalent algorithm, to solve "fuzzy" problems by analogy. For example, we can train a NN to visually recognize the letter "A" in lots of shapes, in a similar manner to a living being. In that sense, we can solve problems without having to analyze them first, which has been our method until now.
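That "solve by example" idea can be sketched in a few lines. This is the classic perceptron learning rule, shown learning logical OR from labelled examples instead of us deriving the weights by hand; learning rate and epoch count are arbitrary choices for the sketch:

```python
def predict(w, b, x):
    # Threshold unit, as before.
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

# Labelled examples: the OR truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(20):
    for x, target in data:
        # Nudge the weights toward the target whenever we are wrong.
        err = target - predict(w, b, x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b    += lr * err

print([predict(w, b, x) for x, _ in data])  # learned outputs for each row
```

We never told the network what OR *is*; it extracted the rule from examples, which is exactly the shift in method being described.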

This has an interesting consequence: a NN can solve problems we don't know how to solve. This is especially true when combining NNs with genetic algorithms. As long as we can state the problem and recognize a possible solution, the NN can potentially bridge the gap.
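A minimal sketch of that NN + evolution combination, with made-up details: we only supply a *fitness score* (can we recognize a good solution?), never a procedure, and random mutation searches the weight space for us. Here the "network" is a single threshold unit evolved to match AND:

```python
import random

random.seed(1)

TRUTH = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # target: AND

def fitness(genome):
    # Score a candidate network: how many truth-table rows it gets right.
    w1, w2, b = genome
    return sum(
        (1 if w1 * x1 + w2 * x2 + b >= 0 else 0) == t
        for (x1, x2), t in TRUTH
    )

# Minimal evolutionary loop: mutate the genome (the weights), keep the
# child whenever it scores at least as well, stop once every row passes.
best = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(20000):
    if fitness(best) == 4:
        break
    child = [g + random.gauss(0, 0.5) for g in best]
    if fitness(child) >= fitness(best):
        best = child

print("fitness:", fitness(best), "weights:", best)
```

Note that nothing in the loop knows what AND means; the scoring function alone steers the search, which is why this approach can reach solutions we don't know how to construct directly.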

But there's no discovered limit of activity (which, in my understanding, would have to take the form of some quantitative characteristic of the brain) where "normality" ends and genius begins. That's the simple consequence of the fact that, as you well know, "normality" is just a statistical median, and nothing more. No limits are well defined... At least I am not familiar with any.

Well, there are very clear examples; we have the very famous case of the savant Kim Peek:

http://www.wisconsinmedicalsociety.org/savant/kimpeek.cfm

Kim Peek's abilities are rare or unheard of in other humans: allegedly, he can recall 7600 books (word for word) that he read only once, read two pages at the same time (using one eye for each page), or add long lists of numbers instantly. And his brain is quite different from a normal one.

Aside from that, I understand that you are talking about the ability of any common human to push his intellect very far. Yes, I reckon the brain is very powerful and has, like a muscle, a plastic nature: the capability to grow stronger or adapt to different conditions.

However, I am back to my original point: different brains host different minds and different intellects, some more intelligent than others. If machines obtain the capacity for thinking, I don't see any reason to believe their intelligence must be as limited as the human one. Our hardware is limited; the machines' is not.

I once again repeat that I don't mean to quarrel or be sarcastic. I feel I have some gaps in my knowledge about the applications of neural networks, and I'd be happy to change that.

I searched for an introductory article that covers a good portion of NN research. As I didn't find one, I collected a few links; I hope you will find them interesting:

http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
http://www-cse.stanford.edu/classes/sophomore-college/projects-00/neural-networks/Applications/
http://www.geocities.com/CapeCanaveral/Lab/3765/neural.html

Anyway, a few minutes on Google will give you a hint of the size, power, and huge scope of NN research.