April 28, 2010
New Scientist has an interesting article on whether single cells can be considered intelligent. The piece is by biologist Brian Ford, who implicitly raises the question of how we define intelligence and whether it is simply the ability to autonomously solve problems. If so, then individual cells such as neurons might be considered ‘intelligent’ even when viewed in isolation.
However, he finishes on a bit of an odd flourish:
For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and “artificial intelligence” a grandiose misnomer.
It’s odd because it reads like blue-sky speculation when, in fact, the idea that neurons could work like “a vast community of microscopic computers” is an accepted and developed concept in the field supposedly doomed by this idea – namely, artificial intelligence.
Traditionally, AI has had two main approaches, both of which emerged from the legendary 1956 Dartmouth Conference.
One was the symbol manipulation approach, championed by Marvin Minsky, and the other was the artificial neural network approach, championed by Frank Rosenblatt.
Symbol manipulation AI builds software around problems where data structures are used to explicitly represent aspects of the world. For example, a chess-playing computer holds a representation of the board and each of the pieces in its memory, and it works by running simulations of possible moves to test out and solve problems.
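To make the symbolic approach concrete, here is a hypothetical sketch (not from the article) using a toy take-away game rather than chess: the game state is represented explicitly as a data structure — here just a stone count — and the program ‘runs the simulation’ by searching every sequence of moves before choosing one. The function names are mine, chosen for illustration.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win in a take-1-or-2 game
    where taking the last stone wins."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one
    # Try each legal move; we win if any move leaves the opponent losing.
    return any(not can_win(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    """Pick a move by simulating ahead, as a chess program would."""
    for take in (1, 2):
        if take <= stones and not can_win(stones - take):
            return take
    return None  # every move loses against perfect play

print(best_move(4))  # → 1, leaving the opponent a losing position of 3
```

The chess case is the same idea scaled up: an explicit world model plus exhaustive (or pruned) simulation over it.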
In contrast, artificial neural networks are ideal for pattern recognition and typically need training. For example, to get one to recognise faces you put a picture into the network and it ‘guesses’ whether it is a face or not. You tell it whether it is right, and if it isn’t, it adjusts its connections to try to be more accurate next time. After enough training, the network learns to make similar distinctions on pictures it has never seen before.
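The guess-and-correct loop just described can be sketched in a few lines using Rosenblatt’s perceptron learning rule. This is a minimal illustration on the logical AND function, not face recognition — the principle (guess, get feedback, nudge the connection weights) is the same.

```python
# A minimal sketch of the guess-and-correct training loop, using
# Rosenblatt's perceptron rule to learn the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # connection weights
b = 0.0         # bias

def guess(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

for epoch in range(10):
    for x, target in data:
        error = target - guess(x)  # feedback: was the guess right?
        if error:                  # if not, adjust the connections
            w[0] += error * x[0]
            w[1] += error * x[1]
            b += error

print([guess(x) for x, _ in data])  # → [0, 0, 0, 1], i.e. AND learned
```

Because AND is linearly separable, this loop is guaranteed to converge — which is exactly where the trouble described next comes in.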
As is common in science, these started out as tools but became ideologies and a fierce battle broke out over which could or couldn’t ever form the basis of an artificial mind.
At the time of the Dartmouth Conference, the neural network approach existed largely as a simple set-up called the perceptron which was good at recognising patterns.
Perceptrons were hugely influential until Minsky and Seymour Papert published a book showing that they couldn’t learn certain responses (most notably a logical operation called the XOR function).
This killed the artificial neural network approach dead – for nearly two decades – and contributed to what is ominously known as the AI winter.
It wasn’t until 1986 that two researchers, David Rumelhart and James McClelland, solved the XOR problem and revived neural networks. Their approach was called ‘parallel distributed processing’ and, essentially, it treats simulated neurons as ‘a vast community of microscopic computers’, just as Brian Ford proposes in his New Scientist article.
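The essence of the fix was adding a hidden layer of units between input and output, trained with backpropagation. A hand-wired sketch shows why a hidden layer escapes the perceptron’s limit — the weights below are set by hand for clarity, whereas backpropagation would learn weights like these from examples:

```python
# Why a hidden layer solves XOR. Each unit is a simple threshold
# "neuron"; the weights are hand-wired here for illustration, where
# backpropagation would learn them from training examples.
def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: fires for OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: fires for AND
    return step(h_or - h_and - 0.5)  # output: OR but not AND = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

The hidden units re-describe the input in a space where the problem becomes linearly separable — the single-layer perceptron had no such intermediate representation to work with.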
Artificial neural networks have evolved a great deal since, while the symbol manipulation approach, although still useful, is now ironically called GOFAI, or ‘good old-fashioned artificial intelligence’, as it seems, well, a bit old-fashioned.
How we define intelligence is another matter, and the claim that individual cells have it is actually quite hard to dismiss when they seem to solve a whole range of problems they may never have encountered before.
Artificial intelligence seems cursed, though, as true intelligence is usually defined as being just beyond whatever AI can currently do.
Link to NewSci on intelligence and the single cell (thanks Mauricio!)