This is going to be one of those annoying, rambling, touchy-feely, new-years blogs. You know the kind. One that tries to make you think about what’s passed, and what’s coming. One that tries to impress you with foresight and erudition. One that claims profundity and depth. You know the kind.
Still, that’s the mood I’m in. So here goes.
I was sitting at my kitchen table working on my Clean Architecture manuscript. I was writing part of Chapter One where I use the Human Body as a metaphor for software architecture. And then I started thinking about the body, the brain, nerves, computers, and – well – my train of thought went something like this.
Nerve impulses are electrochemical not electrical. They don’t move at the speed of light. Indeed, the mechanism involves the motion of atoms through a liquid; which is limited by something like the speed of sound in that liquid. In any case, nerve impulses travel from one end of a nerve axon to the other at about 200 miles per hour.
Now, relative to a human body, that’s very fast. A nerve impulse can get from your toe to your brain in about twenty milliseconds. That’s fast enough. On the other hand, it’s three million times slower than the speed of light. And that implies that if your nerve fibers were made of copper wire, you could be three million times bigger than you are, and still get signals from your brain to your toe in 20ms.
Hmmm. Three million times 6 feet is 18 million feet. That’s 3400 miles. That’s just under half the diameter of the Earth. So if my nerve fibers were wires, I could be roughly the size of a small planet and have the reaction time of a human. Hmmm.
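If you want to check that back-of-the-envelope math, it only takes a few lines of Python. The inputs are the rough round numbers from the paragraphs above (200 mph, the speed of light), not precise physiological constants; without the rounding, the answer lands nearer 3,800 miles than 3,400, which is the same ballpark either way.

```python
# Back-of-the-envelope check: how big could you be if your nerves were wires?
# All inputs are the rough estimates used in the text, not precise constants.

MPH_TO_MPS = 0.44704             # miles per hour -> meters per second
FEET_PER_MILE = 5280

nerve_speed = 200 * MPH_TO_MPS   # ~200 mph nerve conduction, in m/s
light_speed = 3.0e8              # speed of light in m/s (copper is somewhat slower)

ratio = light_speed / nerve_speed                 # ~3.4 million
scaled_height_miles = 6 * ratio / FEET_PER_MILE   # a 6-foot human, scaled up

print(f"speedup: {ratio:,.0f}x")                           # ~3.4 million
print(f"scaled height: {scaled_height_miles:,.0f} miles")  # ~3,800 miles
```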
OK, wait. What about switching time? Nerve cells have a switching time of about a millisecond. Reaction time is about 600 ms. So it stands to reason that from the moment you see an event until the moment you react to it, the information has passed through 600 layers of neurons. Or rather, the longest pathway taken by that information involved 600 synapses.
Now imagine that those neurons were transistors that can switch signals in picoseconds; many millions of times faster than a neuron. And imagine you wanted to preserve a reaction time of 600 ms using 600 processing nodes composed of transistors. You’d have to separate those nodes by an average of about 200 miles.
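Here is that arithmetic as a quick sketch. Assuming the switching time of the transistor nodes is negligible, essentially all of the one millisecond per hop has to be spent in flight, and the hop distance falls straight out of the speed of light (it comes to about 186 miles, which rounds to the 200 above):

```python
# If 600 transistor nodes must still add up to a 600 ms reaction time,
# and transistor switching (picoseconds) is negligible by comparison,
# then the per-hop delay is almost entirely signal propagation.

light_speed = 3.0e8     # m/s; signals in real wires travel somewhat slower
reaction_time = 0.600   # seconds, the text's human reaction time
hops = 600              # layers of processing nodes

time_per_hop = reaction_time / hops           # 1 ms per hop
hop_distance_m = light_speed * time_per_hop   # ~300 km
hop_distance_miles = hop_distance_m / 1609.34

print(f"{hop_distance_miles:.0f} miles per hop")  # ~186 miles, i.e. "about 200"
```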
The human brain has about a trillion neurons. A smartphone has about one tenth that many transistors. So 10 iPhones have a trillion transistors or so.
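In round numbers, then (and these are the text's round numbers; neuron and transistor counts vary widely depending on who you ask and which device you pick):

```python
# Round-number comparison from the text; real counts vary by source and device.
neurons_in_brain = 1.0e12        # "about a trillion" neurons
transistors_per_phone = 1.0e11   # "about one tenth that many" transistors

phones_per_brain = neurons_in_brain / transistors_per_phone
print(int(phones_per_brain))     # 10 phones' worth of transistors per brain
```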
I’m sure you can see where I’m going with this. We live on a planet that is encased in a globe-spanning network that connects hundreds of millions of smartphones, tablets, laptops, desktops, and mainframe computers together. The computer equipment on our planet has the reaction time and connectivity of a human brain, and many millions of times its processing power.
Those of you who read The Moon is a Harsh Mistress might now be guessing that my next question is: “Why doesn’t it wake up?”. But you’d be wrong. The hardware on our planet is not wired to be a brain. It’s wired to allow us to play Angry Birds. We have not connected the hardware on our planet in order to make the planet wake up. Indeed, we likely don’t know how.
But that brings up a very interesting question. Are we close to knowing how?
Consider Watson, the computer that plays Jeopardy. Playing Jeopardy is something we would normally associate with human cognition; and the fact that Watson beat the world’s champion Jeopardy players gives us pause. Was Watson thinking, or was its processing merely an analog of human thought? It’s tempting to stretch our definition of thought and give Watson the benefit of the doubt; tempting to believe that if Watson could beat the best Jeopardy players, it must be using something akin to human thought.
But then consider this question that Watson got wrong:
In the category U.S. CITIES, the clue was: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” Watson’s response was “What is Toronto?”
WTF! Excuse me? What planet does Watson live on? Has Watson decided to compete in a beauty contest? Has he never heard of Chicago? O’Hare Field? Midway Airport? How in hell could a smart program like Watson make a blunder as blindingly stupid as that?
But there it is. Watson was not thinking. No thinking being with access to the huge databases available to Watson would – could – make that mistake. The category was U.S. Cities. Toronto is in Canada. Toronto’s airports are named “Pearson” and “Buttonville”. And although Lester B. Pearson was an influential Prime Minister of Canada, he was not the hero of the great WWII battle of Buttonville!
We programmers recognize this kind of error. We understand that computers are deeply moronic. They do precisely what they have been programmed to do, no more – no less. We see errors like this all the time. We call them bugs. And we know who’s to blame for them. We are.
Or consider Deep Blue, the computer that beat Garry Kasparov in a chess match. In 1996 Deep Blue won a single game against Kasparov; it was the first time a computer had beaten a reigning world champion in a game played under standard tournament conditions. A year later it won a full six-game match, 3.5 to 2.5. Since that time you’d think that computers would have gotten so good that they could blow away all the grand masters. But that’s not the case. Though they can now consistently win matches, they do not consistently win all the games. What’s going on?
Deep Blue could search 200 million chess positions per second. Kasparov couldn’t do anything like that. Deep Blue’s strategy was based on exhaustive searching. Kasparov’s could not have been. So then how did Kasparov beat Deep Blue in even one game, let alone 2.5? If we knew the answer to that…
Oh, one answer is that Kasparov learned to play differently against Deep Blue than against a human. The strategy to win against a machine is different from the strategy to win against a human. And that is telling. The computer does not behave like a human. It is not employing human-like thought.
Why do the grand masters keep winning games against the computers? Because they are employing human-like thought; and that thought is something that computers are still unable to approach.
So, then, are we close to knowing how to build a thinking machine? After all, we have the hardware. Could we make that hardware think?
The two examples I’ve posed are discouraging in that regard. They make it clear that we programmers have not yet identified The Human Algorithm. We don’t know how to simulate human creativity, human reasoning, human learning, human inference, or human emotion. If we knew the algorithms, we could wire the hardware – we could wake the planet up. But that algorithm continues to elude us.