About two months ago I had the distinct pleasure of reading Self Aware Patterns’ fascinating article on artificial intelligence. Mike Smith, author of that excellent blog, argued that AI need not fail because of Kurt Gödel’s incompleteness theorem. He approached the problem from an empirical and deterministic theoretical framework.
This inspired Tina Forsee, the writer behind Diotima’s Ladder, to approach the problem from a totally different perspective – namely the phenomenological problems AI would seem to need to solve. The results are, in my opinion, brilliant and mind-expanding.
So now it’s my turn to ride along on Mike and Tina’s coat-tails and attempt an AI critique of my own. I hope you’ll bear with me as I take yet another completely different perspective on the problem, the unholy union of Socratic skepticism, indeterminism and mathematical chaos theory.
So, are we nearing the time when computers achieve intelligence? Is a real, creative, adaptable intelligence of the type that allows organisms to survive in competitive environments – not the “AI” that blinks out of existence when you can’t find a new AAA battery quick enough – anywhere close to being at hand?
I tend to think the answer is no. I’m skeptical because modern computers are algorithmic, and algorithms, in my observation, are not how higher types of intelligence seem to work.
I should probably explain what an algorithm is. An algorithm is a mathematical system that always behaves logically (within its internal set of rules) and is always deterministic. This does not mean it’s infallible; indeed, it’s perfectly possible to build an algorithm for predicting the weather based on the premise that the atmosphere is made of jelly beans. That algorithm will likely be useless for the weekly forecast, but it will still be logically consistent within its own set of (in this case mistaken) rules and assumptions.
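To see what that means in code, here’s a minimal Python sketch of the jelly-bean forecaster (every name, rule and threshold in it is invented for illustration). Note that it is deterministic and perfectly consistent with its own rules – which is all “algorithmic” requires – even though its premise is nonsense.

```python
# A toy "jelly bean atmosphere" forecaster. Its premise is absurd, but it
# is a perfectly valid algorithm: deterministic, and logically consistent
# within its own (mistaken) rules.

def jelly_bean_forecast(bean_count: int, bean_stickiness: float) -> str:
    """Predict the weather from the state of an imaginary jelly-bean sky."""
    # Rule 1: sticky beans clump together into "clouds".
    cloud_cover = bean_count * bean_stickiness
    # Rule 2: enough clumped beans must eventually "rain" down.
    if cloud_cover > 1000:
        return "rain"
    return "sunny"

# The same inputs always produce the same output -- that's determinism.
print(jelly_bean_forecast(5000, 0.3))  # -> rain
print(jelly_bean_forecast(5000, 0.1))  # -> sunny
```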
Austrian-American logician and mathematician Kurt Gödel argued that human minds cannot be algorithmic because algorithms are constantly flummoxed by problems human beings can solve. The flummoxing mechanism is called the incompleteness theorem. It goes something like this:
“Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable within the theory itself.”
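For readers who prefer their logic in symbols, that statement is usually rendered along these lines (a standard textbook schema, not Gödel’s own notation): for any consistent, effectively generated theory $T$ strong enough for elementary arithmetic, there is a “Gödel sentence” $G_T$ such that

```latex
T \nvdash G_T \qquad \text{and yet} \qquad \mathbb{N} \models G_T
```

That is, $T$ cannot prove $G_T$, even though $G_T$ is a true statement about the natural numbers.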
In layman’s terms, this basically means there can never be a complete and consistent theory of mathematics, because any system will rest on assumptions it cannot prove internally. Gödel went on to claim that human brains must function on a different kind of logic, because this limitation does not seem to apply to human cognition.
Mike said that Gödel overreached here because he essentially assumed the brain is a single algorithmic structure. It seems far more likely, Mike said, that the brain is composed of many algorithmic systems that communicate. Therefore, when Part X of the brain is constrained by the incompleteness theorem, it simply shuffles the problem off to Part Y of the brain, which operates with a different set of constraints. Assuming the brain is indeed algorithmic, I think Mike is very likely correct.
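To make Mike’s picture concrete, here’s a toy Python sketch of the hand-off idea. The modules, questions and answers are all my own inventions for illustration – nothing here comes from Mike’s post – but it shows how one subsystem’s limit need not be the whole system’s limit:

```python
# A toy sketch of the "many communicating algorithms" picture. Each module
# can only answer questions inside its own rule set; when one is stumped,
# the problem is shuffled off to the next module.

def arithmetic_module(question: str):
    if question == "what is 2 + 2?":
        return "4"
    return None  # stumped: outside this module's rules

def geometry_module(question: str):
    if question == "how many sides has a triangle?":
        return "3"
    return None  # stumped

def brain(question: str) -> str:
    # Part X hands the problem to Part Y whenever it hits its own limits.
    for module in (arithmetic_module, geometry_module):
        answer = module(question)
        if answer is not None:
            return answer
    return "no module could answer"

print(brain("what is 2 + 2?"))                  # -> 4
print(brain("how many sides has a triangle?"))  # -> 3
```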
However, I depart from Mike and Kurt Gödel in that I think algorithmic behavior is impossible, or at least very unlikely, at the small scales and vast complexities involved in human cognition.
My reasons for this are rooted in the nature of algorithms – those deterministic and perfectly logical systems. The first problem is that the brain seems to be a deeply chaotic system. This does not mean it’s random, or at least not completely random. In the mathematical sense, chaos just means “a system in which tiny variations in initial conditions result in huge differences in outcome.”
A simple example of a chaotic system is a double pendulum.
I should emphasize that this does not mean the double pendulum, or any other chaotic system, operates on magic; it just means that it shows extreme sensitivity to initial conditions. On a practical level, this means we need very sophisticated tools and lots of computational power to predict the path of a double pendulum.
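If you’d like to see that sensitivity for yourself, here’s a minimal Python sketch, assuming NumPy and SciPy are installed. It simulates two double pendulums whose starting angles differ by one billionth of a radian and prints how far apart they end up:

```python
# Two double pendulums, identical except for a one-billionth-radian nudge
# to the first arm's starting angle. Standard equations of motion for a
# frictionless double pendulum; SI units throughout.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81          # gravity, m/s^2
m1 = m2 = 1.0     # bob masses, kg
L1 = L2 = 1.0     # arm lengths, m

def deriv(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
           - m2 * g * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * m2 * (w2**2 * L2 + w1**2 * L1 * np.cos(d))
           ) / (L1 * den)
    dw2 = (2 * np.sin(d) * (w1**2 * L1 * (m1 + m2)
           + g * (m1 + m2) * np.cos(th1)
           + w2**2 * L2 * m2 * np.cos(d))
           ) / (L2 * den)
    return [w1, dw1, w2, dw2]

t_end = 20.0
run_a = solve_ivp(deriv, (0, t_end), [np.pi / 2, 0, np.pi / 2, 0],
                  rtol=1e-10, atol=1e-10)
run_b = solve_ivp(deriv, (0, t_end), [np.pi / 2 + 1e-9, 0, np.pi / 2, 0],
                  rtol=1e-10, atol=1e-10)

# After 20 simulated seconds the two "identical" pendulums disagree wildly.
print("first-arm angle gap at t=20s:",
      abs(run_a.y[0, -1] - run_b.y[0, -1]), "radians")
```

Run it and the two trajectories, which no practical measurement could have told apart at the start, end up differing by an amount you could see with the naked eye.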
If you agree with me that the brain is likely a deeply chaotic system, then we are going to need enormous amounts of computational power to predict its actions – perhaps more computational power than exists in the universe. This makes algorithmic duplication of the brain impractical, but not theoretically impossible.
However, even granting the possibility of building a computer powerful enough to determine the algorithmic action of a human mind, we run into the next layer of problems with chaotic systems.
The billions of double pendulums in our brains are incredibly tiny. This makes them sensitive to problems like the Heisenberg Uncertainty Principle and quantum uncertainty. This is probably getting too confusing, so I hope you’ll forgive me for taking a moment to explain Heisenberg, quantum uncertainty, determinism and why I think they won’t play nicely together.
The Heisenberg Uncertainty Principle demonstrates that the more accurately we know the position of an object, the less accurately we know its velocity, and vice versa. We can exactly determine the velocity of an electron, for example, provided we’re okay with being very imprecise about that electron’s position. I should emphasize that this is not a problem we will solve with better instrumentation; uncertainty is as fundamental to the nature of the mathematics as wetness is to the nature of water.
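In its standard mathematical form the trade-off reads as follows, where $\Delta x$ is the spread in position, $\Delta p$ the spread in momentum (mass times velocity), and $\hbar$ the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

Shrink one spread and the other must grow; their product can never fall below $\hbar/2$, no matter how good the instruments get.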
Quantum uncertainty is the observable fact that tiny particles don’t have definite properties until we look at them. A rock is not quite a rock when nobody is observing it; at the level of its constituent particles, it’s a statistical fuzz. This is weird and counter-intuitive, but it’s also on very firm scientific foundations.
Determinism is the assumption that the universe is perfectly causal. In other words, if we knew the position and velocity of every particle in the universe, we could run time back and forth like a DVD. Calculate everything forward and you have a perfectly reliable picture of your great-granddaughter’s first day at kindergarten. Calculate everything backwards and you can tell exactly how many ticks Bob the Tyrannosaur had 66 million years ago.
Cause -> effect -> cause -> effect -> ad infinitum.
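Here’s what that promise looks like as a toy Python universe (the “law” here is invented for illustration). Its update rule is exactly invertible, so you really can run it forward a million ticks and then rewind it perfectly:

```python
# A toy deterministic "universe": two numbers (position and velocity)
# updated by a fixed, exactly invertible law. Forward is the future,
# backward is the past.

def step_forward(pos: int, vel: int) -> tuple[int, int]:
    return pos + vel, vel + 1  # the fixed causal law

def step_backward(pos: int, vel: int) -> tuple[int, int]:
    return pos - (vel - 1), vel - 1  # its exact inverse

start = (0, 3)
state = start
for _ in range(1_000_000):  # run time forward a million ticks
    state = step_forward(*state)
for _ in range(1_000_000):  # now rewind it like a DVD
    state = step_backward(*state)

print(state == start)  # -> True: the past is recovered exactly
```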
And this is why I think a deterministic, algorithmic AI is doomed. We are taking uncertain quantum states that, thanks to Heisenberg, we can’t know in any case, plugging them into an ultra-chaotic brain that’s super-sensitive to these tiny uncertainties, and then assuming we can get a nice, orderly algorithm to pop out.
To put it another way, trying to impose algorithms – always logical and deterministic – on a chaotic system whose initial conditions are, by definition, unknowable and random seems to me very unlikely to succeed.
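Here’s that whole objection compressed into a few lines of Python. The logistic map stands in for the brain’s chaotic dynamics, and a random nudge at the fifteenth decimal place stands in for quantum-scale uncertainty – both are stand-ins of my choosing, not a model of any real brain:

```python
# The core objection in miniature. The logistic map (x -> 4x(1-x)) is a
# classic chaotic system; the random 1e-15 nudge plays the role of
# quantum-scale uncertainty in the initial conditions.
import random

def logistic(x: float, steps: int) -> float:
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

known_state = 0.123456789  # what the algorithm thinks it knows
true_state = known_state + random.uniform(-1e-15, 1e-15)  # unknowable wiggle

prediction = logistic(known_state, 100)
reality = logistic(true_state, 100)

# After only 100 steps the prediction bears no relation to reality.
print(f"prediction: {prediction:.6f}")
print(f"reality:    {reality:.6f}")
```

The algorithm did everything right; it was simply never allowed to know its own starting point.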
This is not to say that AI is impossible, though. Indeed, I think it highly likely that a computer of sufficient complexity will end up being both chaotic and sensitive to quantum effects. However, such an AI would no longer be algorithmic, and so it would be fundamentally distinct in nature from the computers we know today.
If you enjoyed this article, please consider buying the author’s novel.
http://www.amazon.com/The-Blackguard-Ben-Garrido/dp/1939051746
For customers living in East Asia.
http://www.whatthebook.com/book/9781939051745
P.S.
If you’d like to spend an hour on mathematical chaos, this video is excellent.