About two months ago I had the distinct pleasure of reading Self Aware Patterns’ fascinating article on artificial intelligence. Mike Smith, author of that excellent blog, argued that AI need not fail because of Kurt Gödel’s incompleteness theorem. He approached this problem from an empirical and deterministic theoretical framework.
This inspired Tina Forsee, the writer behind Diotima’s Ladder, to approach the problem from a totally different perspective – namely the phenomenological problems AI would seem to need to solve. The results are, in my opinion, brilliant and mind expanding.
So now it’s my turn to ride along on Mike and Tina’s coat-tails and attempt an AI critique of my own. I hope you’ll bear with me as I take yet another completely different perspective on the problem, the unholy union of Socratic skepticism, indeterminism and mathematical chaos theory.
So, are we nearing the time when computers achieve intelligence? Is a real, creative, adaptable intelligence of the type that allows organisms to survive in competitive environments – not the “AI” that blinks out of existence when you can’t find a new AAA battery quickly enough – anywhere close to being at hand?
I tend to think the answer is no. The reason I’m skeptical is that modern computers are algorithmic. Algorithms, in my observation, are not how higher types of intelligence seem to work.
I should probably explain what an algorithm is. An algorithm is a mathematical system that always behaves logically (within its internal set of rules) and is always deterministic. This does not mean it’s infallible; indeed, it’s perfectly possible to build an algorithm for predicting the weather based on the premise that the atmosphere is made of jelly beans. That algorithm will likely be useless for the weekly forecast, but it will still be logically consistent within its own set of (in this case mistaken) rules and assumptions.
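To make that concrete, here’s a minimal sketch in Python of what I mean. The jelly-bean “model” and every constant in it are invented purely for illustration; the point is only that the procedure is deterministic and internally consistent even though its founding assumption is nonsense.

```python
# A toy "jelly bean atmosphere" forecaster: perfectly deterministic and
# internally consistent, but useless, because its founding premise is wrong.
# Every name and constant here is invented purely for illustration.

def jelly_bean_forecast(bean_temperature_c: float, bean_stickiness: float) -> str:
    """Given the same inputs, this always returns the same forecast."""
    # The model's (mistaken) rule: warm, sticky jelly beans mean rain.
    rain_index = bean_temperature_c * bean_stickiness
    if rain_index > 50:
        return "rain"
    elif rain_index > 20:
        return "clouds"
    return "sun"

# Same input, same output, every single time -- that's the deterministic part.
print(jelly_bean_forecast(30.0, 2.0))  # always "rain"
print(jelly_bean_forecast(30.0, 2.0))  # always "rain", again
```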
Austrian-American logician and mathematician Kurt Gödel argued that human minds cannot be algorithmic because algorithms are constantly flummoxed by problems human beings can solve. The flummoxing mechanism is his incompleteness theorem. It goes something like this:
“Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable within the theory itself.”
In layman’s terms, this basically means there can never be a complete and consistent formal system of mathematics, because any sufficiently powerful system will rest on assumptions it cannot prove from within. Gödel goes on to claim that human brains must function on a different kind of logic because this limitation does not seem to apply to human cognition.
Mike said that Gödel is overreaching in this assumption because he is basically assuming that the brain is a single algorithmic structure. It seems far more likely, Mike said, that the brain is composed of many algorithmic systems that communicate. Therefore, when Part X of the brain is constrained by the incompleteness theorem, it simply shuffles the problem off to Part Y of the brain, which would operate with a different set of constraints. Assuming the brain is indeed algorithmic, I think Mike is very likely correct.
However, I depart from Mike and Kurt Gödel in that I think algorithmic behavior is impossible, or at least very unlikely, at the small scales and vast complexities involved in human cognition.
My reasons for this are rooted in the nature of algorithms – those deterministic and perfectly logical systems. The first problem is that the brain seems to be a deeply chaotic system. This does not mean it’s random, or at least not completely random. In the mathematical sense, chaos just means “a system in which tiny variations in initial conditions result in huge differences in outcome.”
A simple example of a chaotic system is a double pendulum.
I should emphasize that this does not mean the double pendulum, or any other chaotic system, operates on magic; it just means that it shows an extreme sensitivity to initial conditions. On a practical level, this means we need very sophisticated tools and lots of computational power to predict the path of a double pendulum.
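If you’d like to see that sensitivity without building a pendulum, here’s a minimal Python sketch using the logistic map, a one-line chaotic system that’s far simpler than a double pendulum but shows the same effect. The starting values and the one-part-in-ten-billion nudge are arbitrary choices for illustration.

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x -> r * x * (1 - x), a much simpler chaotic system than a double pendulum.
# The starting points and the 1e-10 perturbation are arbitrary illustrative picks.

r = 4.0             # parameter value at which the logistic map is fully chaotic
x_a = 0.2           # first trajectory's starting point
x_b = 0.2 + 1e-10   # second trajectory, nudged by one part in ten billion

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        # Watch the gap between the two trajectories grow from ~1e-10 toward ~1.
        print(f"step {step:2d}: difference = {abs(x_a - x_b):.10f}")
```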
If you agree with me that the brain is likely to be a very chaotic system, this means we are going to need enormous amounts of computational power to predict its actions, perhaps more computational power than exists in the universe. This makes algorithmic duplication of the brain impractical but not theoretically impossible.
However, even granting the possibility of building a computer powerful enough to determine the algorithmic action of a human mind, we run into the next layer of problems with chaotic systems.
The billions of double pendulums in our brains are incredibly tiny. This makes them sensitive to problems like the Heisenberg Uncertainty Principle and quantum uncertainty. This is probably getting too confusing, so I hope you’ll forgive me for taking a moment to explain Heisenberg, quantum uncertainty, determinism and why I think they won’t play nicely together.
The Heisenberg Uncertainty Principle demonstrates that the more accurately we know the position of an object, the less accurately we know its velocity, and vice versa. We can exactly determine the velocity of an electron, for example, provided we’re okay with being very imprecise about that electron’s position. I should emphasize that this is not a problem we will solve with better instrumentation; uncertainty is as fundamental to the nature of the mathematics as wetness is fundamental to the nature of water.
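For anyone who wants the compact version, the principle is usually written as a single inequality relating the spread in position to the spread in momentum (mass times velocity), with ħ the reduced Planck constant:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Squeeze Δx toward zero and Δp is forced to balloon, and vice versa; no amount of clever instrumentation changes the inequality.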
Quantum uncertainty is the well-tested finding that tiny particles don’t have definite, determinate states until we measure them. Pushed to its extreme, this suggests a rock isn’t quite a rock when no one is there to observe it; it’s a statistical fuzz. This is weird and counter-intuitive, but it rests on very firm scientific foundations.
Determinism is the assumption that the universe is perfectly causal. In other words, if we knew the position and velocity of every particle in the universe, we could run time back and forth like a DVD. Calculate everything forward and you have a perfectly reliable picture of your great-granddaughter’s first day at kindergarten. Calculate everything backwards and you can tell exactly how many ticks Bob the Tyrannosaur had in 66 million BC.
Cause -> effect -> cause -> effect -> ad infinitum.
And this is why I think a deterministic, algorithmic AI is doomed. We are taking uncertain quantum states that, thanks to Heisenberg, we can’t know in any case, plugging them into an ultra-chaotic brain that’s exquisitely sensitive to those tiny uncertainties, and then assuming a nice, orderly algorithm will pop out.
To put it another way, trying to impose algorithms, always logical and deterministic, on a chaotic system with initial conditions that are, by definition, unknowable and random, seems to me very unlikely to succeed.
This is not to say that AI is impossible, though. Indeed, I think it highly likely that a computer of sufficient complexity will end up being both chaotic and sensitive to quantum effects. However, this AI will no longer be algorithmic and thus its nature will be fundamentally distinct from the computers we know today.
If you enjoyed this article, please consider buying the author’s novel.
http://www.amazon.com/The-Blackguard-Ben-Garrido/dp/1939051746
For customers living in East Asia.
http://www.whatthebook.com/book/9781939051745
P.S.
If you’d like to spend an hour on mathematical chaos, this video is excellent.
Thanks for advertising my post and for your kind words. 🙂
I tend to think that AI will have to progress in a way that allows it to evolve with the environment…not in the slow biological way, of course, but in some manner yet to be determined. I don’t know that strong AI is impossible (being really outside of that realm of expertise, I couldn’t even make predictions.) But it seems that phenomenology has often been used to demonstrate that strong AI is impossible. I’m trying to turn that upside-down by showing that phenomenology might be useful, perhaps only to a small extent.
“To put it another way, trying to impose algorithms, always logical and deterministic, on a chaotic system with initial conditions that are, by definition, unknowable and random, seems to me very unlikely to succeed.”
What if we ask different questions alongside this complicated study of how to replicate our brains and try to come up with rules for experience that are not wholly deterministic? (On a grand scale, yes, we have certain rules that determine what we experience, but on a case-by-case basis we incorporate new information that can alter previous assumptions?)
My next post on the phenomenology series is about the role environment plays in our perception and experience. There’s a really big problem I want to address there.
“This makes algorithmic duplication of the brain impractical but not theoretically impossible.”
It’s assumed now that brain duplication will be the key. It makes sense to think so, but as you say, it’s an enormous challenge, which is why I’m so keen to dodge brain duplication in the short term. The questions I want to ask are: How much of our physical makeup is necessarily involved in how we do what we do? Is being able to perceive things through embodiment crucial to learning, flexibility, adaptability? If so, to what extent and for what purposes? Does embodiment have to be biological? In other words, is it possible to look outside of our biology and brains to find a different sort of embodiment as a shortcut to at least improved weak AI?
In other words, even supposing our brains are chaotic systems, etc., must we be confined to their replication?
“Mike said that Gödel is overreaching in this assumption because he is basically assuming that the brain is a single algorithmic structure. It seems far more likely, Mike said, that the brain is composed of many algorithmic systems that communicate. Therefore, when Part X of the brain is constrained by the incompleteness theorem, it simply shuffles the problem off to Part Y of the brain, which would operate with a different set of constraints. Assuming the brain is indeed algorithmic, I think Mike is very likely correct.”
So take all of the above and subtract the brain. Why can’t we replicate distinct but compatible systems that feed off of each other? This question I’d leave to Mike and experts in this realm…I have no idea how computers work or if this is possible or feasible.
Tina, you really have me looking forward to your next AI post. It sounds like it will delve into an area I’ve been pondering lately: to what degree does the mind end at the nervous system? To what extent is the content from the outside world an inseparable component of what we call “consciousness”?
All very good questions…unfortunately I won’t have the answers to those in my next post. You know me. I don’t ever have answers. 🙂
But the second question is definitely on my radar…
I only wish I’d been able to keep up with your posts. I’m barely treading water in the blogosphere (bathroom remodel happening now, plus fits and bursts of novel writing when it should be novel editing.)
Asking the questions is what leads to the fun discussions! (I’ve sometimes fretted that my habit of dumping my own opinion sometimes stifles discussion.)
As for keeping up with the posts, no worries. The nice thing about blog posts is that they’re there when (if) you’re ready for them.
I’m not even sure what consciousness is, except as a subjective experience. Y’all are blowing my mind.
I think what we intuitively label “consciousness” includes a lot of things: interaction with the environment, memory, a feedback model of the system’s internal state (inner experience), and goals similar to our most primal ones. I’m sure there are other components. It makes having any precise conversation about consciousness difficult.
My pleasure. I really enjoy your blog in general and if I can perform some small service in promoting more Diotima’s Ladders, I consider that a success.
“What if we ask different questions alongside this complicated study of how to replicate our brains and try to come up with rules for experience that are not wholly deterministic? ”
I think we need to. The deterministic nature of an algorithm is why I think algorithmic AI is doomed. That said, there is no rule stating we can only use deterministic systems like algorithms.
“Why can’t we replicate distinct but compatible systems that feed off of each other?”
I think we can. However, I don’t think such a system, if it’s still algorithmic (purely deterministic) will operate similarly to a brain. I think it’s possible it could do really cool stuff, but it would be a lot different from intelligence as we know it.
“In other words, even supposing our brains are chaotic systems, etc., must we be confined to their replication?”
Ah! This is an excellent question. I wanted to explain in the article why I think that we need to copy the brain, at least in broad strokes, if we want to move close to AI. The reason is that even more important than computational power – and the human brain is still the most powerful computational device on earth by quite a bit – is efficiency.
By animal standards, our brains are ridiculous gas guzzlers. The brain accounts for about 25% of our oxygen consumption and roughly 20% of our total energy output. I don’t think there are any animal nervous systems that require more.
However, “ungodly inefficient by animal standards” still amounts to about as much energy as it takes to run the light bulb in your living room.
Contrast this with the K computer in Japan. This machine recently simulated 1% of 1 second’s brain activity. It required 40 minutes and drew enough electricity to power a city block.
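Just to put rough numbers on that contrast, here’s a back-of-the-envelope comparison in Python. The figures are commonly quoted ballparks, not measurements of mine: roughly 20 watts for a whole human brain and on the order of 13 megawatts for the K computer.

```python
# Back-of-the-envelope energy comparison, using commonly quoted ballpark figures.
# Both numbers are rough approximations used purely for illustration.
brain_watts = 20              # often-cited estimate for a whole human brain
k_computer_watts = 13e6       # reported power draw of the K computer, roughly

# The K computer reportedly needed ~40 minutes to simulate ~1 second of ~1% of
# a brain's activity, so scale its energy up to a whole brain running in real time.
wall_clock_seconds = 40 * 60
fraction_of_brain = 0.01

k_energy_joules = k_computer_watts * wall_clock_seconds / fraction_of_brain
brain_energy_joules = brain_watts * 1  # one second of actual brain time

print(f"K computer, scaled to a full brain in real time: {k_energy_joules:.2e} J")
print(f"An actual brain, for one second: {brain_energy_joules:.2e} J")
print(f"Ratio: roughly {k_energy_joules / brain_energy_joules:.1e} to one")
```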
This makes me think the future lies in being more like brains and less like semiconductors.
“How much of our physical makeup is necessarily involved in how we do what we do? Is being able to perceive things through embodiment crucial to learning, flexibility, adaptability? If so, to what extent and for what purposes? Does embodiment have to be biological? … My next post on the phenomenology series is about the role environment plays in our perception and experience. There’s a really big problem I want to address there.”
No idea. I look forward to your next post to give me one. 😛
It’s astounding how much energy our brains use…but then I think of all the junk going on in my own, it’s no wonder. 🙂
Haha, me too.
By the way, how would you define consciousness? When I tried the only thing that came to mind was the sort of trans-rational radical subjectivity Kierkegaard is always going on about.
Good lord! You’ve dropped the C-bomb on me!
I have no idea. I know it when I see it. 😉
Hahaha, well, I feel a little bit better. I just sit here in my bubble of radical subjectivity and ignore the outside world whenever it bothers me. 😛
That’s what I do, minus the “radical.” I’m not quite Cartesian enough for all that doubt. Actually, just too lazy. It’s so much easier to believe I’m talking to a conscious person right now.
I wonder what a phenomenologist would say about this.
http://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/
I have no idea. I can’t make sense of the article from any POV, much less from a phenomenologist’s.
On the one hand, we have these statements:
“A professor of cognitive science argues that the world is nothing like the one we experience through our senses.”
“The world presented to us by our perceptions is nothing like reality.”
Then Hoffman himself says:
“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”
There is no “reality beyond the illusion” of our senses, according to this last quote. Did the writer of this article COMPLETELY misunderstand Hoffman’s point? Or am I missing something?
I think the author is having a hard time conceptualizing “no external reality” in the first part of the article.
That makes sense. So if we ignore the author and stick to the interview, I’d say he’s not doing phenomenology at all. He begins with quantum mechanics to get to the idea that physical objects don’t exist…he’d probably add “outside of perceivers” or something like that, if he were pushed on the matter…no pun intended. 🙂
Phenomenology doesn’t care what quantum mechanics finds. That field of study is bracketed, necessarily.
The conclusions are similar, so I can see why you’d wonder about the connection. But the conclusions are still not the same since phenomenology (Husserl’s, anyway) doesn’t make ontological claims about reality “in-itself.” (Heidegger does, but once again, not through science.) So Husserl wouldn’t say, “We have no brains.” That’s too strong a statement about reality in-itself. In fact, he’d say we do experience having brains (especially brain surgeons and those in close contact with brains) and the rest of us experience brains on TV or in books or whatever. So we’d have to say we have brains in phenomenology because they simply DO appear. The only qualification here would be that, when doing phenomenology, we don’t make causal connections in the natural attitude—we don’t say the brain causes x phenomenon. We just keep silent on that. But the experience of having brains could be explored phenomenologically, so long as we steer clear of those strong assertions that reduce the phenomena to the brain or vice versa.
That said, I don’t think Husserl talks about brains. That would be so very confusing, especially since his goal was to hammer in the method of phenomenology, to clear away what he called the “natural attitude”. He was confusing enough. If he’d talked about brains, that would’ve been a nightmare. Plus, I don’t think he was interested in doing the phenomenology of our experience of brains.
Ben,
I’m grateful to you for linking to my post and discussing its contents. This is an area that I never seem to get tired of discussing. Your views seem similar to those of Roger Penrose and Stuart Hameroff.
It seems like you’re discussing two different concepts in this post, although maybe you see them as one and the same: duplicating a human brain (i.e. mind uploading), and constructing an engineered mind (strong AI). While I do think we’ll have to understand human (or at least animal) minds in order to engineer strong AI, I can’t see that it will require reproducing the exact processing of an organic brain. On the other hand, if we want to upload someone’s mind, then reproducing the brain’s functionality becomes a much more central question.
“If you agree with me that the brain is likely to be a very chaotic system, this means we are going to need enormous amounts of computational power to predict its actions, perhaps more computational power than exists in the universe.”
It might surprise you to read that I totally agree with what you say here. However, I disagree that it’s necessary to precisely predict what an individual brain would do. My brain isn’t guaranteed to do the same thing today that it did yesterday, much less last year. The brain is constantly being altered by inputs from the peripheral nervous system, ongoing maintenance, and aging, among other things. Exactly duplicating the state my brain is in on Friday, March 4 at 6:01PM CST is probably impossible, but given that my own brain will never again duplicate that state, I think setting that as the goal for mind uploading is a false standard.
Perhaps duplicating a mind does require that level of duplication, but everything I know about neuroscience makes it seem unlikely. It seems more likely that we could get by with the upload version operating within the same operational variances as the original, which, although still difficult (impossible with current technologies), is a far easier goal.
“The billions of double pendulums in our brains are incredibly tiny. This makes them sensitive to problems like the Heisenberg Uncertainty Principle and quantum uncertainty.”
It is possible that quantum uncertainty enters into neural processing, but there’s not currently any evidence for it. Quantum physics, of course, figures in every physical system in the universe. But except in isolated systems, the uncertainties cancel out each other by the time we get up to the molecular level, the level at which neural machinery does its unique processing.
The molecular environment inside of brains is quite noisy. The molecular machinery doesn’t get the isolation it would need for quantum effects to predominate. Modern neuroscience understands the operations of neurons, synapses, and glial cells without reference to quantum physics. Still, we can’t rule out that they have some effects, but the probability that they’re all over the brain is pretty low.
It’s also worth noting that quantum effects don’t doom algorithmic processing. Quantum processors exist in a lot of labs around the world. They usually have to run at near 0 Kelvin to keep the quantum superpositions intact, which again shows the extraordinary isolation necessary to keep quantum effects relevant for information processing. Still, if the brain somehow evolved a way to do quantum information processing, that shouldn’t represent an insurmountable barrier to doing it technologically.
All that being said, I do agree that a technological system that we’d be tempted to see as a mind will be very different from modern computing systems. Moore’s Law appears to be sputtering, meaning that alternate architectures will have to be explored for continued progress. And continued progress is necessary if we want to have any hope of building a device with the processing power of a human brain. And I also agree that we’re still a long way from doing that, probably several decades, if not centuries. Still, if nature can do it, there shouldn’t, in principle, be anything that prevents us from doing it technologically, eventually.
Very much my pleasure, Mike. I really enjoy geeking out on stuff like this.
I am conflating duplication and hard AI, though I was worried this post was getting too confusing and so didn’t explain my rationale for doing so. As I said to Tina, I think that organic intelligences have massive advantages over semiconductors, which makes it likely we’ll end up copying more and more of the way brains work. Thus, I think it’s likely the first hard AIs will be enough like brains that making the AI and making uploads will involve a lot of the same technical challenges. That said, I completely agree with you that exact duplication isn’t really a problem that needs to be solved.
The reason I brought up the immense complexity of duplication was to illustrate the ways that determinism, and thus algorithms, break down irretrievably at brain levels of complexity, not really to argue that we need to duplicate your or my exact brain states at 10:39 p.m. on March 5th.
To restate, I’m trying to make the case that algorithmic AI can’t work because algorithms are purely deterministic and determinism breaks when things get small and/or complicated.
“It is possible that quantum uncertainty enters into neural processing, but there’s not currently any evidence for it. Quantum physics, of course, figures in every physical system in the universe. But except in isolated systems, the uncertainties cancel out each other by the time we get up to the molecular level, the level at which neural machinery does its unique processing.
The molecular environment inside of brains is quite noisy. The molecular machinery doesn’t get the isolation it would need for quantum effects to predominate. Modern neuroscience understands the operations of neurons, synapses, and glial cells without reference to quantum physics. Still, we can’t rule out that they have some effects, but the probability that they’re all over the brain is pretty low.”
I’m not sure I understand this. I thought that electrons, for example, pop into and out of existence regardless of temperature and “noise.” I also thought that quantum computing was premised on holding a quantum state in stable isolation. If so, I think we’re talking about slightly different things. I’m not really saying the brain uses quantum states to compute things (though it could), I’m more saying that the combination of Heisenberg Uncertainty and quantum fuzz fosters a certain amount of randomness. I would think that randomness then gets amplified through the many layers of chaotic intra-cellular and inter-cellular processes to once more break any potential algorithm.
I’m basically critiquing determinism.
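To make that concrete, here’s the same sort of logistic-map toy I used in the post, except this time the two runs start from identical conditions and the only difference is a minuscule random nudge injected at every step. The 1e-12 noise scale is just an arbitrary stand-in for quantum-level fuzz.

```python
# Tiny ongoing "noise" amplified by a chaotic process: a logistic map where each
# step gets a random nudge on the order of 1e-12. The noise scale is an arbitrary
# stand-in for quantum-level uncertainty, chosen purely for illustration.
import random

def noisy_run(seed: int, steps: int = 60) -> float:
    random.seed(seed)
    x = 0.2                           # identical starting point for every run
    for _ in range(steps):
        x = 4.0 * x * (1 - x) + random.uniform(-1e-12, 1e-12)
        x = min(max(x, 0.0), 1.0)     # keep the state inside the map's valid range
    return x

# Same starting point, different microscopic noise, wildly different end states.
print(noisy_run(seed=1))
print(noisy_run(seed=2))
```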
By the way, thank you very much for this discussion. I feel like I’m learning a ton from you guys.
Thanks Ben.
On algorithms, I think the main difference between us is in how broad or narrow a conception of things like “algorithm” or “computation” we’re willing to use. In my mind, an algorithm is any process with a goal, but many people, and it sounds like you’re one of them, see an algorithm only as a deterministic process with discrete stages aimed at a goal; in other words, for them “algorithm” only applies to digital processes. That more narrow definition works for modern electronic computers, because they’re engineered for that purpose.
However, there are non-deterministic algorithms and analog computers. Analog computers were once used for tasks that weren’t well suited to old digital computers, but as digital processors have increased in power and capacity, the need for analog ones has declined. It’s important to note that no digital computer can ever perfectly model an analog computer’s operation, although it’s also worth noting that no analog computer can perfectly model another analog computer’s operations; even if the two analog computers are of the same model, there would be manufacturing variances that preclude it.
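As a quick illustration of the broader sense of “algorithm,” here’s a toy non-deterministic one in Python: a Monte Carlo estimate of pi. It has a clear goal and a fixed procedure, but no two runs give exactly the same answer. The sample count is an arbitrary choice.

```python
# A sketch of a non-deterministic algorithm: estimating pi by random sampling.
# It has a goal and a fixed procedure, yet no two runs give identical answers.
import random

def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:     # the point landed inside the quarter circle
            inside += 1
    return 4.0 * inside / samples    # area ratio of quarter circle to unit square

print(estimate_pi(100_000))  # something near 3.14, but different every run
print(estimate_pi(100_000))  # a slightly different "near 3.14"
```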
Brains are usually considered to be analog systems. (Although I read some stuff recently that there are indications that the strength of synapses may come in effectively discrete states, which may make neural processing more discrete than we thought.)
So, is an analog computer following an algorithm? With the narrow definition, we’d have to say no, although I’d have to wonder what we might call what it is doing.
On quantum effects, rather than get into the weeds, I think I’ll note that I agree that, within the margins of error that exist within chaos theory dynamics, quantum effects could cause a butterfly effect leading to indeterministic results. (I actually did a post a while back making this point.) We know quantum effects do bleed into the macroscopic world because, well, we know about them, although again observing them has historically required carefully isolated laboratory conditions. But I think it’s fair to point out that the initial effect would be profoundly slight, and establishing that it didn’t arise simply from that margin of error in measurement may be forever impossible. In other words, distinguishing the behavior of such a system from a truly deterministic one may effectively be impossible.
All of which is to say, that I think we actually do agree more than we disagree. Our difference here may simply be a matter of terminology. Unless I’m missing something?
I kind of got the same feeling about terminology and basically agreeing. 🙂
Could you link to that article? I’d like to read it.
Also, I’m fascinated by your description of analog computation. Would a nonlinear equation be able to describe the action of an analog computer?
The synapse article? I think this is the one I got it from. It’s just a brief mention:
“That difference might seem small, but when they plugged the value into their algorithms, they calculated a total of 26 unique synapse sizes.”
http://www.scientificamerican.com/article/new-estimate-boosts-the-human-brain-s-memory-capacity-10-fold/
Although re-reading it, it’s not clear whether they’re talking about what is versus what can be measured. I wish they’d gone into more detail on this point.
On analog computers, the wikipedia article has a lot of info in it. One of its points is that an analog computer was used to work on differential equations. Of course, this was in a time when digital computers were far more limited than they are today.
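For a flavor of the kind of problem those machines handled, here’s a crude digital stand-in in Python: Euler integration of a damped oscillator, the sort of differential equation analog computers solved natively by wiring components together. The constants and step size below are arbitrary illustrative choices.

```python
# A crude digital stand-in for an analog computer's bread-and-butter task:
# integrating the damped oscillator x'' = -k*x - c*x' with the Euler method.
# The constants and step size are arbitrary choices made for illustration.

k, c = 1.0, 0.1      # spring constant and damping coefficient
x, v = 1.0, 0.0      # initial position and velocity
dt = 0.01            # time step
steps = 1000

for _ in range(steps):
    a = -k * x - c * v   # acceleration given by the oscillator equation
    x += v * dt          # advance position
    v += a * dt          # advance velocity

print(f"after {steps * dt:.0f} time units: x = {x:.4f}, v = {v:.4f}")
```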
I came across this recently. Thought you might have some interesting things to say about it.
http://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/
Hoffman seems to be getting a lot of attention recently. I think I feel a new post coming on.
My quick response is to ask, if Hoffman is right, what follows? If reality is an illusion, that illusion seems to exact painful consequences for us not taking it seriously. It doesn’t seem like we have any choice but to play along, which means the illusion effectively is still our reality.
That’s certainly one way to look at it, but another possible take is that we might be able to manipulate things WITH perception alone.
I’m not sure that’s true, but it is something trippy to think about.
It is, but we’d be wise to require extraordinary evidence for extraordinary claims.
Also consider that if reality is an illusion, then we already are manipulating things with our mind alone. It’s what we would be doing when we think we’re manipulating things physically. Indeed, maybe the illusion of manipulating things physically is the mental technique we must go through to manipulate things with perception. Of course, that just puts us back to the illusion of reality being our reality.
I think he’d say that we already know that reality is at least *partially* an illusion.
Where that leads, yeah, I don’t know. It brings to my mind all sorts of metaphysical monism – Spinoza and Hegel. Maybe I need to read more Spinoza.