Friday, October 21, 2005

Book Review: The Emperor's New Mind

Author: Roger Penrose
Review by Murray Chapman


A subject that's always fascinated me is the nature of consciousness: how is it that we are aware of ourselves? Are other animals aware of themselves in the same way? What about computers, or calculators? Most people make what I feel is a fairly reasonable assumption that human beings are aware of themselves in a way unlike any other animal or object. Certainly we're more sentient or self-aware than a calculator appears to be; we are also capable of things that monkeys aren't. But monkeys are capable of things that calculators aren't. So are cats. And mice, and grasshoppers. What about plankton? Even these tiniest of creatures exhibit complex, non-deterministic behavior that is markedly different to that of even the most powerful computer in existence. Our attempts at creating or even simulating intelligence are clumsy. Clearly, there is something missing from our understanding about the nature of consciousness, and in this book, Roger Penrose presents a possible explanation.

The book is extremely well written and organized; the subjects covered are so mind-bogglingly large and diverse that without a solid logical progression tying them together the book would be a disorganized and confusing mess. Penrose manages to guide the reader step by step, essentially from first principles, to his final conclusion regarding the nature of consciousness. It's a wild and crazy journey: you look at everything from the smallest subatomic particles to huge galaxies and supernovae, with detours into abstract mathematics, fractals, teleportation, time-travel, biology, billiards, geometry, relativity, the 11th dimension, and quantum mechanics. Before embarking on the journey, you might think it ludicrous that all of these things could somehow be tied into consciousness without some kind of new-age astrology involving spirits and "magic" -- yet without breaking any laws of physics or nature, and indeed acknowledging when he's bending them even slightly, Penrose makes a solid, fact-based scientific argument.

Penrose uses an extremely clever technique to present his claims: rather than stating what his conclusion is and then attempting to prove it to be true, he begins by examining exactly what it is that we're trying to find. Then, starting with the most elemental of concepts, he slowly adds layer upon layer in an attempt to move closer towards a solution. The results are profound: rather than feeling badgered into believing his specific theory, you are taken on an entirely plausible journey of discovery, eventually arriving at what feels like a rational and solid explanation.

But what is he saying?

Penrose begins by asking: what is "consciousness"? This question reminds me of the "Deep Thought" sequence in The Hitch-Hiker's Guide to the Galaxy, in which it became apparent that the answer to the Ultimate Question of Life, the Universe, and Everything was much, much simpler than the question itself -- which could be very, very difficult to state precisely. "You know... everything!" But what, precisely? Rather than trying to convince you of a definition that suits his goals, the author guides the reader through the very nature of the universe, deriving meaning by adding new evidence to that which has been previously discovered and accepted.

He starts by examining the fundamental concepts that we need to define precisely in order to know what we're trying to solve. Example: knowledge. Well, that's "facts plus consciousness". OK, let's leave "consciousness" out of it for now; what is a "fact"? That's something that is true. But how do we know whether something is true or not? Because it can be proved or measured to be so. Right: so after three chapters, we know that we need to solve the "proof" and "measurement" issues before we can possibly understand consciousness.

Now here is where things start to get interesting. Both "proof" and "measurement," despite what philosophers, parents, teachers, and theologians would tell you, turn out to be extremely complex and surprising things that don't play by the normal rules.

Proof

In 1931, Kurt Gödel proved that any self-consistent mathematical system of rules and propositions powerful enough to express ordinary arithmetic must contain statements that can be neither proved nor disproved by the rules of the system. This was a revolution in mathematics; some scholars took it as undeniable proof that everything was pointless and arbitrary, or that the perfect universe revealed by mathematics was a sham, and that there was no underlying order or logic on which nature operated. If there were things that we could prove we couldn't ever know, I mean, what was the point?

It turns out that Gödel's theorem actually opened up more possibilities than it closed; if you have something you can't determine, then you introduce an infinite number of equally valid solutions. This is the same mental leap you have to make to comprehend algebra: "3 + x = 5" -- how can arithmetic make sense when you're not given all the information? To solve it, you need to be able to quarantine the "x" into a "don't worry about it for now" part of your brain, and use what you do know to try to reduce the number of possible solutions. Another similar cognitive leap is required to understand that equations such as "x * x = 25" allow more than one correct solution, something that used to result in a detention if you tried it as an excuse for getting your homework wrong.

Gödel's theorem raised interesting questions about computability, which is important for questions about knowledge and proof: what can you determine, and what can you never determine? Can you prove that something can ever be learned? Alan Turing took Gödel's work one step further and proved that - in general - you cannot write a set of instructions that works out whether another set of instructions will ever stop. This halting problem is the Pandora's box of computer science: you have to open it (i.e. follow the instructions and run the program) to work out what it does.
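To make Turing's result a little more concrete, here is a minimal sketch of the classic argument in Python (my illustration, not the book's; the names are hypothetical, and the halts() oracle is, of course, exactly the thing that cannot exist):

    def halts(program, data):
        """Hypothetical oracle: returns True if program(data) eventually stops."""
        raise NotImplementedError("Turing proved no such general procedure can exist")

    def troublemaker(program):
        # Ask the oracle about a program fed its own source...
        if halts(program, program):
            while True:      # ...and do the opposite: loop forever if it would stop,
                pass
        return "done"        # ...or stop at once if it would loop forever.

    # Now ask: does troublemaker(troublemaker) halt? Whichever answer halts()
    # gives, troublemaker does the opposite -- a contradiction, so no such
    # halts() function can ever be written.

In general there is no shortcut: to find out what a program does, you have to run it.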

Proof no longer seems like the black-and-white, right-or-wrong concept it was when we started. We continue on to the rest of the book with a large number of items in our "don't worry about it for now" collection.

Measurement

Isaac Newton formulated much of the mechanics that is taught in schools today: motion, action, reaction, momentum, velocity, etc. Equal and opposite reactions, conservation of momentum, and all that. These beautiful, elegant, and simple laws explained and predicted the actions of nature, and allowed an explosion of scientific advancement.

As we became better and better at measuring the world around us, and could look at extremes of measurement (the very small and the very large), we started to notice small problems with Newton's equations. At first, these were thought to be experimental errors, but after decades of experimentation and careful confirmation, scientists began to suspect that Newton's laws weren't perfect.

For things on the scale that humans typically encounter -- centimeters, hours, kilograms -- Newtonian (or "classical") physics works essentially perfectly. But when you start looking at astronomical or minuscule quantities (e.g. light years, or sub-molecular distances), observed reality begins to drift away from the Newtonian predictions.

This is where Albert Einstein comes in; his theories of special and general relativity introduced compensation factors for these extreme quantities, and they have opened up a whole new era of scientific discovery (astronomy, nuclear physics, etc.). (It's interesting to note that Einstein himself was dissatisfied with his corrections to Newton, and suspected that there was another truth lurking still deeper within nature, but he was never able to discover it. Modern physicists concur, and are to this day searching for the Grand Unified Theory (GUT) that would tie together Newton, Einstein, and other observed anomalies in gravity and electromagnetic fields.)

As we delve deeper and deeper into the extremes of nature, we find some truly startling things about reality. Things that we have always taken for granted no longer seem to apply. For instance: the faster you travel, the slower time goes. You would need to get very close to the speed of light (300,000 km/s) for a human to notice it, but modern clocks are accurate enough to notice time slowing down on airplanes, which only move at around 1,000 km/h.
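To put a rough number on that (my back-of-the-envelope calculation, not the book's), special relativity says a clock moving at speed v runs slow by the factor

\[
\frac{\Delta t_{\text{moving}}}{\Delta t_{\text{rest}}} \;=\; \sqrt{1 - \frac{v^2}{c^2}} \;\approx\; 1 - \frac{v^2}{2c^2}.
\]

For an airliner at about 1,000 km/h (roughly 278 m/s), v²/(2c²) is about 4 × 10⁻¹³, so over a ten-hour flight the onboard clock loses something like 15 nanoseconds: far too little for a passenger to feel, but well within reach of an atomic clock. (Real flying-clock experiments also have to account for altitude, which pulls in the opposite direction, but that's a general-relativity detail.)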

Similarly, some very bizarre things happen when you start to look at extremely small objects. Let's illustrate by way of an example: suppose you want to pick up a tennis ball that is rolling across the floor. In order to do so, you would need to know (a) where it is, and (b) how fast it is moving. This seems like a perfectly reasonable proposition, similar to things that we do without thinking multiple times per day. It appears, however, that when you get down to tiny, subatomic distances, it's just plain impossible to answer both questions precisely at the same time.

Huh? What does that mean? Surely you just look at this small object and say "there it is." Unfortunately, it's not that simple. We're talking about measurements so small -- smaller than the wavelength of light -- that they are by definition invisible. It's like trying to locate a pair of socks in your house by ZIP code. Theory, backed by experimentation, shows that the more accurately you measure the position of a subatomic particle, the less accurately it is possible to measure its velocity - and vice-versa. Physics nerds call this the Heisenberg Uncertainty Principle, thus explaining the joke written on nearly every bathroom door in university physics departments: "Heisenberg may have been here."
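In symbols (this is the standard statement of the principle, not a quotation from the book), the uncertainty in position and the uncertainty in momentum trade off against each other:

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}, \qquad \hbar \approx 1.05 \times 10^{-34}\ \mathrm{J\,s}.
\]

Because ħ is so absurdly tiny, the trade-off is invisible for tennis balls and unavoidable for electrons.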

This is not some abstract notion cooked up for the hell of it by a bunch of absent-minded egg-head professors looking for research funding. This explanation, and the branch of physics that has grown up around it, is the simplest, most rational, fact-based explanation of what has been experimentally observed. Various models of the underlying components of subatomic physics have been proposed; this is quantum mechanics, named after the further curious observation that events in the sub-sub-atomic realm happen not in arbitrary amounts, but only in well-defined chunks, or quanta. It is clear that at some point, as you look at smaller and smaller objects, there is a sudden change in the rules that govern reality.
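The textbook example of such a "chunk" (mine, for concreteness, not one the review's argument depends on) is light: a beam of frequency ν can only deliver its energy in whole multiples of Planck's constant times that frequency,

\[
E \;=\; n\,h\nu, \qquad n = 0, 1, 2, \ldots, \qquad h \approx 6.63 \times 10^{-34}\ \mathrm{J\,s}.
\]

There is no such thing as half a quantum.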

So what?

Well, consider this: we started with the question "how is it that we are aware of ourselves?" and, by following a logical path of exploration, we've ended up examining the very smallest parts of nature, and bumped into quantum mechanics and Einstein's theories of relativity. And so it is with Penrose's book. We've only spent a few paragraphs talking about it here; Penrose fills 482 pages with this kind of background material before he even reaches the subject of consciousness.

Given this volume of groundwork, you may expect that the book gives an incredibly detailed and fully-explained account of each step along the path. Ironically, however, my only complaint with the book is that we're asked to accept so much as "given": Penrose gives copious references for his facts, but the speed at which we (necessarily) move through the subject matter leaves little time to become comfortable with accepting one proposition before it is used to crack open reality in a new and different way, leading to yet another fantastic brain-bending concept.

Penrose needs to cover several hundred years' worth of work by the most brilliant minds in mathematics and physics before he can even start making his own arguments, so this is perhaps forgivable. Still, it is rather startling to come across something like, say, this on page 517:

When the time is reached at which it might have struck the retina, and so long as the linear rule U of quantum theory holds true (i.e. deterministic Schrödinger state-vector evolution, cf. p. 323), we can have a complex linear superposition of a nerve signal and not have a nerve signal.


...and realize that you actually understand what he's talking about. I'm not suggesting that everyone reading this review should immediately be able to comprehend this quote - indeed, when I first encountered it, I had to flip back and forth to previous chapters in order to refresh my memory of the terms - but merely pointing out that by the time I encountered it in the book, I understood it. Had I seen it cold, as you perhaps are now, then I would have laughed at the thought of ever being able to comprehend it.
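If it helps, here is a toy numerical picture of what a "complex linear superposition" of two alternatives looks like (my illustration, in Python with NumPy; the labels are mine, not Penrose's notation):

    import numpy as np

    # Two basis states: index 0 = "nerve signal", index 1 = "no nerve signal".
    signal    = np.array([1.0, 0.0], dtype=complex)
    no_signal = np.array([0.0, 1.0], dtype=complex)

    # Under the deterministic rule U, the system can evolve into a complex
    # linear combination of the two -- neither one thing nor the other:
    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
    state = alpha * signal + beta * no_signal

    # Only when a measurement (Penrose's procedure R) occurs do we get a
    # definite outcome, with probabilities given by the squared amplitudes:
    print(np.abs(state) ** 2)   # -> [0.5  0.5]

Nothing in this little sketch is controversial; the controversy is over where, physically, the jump from U to R happens, and whether brains exploit it.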

But what about thinking?

At long last, Penrose gets down to the nitty-gritty, and makes his point: he doesn't believe in strong artificial intelligence, which claims that intelligence and awareness are merely the result of a sufficiently complex system. Penrose argues, and I agree, that there is something missing. Computers are immeasurably more complex now than they were 50 years ago, yet they are really no closer to being sentient now than they were then. They are faster, and can do more, but they are still dumb machines, forever doomed to follow a predestined course of action. Philosophical issues of free will aside, humans (and even grasshoppers or plankton) exhibit a non-determinism that we've yet to find outside of organic life.

Penrose suggests that this missing factor is in fact quantum mechanics: that quixotic and (currently) unknowable set of rules that governs subatomic reality, and which, under specific circumstances, can affect the classical/Newtonian world. This theory has met with a great deal of criticism, and has resulted in Penrose's conclusions today being considered "interesting but wrong". A significant problem with the theory is that there is compelling evidence to suggest that interaction between the quantum world and the classical world is only possible at very, very low temperatures - above which the kinetic energy of heat obliterates the tiny influence of the quantum world. Human body (brain) temperature is at least 100 degrees too hot for quantum theory (as we understand it today) to have an observable influence on behavior.

Still, as with most adventures, the destination is not as important as the journey. We may not have answered our Ultimate Question, but we've explored our own curiosity - and found that we need to be more curious in order to understand it. Philosophers have long debated whether it's possible for a human brain to comprehend how it itself works -- the answer appears to be "not yet", but if you are interested in pushing your brain in this direction, Penrose's book is an excellent starting point.

Monday, October 03, 2005

New Land Speed Record in Hypocrisy!


The Majority Leader of the House, Tom DeLay (R-Texas), was a featured participant in a previous article I wrote on hypocrisy -- you will remember him as the one who fought to deny the husband of a brain-dead, comatose patient the ability to make decisions regarding her care. This, despite having pulled the plug on his own father in order to alleviate his suffering. Based on his actions today, he may have set a new world record for the quickest hypocritical statement by a politician.

Having already been admonished three times in the last twelve months for ethics violations, he was late last week formally indicted on conspiracy charges relating to his fundraising activities in Texas. There aren't many rules restricting campaign finance in Texas -- in fact there is precisely one: you can't accept money from corporations and give it to candidates. DeLay and his associates are accused of collecting money from corporations via their "Texans for a Republican Majority" committee, then sending the money on to the Republican National Committee, who then cut a check for the exact same amount to give directly to candidates. Shady? Definitely. Illegal? We shall see!

If this sounds to you like money laundering, then you're not alone: prosecutor Ronnie Earle today (Monday) further indicted DeLay on two money-laundering charges, making DeLay by far the most indicted US politician in the last 100 years.

When the original indictment came out on Wednesday, DeLay was "outraged" at how he was being singled out for political persecution by a "partisan hack Democrat" (Earle). DeLay did not explain how Earle's prosecution record (13 Democrats and 3 Republicans) is proof of a bias against Republicans.

So on Friday, DeLay let rip with this zinger:

My defense in this case will not be technical or legalistic. It will be categorical and absolute.

Yet today Fox News reports:

[DeLay's] lawyers asked a judge Monday to throw out the first indictment, arguing that the charge of conspiring to violate campaign finance laws was based on a statute that didn't take effect until 2003 — a year after the alleged acts.

Congratulations, Mr DeLay: a categorical and absolute denial of the charges, based on solid fact-based refutations of the evidence. Guinness will be contacting you shortly.

And while on the subject of alcohol, another politician has made the headlines for unbelievable antics. David Graves (Republican, naturally) attempted to get out of a drunk-driving charge in Georgia by invoking a two-hundred-year-old law granting immunity from arrest to government officials travelling to or from official meetings during legislative sessions. The law was enacted to prevent corrupt local sheriffs from arresting and detaining politicians in transit just long enough to miss a vote -- hardly something that is commonplace today.

This story would be funny enough, even without the other salient points: the "official meeting" that he was returning home from was basically a private party at which he and a bunch of colleagues got roaring drunk -- he would have us believe this is government business as usual! More hilariously, though, the scandal has forced him to resign his position on an official state government committee: the one regulating the sale of alcohol.