March 20, 2006
It's a Monty Hall universe, after all
Attention conservation notice: This mini-essay was written after one of my family members asked me what I thought of the idea of predestination. What follows is a rephrasing of my response, along with a few additional thoughts that connect this topic to evolutionary game theory.
Traditionally, the idea of predestination means that each person has a fixed path - a sequence of events - that constitutes their life. From the popular perspective, this gets translated into "everything happens for a reason" - a statement that raises little red flags with me whenever someone states it earnestly. In my experience, this platitude mostly gets used to rationalize a notable coincidence (e.g., bumping into someone you know at the airport) or a particularly tragic event, and people don't actually behave as if they believe it (more on this later).
I suspect that most people who find predestination appealing believe, at some level, in a supernatural force that has predetermined every little event in the world. But, when you get right down to it, there's nothing in our collective experience of the physical universe that supports either this idea or the existence of any supernatural force. But, there are aspects of the universe that are, in a sense, compatible with the idea of predestination. Similarly, there are aspects that are wholly incompatible, and I'll discuss both momentarily. The problem, of course, is that these aspects are not the ones that a casual observer would focus on when considering predestination.
The aspect of the universe that is compatible with the idea of predestination comes from the fact that the universe is patterned. That is, there are rules (such as the laws of physics) that prescribe the consequences for each action. If the universe were totally unpredictable in every way possible, then there would be no cause-and-effect, while a consistent and physically realistic universe requires that it exist. As a toy example, if you push a ball off a table, it then falls to the ground. To be precise, your push is the cause, the fall is the effect, and gravity is the natural mechanism by which cause leads to effect. If the universe were totally unpredictable, then that mechanism wouldn't exist, and the ball might just as well fly up to the ceiling, or whiz around your head in circles, as fall to the ground. So, the fact that cause-and-effect exists means that there's a sequence of consequences that naturally follow from any action, and this is a lot like predestination.
But, there's an aspect of the universe that is fundamentally incompatible with predestination: quantum physics. At the smallest scales, the idea of cause-and-effect becomes problematic, and the world decomposes into a fuzzy mass of unpredictability. There are still patterns there, but the strongest statements that can be made are only statistical in nature. That is, while individual events are themselves totally unpredictable, when taken together in larger numbers, you observe that certain kinds of events are more likely than others; individual events are random, while en masse they are not. Indeed, the utilization of this fact is what underlies the operation of virtually every electronic device.
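This two-level behavior - lawless individually, lawful in aggregate - is easy to see in a toy simulation. The sketch below uses an ordinary pseudorandom generator as a stand-in for genuine quantum randomness (the bias of 0.7 and the seed are my own arbitrary choices):

```python
import random

random.seed(42)  # reproducible run; the generator stands in for quantum randomness

def event(p=0.7):
    """A single 'quantum' event: outcome 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

# Any single event is unpredictable...
first_ten = [event() for _ in range(10)]

# ...but the aggregate frequency is lawful: it converges toward p = 0.7.
n = 100_000
freq = sum(event() for _ in range(n)) / n

print(first_ten)
print(round(freq, 3))
```

Any particular entry of first_ten is anyone's guess before you run it, but the long-run frequency reliably settles near the underlying probability - the same sense in which quantum physics makes only statistical predictions.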
Einstein struggled with the apparent conflict between the randomness of quantum theory and the regularity of the macroscopic world we humans inhabit. This struggle was the source of his infamous complaint that God does not play dice with the universe. Partially because of his struggle, physicists have probed very deeply into the possibility that the randomness of the universe at the smallest level is just an illusion, and that there is some missing piece of the picture that would dispel the fog of randomness, returning us to the predictable world of Newton. But, these tests for "hidden variables" have always failed, and the probabilistic model of the universe, i.e., quantum physics, has been validated over, and over, and over. So, it appears that God really does play dice with the universe, at least at the very smallest level.
But, how does this connect with the kind of universe that we experience? Although the motions of the water molecules in the air around me are basically random, does that also mean that my entire life experience is basically unpredictable? Well, yes, actually. In the 1960s, physicists discovered that large systems like the weather are "mathematically chaotic". This is the idea that very small disturbances in one place can be amplified, by natural processes, to become very large disturbances in another place. This idea was popularized by the image of a butterfly flapping its wings in Brazil and causing a tornado in Texas. And, physicists have shown that indeed, the unpredictability of tiny atoms and electrons can and does cause unpredictability in large systems like the rhythm of your heart, the weather patterns all over the world, and even the way water splashes out of a boiling pot.
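The amplification of small disturbances is easy to demonstrate. A standard textbook example (my choice, not one from the original post) is the logistic map, which stretches microscopic differences in the starting point until they dominate:

```python
# The logistic map x -> r*x*(1-x) with r = 4.0 is a classic example of
# mathematical chaos: nearby starting points separate exponentially fast.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two initial conditions differing by one part in a billion:
a = logistic(0.123456789)
b = logistic(0.123456790)

# Over a short horizon the two trajectories still agree closely...
short_a = logistic(0.123456789, steps=5)
short_b = logistic(0.123456790, steps=5)

# ...but after 50 steps the initial difference has been enormously amplified.
print(abs(short_a - short_b), abs(a - b))
```

After 50 iterations the two trajectories are effectively uncorrelated, which is the precise sense in which a flap of butterfly wings can, in principle, seed a tornado.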
So basically, predestination exists, but only in a statistical sense. Technically, this is what's called "probabilistic determinism", and it means that while hindsight is perfect (after all, the past has already happened, so it can only be one way), the future is unknown and is, at least until it happens, undetermined except in the statistical sense. Put another way, the broad brushstrokes of the future are predetermined (because the universe operates through natural and consistent forces), but the minute details are not (because the universe is fundamentally probabilistic in nature). If some supernatural force is at play and has predetermined certain features of the universe, then those features are only very vague and general ones, like life probably evolving on a planet somewhere (the "blind watchmaker" idea, basically), and not the kind of events that most people consider when they think about predestination, such as attending a certain graduate school or falling in love with a certain person.
In summary, what I've tried to establish in the above paragraphs is that the belief in predestination is an irrational one, because it's not actually supported by any physical evidence from the real world. But, the idea of predestination has a certain utility for intelligent beings who can only see a very small piece of the world around them. From any one person's perspective, the world is a very confusing and unpredictable place, and I think it's very comforting to believe that such an impression is false and that the world is actually very ordered beyond our horizon of knowledge. That is, holding this belief makes it easier to actually make a decision in one's own life, because it diminishes the fear that one's choice will result in really bad consequences, and it does this by asserting that whatever decision one makes, the outcome was already determined. So, it frees the decision-maker from responsibility for the consequences, for better or for worse. And, in a world where decisions must be made in a timely fashion - where one cannot spend hours, days, or years pondering which choice is best, which is to say the world that we inhabit - that freedom from consequence is really useful.
In fact, this mental freedom is necessary for survival (although getting it via a belief in predestination is not the only way to acquire it). The alternative is a dead species - dead from a paralysis of indecision - which clearly has been selected against in our evolutionary history. But there are also clear problems with having an extreme amount of such freedom. No decision-making being can fully believe in a disconnect between their decisions and the subsequent consequences. Otherwise, that being would have no reason to think at all, and could make decisions essentially at random. So, there must be a tension between the consequence-free and consequence-centric modes of decision making, and indeed, we see exactly this tension in humans today. Or rather, humans seem to apply both modes of decision making depending on the situation, with consequence-free thinking perhaps applied retrospectively more often than prospectively. Ultimately, the trick is to become conscious of these modes and to learn how to apply them toward the end goal of making better decisions, i.e., weighing the relative gain in the quality of a decision against the extra time it took to come upon it. Of course, humans are notoriously bad judges of the quality of their decisions (e.g., here and here), so spending a little extra time to consider your choices may be a reasonable way to ensure that you're happy with whatever choice you end up making.
posted March 20, 2006 05:43 PM in Thinking Aloud | permalink | Comments (1)
March 13, 2006
Quantum mysticism, the new black
When I was much younger and in my very early days of understanding the conflicting assertions about the nature of the world made by religious and scientific authorities, I became curious about what Eastern philosophy had to say about the subject. The usual questions troubled my thoughts: How can everything happen for a reason if we have free will? or, How can one reconcile the claims about Creation from the Bible (Torah, Koran, Vedas, whatever) with factual and scientifically verified statements about the Universe, e.g., the Big Bang, evolution, heliocentrism, etc.? and so forth.
Eastern philosophy (and its brother Eastern mysticism), to my Western-primed brain, seemed like a possible third way to resolve these conundrums. At first, my imagination was captured by books like The Tao of Physics, which offered the appearance of a resolution through the mysteries of quantum physics. But, as I delved more deeply into Physics itself, and indeed actually started taking physics courses in high school and college, I became increasingly disenchanted with the slipperiness of New Age thought (which is, for better or for worse, the Western heir of Eastern mysticism). The end result was a complete rejection of the entire religio-cultural framework of New Age-ism on the basis of it being irrational, subjective and rooted in the ubiquitous but visceral desire to confirm the special place of humans, and more importantly yourself, in the Universe - the same desire that lies at the foundation of much of organized religious thought. But possibly what provoked the strongest revulsion from it was the fact that New Age-ism claims a pseudo-scientific tradition, much like modern creationism (a.k.a. intelligent design), in which the entire apparatus of repeatable experiments, testable hypotheses, and the belief in an objective reality (which implies a willingness to change your mind when confronted with overwhelming evidence) is ignored in favor of the ridiculous, contradictory argument that because science hasn't proved it to be false (and more importantly, but usually not admittedly, because it seems like it should be right), it must therefore be true.
Fast-forward many years to the release of the film version of Eastern mysticism-meets-Physics: "What the #$!%* Do We Know!?". Naturally, I avoided this film like the plague - it veritably reeked of everything I disliked about New Age-ism. But now, apparently, there's a sequel to it. Alas, I doubt that many of the faithful who flock to see the film will be reading this thoughtful essay on it in the New York Times, "Far Out, Man. But Is It Quantum Physics?" by National Desk correspondent Dennis Overbye. His conclusion about the movie and its apparent popularity among the New Agers, in his own words, runs like so:
When it comes to physics, people seem to need to kid themselves. There is a presumption, Dr. Albert [a professor of philosophy and physics at Columbia] said, that if you look deeply enough you will find "some reaffirmation of your own centrality to the world, a reaffirmation of your ability to take control of your own destiny." We want to know that God loves us, that we are the pinnacle of evolution.
But one of the most valuable aspects of science, he said, is precisely the way it resists that temptation to find the answer we want. That is the test that quantum mysticism flunks, and on some level we all flunk.
That is, we are fundamentally irrational beings, and are intimately attached to both our own convictions and the affirmation of them. Indeed, we're so naturally attached to them that it takes years of mental training to be otherwise. That's what getting a degree in science is about - training your brain to intuitively believe in a reality that is external to your own perception of it, that is ordered (even if in an apparently confusing way) and predictable (if only probabilistically), and that is accessible to the human mind. Overbye again,
I'd like to believe that like Galileo, I would have the courage to see the world clearly, in all its cruelty and beauty, "without hope or fear," as the Greek writer Nikos Kazantzakis put it. Take free will. Everything I know about physics and neuroscience tells me it's a myth. But I need that illusion to get out of bed in the morning. Of all the durable and necessary creations of atoms, the evolution of the illusion of the self and of free will are perhaps the most miraculous. That belief is necessary to my survival.
Overbye is, in his colloquial way, concluding that irrationality has some positive utility for any decision-making being with incomplete information about the world it lives in. We need to believe (at some level) that we have the ability to decide our actions independently from the world we inhabit in order to not succumb to the fatalistic conclusion that everything happens for no reason. But understanding this aspect of ourselves at least gives us the hope of recognizing when that irrationality is serving us well, and when it is not. This, I think, is one of the main reasons why science education should be universally accessible - to help us make better decisions on the whole, rather than being slaves to our ignorance and the whim of external forces (be they physical or otherwise).
posted March 13, 2006 08:29 PM in Thinking Aloud | permalink | Comments (3)
March 07, 2006
Running a conference (redux)
Once again, for the past eight months or so, I've been heavily involved in running a small conference. The second annual Computer Science UNM Student Conference (CSUSC) happened this past Friday and was, in every sense of the word, a resounding success. Originally, this little shindig was conceived as a way for students to show off their research to each other, to the faculty, and to folks at Sandia National Labs. As such, this year's forum was just as strong as last year's inaugural session, having ten well-done research talks and more than a dozen poster presentations. Our keynote address was delivered by the friendly and soft-spoken David Brooks (no, not that one) from Harvard University, on power efficiency in computing. (Naturally, power density has been an important constraint on computing for a long time.)
Having organized this conference twice now, I have a very healthy respect for how much time is involved in making such an event a success. Although most of one's time is spent making sure all the gears are turning at the proper speeds (which includes, metaphorically, keeping the wheels greased and free of obstructions) so that each part completes in time to hand off to the next, I'm also happy with how much of a learning experience it seems to have been for everyone involved (including me). This year's success was largely due to the excellent and tireless work of the Executive Committee, while, I'm confident saying that, all of the little hiccoughs we encountered were oversights on my part. Perhaps next year, those things will be done better by my successor.
But, the future success of the CSUSC is far from guaranteed: there is a non-trivial chance that student interest in organizing it will fatally wane. This is a risk, I believe, that every small venue faces, since there are only ever a handful of students interested in taking time away from their usual menu of research and course work to try their hand at professional service. I wonder, what fraction of researchers are ever involved in organizing a conference? Reviewing papers is a standard professional duty, but the level of commitment required to run a conference is significantly larger - it takes a special degree of willingness (masochism?) and is yet another of the many parts of academic life that you have to learn in the trenches. For the CSUSC, I simply hope that the goodness we've created so far continues on for a few more years, and am personally just glad we had such a good run over the past two.
With this out of the way, my conference calendar isn't quite empty, and is already rapidly refilling. Concurrent with my duties to the CSUSC, I've also been serving on the Program Committee for the 5th International Workshop on Experimental Algorithms (WEA), a medium-sized conference on the design, analysis and implementation of algorithms. It has been an interesting experience in itself, in part for broadening my perspective on the kind of research being done in algorithms. In May, always my busiest month for conferences, I'll be attending two events on network science. The first is CAIDA's Workshop on Internet Topology (WIT) in San Diego, while the second is NetSci 2006 in Bloomington, Indiana.
posted March 7, 2006 04:37 AM in Simply Academic | permalink | Comments (0)
March 01, 2006
The scenic view
In my formal training in physics and computer science, I never did get much exposure to statistics and probability theory, yet I have found myself consistently using them in my research (partially on account of the fact that I deal with real data quite often). What little formal exposure I did receive was always in some specific context and never focused on probability as a topic itself (e.g., statistical mechanics, which could hardly be called a good introduction to probability theory). Generally, my training played out in the crisp and clean neighborhoods of logical reasoning, algebra and calculus, with the occasional day-trip to the ghetto of probability. David Mumford, a Professor of Mathematics at Brown University, opines about the ongoing spread of that ghetto throughout the rest of science and mathematics, i.e., how probability theory deserves respect at least equal to that of abstract algebra, in a piece from 1999 on The Dawning of the Age of Stochasticity. From the abstract,
For over two millennia, Aristotle's logic has ruled over the thinking of western intellectuals. All precise theories, all scientific models, even models of the process of thinking itself, have in principle conformed to the straight-jacket of logic. But from its shady beginnings devising gambling strategies and counting corpses in medieval London, probability theory and statistical inference now emerge as better foundations for scientific models ... [and] even the foundations of mathematics itself.
It may sound like it, but I doubt that Mumford is actually overstating his case here, especially given the deep connection between probability theory, quantum mechanics (cf. the recent counter-intuitive result on quantum interrogation) and complexity theory.
A neighborhood I'm more familiar with is that of special functions; things like the Gamma function, the Riemann Zeta function (a personal favorite), and the Airy functions. Sadly, these familiar friends show up very rarely in the neighborhood of traditional computer science, but instead hang out in the district of mathematical modeling. Robert Batterman, a Professor of Philosophy at Ohio State University, writes about why exactly these functions are so interesting in On the Specialness of Special Functions (The Nonrandom Effusions of the Divine Mathematician).
From the point of view presented here, the shared mathematical features that serve to unify the special functions - the universal form of their asymptotic expansions - depends upon certain features of the world.
(Emphasis his.) That is, the physical world itself, by presenting a patterned appearance, must be governed by a self-consistent set of rules that create that pattern. In mathematical modeling, these rules are best represented by asymptotic analysis and, you guessed it, special functions, that reveal the universal structure of reality in their asymptotic behavior. Certainly this approach to modeling has been hugely successful, and remains so in current research (including my own).
My current digs, however, are located in the small nexus that butts up against these neighborhoods and those in computer science. Scott Aaronson, who occupies an equivalent juncture between computer science and physics, has written several highly readable and extremely interesting pieces on the commonalities he sees in his respective locale. I've found them to be a particularly valuable way to see beyond the unfortunately shallow exploration of computational complexity that is given in most graduate-level introductory classes.
In NP-complete Problems and Physical Reality Aaronson looks out of his East-facing window toward physics for hints about ways to solve NP-complete problems by using physical processes (e.g., simulated annealing). That is, can physical reality efficiently solve instances of "hard" problems? Although he concludes that the evidence is not promising, he points to a fundamental connection between physics and computer science.
Then, turning to look out his West-facing window towards computer science, he asks Is P Versus NP Formally Independent?, where he considers formal logic systems and the implications of Gödel's Incompleteness Theorem for the likelihood of resolving the P versus NP question. It's stealing his thunder a little, but the most quotable line comes from his conclusion:
So I'll state, as one of the few definite conclusions of this survey, that P ≠ NP is either true or false. It's one or the other. But we may not be able to prove which way it goes, and we may not be able to prove that we can't prove it.
There's a little nagging question that some researchers are only just beginning to explore, which is, are certain laws of physics formally independent? I'm not even entirely sure what that means, but it's an interesting kind of question to ponder on a lazy Sunday afternoon.
There's something else embedded in these topics, though. Almost all of the current work on complexity theory is logic-oriented, essentially because it was born of the logic and formal mathematics of the first half of the 20th century. But, if we believe Mumford's claim that statistical inference (and in particular Bayesian inference) will invade all of science, I wonder what insights it can give us about solving hard problems, and perhaps why they're hard to begin with.
I'm aware of only anecdotal evidence of such benefits, in the form of the Survey Propagation Algorithm and its success at solving hard k-SAT formulas. The insights from the physicists' non-rigorous results have even helped improve our rigorous understanding of why problems like random k-SAT undergo a phase transition from mostly easy to mostly hard. (The intuition is, in short, that as the density of constraints increases, the space of valid solutions fragments into many disconnected regions.) Perhaps there's more being done here than I know of, but it seems that a theory of inferential algorithms as they apply to complexity theory (I'm not even sure what that means, precisely; perhaps it doesn't differ significantly from PPT algorithms) might teach us something fundamental about computation.
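That phase transition is visible even in a naive experiment. The sketch below is my own toy code, not the Survey Propagation algorithm: it generates random 3-SAT formulas at a low and a high clause density and brute-forces their satisfiability. The parameters - 10 variables, 20 trials, and densities 2.0 and 6.0 straddling the threshold near 4.27 - are arbitrary small choices for illustration:

```python
import itertools
import random

random.seed(7)  # reproducible experiment

def random_3sat(n_vars, n_clauses):
    """A random 3-SAT formula: each clause picks 3 distinct variables,
    each negated with probability 1/2. Literal +i / -i means x_i true / false."""
    formula = []
    for _ in range(n_clauses):
        vs = random.sample(range(1, n_vars + 1), 3)
        formula.append([v if random.random() < 0.5 else -v for v in vs])
    return formula

def satisfiable(formula, n_vars):
    """Brute-force satisfiability check (fine for small n_vars)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula):
            return True
    return False

def sat_fraction(density, n_vars=10, trials=20):
    """Fraction of random formulas at this clause density that are satisfiable."""
    m = int(density * n_vars)
    return sum(satisfiable(random_3sat(n_vars, m), n_vars)
               for _ in range(trials)) / trials

frac_low = sat_fraction(2.0)   # well below the threshold near 4.27
frac_high = sat_fraction(6.0)  # well above it
print(frac_low, frac_high)
```

With sizes this small the transition is smeared out, but the low-density formulas should almost always be satisfiable while the high-density ones mostly are not - a miniature version of the easy-to-hard crossover the physicists' methods helped explain.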
posted March 1, 2006 02:32 PM in Interdisciplinarity | permalink | Comments (0)