September 09, 2011

What is the probability of a 9/11-size terrorist attack?

Sunday is the 10-year anniversary of the 9/11 terrorist attacks. As a commemoration of the day, I'm going to investigate answers to a very simple question: what is the probability of a 9/11-size or larger terrorist attack?

There are many ways we could try to answer this question. Most of them don't involve using data, math and computers (my favorite tools), so we will ignore those. Even using quantitative tools, approaches differ in the strength of the assumptions they make about the social and political processes that generate terrorist attacks. We'll come back to this point throughout the analysis.

Before doing anything new, it's worth repeating something old. For the better part of the past 8 years that I've been studying the large-scale patterns and dynamics of global terrorism (see for instance, here and here), I've emphasized the importance of taking an objective approach to the topic. Terrorist attacks may seem inherently random or capricious or even strategic, but the empirical evidence demonstrates that there are patterns and that these patterns can be understood scientifically. Earthquakes and seismology serve as an illustrative example. Earthquakes are extremely difficult to predict, that is, to say beforehand when, where and how big they will be. And yet, plate tectonics and geophysics tell us a great deal about where and why they happen, and the famous Gutenberg-Richter law tells us roughly how often quakes of different sizes occur. That is, we're quite good at estimating the long-time frequencies of earthquakes because larger scales allow us to leverage a lot of empirical and geological data. The cost is that we lose the ability to make specific statements about individual earthquakes, but the advantage is insight into the fundamental patterns and processes.

The same can be done for terrorism. There's now a rich and extensive modern record of terrorist attacks worldwide [1], and there's no reason we can't mine this data for interesting observations about global patterns in the frequencies and severities of terrorist attacks. This is where I started back in 2003 and 2004, when Maxwell Young and I started digging around in global terrorism data. Catastrophic events like 9/11, which (officially) killed 2749 people in New York City, might seem so utterly unique that they must be one-off events. In their particulars, this is almost surely true. But, when we look at how often events of different sizes (number of fatalities) occur in the historical record of 13,407 deadly events worldwide [2], we see something remarkable: their relative frequencies follow a very simple pattern.

The figure shows the fraction of events that killed at least x individuals, where I've divided them into "severe" attacks (10 or more fatalities) and "normal" attacks (fewer than 10 fatalities). The lion's share (92.4%) of these events are of the "normal" type, killing fewer than 10 individuals, but 7.6% are "severe", killing 10 or more. Long-time readers have likely heard this story before and know where it's going. The solid line on the figure shows the best-fitting power-law distribution for these data [3]. What's remarkable is that 9/11 is very close to the curve, suggesting that, statistically speaking, it is not an outlier at all.
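If you'd like to reproduce the flavor of this curve yourself, here is a minimal sketch of how the empirical distribution is built. The MIPT severity data aren't included here, so the file name below is a hypothetical placeholder for a list of per-event fatality counts.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical placeholder: one fatality count per deadly event.
fatalities = np.loadtxt("mipt_event_fatalities.txt")

x = np.sort(fatalities)
ccdf = 1.0 - np.arange(len(x)) / len(x)   # fraction of events killing at least x

plt.loglog(x, ccdf, ".", label="deadly events, 1968-2008")
plt.axvline(2749, linestyle="--", label="9/11")
plt.xlabel("severity x (fatalities)")
plt.ylabel("fraction of events killing at least x")
plt.legend()
plt.show()

print("fraction 'severe' (x >= 10):", np.mean(x >= 10))   # roughly 0.076 in the MIPT data
```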

A first estimate: In 2009, the Department of Defense received the results of a commissioned report on "rare events", with a particular emphasis on large terrorist attacks. In section 3, the report walks us through a simple calculation of the probability of a 9/11-sized attack or larger, based on my power-law model. It concludes that there was a 23% chance of an event that killed 2749 or more between 1968 and 2006. [4] The most notable thing about this calculation is that its magnitude makes it clear that 9/11 should not be considered a statistical outlier on the basis of its severity.

How can we do better: Although probably in the right ballpark, the DoD estimate makes several strong assumptions. First, it assumes that the power-law model holds over the entire range of severities (that is, x > 0). Second, it assumes that the model I published in 2005 is perfectly accurate, meaning both the parameter estimates and the functional form. Third, it assumes that events are generated independently by a stationary process, meaning that the production rate of events over time has not changed, nor have the underlying social or political processes that determine the frequency or severity of events. We can improve our estimates by improving on these assumptions.

A second estimate: The first assumption is the easiest to fix. Empirically, 7.6% of events are "severe", killing at least 10 people. But, the power-law model assumed by the DoD report predicts that only 4.2% of events are severe. This means that the DoD model is underestimating the probability of a 9/11-sized event, that is, the 23% estimate is too low. We can correct this difference by using a piecewise model: with probability 0.076 we generate a "severe" event whose size is given by a power law that starts at x = 10; otherwise we generate a "normal" event by choosing a severity from the empirical distribution for 0 < x < 10. [5] Walking through the same calculations as before, this yields an improved estimate of a 32.6% chance of a 9/11-sized or larger event from 1968 to 2008.

A third estimate: The second assumption is also not hard to improve on. Because our power-law model is estimated from finite empirical data, we cannot know the alpha parameter perfectly. Our uncertainty in alpha should propagate through to our estimate of the probability of catastrophic events. A simple way to capture this uncertainty is to use a computational bootstrap resampling procedure to generate many synthetic data sets like our empirical one. Estimating the alpha parameter for each of these yields an ensemble of models that represents our uncertainty in the model specification that comes from the empirical data.

This figure overlays 1000 of these bootstrap models, showing that they do make slightly different estimates of the probability of 9/11-sized events or larger. As a sanity check, we find that the mean of these bootstrap parameters is alpha=2.397 with a standard deviation of 0.043 (quite close to the 2.4+/-0.1 value I published in 2009 [6]). Continuing with the simulation approach, we can numerically estimate the probability of a 9/11-sized or larger event by drawing synthetic data sets from the models in the ensemble and then asking what fraction of those events are 9/11-sized or larger. Using 10,000 repetitions yields an improved estimate of 40.3%.
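Here is a rough sketch of that bootstrap procedure, assuming the continuous power-law model for "severe" events (x >= 10) and the standard maximum-likelihood estimator alpha = 1 + n / sum(ln(x/xmin)). The real MIPT severities aren't included, so the data below are a synthetic stand-in, and the output won't exactly reproduce the 40.3% figure, which came from the full simulation on the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
x_min, x_911, n_severe = 10.0, 2749, 1024

# Synthetic stand-in for the ~1024 "severe" event severities (alpha ~ 2.4).
severities = x_min * (1.0 - rng.random(n_severe)) ** (-1.0 / 1.4)

def alpha_mle(x, xmin=10.0):
    # Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

alphas, probs = [], []
for _ in range(1000):
    boot = rng.choice(severities, size=n_severe, replace=True)  # resample with replacement
    a = alpha_mle(boot)
    p = (x_911 / x_min) ** (1.0 - a)            # P(X >= 2749 | severe), under this model
    alphas.append(a)
    probs.append(1.0 - np.exp(-p * n_severe))   # P(at least one such event in 1024 tries)

print(f"alpha = {np.mean(alphas):.3f} +/- {np.std(alphas):.3f}")
print(f"P(at least one 9/11-sized event) ~ {np.mean(probs):.3f}")
```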

Some perspective: Having now gone through three calculations, it's notable that the probability of a 9/11-sized or larger event has almost doubled as we've improved our estimates. There are still additional improvements we could make, however, and these might push the number back down. For instance, although the power-law model is a statistically plausible model of the frequency-severity data, it's not the only such model. Alternatives like the stretched exponential or the log-normal decay faster than the power law, and if we were to add them to the ensemble of models in our simulation, they would likely yield 9/11-sized or larger events with lower frequencies and thus pull the probability estimate down somewhat. [7]

Peering into the future: Showing that catastrophic terrorist attacks like 9/11 are not in fact statistical outliers given the sheer magnitude and diversity of terrorist attacks worldwide over the past 40 years is all well and good, you say. But, what about the future? In principle, these same models could be easily used to make such an estimate. The critical piece of information for doing so, however, is a clear estimate of the trend in the number of events each year. The larger that number, the greater the risk under these models of severe events. That is, under a fixed model like this, the probability of catastrophic events is directly related to the overall level of terrorism worldwide. Let's look at the data.

Do you see a trend here? It's difficult to say, especially with the changing nature of the conflicts in Iraq and Afghanistan, where many of the terrorist attacks of the past 8 years have been concentrated. It seems unlikely, however, that we will return to the 2001 levels (200-400 events per year; the optimist's scenario). A dire forecast would have the level continue to increase toward a scary 10,000 events per year. A more conservative forecast, however, would have the rate continue as-is relative to 2007 (the last full year for which I have data), or maybe even decrease to roughly 1000 events per year. Using our estimates from above, 1000 events overall would generate about 75 "severe" events (10 or more fatalities) per year. Plugging this number into our computational model above (the third-estimate approach), we get an estimate of roughly a 3% chance of a 9/11-sized or larger attack each year, or about a 30% chance over the next decade. Not a certainty by any means, but significantly greater than is comfortable. Notably, this probability is in the same ballpark as our estimates for the past 40 years, which goes to show that the overall level of terrorism worldwide has increased dramatically during those decades.
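For concreteness, here is the back-of-the-envelope version of that forecast, assuming 1000 events per year, the empirical 7.6% "severe" fraction, and the alpha = 2.4 power-law model from the footnotes (a sketch, not the full ensemble simulation).

```python
from math import exp

events_per_year = 1000
severe_per_year = 0.076 * events_per_year           # ~76 "severe" events per year
p_severe_911 = (2749 / 10) ** (1 - 2.4)             # ~0.000385, as in footnote [5]

p_year = 1 - exp(-p_severe_911 * severe_per_year)   # ~0.03, i.e., roughly 3% per year
p_decade = 1 - (1 - p_year) ** 10                   # ~0.25, in the ballpark of the ~30% above
print(p_year, p_decade)
```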

It bears repeating that this forecast is only as good as the models on which it is based, and there are many things we still don't know about the underlying social and political processes that generate events at the global scale. (In contrast to the models the National Hurricane Center uses to make hurricane track forecasts.) Our estimates for terrorism all assume a homogeneous and stationary process where event severities are independent random variables, but we would be foolish to believe that these assumptions are true in the strong sense. Technology, culture, international relations, democratic movements, urban planning, national security, etc. are all poorly understood and highly non-stationary processes that could change the underlying dynamics in the future, making our historical models less reliable than we would like. So, take these estimates for what they are, calculations and computations using reasonable but potentially wrong assumptions based on the best historical data and statistical models currently available. In that sense, it's remarkable that these models do as well as they do in making fairly accurate long-term probabilistic estimates, and it seems entirely reasonable to believe that better estimates can be had with better, more detailed models and data.

Update 9 Sept. 2011: In related news, there's a piece in the Boston Globe (free registration required) about the impact 9/11 had on what questions scientists investigate that discusses some of my work.

-----

[1] Estimates differ between databases, but the number of domestic or international terrorist attacks worldwide between 1968 and 2011 is somewhere in the vicinity of 50,000-100,000.

[2] The historical record here is my copy of the National Memorial Institute for the Prevention of Terrorism (MIPT) Terrorism Knowledge Base, which stores detailed information on 36,018 terrorist attacks worldwide from 1968 to 2008. Sadly, the Department of Homeland Security pulled the plug on the MIPT data collection effort a few years ago. The best remaining data collection effort is the one run by the University of Maryland's National Consortium for the Study of Terrorism and Responses to Terrorism (START) program.

[3] For newer readers: a power-law distribution is a funny kind of probability distribution function. Power laws pop up all over the place in complex social and biological systems. If you'd like an example of how weird power-law distributed quantities can be, I highly recommend Clive Crook's 2006 piece in The Atlantic titled "The Height of Inequality", in which he considers what the world would look like if human height were distributed as unequally as human wealth (a quantity that is very roughly power-law-like).
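If you'd rather see the weirdness numerically, here is a toy comparison (my own illustration, not from Crook's piece): the largest of 100,000 draws from a roughly normal, height-like distribution sits just above the mean, while the largest draw from a heavy-tailed, wealth-like distribution dwarfs it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

heights = rng.normal(170, 10, n)                 # height-like: roughly normal (cm)
wealth = (1.0 - rng.random(n)) ** (-1.0 / 1.0)   # wealth-like: Pareto-style heavy tail

for name, x in [("height-like", heights), ("wealth-like", wealth)]:
    print(f"{name}: max is {x.max() / x.mean():.1f} times the mean")
# The tallest sample is ~1.3x the mean; the "richest" is thousands of times the mean.
```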

[4] If you're curious, here's how they did it. First, they took the power-law model and the parameter value I estimated (alpha=2.38) and computed the model's complementary cumulative distribution function. The "ccdf" tells you the probability of observing an event at least as large as x, for any choice of x. Plugging in x=2749 yields p=0.0000282. This gives the probability of any single event being 9/11-sized or larger. The report was using an older, smaller data set with N=9101 deadly events worldwide. The expected number of these events 9/11-sized or larger is then p*N=0.257. Finally, if events are independent then the probability that we observe at least one event 9/11-sized or larger in N trials is 1-exp(-p*N)=0.226. Thus, about a 23% chance.
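In code, the footnote's arithmetic is just a few lines, taking the report's p and N as given:

```python
from math import exp

p = 0.0000282           # P(a single event is 9/11-sized or larger), from the model's ccdf
N = 9101                # deadly events in the older data set
print(p * N)            # expected number of such events: ~0.257
print(1 - exp(-p * N))  # probability of at least one: ~0.226, i.e., about 23%
```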

[5] This changes the calculations only slightly. Using alpha=2.4 (the estimate I published in 2009), given that a "severe" event happens, the probability that it is at least as large as 9/11 is p=0.00038473 and there were only N=1024 of them from 1968-2008. Note that the probability is about a factor of 10 larger than the DoD estimate while the number of "severe" events is about a factor of 10 smaller, which implies that we should get a probability estimate close to theirs.
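As a check, the 32.6% figure follows from the same arithmetic, assuming the continuous power-law form P(X >= x | severe) = (x/10)^(1-alpha) for the tail:

```python
from math import exp

alpha, x_min, N_severe = 2.4, 10.0, 1024
p = (2749 / x_min) ** (1 - alpha)      # ~0.00038, the per-severe-event probability
print(1 - exp(-p * N_severe))          # ~0.326, i.e., the 32.6% quoted above
```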

[6] In "Power-law distributions in empirical data," SIAM Review 51(4), 661-703 (2009), with Cosma Shalizi and Mark Newman.

[7] This improvement is mildly non-trivial, so perhaps too much effort for an already long-winded blog entry.

posted September 9, 2011 02:17 PM in Terrorism | permalink | Comments (1)

November 18, 2010

Algorithms, numbers and quantification

On the plane back from Europe the other day I was reading this month's Atlantic Monthly and happened across a little piece by Alexis Madrigal called "Take the Data Out of Dating" about OkCupid's clever use of algorithms to increase the frequency of "three-ways" (which in dating-website-speak means a person sent a note, received a reply, and fired off a follow-up; not exactly a direct measure of their success at helping people find love, but that's their proxy of choice). It's a thoughtful piece, largely because the punch line resonates with many of my recent feelings about the creeping use of scientometrics in the attempts of higher education administrators to understand what exactly their faculty have done or not done, and how they compare to their peers. (I could list a dozen other ways numbers are increasingly invading decision-making processes that used to be done based on principles and qualities, but ack, there are so many.) More generally, I think it puts in a good perspective what exactly we lose when we focus on using numbers or algorithms to automate decisions about inherently human problems. Here it is:

Algorithms are made to restrict the amount of information the user sees—that’s their raison d'etre. By drawing on data about the world we live in, they end up reinforcing whatever societal values happen to be dominant, without our even noticing. They are normativity made into code—albeit a code that we barely understand, even as it shapes our lives.

We’re not going to stop using algorithms. They’re too useful. But we need to be more aware of the algorithmic perversity that’s creeping into our lives. The short-term fit of a dating match or a Web page doesn’t measure the long-term value it may hold. Statistically likely does not mean correct, or just, or fair. Google-generated kadosh [ed: best choice] is meretricious, offering a desiccated kind of choice. It’s when people deviate from what we predict they’ll do that they prove they are individuals, set apart from all others of the human type.

posted November 18, 2010 10:41 AM in Thinking Aloud | permalink | Comments (1)

October 27, 2010

Story-telling, statistics, and other grave insults

The New York Times (and the NYT Magazine) has been running a series of pieces about math, science and society written by John Allen Paulos, a mathematics professor at Temple University and author of several popular books. His latest piece caught my eye because it's a topic close to my heart: stories vs. statistics. That is, when we seek to explain something [1], do we use statistics and quantitative arguments using mainly numbers or do we use stories and narratives featuring actors, motivations and conscious decisions? [2] Here are a few good excerpts from Paulos's latest piece:

...there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable may be telling.

...

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

It seems to me that for science, the correct emphasis should be on the statistics. That is, we should be more worried about observing something that is not really there. But as humans, statistics is often too dry and too abstract for us to understand intuitively, to generate that comfortable internal feeling of understanding. Thus, our peers often demand that we give not only the statistical explanation but also a narrative one. Sometimes, this can be tricky because the structures of the two modes of explanation are in fundamental opposition, for instance, if the narrative must include notions of randomness or stochasticity. In such a case, there is no reason for any particular outcome, only reasons for ensembles or patterns of outcomes. The idea that things can happen for no reason is highly counterintuitive [3], and yet in the statistical sciences (which is today essentially all sciences), this is often a critical part of the correct explanation [4]. For the social sciences, I think this is an especially difficult balance to strike because our intuition about how the world works is built up from our own individual-level experiences, while many of the phenomena we care about are patterns above that level, at the group or population levels [5].

This is not a new observation and it is not a tension exclusive to the social sciences. For instance, here is Stephen Jay Gould (1941-2002), the eminent American paleontologist, speaking about the differences between microevolution and macroevolution (excerpted from Ken McNamara's "Evolutionary Trends"):

In Flatland, E.A. Abbot's (1884) classic science-fiction fable about realms of perception, a sphere from the world of three dimensions enters the plane of two-dimensional Flatland (where it is perceived as an expanding circle). In a notable scene, he lifts a Flatlander out of his own world and into the third dimension. Imagine the conceptual reorientation demanded by such an utterly new and higher-order view. I do not suggest that the move from organism to species could be nearly so radical, or so enlightening, but I do fear that we have missed much by over reliance on familiar surroundings.

An instructive analogy might be made, in conclusion, to our successful descent into the world of genes, with resulting insight about the importance of neutralism in evolutionary change. We are organisms and tend to see the world of selection and adaptation as expressed in the good design of wings, legs, and brains. But randomness may predominate in the world of genes--and we might interpret the universe very differently if our primary vantage point resided at this lower level. We might then see a world of largely independent items, drifting in and out by the luck of the draw--but with little islands dotted about here and there, where selection reins in tempo and embryology ties things together. What, then, is the different order of a world still larger than ourselves? If we missed the world of genic neutrality because we are too big, then what are we not seeing because we are too small? We are like genes in some larger world of change among species in the vastness of geological time. What are we missing in trying to read this world by the inappropriate scale of our small bodies and minuscule lifetimes?

To quote Howard T. Odum (1924-2002), the eminent American ecologist, on a similar theme: "To see these patterns which are bigger than ourselves, let us take a special view through the macroscope." Statistical explanations, and the weird and diffuse notions of causality that come with them, seem especially well suited to express in a comprehensible form what we see through this "macroscope" (and often what we see through microscopes). And increasingly, our understanding of many important phenomena, be they social network dynamics, terrorism and war, sustainability, macroeconomics, ecosystems, the world of microbes and viruses or cures for complex diseases like cancer, depends on us seeing clearly through some kind of macroscope to understand the statistical behavior of a population of potentially interacting elements.

Seeing clearly, however, depends on finding new and better ways to build our intuition about the general principles that take inherent randomness or contingency at the individual level and produce complex patterns and regularities at the macroscopic or population level. That is, to help us understand the many counter-intuitive statistical mechanisms that shape our complex world, we need better ways of connecting statistics with stories.

27 October 2010: This piece is also being featured on Nature's Soapbox Science blog.

-----

[1] Actually, even defining what we mean by "explain" is a devilishly tricky problem. Invariably, different fields of scientific research have (slightly) different definitions of what "explain" means. In some cases, a statistical explanation is sufficient, in others it must be deterministic, while in still others, even if it is derived using statistical tools, it must be rephrased in a narrative format in order to provide "intuition". I'm particularly intrigued by the difference between the way people in machine learning define a good model and the way people in the natural sciences define it. The difference appears, to my eye, to be different emphases on the importance of intuitiveness or "interpretability"; it's currently deemphasized in machine learning while the opposite is true in the natural sciences. Fortunately, a growing number of machine learners are interested in building interpretable models, and I expect great things for science to come out of this trend.

In some areas of quantitative science, "story telling" is a grave insult, leveled whenever a scientist veers too far from statistical modes of explanation ("science") toward narrative modes ("just so stories"). While sometimes a justified complaint, I think completely deemphasizing narratives can undermine scientific progress. Human intuition is currently our only way to generate truly novel ideas, hypotheses, models and principles. Until we can teach machines to generate truly novel scientific hypotheses from leaps of intuition, narratives, supported by appropriate quantitative evidence, will remain a crucial part of science.

[2] Another fascinating aspect of the interaction between these two modes of explanation is that one seems to be increasingly invading the other: narratives, at least in the media and other kinds of popular discourse, increasingly ape the strong explanatory language of science. For instance, I wonder when Time Magazine started using formulaic titles for its issues like "How X happens and why it matters" and "How X affects Y", which dominate its covers today. There are a few individual writers who are amazingly good at this form of narrative, with Malcolm Gladwell being the one that leaps most readily to my mind. His writing is fundamentally in a narrative style, stories about individuals or groups or specific examples, but the language he uses is largely scientific, speaking in terms of general principles and notions of causality. I can also think of scientists who import narrative discourse into their scientific writing to great effect. Doing so well can make scientific writing less boring and less opaque, but if it becomes more important than the science itself, it can lead to "pathological science".

[3] Which is perhaps why the common belief that "everything happens for a reason" persists so strongly in popular culture.

[4] It cannot, of course, be the entire explanation. For instance, the notion among Creationists that natural selection is equivalent to "randomness" is completely false; randomness is a crucial component of the way natural selection constructs complex structures (without the randomness, natural selection could not work) but the selection itself (what lives versus what dies) is highly non-random, and that is what makes it such a powerful process.

What makes statistical explanations interesting is that many of the details are irrelevant, i.e., generated by randomness, but the general structure, the broad brush-strokes of the phenomena are crucially highly non-random. The chief difficulty of this mode of investigation is in correctly separating these two parts of some phenomena, and many arguments in the scientific literature can be understood as a disagreement about the particular separation being proposed. Some arguments, however, are more fundamental, being about the very notion that some phenomena are partly random rather than completely deterministic.

[5] Another source of tension on this question comes from our ambiguous understanding of the relationship between our perception and experience of free will and the observation of strong statistical regularities among groups or populations of individuals. This too is a very old question. It tormented Rev. Thomas Malthus (1766-1834), the great English demographer, in his efforts to understand how demographic statistics like birth rates could be so regular despite the highly contingent nature of any particular individual's life. Malthus's struggles later inspired Ludwig Boltzmann (1844-1906), the famous Austrian physicist, to use a statistical approach to model the behavior of gas particles in a box. (Boltzmann had previously been using a deterministic approach to model every particle individually, but found it too complicated.) This contributed to the birth of statistical physics, one of the three major branches of modern physics and arguably the branch most relevant to understanding the statistical behavior of populations of humans or genes.

posted October 27, 2010 07:15 AM in Scientifically Speaking | permalink | Comments (0)

October 05, 2010

Steven Johnson on where good ideas come from

Remember that fun cartoonist video paired with Philip Zimbardo talking about our funny relationship with time? Unsurprisingly, there are more such videos. I like this one, on Steven Johnson's new book "Where Good Ideas Come From". There's not a whole lot really revolutionary in what he says, but it's a good and thoughtful reminder that good ideas need time to incubate and they often need to be shared, borrowed or recombined (like genes, no?) in order to reach their full potential.

Of course, there's an important flip side to this sensible-sounding idea: don't bad ideas often also incubate for a long time and get shared, borrowed or recombined? The real question would seem to be: are there any genuine differences between where good ideas come from and where bad ideas come from? Selfishly, I'd like to think that the pressure of "publish or perish" and "fund thyself" is an example of how to encourage the production of bad ideas, but then I have to remind myself that many of the people who launched the scientific revolution in the 1600s in England, founded the Royal Society, and changed the world forever also had to work as medical doctors in order to fund their research on the side.

Tip to Nikolaus.

posted October 5, 2010 04:30 PM in Thinking Aloud | permalink | Comments (1)

January 31, 2010

Why I think the iPad is good

Since Apple announced it, I've found myself in the strange position of defending the iPad to my friends, who uniformly think it sucks. I don't think a single one of them agrees with me that the iPad is good. Most of them cite things like the name, the lack of multi-tasking, and the lack of deep customizability (i.e., programming) as reasons why it sucks. (If you want a full list of these kinds of reasons, the Huffington Post gives nine of them). I don't disagree with these complaints at all, although I'm pretty sure that some of them will be fixed later on (like the multi-tasking).

Some complaints are more thoughtful, that the iPad is a closed device (like the iPod touch / iPhone), that you can only do the things on it that Apple allows you to and that these are basically focused on consuming media (and spending money at Apple's online stores). (These points are made well by io9's review of the iPad). And, I don't disagree that this is a problem with Apple's business strategy for the iPad, and that it will limit its appeal among more serious computer users.

But, I think all of these complaints miss the point of what is good about the iPad.

The iPad is good because it will push the common experience of computing more toward how we interact with every other device / object in the world, i.e., pushing, pulling, prodding and poking them, and this is the future. (Imagine programming a computer using a visual programming language, rather than the arcane character-based syntax we currently use; I don't know if it would be genuinely better, but I'd sure like to find out.) One thing that sucks about how current computers are designed is how baroque their interfaces are. Getting them to do even simple things requires learning complex sequences of actions using complicated indirect interfaces. By making the mode of interaction more direct, devices like the iPad (even with all of its flaws) will make many kinds of simple interactions with computing devices easier, and that's a good thing.

I think the iPad is disappointing to many techy people because they wanted it to completely replace their current laptop. They wanted a device that would do everything they can do now, but using a cool multi-touch interface. (To be honest, it's not even clear that this is possible.) But I think Apple knows that these people are not the target audience for the iPad. The people who will buy and love the iPad are your parents and your children. These are people who primarily want a casual computing device (for things like online shopping, reading the news and gossip sites, listening to music, watching tv/movies, reading email, etc.), who don't care too much about hacking their computers, and who don't mind playing inside Apple's closed world (which more and more of us do anyway; think iTunes).

If things go the way I think they will, in 20 years, the kids I'll be teaching at CU Boulder will have had their first experience with computers on something like an iPad, and they're going to expect all "real" computers to be as physically intuitive as it is. They're going to hate keyboards and mice (which will go the way of standard transmissions in cars), and they're going to think current laptops are "clunky". They'll also know that serious computing activities require a serious computer (something more customizable and programmable than an iPad). But most people don't do or care about serious computing activities, and I think Apple knows this.

So, I think most of the criticism of the iPad is sour grapes (by techy people who misunderstand whom Apple is targeting with the iPad and, more fundamentally, what Apple has done to the future of human-computer interaction, which is going to be dominated by multi-touch interfaces like the iPad's). I hope the iPad is successful because I want interacting with computers to suck less. Of course, I also want it to run multiple apps, have a camera for video-conferencing, use open standards and file formats, do handwriting recognition, and generally replace my laptop. These things will come, I think, but to become real, they need a device like the iPad to call home.

posted January 31, 2010 12:56 PM in Thinking Aloud | permalink | Comments (4)

August 21, 2007

Sleight of mind

There's a nice article in the NYTimes right now that ostensibly discusses the science of magic, or rather, the science of consciousness and how magicians can engineer false assumptions through their understanding of it. Some of the usual suspects make appearances, including Teller (of Penn and Teller), Daniel Dennett (that rascally Tufts philosopher who has been in the news much of late over his support of atheism and criticism of religion) and Irene Pepperberg, whose African parrot Alex has graced this blog before (here and here). Interestingly, the article points out a potentially distant forerunner of Alex named Clever Hans, a horse who learned not arithmetic, but his trainer's unconscious suggestions about what the right answers were (which sounds like pretty intelligent behavior to me, honestly). Another of the usual suspects is the wonderful video that, with the proper instruction to viewers, conceals a person in a gorilla suit walking across the screen.

The article is a pleasant and short read, but what surprised me the most is that philosophers are, apparently, still arguing over whether consciousness is a purely physical phenomenon or whether it has some additional immaterial component, called "qualia". Dennett, naturally, has the best line about this.

One evening out on the Strip, I spotted Daniel Dennett, the Tufts University philosopher, hurrying along the sidewalk across from the Mirage, which has its own tropical rain forest and volcano. The marquees were flashing and the air-conditioners roaring — Las Vegas stomping its carbon footprint with jackboots in the Nevada sand. I asked him if he was enjoying the qualia. “You really know how to hurt a guy,” he replied.

For years Dr. Dennett has argued that qualia, in the airy way they have been defined in philosophy, are illusory. In his book “Consciousness Explained,” he posed a thought experiment involving a wine-tasting machine. Pour a sample into the funnel and an array of electronic sensors would analyze the chemical content, refer to a database and finally type out its conclusion: “a flamboyant and velvety Pinot, though lacking in stamina.”

If the hardware and software could be made sophisticated enough, there would be no functional difference, Dr. Dennett suggested, between a human oenophile and the machine. So where inside the circuitry are the ineffable qualia?

This argument is just a slightly different version of the well-worn Chinese room thought experiment proposed by John Searle. Searle's goal was to undermine the idea that the wine-tasting machine was actually equivalent to an oenophile (so-called "strong" artificial intelligence), but I think his argument actually shows that the whole notion of "intelligence" is highly problematic. In other words, one could argue that the wine-tasting machine as a whole (just like a human being as a whole) is "intelligent", but the distinction between intelligence and non-intelligence becomes less and less clear as one considers poorer and poorer versions of the machine, e.g., if we start mucking around with its internal program, so that it makes mistakes with some regularity. The root of this debate, which I think has been well understood by critics of artificial intelligence for many years, is that humans are inherently egotistical beings, and we like feeling that we are special in some way that other beings (e.g., a horse or a parrot) are not. So, when pressed to define intelligence scientifically, we continue to move the goal posts to make sure that humans are always a little more special than everything else, animal or machine.

In the end, I have to side with Alan Turing, who basically said that intelligence is as intelligence does. I'm perfectly happy to dole out the term "intelligence" to all manner of things or creatures to various degrees. In fact, I'm pretty sure that we'll eventually (assuming that we don't kill ourselves off as a species in the meantime) construct an artificial intelligence that is, for all intents and purposes, more intelligent than a human, if only because it won't have the innumerable quirks and idiosyncrasies (e.g., optical illusions and humans' difficulty in predicting what will make us the happiest) in human intelligence that are there because we are evolved beings rather than designed beings.

posted August 21, 2007 09:52 AM in Thinking Aloud | permalink | Comments (22)

March 07, 2007

Making virtual worlds grow up

On February 20th, SFI cosponsored a business network topical meeting on "Synthetic Environments and Enterprise", or "Collective Intelligence in Synthetic Environments", in Santa Clara. Although these are fancy names, the idea is pretty simple. Online virtual worlds are pretty complex environments now, and several have millions of users who spend an average of 20-25 hours per week exploring, building, or otherwise inhabiting these places. My long-time friend Nick Yee has made a career out of studying the strange psychological effects and social behaviors stimulated by these virtual environments. It is only just now dawning on many businesses that games have something that could help enterprise, namely, that many games are fun, while much work is boring. This workshop was designed around exploring a single question: How can we use the interesting aspects of games to make work more interesting, engaging, productive, and otherwise less boring?

Leighton Read (who sits on SFI's board of trustees) was the general ringmaster for the day, and gave, I think, a persuasive pitch for how well-designed incentive structures can be used to produce useful stuff for businesses [1]. Thankfully, it seems that people interested in adapting game-like environments to other domains are realizing that military applications [2] are pretty limited. Some recent clever examples of games that produce something useful are the ESP game, in which you try to guess the text tags (a la flickr) that another player will give to a photo you both see; the Korean search giant Naver, in which you write answers to search queries and are scored on how much people like your result; and, Dance Dance Revolution, where you compete in virtual dance competitions by actually exercising. What these games have in common is that they break down the usual button-mashing paradigm by creating social or physical incentives for achievement.

One of the main themes of the workshop was exactly this kind of strategic incentive structuring [3], along with the dangling question of, How can we design useful incentive structures to facilitate hard work? In the context of games themselves (video or otherwise), this is a bit like asking, What makes a game interesting enough to spend time playing? A few possibilities are escapism / being someone else / a compelling story line (a la movies and books), a competitive aspect (as in card games), beautiful imagery (3d worlds), reflex and precision training (shooters and jumpers), socialization (most MMOs), outsmarting a computer (most games from the 80s and 90s when AI was simplistic), and even creating something unique / of value (like crafting for a virtual economy). MMOs have many of these aspects, and perhaps that's what makes them so widely appealing - that is, it's not that MMOs manage to get any one thing right about interesting incentive structures [5], but rather they have something for everyone.

Second Life (SL), an MMO in which all its content is user-created (or, increasingly, business-built), got a lot of lip service at the workshop as being a panacea for enterprise and gaming [6]. I don't believe this hype for a moment; Second Life was designed to allow user-created objects, but not to be a platform for complex, large-scale, or high-bandwidth interactions. Yet, businesses (apparently) want to use SL as a way to interact with their clients and customers, a platform for teleconferencing, broadcasting, advertising, etc., and a virtual training ground for employees. Sure, all of these things are possible under the SL system, but none of them can work particularly well [7] because SL wasn't designed to be good at facilitating them. At this point, SL is just a fad, and in my mind, there are only two things that SL does better than other, more mature technologies (like instant messaging, webcams, email, voice-over-IP, etc.). The first is to make it possible for account executives to interact with their customers in a more impromptu fashion - when they log into the virtual world, they can get pounced on by needy customers that previously would have had to go through layers of bureaucracy to get immediate attention. Of course, this kind of accessibility will disappear when there are hundreds or thousands of potential pouncers. The second is that it allows businesses to bring together people with common passions in a place where they can interact over them [8].

Neither of these things is particularly novel. The Web was the original way to bring like-minded individuals together over user-created content, and the Web 2.0 phenomenon allows more people to do this, in a more meaningful way than I think Second Life ever will. What the virtual-world aspect gives to this kind of collective organization is a more intuitive feeling of identification with a place and a group. That is, it takes a small mental flip to think of a username and a series of text statements as being an intentional, thinking person, whereas our monkey brains find it easier to think of a polygonal avatar with arms and legs as being a person. In my mind, this is the only reason to prefer virtual-world mediated interactions over other forms of online interaction, and at this point, aside from entertainment and novelty, those other mediums are much more compelling than the virtual worlds.

-----

[1] There's apparently even an annual conference dedicated to exploring these connections.

[2] Probably the best known (and reviled) example of this is America's Army, which is a glorified recruiting tool for the United States Army, complete with all the subtle propaganda you'd expect from such a thing: the US is the good guys, only the bad guys do bad things like torture, and military force is always the best solution. Of course, non-government-sponsored games aren't typically much better.

[3] I worry, however, that the emphasis on social incentives will backfire on many of these enterprise-oriented endeavors. That is, I've invested a lot of time and energy in building and maintaining my local social network, and I'd be pretty upset if a business tried to co-opt those resources for marketing or other purposes [4].

[4] In poking around online, I discovered the blog EmergenceMarketing that focuses on precisely this kind of issue within the marketing community. I suspect that what's causing the pressure on the marketing community is not just increasing competition for people's limited attention (the information deluge, as I like to call it), but also the increasing ease by which people can organize and communicate on issues related to overtly selfish corporate practices; it's not pleasant to be treated like a rock from which money is to be squeezed.

[5] The fact that populations migrate from game to game (especially when a new one is released) suggests that most MMOs are far from perfect on many of these things, and that the novelty of a new system is enough to break the loyalty of many players to their investment in the old system.

[6] The hype around Second Life is huge enough to produce parodies, and to obscure some significant problems with the system's infrastructure, design, scalability, etc. For instance, see Daren Barefoot's commentary on the Second Life hype.

[7] The best example of this was the attempt to telecast Tom Malone's talk (complete with PowerPoint slides) from MIT into the Cisco Second Life amphitheater and the Cisco conference room I was sitting in. The sound was about 20 seconds delayed, the slides out of sync, and the talk generally reduced to a mechanical reading of prepared remarks, for both worlds. Why was this technology used instead of another, better adapted technology like a webcam? Multicast is a much better adapted technology for this kind of situation, and gets used for some extremely popular online events. In Second Life, the Cisco amphitheater could only host about a hundred or so users; multicast can reach tens of thousands.

[8] The example of this that I liked was Pontiac. Apparently, they first considered building a virtual version of their HQ in SL, but some brilliant person encouraged them instead to build a small showroom on a large SL island, and then let users create little pavilions nearby oriented around their passion for cars (not Pontiacs, just cars in general). The result is a large community of car enthusiasts who interact and hang out around the Pontiac compound. In a sense, this is just the kind of thing the Web has been facilitating for years, except that now there's a 3d component to the virtual community that many web sites have fostered. So, punchline: what gets businesses excited about SL is the thing that (eventually) got them excited about the Web; SL is the Web writ 3d, but without its inherent scalability, flexibility, decentralization, etc.

posted March 7, 2007 01:43 PM in Thinking Aloud | permalink | Comments (0)

November 25, 2006

Unreasonable effectiveness (part 3)

A little more than twenty years after Hamming's essay, the computer scientist Bernard Chazelle penned an essay on the importance of the algorithm, in which he offers his own perspective on the unreasonable effectiveness of mathematics.

Mathematics shines in domains replete with symmetry, regularity, periodicity -- things often missing in the life and social sciences. Contrast a crystal structure (grist for algebra's mill) with the World Wide Web (cannon fodder for algorithms). No math formula will ever model whole biological organisms, economies, ecologies, or large, live networks.

Perhaps this, in fact, is what Hamming meant by saying that much of physics is logically deducible, that the symmetries, regularities, and periodicities of physical nature constrain it in such strong ways that mathematics alone (and not something more powerful) can accurately capture its structure. But, complex systems like organisms, economies and engineered systems don't have to, and certainly don't seem to, respect those constraints. Yet, even these things exhibit patterns and regularities that we can study.

Clearly, my perspective matches Chazelle's, that algorithms offer a better path toward understanding complexity than the mathematics of physics. Or, to put it another way, that complexity is inherently algorithmic. As an example of this kind of inherent complexity through algorithms, Chazelle cites Craig Reynolds' boids model. Boids is one of the canonical simulations of "artificial life"; in this particular simulation, a trio of simple algorithmic rules produce surprisingly realistic flocking / herding behavior when followed by a group of "autonomous" agents [1]. There are several other surprisingly effective algorithmic models of complex behavior (as I mentioned before, cellular automata are perhaps the most successful), but they all exist in isolation, as models of disconnected phenomena.
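To make the boids example concrete, here is a minimal sketch of the three rules (separation, alignment, cohesion). The parameter values are illustrative guesses, not Reynolds' originals, and the point is only that a handful of local rules is enough to produce flock-like motion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, world = 50, 200, 100.0
pos = rng.uniform(0, world, (n, 2))
vel = rng.uniform(-1, 1, (n, 2))

for _ in range(steps):
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        nbrs = (dist > 0) & (dist < 20)          # neighbors within a viewing radius
        if not nbrs.any():
            continue
        cohesion = offsets[nbrs].mean(axis=0)            # steer toward neighbors' center
        alignment = vel[nbrs].mean(axis=0) - vel[i]      # match neighbors' average velocity
        close = (dist > 0) & (dist < 5)
        separation = -offsets[close].sum(axis=0) if close.any() else 0.0  # avoid crowding
        vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 3.0, vel * 3.0 / speed, vel)  # cap the speed
    pos = (pos + vel) % world                            # wrap-around world

print(pos[:5])  # positions after the run; plot them over time to see the flocking
```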

So, I think one of the grand challenges for a science of complexity will be to develop a way to collect the results of these isolated models into a coherent framework. Just as we have powerful tools for working with a wide range of differential-equation models, we need similar tools for working with competitive agent-based models, evolutionary models, etc. That is, we would like to be able to write down the model in an abstract form, and then draw strong, testable conclusions about it, without simulating it. For example, imagine being able to write down Reynolds' three boids rules and derive the observed flocking behavior before coding them up [2]. To me, that would prove that the algorithm is unreasonably effective at capturing complexity. Until then, it's just a dream.

Note: See also part 1 and part 2 of this series of posts.

[1] This citation is particularly amusing to me considering that most computer scientists seem to be completely unaware of the fields of complex systems and artificial life. This is, perhaps, attributable to computer science's roots in engineering and logic, rather than in studying the natural world.

[2] It's true that problems of intractability (P vs NP) and undecidability lurk behind these questions, but analogous questions lurk behind much of mathematics (Thank you, Godel). For most practical situations, mathematics has sidestepped these questions. For most practical situations (where here I'm thinking more of modeling the natural world), can we also sidestep them for algorithms?

posted November 25, 2006 01:19 PM in Things to Read | permalink | Comments (0)

November 24, 2006

Unreasonable effectiveness (part 2)

In keeping with the theme [1], twenty years after Wigner's essay on The Unreasonable Effectiveness of Mathematics in the Natural Sciences, Richard Hamming (who has graced this blog previously) wrote a piece by the same name for The American Mathematical Monthly (87 (2), 1980). Hamming takes issue with Wigner's essay, suggesting that the physicist has dodged the central question of why mathematics has been so effective. In Hamming's piece, he offers a few new thoughts on the matter: primarily, he suggests, mathematics has been successful in physics because much of it is logically deducible, and that we often change mathematics (i.e., we change our assumptions or our framework) to fit the reality we wish to describe. His conclusion, however, puts the matter best.

From all of this I am forced to conclude both that mathematics is unreasonably effective and that all of the explanations I have given when added together simply are not enough to explain what I set out to account for. I think that we -- meaning you, mainly -- must continue to try to explain why the logical side of science -- meaning mathematics, mainly -- is the proper tool for exploring the universe as we perceive it at present. I suspect that my explanations are hardly as good as those of the early Greeks, who said for the material side of the question that the nature of the universe is earth, fire, water, and air. The logical side of the nature of the universe requires further exploration.

Hamming, it seems, has dodged the question as well. But, Hamming's point that we have changed mathematics to suit our needs is important. Let's return to the idea that computer science and the algorithm offer a path toward capturing the regularity of complex systems, e.g., social and biological ones. Historically, we've demanded that algorithms yield guarantees on their results, and that they don't take too long to return them. For example, we want to know that our sorting algorithm will actually sort a list of numbers, and that it will do so in the time we allow. Essentially, our formalisms and methods of analysis in computer science have been driven by engineering needs, and our entire field reflects that bias.

But, if we want to use algorithms to accurately model complex systems, it stands to reason that we should orient ourselves toward constraints that are more suitable for the kinds of behaviors those systems exhibit. In mathematics, it's relatively easy to write down an intractable system of equations; similarly, it's easy to write down an algorithm whose behavior is impossible to predict. The trick, it seems, will be to develop simple algorithmic formalisms for modeling complex systems that we can analyze and understand in much the same way that we do for mathematical equations.

I don't believe that one set of formalisms will be suitable for all complex systems, but perhaps biological systems are consistent enough that we could use one set for them, and perhaps another for social systems. For instance, biological systems are all driven by metabolic needs, and by a need to maintain structure in the face of degradation. Similarly, social systems are driven by, at least, competitive forces and asymmetries in knowledge. These are needs that things like sorting algorithms have no concept of.

Note: See also part 1 and part 3 of this series of posts.

[1] A common theme, it seems. What topic wouldn't be complete without its own wikipedia article?

posted November 24, 2006 12:21 PM in Things to Read | permalink | Comments (2)

November 23, 2006

Unreasonable effectiveness (part 1)

Einstein apparently once remarked that "The most incomprehensible thing about the universe is that it is comprehensible." In a famous paper in Communications on Pure and Applied Mathematics (13 (1), 1960), the physicist Eugene Wigner (Nobel in 1963 for atomic theory) discussed "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". The essay is not too long (for an academic piece), but I think this example of the application of mathematics gives the best taste of what Wigner is trying to point out.

The second example is that of ordinary, elementary quantum mechanics. This originated when Max Born noticed that some rules of computation, given by Heisenberg, were formally identical with the rules of computation with matrices, established a long time before by mathematicians. Born, Jordan, and Heisenberg then proposed to replace by matrices the position and momentum variables of the equations of classical mechanics. They applied the rules of matrix mechanics to a few highly idealized problems and the results were quite satisfactory.

However, there was, at that time, no rational evidence that their matrix mechanics would prove correct under more realistic conditions. Indeed, they say "if the mechanics as here proposed should already be correct in its essential traits." As a matter of fact, the first application of their mechanics to a realistic problem, that of the hydrogen atom, was given several months later, by Pauli. This application gave results in agreement with experience. This was satisfactory but still understandable because Heisenberg's rules of calculation were abstracted from problems which included the old theory of the hydrogen atom.

The miracle occurred only when matrix mechanics, or a mathematically equivalent theory, was applied to problems for which Heisenberg's calculating rules were meaningless. Heisenberg's rules presupposed that the classical equations of motion had solutions with certain periodicity properties; and the equations of motion of the two electrons of the helium atom, or of the even greater number of electrons of heavier atoms, simply do not have these properties, so that Heisenberg's rules cannot be applied to these cases.

Nevertheless, the calculation of the lowest energy level of helium, as carried out a few months ago by Kinoshita at Cornell and by Bazley at the Bureau of Standards, agrees with the experimental data within the accuracy of the observations, which is one part in ten million. Surely in this case we "got something out" of the equations that we did not put in.

As someone (apparently) involved in the construction of "a physics of complex systems", I have to wonder whether mathematics is still unreasonably effective at capturing these kinds of inherent patterns in nature. Formally, the kind of mathematics that physics has historically used is equivalent to a memoryless computational machine (if there is some kind of memory, it has to be explicitly encoded into the current state); but the algorithm is a more general form of computation that can express ideas that are significantly more complex, at least partially because it inherently utilizes history. This suggests to me that a physics of complex systems will be intimately connected to the mechanics of computation itself, and that select tools from computer science may ultimately let us express the structure and behavior of complex, e.g., social and biological, systems more effectively than the mathematics used by physics.

One difficulty in this endeavor, of course, is that the mathematics of physics is already well-developed, while the algorithms of complex systems are not. There have been some wonderful successes with algorithms already, e.g., cellular automata, but it seems to me that there's a significant amount of cultural inertia here, perhaps at least partially because there are so many more physicists than computer scientists working on complex systems.
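
To give a concrete, if toy, sense of what I mean by an algorithmic description, here is a minimal sketch (in Python, my own choice of illustration; the particular rule and grid size are arbitrary) of an elementary cellular automaton. The update rule takes a couple of lines to state, yet the history it generates can be strikingly intricate.

    # A toy elementary cellular automaton (Wolfram's rule 30 by default).
    # Each cell looks at its left neighbor, itself, and its right neighbor,
    # and the 8-bit number 'rule' encodes the new state for each of the 8 cases.

    def step(cells, rule=30):
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 41
    cells[20] = 1                      # a single 'on' cell in the middle
    for _ in range(20):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)

Twenty rows of output are already enough to see an aperiodic, history-dependent pattern that no simple closed-form expression readily summarizes.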

Note: See also part 2 and part 3 of this series of posts.

posted November 23, 2006 11:31 AM in Things to Read | permalink | Comments (2)

September 08, 2006

Academic publishing, tomorrow

Imagine a world where academic publishing is handled purely by academics, rather than ruthless, greedy corporate entities. [1] Imagine a world where hiring decisions were made on the technical merit of your work, rather than the coterie of journals associated with your c.v. Imagine a world where papers are living documents, actively discussed and modified (wikified?) by the relevant community of interested intellectuals. This, and a bit more, is the future, according to Adam Rogers, a senior associate editor at "Wired" magazine. (tip to The Geomblog)

The gist of Rogers' argument is that the Web will change academic publishing into this utopian paradise of open information. I seriously doubt things will be like he predicts, but he does raise some excellent points about how the Web is facilitating new ways of communicating technical results. For instance, he mentions a couple of on-going experiments in this area:

In other quarters, traditional peer review has already been abandoned. Physicists and mathematicians today mainly communicate via a Web site called arXiv. (The X is supposed to be the Greek letter chi; it's pronounced "archive." If you were a physicist, you'd find that hilarious.) Since 1991, arXiv has been allowing researchers to post prepublication papers for their colleagues to read. The online journal Biology Direct publishes any article for which the author can find three members of its editorial board to write reviews. (The journal also posts the reviews – author names attached.) And when PLoS ONE launches later this year, the papers on its site will have been evaluated only for technical merit – do the work right and acceptance is guaranteed.

It's a bit hasty to claim that peer review has been "abandoned", but the arxiv has certainly almost completely supplanted some journals in their role of disseminating new research [2]. This is probably most true for physicists, since they're the ones who started the arxiv; other fields, like biology, don't have a pre-print archive (that I know of), but they seem to be moving toward open access journals for the same purpose. In computer science, we already have something like this, since the primary venue for publication is conferences (which are peer reviewed, unlike conferences in just about every other discipline), whose papers are typically picked up by CiteSeer.

It seems that a lot of people are thinking or talking about open access this week. The Chronicle of Higher Education has a piece on the growing momentum for open-access journals. Its main message is the new letter, signed by 53 presidents of liberal arts colleges (including my own Haverford College), in support of the bill currently in Congress (although unlikely to pass this year) that would mandate that all federally funded research eventually be made publicly available. The comments from the publishing industry are unsurprisingly self-interested and uninspiring, but they also betray a great deal of arrogance and greed. I wholeheartedly support more open access to articles - publicly funded research should be free to the public, just like public roads are free for everyone to use.

But, the bigger question here is, Could any of these various alternatives to the pay-for-access model really replace journals? I'm less sure of the future here, as journals also serve a couple of other roles that things like the arxiv were never intended to fill. That is, journals run the peer review process, which, at its best, prevents erroneous research from getting a stamp of "community approval" and thereby distracting researchers for a while as they a) figure out that it's mistaken, and b) write new papers to correct it. This is why, I think, there is a lot of crap on the arxiv. A lot of authors police themselves quite well, and end up submitting nearly error-free and highly competent work to journals, but the error-checking process is crucial, I think. Sure, peer review does miss a lot of errors (and frauds), but, to paraphrase Mason Porter paraphrasing Churchill on democracy, peer review is the worst form of quality control for research, except for all the others. The real point here is that until something comes along that can replace journals as the "community approved" body of work, I doubt they'll disappear. I do hope, though, that they'll morph into more benign organizations. PNAS and PLoS are excellent role models for the future, I think. And, they also happen to publish really great research.

Another point Rogers makes about the changes the Web is encouraging is a social one.

[...] Today’s undergrads have ... never functioned without IM and Wikipedia and arXiv, and they’re going to demand different kinds of review for different kinds of papers.

It's certainly true that I conduct my research very differently because I have access to Wikipedia, arxiv, email, etc. In fact, I would say that the real change these technologies will have on the world of research will be to decentralize it a little. It's now much easier to be a productive, contributing member of a research community without being down the hall from your colleagues and collaborators than it was 20 years ago. These electronic modes of communication just make it easier for information to flow freely, and I think that ultimately has a very positive effect on research itself. Taking that role away from the journals suggests that they will become more about granting that stamp of approval than anything else. With its increased relative importance, who knows, perhaps journals will do a better job at running the peer review process (they could certainly use the Web, etc. to do a better job at picking reviewers...).

(For some more thoughts on this, see a recent discussion of mine with Mason Porter.)

Update Sept. 9: Suresh points to a recent post of his own about the arxiv and the issue of time-stamping.

[1] Actually, computer science conferences, impressively, are a reasonable approximation to this, although they have their own fair share of issues.

[2] A side effect of the arXiv is that it presents tricky issues regarding citation, timing and proper attribution. For instance, if a research article becomes a "living" document, proper citation becomes rather problematic: which version of an article do you cite? (Surely not all of them!) And, if you revise your article after someone posts a derivative work, are you obligated to cite it in your revision?

posted September 8, 2006 05:23 PM in Simply Academic | permalink | Comments (3)

July 26, 2006

Models, errors and the methods of science.

A recent posting on the arxiv prompts me to write down some recent musings about the differences between science and non-science.

On the Nature of Science by B.K. Jennings

A 21st century view of the nature of science is presented. It attempts to show how a consistent description of science and scientific progress can be given. Science advances through a sequence of models with progressively greater predictive power. The philosophical and metaphysical implications of the models change in unpredictable ways as the predictive power increases. The view of science arrived at is one based on instrumentalism. Philosophical realism can only be recovered by a subtle use of Occam's razor. Error control is seen to be essential to scientific progress. The nature of the difference between science and religion is explored.

This can be summarized even more succinctly by George Box's famous remark that "all models are wrong but some models are useful", with the addendum that this recognition is what makes science different from religion (or other non-scientific endeavors), and that sorting out the useful from the useless is what drives science forward.

In addition to being a relatively succinct introduction to the basic terrain of modern philosophy of science, Jennings also describes two common critiques of science. The first is the God of the Gaps idea: basically, science explains how nature works and everything left unexplained is the domain of God. Obviously, the problem is that those gaps have a pesky tendency to disappear over time, taking that bit of God with them. For Jennings, this idea is just a special case of the more general "Proof by Lack of Imagination" critique, which is summarized as "I cannot imagine how this can happen naturally, therefore it does not, or God must have done it." As with the God of the Gaps idea, more imaginative people tend to come along (or have come along before) who can imagine how it could happen naturally (e.g., continental drift). Among physicists who like this idea, things like the precise values of fundamental constants are grist for the mill, but can we really presume that we'll never be able to explain them naturally?

Evolution is, as usual, one of the best examples of this kind of attack. For instance, almost all of the arguments currently put forth by creationists are just a rehashing of arguments made in the mid-to-late 1800s by religious scientists and officials. Indeed, Darwin's biggest critic was the politically powerful naturalist Sir Richard Owen, who objected to evolution because he preferred the idea that God used archetypical forms to derive species. The proof, of course, was in the overwhelming weight of evidence in favor of evolution, and, in the end, with Darwin being much more clever than Owen.

Being the bread and butter of science, this may seem quite droll. But I think non-scientists have a strong degree of cognitive dissonance when faced with such evidential claims. That is, what distinguishes scientists from non-scientists is our conviction that knowledge about the nature of the world is purely evidential, produced only by careful observations, models and the control of our errors. For the non-scientist, this works well enough for the knowledge required to see to the basics of life (eating, moving, etc.), but conflicts with (and often loses out to) the knowledge given to us by social authorities. In the West before Galileo, the authorities were the Church or Aristotle - today, Aristotle has been replaced by talk radio, television and cranks pretending to be scientists. I suspect that it's this conflicting relationship with knowledge that might explain several problems with the lay public's relationship with science. Let me connect this with my current reading material, to make the point more clear.

Deborah Mayo's excellent (and I fear vastly under-read) Error and the Growth of Experimental Knowledge is a dense and extremely thorough exposition of a modern philosophy of science, based on the evidential model I described above. As she reinterprets Kuhn's analysis of Popper, she implicitly points to an explanation for why science so often clashes with non-science, and why these clashes often leave scientists shaking their heads in confusion. Quoting Kuhn discussing why astrology is not a science, she says

The practitioners of astrology, Kuhn notes, "like practitioners of philosophy and some social sciences [AC: I argue also many humanities]... belonged to a variety of different schools ... [between which] the debates ordinarily revolved about the implausibility of the particular theory employed by one or another school. Failures of individual predictions played very little role." Practitioners were happy to criticize the basic commitments of competing astrological schools, Kuhn tells us; rival schools were constantly having their basic presuppositions challenged. What they lacked was that very special kind of criticism that allows genuine learning - the kind where a failed prediction can be pinned on a specific hypothesis. Their criticism was not constructive: a failure did not genuinely indicate a specific improvement, adjustment or falsification.

That is, criticism that does not focus on the evidential basis of theories is what non-sciences engage in. In Kuhn's language, this is called "critical discourse" and is what distinguishes non-science from science. In a sense, critical discourse is a form of logical jousting, in which you can only disparage the assumptions of your opponent (thus undercutting their entire theory) while championing your own. Marshaling anecdotal evidence in support of your assumptions is to pseudo-science, I think, what stereotyping is to racism.

Since critical discourse is the norm outside of science, is it any wonder that non-scientists, attempting to resolve the cognitive dissonance between authoritative knowledge and evidential knowledge, resort to the only form of criticism they understand? This leads me to be extremely depressed about the current state of science education in this country, and about the possibility of politicians ever learning from their mistakes.

posted July 26, 2006 11:26 PM in Scientifically Speaking | permalink | Comments (1)

July 17, 2006

Uncertainty about probability

In the past few days, I've been reading about different interpretations of probability, i.e., the frequentist and bayesian approaches (for a primer, try here). This has, of course, led me back to my roots in physics, since both quantum physics (QM) and statistical mechanics rely on probabilities to describe the behavior of nature. Amusingly, I must not have been paying much attention while I was taking QM at Haverford - Niels Bohr once said "If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet." - and back then I was neither shocked nor confused by things like the uncertainty principle, quantum indeterminacy or Bell's Theorem. Today, however, it's a different story entirely.

John Baez has a nice summary and selection of news-group posts that discuss the idea of frequentism versus bayesianism in the context of theoretical physics. This, in turn, led me to another physicist's perspective on the matter. The late Ed Jaynes has an entire book on probability from a physics perspective, but I most enjoyed his discussion of the physics of a "random experiment", in which he notes that quantum physics differs sharply in its use of probabilities from macroscopic sciences like biology. I'll just quote Jaynes on this point, since he describes it so eloquently:

In biology or medicine, if we note that an effect E (for example, muscle contraction) does not occur unless a condition C (nerve impulse) is present, it seems natural to infer that C is a necessary causative agent of E... But suppose that condition C does not always lead to effect E; what further inferences should a scientist draw? At this point the reasoning formats of biology and quantum theory diverge sharply.

... Consider, for example, the photoelectric effect (we shine a light on a metal surface and find that electrons are ejected from it). The experimental fact is that the electrons do not appear unless light is present. So light must be a causative factor. But light does not always produce ejected electrons... Why then do we not draw the obvious inference, that in addition to the light there must be a second causative factor...?

... What is done in quantum theory is just the opposite; when no cause is apparent, one simply postulates that no cause exists; ergo, the laws of physics are indeterministic and can be expressed only in probability form.

... In classical statistical mechanics, probability distributions represent our ignorance of the true microscopic coordinates - ignorance that was avoidable in principle but unavoidable in practice, but which did not prevent us from predicting reproducible phenomena, just because those phenomena are independent of the microscopic details.

In current quantum theory, probabilities express the ignorance due to our failure to search for the real causes of physical phenomena. This may be unavoidable in practice, but in our present state of knowledge we do not know whether it is unavoidable in principle.

Jaynes goes on to describe how current quantum physics may simply be in a rough patch where our experimental methods are too inadequate to appropriately isolate the physical causes of the apparent indeterministic behavior of our physical systems. But, I don't quite understand how this idea could square with the refutations of such a hidden variable theory after Bell's Theorem basically laid local realism to rest. It seems to me that Jaynes and Baez, in fact, invoke similar interpretations of all probabilities, i.e., that they only represent our (human) model of our (human) ignorance, which can be about either the initial conditions of the system in question, the rules that cause it to evolve in certain ways, or both.

It would be unfair to those statistical physicists who work in the field of complex networks to say that they share the same assumptions of no-causal-factor that their quantum physics colleagues may accept. In statistical physics, as Jaynes points out, the reliance on statistical methodology is forced on statistical physicists by our measurement limitations. Similarly, in complex networks, it's impractical to know the entire developmental history of the Internet, the evolutionary history of every species in a foodweb, etc. But unlike statistical physics, in which experiments are highly repeatable, every complex network has a high degree of uniqueness, and is thus more like biological and climatological systems where there is only one instance to study. To make matters even worse, complex networks are also quite small, typically having between 10^2 and 10^6 parts; in contrast, most systems that concern statistical physics have 10^22 or more parts. For such enormous systems, it's probably not terribly wrong to use a frequentist perspective and assume that their relative frequencies behave like probabilities. But when you only have a few thousand or million parts, such claims seem less tenable, since it's hard to argue that you're close to asymptotic behavior in this case. Bayesianism, being more capable of dealing with data-poor situations in which many alternative hypotheses are plausible, seems to offer the right way to deal with such problems. But, perhaps owing to the history of the field, few people in network science seem to use it.

For my own part, I find myself being slowly seduced by the Bayesians' siren call of mathematical rigor and the notion of principled approaches to these complicated problems. Yet, there are three things about the bayesian approach that make me a little uncomfortable. First, given that with enough data, it doesn't matter what your original assumption about the likelihood of any outcome is (i.e., your "prior"), shouldn't bayesian and frequentist arguments lead to the same inferences in a limiting, or simply very large, set of identical experiments? If this is right, then it is less surprising that statistical physicists have been using frequentist approaches for years with great success. Second, in the case where we are far from the limiting set of experiments, doesn't being able to choose an arbitrary prior amount to a kind of scientific relativism? Perhaps this is wrong because the manner in which you update your prior, given new evidence, is what distinguishes it from certain crackpot theories.

Finally, choosing an initial prior seems highly arbitrary, since one can always recurse a level and ask what prior on priors you might take. Here, I like the ideas of a uniform prior, i.e., I think everything is equally plausible, and of using the principle of maximum entropy (MaxEnt; also called the principle of indifference, by Laplace). Entropy is a nice way to connect this approach with certain biases in physics, and may say something very deep about the behavior of our incomplete description of nature at the quantum level. But, it's not entirely clear to me (or, apparently, others: see here and here) how to use maximum entropy in the context of previous knowledge constraining our estimates of the future. Indeed, one of the main things I still don't understand is how, if we model the absorption of knowledge as a sequential process, to update our understanding of the world in a rigorous way while guaranteeing that the order we see the data doesn't matter.
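
To make the first and third of these worries concrete, here is a small numerical sketch (a biased-coin example of my own choosing, not anything from the sources above) of conjugate Beta-Binomial updating. It illustrates two things: with enough data, two very different priors end up at essentially the same posterior, and, because the likelihood depends only on the counts of heads and tails, updating one observation at a time gives the same posterior regardless of the order in which the data arrive.

    # Toy Bayesian updating for a biased coin with a Beta prior.
    # (1) Different priors wash out as the data accumulate.
    # (2) Sequential updates commute: shuffling the data changes nothing.

    import random

    random.seed(1)
    true_p = 0.3
    data = [1 if random.random() < true_p else 0 for _ in range(1000)]

    def posterior(prior_a, prior_b, observations):
        # Return the Beta(a, b) posterior after observing a sequence of 0/1 flips.
        a, b = prior_a, prior_b
        for x in observations:
            a += x          # heads
            b += 1 - x      # tails
        return a, b

    for a0, b0 in [(1, 1), (50, 5)]:      # uniform prior vs. a badly biased prior
        a, b = posterior(a0, b0, data)
        print(f"prior Beta({a0},{b0}) -> posterior mean {a / (a + b):.3f}")

    shuffled = data[:]
    random.shuffle(shuffled)
    print(posterior(1, 1, data) == posterior(1, 1, shuffled))   # True: order doesn't matter

Of course, this order-independence is a special feature of exchangeable models like this one; my worry above is about whether, and how, it can be guaranteed in more general settings.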

Update July 17: Cosma points out that Jaynes's Bayesian formulation of statistical mechanics leads to unphysical implications like a backwards arrow of time. Although it's comforting to know that statistical mechanics cannot be reduced to mere Bayesian crank-turning, it doesn't resolve my confusion about just what it means that the quantum state of matter is best expressed probabilistically! His article also reminds me that there are good empirical reasons to use a frequentist approach, reasons based on Mayo's arguments and which should be familiar to any scientist who has actually worked with data in the lab. Interested readers should refer to Cosma's review of Mayo's Error, in which he summarizes her critique of Bayesianism.

posted July 17, 2006 03:30 PM in Scientifically Speaking | permalink | Comments (0)

March 20, 2006

It's a Monty Hall universe, after all

Attention conservation notice: This mini-essay was written after one of my family members asked me what I thought of the idea of predestination. What follows is a rephrasing of my response, along with a few additional thoughts that connect this topic to evolutionary game theory.

Traditionally, the idea of predestination means that each person has a fixed path, or sequence of events, that constitutes their life. From the popular perspective, this gets translated into "everything happens for a reason" - a statement that raises little red flags with me whenever someone states it earnestly. In my experience, this platitude mostly gets used to rationalize a notable coincidence (e.g., bumping into someone you know at the airport) or a particularly tragic event, and people don't actually behave as if they believe it (more on this later).

I suspect that most people who find predestination appealing believe, at some level, in a supernatural force that has predetermined every little event in the world. But, when you get right down to it, there's nothing in our collective experience of the physical universe that supports either this idea or the existence of any supernatural force. There are, however, aspects of the universe that are, in a sense, compatible with the idea of predestination, and other aspects that are wholly incompatible; I'll discuss both momentarily. The problem, of course, is that these aspects are not the ones that a casual observer would focus on when considering predestination.

The aspect of the universe that is compatible with the idea of predestination comes from the fact that the universe is patterned. That is, there are rules (such as the laws of physics) that prescribe the consequences for each action. If the universe were totally unpredictable in every way possible, then there would be no cause-and-effect, while a consistent and physically realistic universe requires that it exist. As a toy example, if you push a ball off a table, it then falls to the ground. To be precise, your push is the cause, the fall is the effect, and gravity is the natural mechanism by which cause leads to effect. If the universe were totally unpredictable, then that mechanism wouldn't exist, and the ball might just as well fly up to the ceiling, or whiz around your head in circles, as fall to the ground. So, the fact that cause-and-effect exists means that there's a sequence of consequences that naturally follow from any action, and this is a lot like predestination.

But, there's an aspect of the universe that is fundamentally incompatible with predestination: quantum physics. At the smallest scales, the idea of cause-and-effect becomes problematic, and the world decomposes into a fuzzy mass of unpredictability. There are still patterns there, but the strongest statements that can be made are only statistical in nature. That is, while individual events are themselves totally unpredictable, when taken together in larger numbers you observe that certain kinds of events are more likely than others; individually the events are random, but en masse they are not. Indeed, the utilization of this fact is what underlies the operation of virtually every electronic device.

Einstein struggled with the apparent conflict between the randomness of quantum theory and the regularity of the macroscopic world we human inhabit. This struggle was the source of his infamous complaint that God does not play dice with the universe. Partially because of his struggle, physicists have probed very deeply into the possibility that the randomness of the universe at the smallest level is just an illusion, and that there is some missing piece of the picture that would dispel the fog of randomness, returning us to the predictable world of Newton. But, these tests for "hidden variables" have always failed, and the probabilistic model of the universe, i.e., quantum physics, has been validated over, and over, and over. So, it appears that God really does play dice with the universe, at least at the very smallest level.

But, how does this connect with the kind of universe that we experience? Although the motions of the water molecules in the air around me are basically random, does that mean that my entire life experience is also basically unpredictable? Well, yes, actually. In the 1960s, physicists discovered that large systems like the weather are "mathematically chaotic". This is the idea that very small disturbances in one place can be amplified, by natural processes, to become very large disturbances in another place. This idea was popularized by the notion that a butterfly can flap its wings in Brazil and cause a tornado in Texas. And, physicists have shown that indeed, the unpredictability of tiny atoms and electrons can and does cause unpredictability in large systems like the rhythm of your heart, the weather patterns all over the world and even the way water splashes out of a boiling pot.
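
For the curious, here is a tiny numerical sketch of what "mathematically chaotic" means, using the logistic map - a standard textbook example of my own choosing, not anything to do with the weather itself. Two trajectories that begin a hair's breadth apart quickly cease to resemble one another.

    # Sensitive dependence on initial conditions in the logistic map
    # x -> r*x*(1-x), run in its chaotic regime (r = 4).

    r = 4.0
    x, y = 0.2, 0.2 + 1e-10       # initial conditions differing by one part in 10^10

    for t in range(60):
        if t % 10 == 0:
            print(f"t={t:3d}   x={x:.6f}   y={y:.6f}   |x-y|={abs(x - y):.2e}")
        x = r * x * (1 - x)
        y = r * y * (1 - y)

By around fifty iterations the two trajectories differ by an amount comparable to the values themselves, which is exactly the amplification of small disturbances described above.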

So basically, predestination exists, but only in a statistical sense. Technically, this is what's called "probabilistic determinism", and it means that while hindsight is perfect (after all, the past has already happened, so it can only be one way), the future is unknown and is, at least until it happens, undetermined except in the statistical sense. Put another way, the broad brushstrokes of the future are predetermined (because the universe operates through natural and consistent forces), but the minute details are not (because the universe is fundamentally probabilistic in nature). If some supernatural force is at play in the universe and has predetermined certain of its features, then those features are only very vague and general ones, like life probably evolving on a planet somewhere (the "blind watchmaker" idea, basically), and not the kind of events that most people consider when they think about predestination, such as attending a certain graduate school or falling in love with a certain person.

In summary, what I've tried to establish in the above paragraphs is that the belief in predestination is an irrational one because it's not actually supported by any physical evidence from the real world. But, the idea of predestination has a certain utility for intelligent beings who can only see a very small piece of the world around them. From any one person's perspective, the world is a very confusing and unpredictable place, and I think it's very comforting to believe that such an impression is false and that the world is actually very ordered beyond our horizon of knowledge. That is, holding this belief makes it easier to actually make a decision in one's own life, because it diminishes the fear that one's choice will result in really bad consequences, and it does this by asserting that whatever decision one makes, the outcome was already determined. So, it frees the decision-maker from the responsibility of the consequences, for better, or for worse. And, in a world where decisions must be made in a timely fashion, i.e., where one cannot spend hours, days, or years pondering which choice is best, which is to say in the world that we inhabit, that freedom from consequence is really useful.

In fact, this mental freedom is necessary for survival (although getting it via a belief in predestination is not the only way to acquire it). The alternative is a dead species - dead from a paralysis of indecision - which clearly has been selected against in our evolutionary history. But also, there are clear problems with having an extreme amount of such freedom. No decision-making being can fully believe in the disconnect between their decisions and the subsequent consequences. Otherwise, that being would have no reason to think at all, and could make decisions essentially at random. So, there must be a tension between the consequence-free and consequence-centric modes of decision making, and indeed, we see exactly this tension in humans today. Or rather, humans seem to apply both modes of decision making depending on the situation, with consequence-free thinking perhaps applied retrospectively more often than prospectively. Ultimately, the trick is to become conscious of these modes and to learn how to apply them toward the end goal of making better decisions, i.e., weighing the gain in the quality of the decision against the extra time it took to arrive at it. Of course, humans are notoriously bad judges of the quality of their decisions (e.g., here and here), so spending a little extra time to consider your choices may be a reasonable way to ensure that you're happy with whatever choice you end up making.

posted March 20, 2006 05:43 PM in Thinking Aloud | permalink | Comments (1)

March 13, 2006

Quantum mysticism, the new black

When I was much younger and in my very early days of understanding the conflicting assertions about the nature of the world made by religious and scientific authorities, I became curious about what Eastern philosophy had to say about the subject. The usual questions troubled my thoughts: How can everything happen for a reason if we have free will? or, How can one reconcile the claims about Creation from the Bible (Torah, Koran, Vedas, whatever) with factual and scientifically verified statements about the Universe, e.g., the Big Bang, evolution, heliocentrism, etc.? and so forth.

Eastern philosophy (and its brother Eastern mysticism), to my Western-primed brain, seemed like a possible third way to resolve these conundrums. At first, my imagination was captured by books like The Tao of Physics, which offered the appearance of a resolution through the mysteries of quantum physics. But, as I delved more deeply into Physics itself, and indeed actually started taking physics courses in high school and college, I became increasingly disenchanted with the slipperiness of New Age thought (which is, for better or for worse, the Western heir of Eastern mysticism). The end result was a complete rejection of the entire religio-cultural framework of New Age-ism on the basis of it being irrational, subjective and rooted in the ubiquitous but visceral desire to confirm the special place of humans, and more importantly yourself, in the Universe - the same desire that lies at the foundation of much of organized religious thought. But possibly what provoked the strongest revulsion was the fact that New Age-ism claims a pseudo-scientific tradition, much like modern creationism (a.k.a. intelligent design), in which the entire apparatus of repeatable experiments, testable hypotheses, and the belief in an objective reality (which implies a willingness to change your mind when confronted with overwhelming evidence) is ignored in favor of the ridiculous, contradictory argument that because science hasn't proved it to be false (and more importantly, but usually not admittedly, because it seems like it should be right), it must therefore be true.

Fast-forward many years to the release of the film version of Eastern mysticism-meets-Physics: "What the #$!%* Do We Know!?". Naturally, I avoided this film like the plague - it veritably reeked of everything I disliked about New Age-ism. But now, apparently, there's a sequel to it. Alas, I doubt that many of the faithful who flock to see the film will be reading this thoughtful essay on it in the New York Times, "Far Out, Man. But Is It Quantum Physics?" by National Desk correspondent Dennis Overbye. His conclusion about the movie and its apparent popularity among the New Agers, in his own words, runs like so

When it comes to physics, people seem to need to kid themselves. There is a presumption, Dr. Albert [a professor of philosophy and physics at Columbia] said, that if you look deeply enough you will find "some reaffirmation of your own centrality to the world, a reaffirmation of your ability to take control of your own destiny." We want to know that God loves us, that we are the pinnacle of evolution.

But one of the most valuable aspects of science, he said, is precisely the way it resists that temptation to find the answer we want. That is the test that quantum mysticism flunks, and on some level we all flunk.

That is, we are fundamentally irrational beings, and are intimately attached to both our own convictions and the affirmation of them. Indeed, we're so naturally attached to them that it takes years of mental training to be otherwise. That's what getting a degree in science is about - training your brain to intuitively believe in a reality that is external to your own perception of it, that is ordered (even if in an apparently confusing way) and predictable (if only probabilistically), and that is accessible to the human mind. Overbye again,

I'd like to believe that like Galileo, I would have the courage to see the world clearly, in all its cruelty and beauty, "without hope or fear," as the Greek writer Nikos Kazantzakis put it. Take free will. Everything I know about physics and neuroscience tells me it's a myth. But I need that illusion to get out of bed in the morning. Of all the durable and necessary creations of atoms, the evolution of the illusion of the self and of free will are perhaps the most miraculous. That belief is necessary to my survival.

Overbye is, in his colloquial way, concluding that irrationality has some positive utility for any decision-making being with incomplete information about the world it lives in. We need to believe (at some level) that we have the ability to decide our actions independently from the world we inhabit in order to not succumb to the fatalistic conclusion that everything happens for no reason. But understanding this aspect of ourselves at least gives us the hope of recognizing when that irrationality is serving us well, and when it is not. This, I think, is one of the main reasons why science education should be universally accessible - to help us make better decisions on the whole, rather than being slaves to our ignorance and the whim of external forces (be they physical or otherwise).

posted March 13, 2006 08:29 PM in Thinking Aloud | permalink | Comments (3)

March 01, 2006

The scenic view

In my formal training in physics and computer science, I never did get much exposure to statistics and probability theory, yet I have found myself consistently using them in my research (partially on account of the fact that I deal with real data quite often). What little formal exposure I did receive was always in some specific context and never focused on probability as a topic itself (e.g., statistical mechanics, which could hardly be called a good introduction to probability theory). Generally, my training played out in the crisp and clean neighborhoods of logical reasoning, algebra and calculus, with the occasional day-trip to the ghetto of probability. David Mumford, a Professor of Mathematics at Brown University, opines about the ongoing spread of that ghetto throughout the rest of science and mathematics, i.e., about how probability theory deserves respect at least equal to that of abstract algebra, in a piece from 1999 on The Dawning of the Age of Stochasticity. From the abstract,

For over two millennia, Aristotle's logic has ruled over the thinking of western intellectuals. All precise theories, all scientific models, even models of the process of thinking itself, have in principle conformed to the straight-jacket of logic. But from its shady beginnings devising gambling strategies and counting corpses in medieval London, probability theory and statistical inference now emerge as better foundations for scientific models ... [and] even the foundations of mathematics itself.

It may sound like it, but I doubt that Mumford is actually overstating his case here, especially given the deep connection between probability theory, quantum mechanics (c.f. the recent counter-intuitive result on quantum interrogation) and complexity theory.

A neighborhood I'm more familiar with is that of special functions; things like the Gamma distribution, the Riemann Zeta function (a personal favorite), and the Airy functions. Sadly, these familiar friends show up very rarely in the neighborhood of traditional computer science, but instead hang out in the district of mathematical modeling. Robert Batterman, a Professor of Philosophy at Ohio State University, writes about why exactly these functions are so interesting in On the Specialness of Special Functions (The Nonrandom Effusions of the Divine Mathematician).

From the point of view presented here, the shared mathematical features that serve to unify the special functions - the universal form of their asymptotic expansions - depends upon certain features of the world.

(Emphasis his.) That is, the physical world itself, by presenting a patterned appearance, must be governed by a self-consistent set of rules that create that pattern. In mathematical modeling, these rules are best represented by asymptotic analysis and, you guessed it, special functions, that reveal the universal structure of reality in their asymptotic behavior. Certainly this approach to modeling has been hugely successful, and remains so in current research (including my own).
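
As a small, self-contained illustration (my own choice of example, not Batterman's), Stirling's asymptotic series for the Gamma function shows how little of an asymptotic expansion is needed to capture a special function's large-argument behavior.

    # Stirling's series for the Gamma function,
    # Gamma(x) ~ sqrt(2*pi/x) * (x/e)^x * (1 + 1/(12x) + ...),
    # compared against the exact value; one correction term already does very well.

    import math

    def stirling(x, terms=1):
        approx = math.sqrt(2 * math.pi / x) * (x / math.e) ** x
        if terms >= 1:
            approx *= (1 + 1 / (12 * x))
        return approx

    for x in [2.0, 5.0, 10.0, 50.0]:
        exact = math.gamma(x)
        rel_err = abs(stirling(x) - exact) / exact
        print(f"x={x:5.1f}   relative error = {rel_err:.2e}")

The relative error shrinks rapidly with x, which is the sense in which the expansion exposes the function's universal large-x structure.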

My current digs, however, are located in the small nexus that butts up against these neighborhoods and those in computer science. Scott Aaronson, who occupies an equivalent juncture between computer science and physics, has written several highly readable and extremely interesting pieces on the commonalities he sees in his respective locale. I've found them to be a particularly valuable way to see beyond the unfortunately shallow exploration of computational complexity that is given in most graduate-level introductory classes.

In NP-complete Problems and Physical Reality Aaronson looks out of his East-facing window toward physics for hints about ways to solve NP-complete problems by using physical processes (e.g., simulated annealing). That is, can physical reality efficiently solve instances of "hard" problems? Although he concludes that the evidence is not promising, he points to a fundamental connection between physics and computer science.
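
For concreteness, here is a bare-bones sketch of the simulated-annealing idea mentioned above, applied to the NP-hard number-partitioning problem. The problem, the cooling schedule, and all the parameters are my own toy choices, not anything from Aaronson's paper.

    # Simulated annealing for number partitioning: split a list of numbers into
    # two sets whose sums are as close as possible. The "energy" is the imbalance
    # between the two sums; the slowly lowered "temperature" controls how willing
    # the search is to accept uphill moves, which lets it escape local minima.

    import math, random

    random.seed(2)
    numbers = [random.randint(1, 10**6) for _ in range(40)]

    signs = [random.choice([-1, 1]) for _ in numbers]       # which side of the partition
    energy = abs(sum(s * n for s, n in zip(signs, numbers)))

    T = sum(numbers)                                         # start hot
    while T > 1e-3:
        i = random.randrange(len(numbers))
        new_signs = signs[:]
        new_signs[i] *= -1                                   # propose moving one number
        new_energy = abs(sum(s * n for s, n in zip(new_signs, numbers)))
        if new_energy < energy or random.random() < math.exp((energy - new_energy) / T):
            signs, energy = new_signs, new_energy
        T *= 0.999                                           # cool slowly

    print("final imbalance:", energy)

The physical metaphor is the whole point: the search behaves like a slowly cooled physical system settling into a low-energy configuration, and the question Aaronson asks is whether any such physical process can do fundamentally better than this kind of heuristic.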

Then turning to look out his West-facing window towards computer science, he asks Is P Versus NP Formally Independent?, where he considers formal logic systems and the implications of Gödel's Incompleteness Theorem for the likelihood of resolving the P versus NP question. It's stealing his thunder a little, but the most quotable line comes from his conclusion:

So I'll state, as one of the few definite conclusions of this survey, that P ≠ NP is either true or false. It's one or the other. But we may not be able to prove which way it goes, and we may not be able to prove that we can't prove it.

There's a little nagging question that some researchers are only just beginning to explore, which is, are certain laws of physics formally independent? I'm not even entirely sure what that means, but it's an interesting kind of question to ponder on a lazy Sunday afternoon.

There's something else embedded in these topics, though. Almost all of the current work on complexity theory is logic-oriented, essentially because it was born of the logic and formal mathematics of the first half of the 20th century. But, if we believe Mumford's claim that statistical inference (and in particular Bayesian inference) will invade all of science, I wonder what insights it can give us about solving hard problems, and perhaps why they're hard to begin with.

I'm aware of only anecdotal evidence of such benefits, in the form of the Survey Propagation Algorithm and its success at solving hard k-SAT formulas. The insights from the physicists' non-rigorous results have even helped improve our rigorous understanding of why problems like random k-SAT undergo a phase transition from mostly easy to mostly hard. (The intuition is, in short, that as the density of constraints increases, the space of valid solutions fragments into many disconnected regions.) Perhaps there's more being done here than I know of, but it seems that a theory of inferential algorithms as they apply to complexity theory (I'm not even sure what that means, precisely; perhaps it doesn't differ significantly from PPT algorithms) might teach us something fundamental about computation.
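
Here is a toy version of that random k-SAT experiment (brute force, a dozen variables, and only a handful of samples per point; all of these choices are mine and far too small for serious estimates), which already shows the drop in satisfiability near a clause-to-variable ratio of roughly 4.3. The transition sharpens as the number of variables grows.

    # Random 3-SAT: generate formulas at various clause-to-variable ratios and
    # measure, by exhaustive search, the fraction that are satisfiable.

    import itertools, random

    random.seed(3)
    n = 12                                 # variables (tiny, so brute force is feasible)

    def random_clause(n):
        vars_ = random.sample(range(n), 3)
        return [(v, random.choice([True, False])) for v in vars_]   # (variable, sign)

    def satisfiable(clauses, n):
        for assignment in itertools.product([True, False], repeat=n):
            if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
                return True
        return False

    for ratio in [2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:
        m = int(ratio * n)
        sat = sum(satisfiable([random_clause(n) for _ in range(m)], n) for _ in range(20))
        print(f"clauses/variables = {ratio:.1f}   fraction satisfiable = {sat / 20:.2f}")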

posted March 1, 2006 02:32 PM in Interdisciplinarity | permalink | Comments (0)

February 21, 2006

Pirates off the Coast of Paradise

At the beginning of graduate school, few people have a clear idea of what area of research they ultimately want to get into. Many come in with vague or ill-informed notions of their likes and dislikes, most of which are due to the idiosyncrasies of their undergraduate major's curriculum, and perhaps scraps of advice from busy professors. For Computer Science, it seems that most undergraduate curricula emphasize the physical computer, i.e., the programming, the operating system and basic algorithm analysis, over the science, let alone the underlying theory that makes computing itself understandable. For instance, as a teaching assistant for an algorithms course during my first semester in grad school, I was disabused of any preconceptions when many students had trouble designing, carrying out, and writing up a simple numerical experiment to measure the running time of an algorithm as a function of its input size, and I distinctly remember seeing several minds explode (and, not in the Eureka! sense) during a sketch of Cantor's diagonalization argument. When we consider these anecdotes along with the flat or declining numbers of students enrolling in computer science, we have a grim picture of both the value that society attributes to Computer Science and the future of the discipline.

The naive inference here would be that students are (rightly) shying away from a field that serves little purpose to society, or to them, beyond providing programming talent for other fields (e.g., the various biological or medical sciences, or IT departments, which have a bottomless appetite for people who can manage information with a computer). And, with programming jobs being outsourced to India and China, one might wonder if the future holds anything but an increasing Dilbert-ization of Computer Science.

This brings us to a recent talk delivered by Prof. Bernard Chazelle (CS, Princeton) at the AAAS Annual Meeting about the relevance of the Theory of Computer Science (TCS for short). Chazelle's talk was covered briefly by PhysOrg, although his separate and longer essay really does a better job of making the point,

Moore's Law has fueled computer science's sizzle and sparkle, but it may have obscured its uncanny resemblance to pre-Einstein physics: healthy and plump and ripe for a revolution. Computing promises to be the most disruptive scientific paradigm since quantum mechanics. Unfortunately, it is the proverbial riddle wrapped in a mystery inside an enigma. The stakes are high, for our inability to “get” what computing is all about may well play iceberg to the Titanic of modern science.

He means that behind the glitz and glam of iPods, Internet porn, and unmanned autonomous vehicles armed with GPS-guided missiles, TCS has been drawing fundamental connections, through the paradigm of abstract computation, between previously disparate areas throughout science. Suresh Venkatasubramanian (see also Jeff Erickson and Lance Fortnow) phrases it in the form of something like a Buddhist koan,

Theoretical computer science would exist even if there were no computers.

Scott Aaronson, in his inimitable style, puts it more directly and draws an important connection with physics,

The first lesson is that computational complexity theory is really, really, really not about computers. Computers play the same role in complexity that clocks, trains, and elevators play in relativity. They're a great way to illustrate the point, they were probably essential for discovering the point, but they're not the point. The best definition of complexity theory I can think of is that it's quantitative theology: the mathematical study of hypothetical superintelligent beings such as gods.

Actually, that last bit may be overstating things a little, but the idea is fair. Just as theoretical physics describes the physical limits of reality, theoretical computer science describes both the limits of what can be computed and how. But, what is physically possible is tightly related to what is computationally possible; physics is a certain kind of computation. For instance, a guiding principle of physics is that of energy minimization, which is a specific kind of search problem, and search problems are the hallmark of CS.

The Theory of Computer Science is, quite to the contrary of the impression with which I was left after my several TCS courses in graduate school, much more than proving that certain problems are "hard" (NP-complete) or "easy" (in P), or that we can sometimes get "close" to the best much more easily than we can find the best itself (approximation algorithms), or especially that working in TCS requires learning a host of seemingly unrelated tricks, hacks and gimmicks. Were it only these, TCS would be interesting in the same way that Sudoku puzzles are interesting - mildly diverting for some time, but eventually you get tired of doing the same thing over and over.

Fortunately, TCS is much more than these things. It is the thin filament that connects the mathematics of every natural science, touching at once game theory, information theory, learning theory, search and optimization, number theory, and many more. Results in TCS, and in complexity theory specifically, have deep and profound implications for what the future will look like. (E.g., do we live in a world where no secret can actually be kept hidden from a nosey third party?) A few TCS-related topics that John Baez, a mathematical physicist at UC Riverside who's become a promoter of TCS, pointed to recently include "cryptographic hash functions, pseudo-random number generators, and the amazing theorem of Razborov and Rudich which says roughly that if P is not equal to NP, then this fact is hard to prove." (If you know what P and NP mean, then this last one probably doesn't seem that surprising, but that means you're thinking about it in the wrong direction!) In fact, the question of P versus NP may even have something to say about the kind of self-consistency we can expect in the laws of physics, and whether we can ever hope to find a Grand Unified Theory. (For those of you hoping for worm-hole-based FTL travel in the future, P vs. NP now concerns you, too.)

Alas, my enthusiasm for these implications and connections is stunted by a developing cynicism, not because TCS has failed to deliver on its founding promises (as, for instance, was the problem that ultimately toppled artificial intelligence), but rather because of its inability to convince not just funding agencies like the NSF that it matters, but also the rest of Computer Science. That is, TCS is a vitally important but needlessly remote field of CS, and is valued by the rest of CS for reasons analogous to those for which CS is valued by other disciplines: its ability to get things done, i.e., actual algorithms. This problem is aggravated by the fact that the mathematical training necessary to build toward a career in TCS is not a part of the standard CS curriculum (I mean at the undergraduate level, but the graduate one seems equally at fault). Instead, you acquire that knowledge either by working with the luminaries of the field (if you end up at the right school), or by essentially picking up the equivalent of a degree in higher mathematics (e.g., analysis, measure theory, abstract algebra, group theory, etc.). As Chazelle puts it in his pre-talk interview, "Computer science ... is messy and infuriatingly complex." I argue that this complexity is what makes CS, and particularly TCS, inaccessible and hard to appreciate. If Computer Science as a discipline wants to survive to see the "revolution" Chazelle forecasts, it needs to reevaluate how it trains its future members, what it means to have a science of computers, and even further, what it means to have a theory of computers (a point on which CS does abysmally). No computer scientist likes to be told her particular area of study is glorified programming, but without significant internal and external restructuring, that is all Computer Science will be to the rest of the world.

posted February 21, 2006 12:06 AM in Scientifically Speaking | permalink | Comments (0)

February 09, 2006

What intelligent design is really about

In the continuing saga of the topic, the Washington Post has an excellent (although a little lengthy) article (supplementary commentary) about the real issues underlying the latest attack on evolution by creationists, a.k.a. intelligent designers. Quoting liberally,

If intelligent design advocates have generally been blind to the overwhelming evidence for evolution, scientists have generally been deaf to concerns about evolution's implications.

Or rather, as Russell Moore, a dean at the Southern Baptist Theological Seminary, puts it in the article, "...most Americans fear a world in which everything is reduced to biology." It is a purely emotional argument for creationists, which is probably what makes it so difficult for them to understand the rational arguments of scientists. At its very root, creationism rebels against the idea of a world that is indifferent to its adherents' feelings and indifferent to their existence.

But, even Darwin struggled with this idea. In the end, he resolved the cognitive dissonance between his own piety and his deep understanding of biology by subscribing to the "blind watchmaker" point of view.

[Darwin] realized [his theory] was going to be controversial, but far from being anti-religious, ... Darwin saw evolution as evidence of an orderly, Christian God. While his findings contradicted literal interpretations of the Bible and the special place that human beings have in creation, Darwin believed he was showing something even more grand -- that God's hand was present in all living things... The machine [of natural selection], Darwin eventually concluded, was the way God brought complex life into existence.

(Emphasis mine.) The uncomfortable truth for those who wish for a personal God is that, by removing his active involvement in day-to-day affairs (i.e., God does not answer prayers), evolution makes the world less forgiving and less loving. It also makes it less cruel and less spiteful, as it lacks evil of the supernatural caliber. Evolution cuts away the black, the white and even the grey, leaving only the indifference of nature. This lack of higher meaning is exactly what creationists rebel against at a basal level.

So, without that higher (supernatural) meaning, without (supernatural) morality, what is mankind to do? As always, Richard Dawkins puts it succinctly, in his inimitable way.

Dawkins believes that, alone on Earth, human beings can rebel against the mechanistic indifference of nature. Understanding the pitiless ways of natural selection is precisely what can make humans moral, Dawkins said. It is human agency, human rationality and human law that can create a world more compassionate than nature, not a religious view that falsely sees the universe as fundamentally good and benevolent.

Isn't the ideal put forth in the American Constitution one of a secular civil society, where we decide our own fate, we decide our own rules of behavior, and we decide what is moral and immoral? Perhaps the Christian creationists who wish for evolution, and all it represents, to be evicted from public education aren't so different from certain other factions that are hostile to secular civil society.

posted February 9, 2006 11:17 PM in Thinking Aloud | permalink | Comments (0)

January 30, 2006

Selecting morality

I've been musing a little more about Dr. Paul Bloom's article on the human tendency to believe in the supernatural. (See here for my last entry on this.) The question that's most lodged in my mind right now is this: What if the only way to have intelligence like ours, i.e., intelligence that is capable of both rational (science) and irrational (art) creativity, is to have these two competing modules, the one that attributes agency to everything and the one that coldly computes the physical outcome of events? If this is true, then the ultimate goal of creating "intelligent" devices may have undesired side-effects. If futurists like Jeff Hawkins are right that an understanding of the algorithms that run the brain is within our grasp, then we may see these effects within our lifetime. Not only will your computer be able to tell when you're unhappy with it, you may need to intuit when it's unhappy with you! (Perhaps because you ignored it for several days while you tended to your Zen rock garden, or perhaps you left it behind while you went to the beach.)

This is a somewhat entertaining line of thought, with lots of unpleasant implications for our productivity (imagine having to not only keep track of the social relationships of your human friends, but also of all the electronic devices in your house). But, Bloom's discussion raises another interesting question. If our social brain evolved to manage the burgeoning collection of inter-personal and power relationships in our increasingly social existence, and if our social brain is a key part of our ability to "think" and imagine and understand the world, then perhaps it is hard-wired with certain moralistic beliefs. A popular line of argument between theists and atheists is the question of, If one does not get one's sense of morality from God, what is to stop everyone from doing exactly as they please, regardless of its consequences? The obligatory examples of such immoral (amoral?) behavior are rape and murder - that is, if I don't have in me the fear of God and his eternal wrath, what's to stop me from running out in the street and killing the first person I see?

Perhaps surprisingly, as the philosopher Daniel Dennett (Tufts University) mentions in this half-interview, half-survey article from The Boston Globe, being religious doesn't seem to have any impact on a person's tendency to do clearly immoral things that will get them thrown in jail. In fact, many of those who are most vocal about morality (e.g., Pat Robertson) are themselves cravenly immoral, by any measure of the word (a detailed list of Robertson's crimes; a brief but humorous summary of them (scroll to bottom; note picture)).

Richard Dawkins, the well-known British ethologist and atheist, recently aired a two-part documentary of his own creation on the UK's Channel 4, attempting to explore exactly this question. (Audio portion for both episodes available here and here, courtesy of onegoodmove.org.) He first posits that faith is the antithesis of rationality - a somewhat incendiary assertion on the face of it. However, consider that faith is, by definition, belief in something for which there is no evidence or against which there is evidence, while rationally held beliefs are those based on evidence and evidence alone. In my mind, such a distinction is rather important for those with any interest in metaphysics, theology or that nebulous term, spirituality. Dawkins' argument goes very much along the lines of Steven Weinberg, the Nobel laureate in physics, who once said "Religion is an insult to human dignity - without it you'd have good people doing good things and evil people doing evil things. But for good people to do evil things it takes religion." However, Dawkins' documentary points at a rather more fundamental question: Where does morality come from, if not from God or the cultural institutions of a religion?

This question was recently, although perhaps indirectly, explored by Jessica Flack and her colleagues at the Santa Fe Institute, in work published in Nature last week (summary here). Generally, Flack et al. studied the importance of impartial policing, by authoritative members of a pigtailed macaque troop, to the cohesion and general health of the troop as a whole. Their discovery that all social behavior in the troop suffers in the absence of these policemen shows that they serve the important role of regulating the self-interested behavior of individuals. That is, by arbitrating impartially among their fellows in conflicts, when there is no advantage or benefit to them for doing so, the policemen demonstrate an innate sense of right and wrong that is greater than themselves.

There are two points to take home from this discussion. First, humans are not so different from other social animals in that we need constant reminders of what is "moral" in order for society to function. But second, if "moral" behavior can come from the self-interested behavior of individuals in social groups, as is the case for the pigtailed macaque, then it needs no supernatural explanation. Morality can thus derive from nothing more than the natural implication of real consequences, to both ourselves and others, for certain kinds of behaviors, and the observation that those consequences are undesirable. At its heart, this is the same line of reasoning behind religious systems of morality, except that the undesirable consequences are supernatural, e.g., burning in Hell, not getting to spend eternity with God, etc. But clearly, the pigtailed macaques can be moral without God and supernatural consequences, so why can't humans?

J. C. Flack, M. Girvan, F. B. M. de Waal and D. C. Krakauer, "Policing stabilizes construction of social niches in primates." Nature 439, 426 (2006).

Update, Feb. 6th: In the New York Times today, there is an article about how quickly a person's moral compass can shift when that person is sure to perform certain unpalatable acts in the near future, e.g., being employed as part of the State's capital punishment team while being (morally) opposed to the death penalty. This reminds me of the Milgram experiment (no, not that one), which showed that a person's moral compass can be bent simply by someone with authority pushing on it. In the NYTimes article, Prof. Bandura (Psychology, Stanford) puts it thus:

It's in our ability to selectively engage and disengage our moral standards, and it helps explain how people can be barbarically cruel in one moment and compassionate the next.

(Emphasis mine.) With a person's morality being so flexible, it's no wonder that constant reminders (i.e., policing) are needed to keep us behaving in a way that preserves civil society. Or, to use the terms theists prefer, it is policing, and the implicit terrestrial threat embodied by it, that keeps us from running out in the street and doing profane acts without a care.

Update, Feb. 8th: Salon.com has an interview with Prof. Dennett of Tufts University, a strong advocate of clinging to rationality in the face of the dangerous idea that everything that is "religious" in its nature is, by definition, off-limits to rational inquiry. Given that certain segments of society are trying (and succeeding) to expand the range of things that fall into that domain, Dennett is an encouragingly clear-headed voice. Also, when asked how we will know right from wrong without a religious base of morals, he answers that we will do as we have always done, and make our own rules for our behavior.

posted January 30, 2006 02:43 AM in Thinking Aloud | permalink | Comments (3)

January 05, 2006

Is God an accident?

This is the question that Dr. Paul Bloom, professor of psychology at Yale, explores in a fascinating exposé in The Atlantic Monthly on the origins of religion, as evidenced by belief in supernatural beings, and on the neurological basis of our ability to attribute agency. He begins,

Despite the vast number of religions, nearly everyone in the world believes in the same things: the existence of a soul, an afterlife, miracles, and the divine creation of the universe. Recently psychologists doing research on the minds of infants have discovered two related facts that may account for this phenomenon. One: human beings come into the world with a predisposition to believe in supernatural phenomena. And two: this predisposition is an incidental by-product of cognitive functioning gone awry. Which leads to the question ...

The question being, of course, whether the nearly universal belief in these things is an accident of evolution optimizing brain-function for something else entirely.

Belief in the supernatural is an overly dramatic way to put the more prosaic idea that we see agency (willful acts, as in free will) where none exists. That is, consider the extreme ease with which we anthropomorphize inanimate objects like the Moon ("O, swear not by the moon, the fickle moon, the inconstant moon, that monthly changes in her circle orb, Lest that thy love prove likewise variable." Shakespeare, Romeo and Juliet 2:ii), complex objects like our computers (intentionally confounding us, colluding to ruin our job or romantic prospects, etc.), and living creatures whom we view as little more than robots ("smart bacteria"). Bloom's consideration of why this innate tendency is apparently universal among humans is a fascinating exploration of evolution, human behavior and our pathologies. At the heart of his story arc, he considers whether the easy attribution of agency provides some other useful ability in terms of natural selection. In short, he concludes that yes, our brain is hardwired to see intention and agency where none exists because viewing the world through this lens made (and makes) it easier for us to manage our social connections and responsibilities, and the social consequences of our actions. For instance, consider a newborn - Bloom describes experiments showing that

when twelve-month-olds see one object chasing another, they seem to understand that it really is chasing, with the goal of catching; they expect the chaser to continue its pursuit along the most direct path, and are surprised when it does otherwise.

But more generally,

Understanding of the physical world and understanding of the social world can be seen as akin to two distinct computers in a baby's brain, running separate programs and performing separate tasks. The understandings develop at different rates: the social one emerges somewhat later than the physical one. They evolved at different points in our prehistory; our physical understanding is shared by many species, whereas our social understanding is a relatively recent adaptation, and in some regards might be uniquely human.

This doesn't directly resolve the problem of liberal attribution of agency, which is the foundation of a belief in supernatural beings and forces, but Bloom resolves this by pointing out that because these two modes of thinking evolved separately and apparently function independently, we essentially view people (whose agency is understood by our "social brain") as being fundamentally different from objects (whose behavior is understood by our "physics brain"). This distinction makes it possible for us to envision "soulless bodies and bodiless souls", e.g., zombies and ghosts. With this in mind, certain recurrent themes in popular culture become eminently unsurprising.

So it seems that we are all dualists by default, a position that our everyday experience of consciousness only reinforces. Says Bloom, "We don't feel that we are our bodies. Rather, we feel that we occupy them, we possess them, we own them." The problem of having two modes of thinking about the world is only exacerbated by the real world's complexity: is a dog's behavior best understood with the physics brain or the social brain? Is a computer's? You get the idea. In fact, you could argue quite convincingly that much of modern human thought (e.g., Hobbes, Locke, Marx and Smith) has been an exploration of the tension between these modes; Hobbes in particular sought a physical explanation of social organization. This also points out, to some degree, why it is so difficult for humans to be rational beings: there is a fundamental irrationality in the way we view the world that is difficult first to become aware of, and then to manage.

Education, or more specifically training in scientific principles, can be viewed as a conditioning regimen that encourages the active management of the social brain's tendency to attribute agency. For instance, I suspect that the best scientists use their social mode of thinking when analyzing the interaction of various forces and bodies to make the great leaps of intuition that yield true steps forward in scientific understanding. That is, the irrationality of the two modes of thinking can, if engaged properly, be harnessed to extend the domain of rationality. There are certainly a great many suggestive anecdotes for this idea, and it suggests that if we ever want computers to truly solve problems the way humans do (as opposed to simply engaging in statistical melee), they will need to learn how to be more irrational, but in a careful way. I certainly wouldn't want my laptop to suddenly become superstitious about, say, being plugged into the Internet!

posted January 5, 2006 04:50 PM in Scientifically Speaking | permalink | Comments (0)

December 19, 2005

On modeling the human response time function; Part 3.

Much to my surprise, this morning I awoke to find several emails in my inbox apparently related to my commentary on the Barabasi paper in Nature. This morning, Anders Johansen pointed out to me and Luis Amaral (I can only assume that he has already communicated this to Barabasi) that in 2004 he published an article entitled Probing human response times in Physica A about the very same topic, using the very same data as Barabasi's paper. In it, he displays the now familiar heavy-tailed distribution of response times and fits a power law of the form P(t) ~ 1/(t+c), where c is a constant estimated from the data. Asymptotically, this is the same as Barabasi's P(t) ~ 1/t; it differs in the lower tail, i.e., for t < c, where it flattens toward a constant. As an originating mechanism, he suggests something related to a spin-glass model of human dynamics.
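
A quick numerical check of that asymptotic claim (the value c = 10 below is an arbitrary illustrative choice, not a parameter fitted in either paper):

```python
# The ratio of the shifted form 1/(t+c) to the pure form 1/t is t/(t+c), which
# tends to 1 for t >> c, while for t < c the shifted form flattens out near 1/c.
c = 10.0
for t in [0.1, 1.0, 10.0, 100.0, 10_000.0]:
    shifted, pure = 1.0 / (t + c), 1.0 / t
    print(f"t={t:>8}: 1/(t+c)={shifted:.6f}   1/t={pure:.6f}   ratio={shifted/pure:.4f}")
```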

Although Johansen's paper raises other issues, which I'll discuss briefly in a moment, let's step back and think about this controversy from a scientific perspective. There are two slightly different approaches to modeling being employed to understand the response-time function of human behavior. The first is a purely "fit-the-data" approach, which is largely what Johansen has done, and certainly what Amaral's group has done. The other, employed by Barabasi, uses enough data analysis to extract some interesting features, posits a mechanism for the origin of those features and then sets about connecting the two. The advantage of developing such a mechanistic explanation is that (if done properly) it provides falsifiable hypotheses and can move the discussion past simple data-analysis techniques. The trouble begins, as I've mentioned before, when either a possible mechanistic model is declared to be "correct" before being properly vetted, or when an insufficient amount of data analysis is done before positing a mechanism. This latter kind of trouble invites debate over how much support the data really provide for the proposed mechanism, and is exactly the source of the exchange between Barabasi et al. and Stouffer et al.

I tend to agree with the idea implicitly put forward in Stouffer et al.'s comment that Barabasi should have done more thorough data analysis before publishing, or alternatively, been a little more cautious in his claims about the universality of his mechanism. In light of Johansen's paper, and Johansen's statement that he and Barabasi spoke in 2003 at the talk where Johansen presented these results, there is now the specter that either previous work was not cited that should have been, or something more egregious happened. That is not to say this aspect of the story isn't an important issue in itself, but it is a separate one from the issues regarding the modeling, and it is those with which I am primarily concerned. But, given the high profile of articles published in journals like Nature, this kind of gross error in attribution does little to reassure me that such journals are not aggravating certain systemic problems in the scientific publication system. This will probably be the topic of a later post, if I ever get around to it. But let's get back to the modeling questions.

If the aim is to be more physics and less statistics, the ultimate goal of such a study of human behavior should be to understand the mechanism at play, and at least Barabasi did put forward and analyze a plausible suggestion there, even if a) he may not have done enough data analysis to properly support it or his claims of universality, and b) his model assumes some rather unrealistic behavior on the part of humans. Indeed, the former is my chief complaint about his paper, and why I am grateful for the Stouffer et al. comment and the ensuing discussion. With regard to the latter, my preference would have been for Barabasi to have discussed the fragility of his model with respect to the particular assumptions he makes. That is, although he assumes otherwise, humans probably don't assign priorities to their tasks with anything like a uniformly random distribution, nor do humans always execute their highest-priority task next. For instance, can you decide, right now without thinking, what the most important email in your inbox is at this moment? Instead, he commits the crime of hubris and neglects these details in favor of the suggestiveness of his model given the data. On the other hand, regardless of their implausibility, both of these assumptions about human behavior can be tested through experiments with real people and through numerical simulation. That is, these assumptions become predictions about the world that, if they fail to agree with experiment, would falsify the model. This seems to me an advantage of Barabasi's mechanism over that proposed by Johansen, which, by relying on a spin-glass model of human behavior, seems rather trickier to falsify.
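
To make that last point concrete, here is a minimal sketch of the kind of priority-queue simulation the model invites; the queue length, the selection probability and the run length are my own illustrative choices, not values from Barabasi's paper.

```python
import random

# Each task carries a uniformly random priority; with probability p the
# highest-priority task is executed next, otherwise a uniformly random one is.
# Executed tasks are replaced by fresh tasks, and we record how long each
# executed task sat in the queue.

def simulate_waiting_times(p=0.99999, L=2, steps=100_000, seed=1):
    rng = random.Random(seed)
    queue = [(rng.random(), 0) for _ in range(L)]   # (priority, time added)
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            i = max(range(L), key=lambda j: queue[j][0])   # highest priority
        else:
            i = rng.randrange(L)                           # random task
        _, added = queue[i]
        waits.append(t - added)
        queue[i] = (rng.random(), t)                       # a new task arrives
    return waits

waits = simulate_waiting_times()
print("fraction executed immediately:", sum(w <= 1 for w in waits) / len(waits))
print("five longest waits:", sorted(waits)[-5:])
```

Pushing p toward 1 yields the heavy-tailed waiting times of the proposed mechanism, while p near 0 gives the short, nearly memoryless waits of a purely random executor; both regimes are, in principle, checkable against real inboxes.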

But let's get back to the topic of the data analysis and the argument between Stouffer et al. and Barabasi et al. (now also Johansen) over whether the data are better described by a log-normal or a power-law distribution. The importance of this point is that if the log-normal is the better fit, then the mathematical model Barabasi proposes cannot be the originating mechanism. From my experience with heavy-tailed distributions, it can be difficult to statistically (let alone visually) distinguish between a log-normal and various kinds of power laws. In human systems, there is almost never enough data (read: orders of magnitude) to distinguish these without standard (but sophisticated) statistical tools. This is because, for any finite sample drawn from such a distribution, there will be deviations that blur the functional form just enough for it to look rather like the other. For instance, if you look closely at the data of Barabasi or Johansen, there are deviations from the power-law distribution in the far upper tail. Stouffer et al. cite these as examples of the poor fit of the power law and as evidence supporting the log-normal. Unfortunately, they could simply be finite-sample effects (not to be confused with finite-size effects), and the only way to determine whether they are is to resample the hypothesized distribution and measure the sample deviation against the observed one.

The approach that I tend to favor for resolving this kind of question combines a goodness-of-fit test with a statistical power test to distinguish between alternative models. It's a bit more labor-intensive than the Bayesian model selection employed by Stouffer et al., but it offers, among other advantages I'll describe momentarily, the ability to say that, given the data, neither model is good, or that both models are good.

Using Monte Carlo simulation and something like the Kolmogorov-Smirnov goodness-of-fit test, you can quantitatively gauge how likely it is that a random sample drawn from your hypothesized function F (which can be derived using maximum-likelihood parameter estimation or something like a least-squares fit; it doesn't matter) will deviate from F at least as much as the observed data do. By then comparing the deviations with those of an alternative function G (e.g., a power law versus a log-normal), you get a measure of the power of F over G as an originating model of the data. For heavy-tailed distributions, particularly those with a sample mean that converges slowly or not at all (as is the case for something like P(t) ~ 1/t), sampling fluctuations can cause pretty significant problems for model selection, and I suspect that the Bayesian model selection approach is sensitive to them. On the other hand, by incorporating sampling variation into the model selection process itself, one can get an idea of whether it is even possible to select one model over another. If someone were to use this approach to analyze the data of human response times, I suspect that the pure power law would be a poor fit (the data look too curved for that), but that the power law suggested in Johansen's paper would be largely statistically indistinguishable from a log-normal. With this knowledge in hand, one is then free to posit mechanisms that generate either distribution and then proceed to validate the theory by testing its predictions (e.g., its assumptions).
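
As a concrete illustration of that procedure, here is a minimal sketch on synthetic data; the log-normal hypothesis, the parameter values and the sample size are all my own stand-ins, not the actual email response-time data.

```python
import numpy as np
from scipy import stats

def ks_statistic(data, cdf):
    """Maximum distance between the empirical CDF of `data` and a model CDF."""
    x = np.sort(data)
    ecdf = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(ecdf - cdf(x)))

rng = np.random.default_rng(0)
observed = rng.lognormal(mean=1.0, sigma=2.0, size=2000)   # stand-in "data"

# Step 1: fit the hypothesized distribution F by maximum likelihood and record
# the KS deviation of the data from F.
mu, sigma = np.log(observed).mean(), np.log(observed).std()
F = stats.lognorm(s=sigma, scale=np.exp(mu))
d_obs = ks_statistic(observed, F.cdf)

# Step 2: repeatedly draw synthetic samples from F, refit, and record their KS
# deviations; the p-value is the fraction at least as large as the observed one.
count, trials = 0, 500
for _ in range(trials):
    synth = F.rvs(size=len(observed), random_state=rng)
    m, s = np.log(synth).mean(), np.log(synth).std()
    count += ks_statistic(synth, stats.lognorm(s=s, scale=np.exp(m)).cdf) >= d_obs

print("goodness-of-fit p-value:", count / trials)
```

Repeating the same loop with the alternative distribution G in place of F, and comparing how each model fares, gives the power comparison described above.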

So, in the end, we may not have gained much in arguing about which heavy-tailed distribution the data likely came from, and instead should consider whether or not an equally plausible mechanism for generating the response-time data could be derived from the standard mechanisms for producing log-normal distributions. If we had such an alternative mechanism, then we could devise some experiments to distinguish between them and perhaps actually settle this question like scientists.

As a closing thought, my interest in this debate is not particularly in its politics. Rather, I think this story suggests some excellent questions about the practice of modeling, the questions a good modeler should ponder on the road to truth, and some of the potholes strewn about the field of complex systems. It also, unfortunately, provides some anecdotal evidence of systemic problems with attribution, the scientific publishing industry and the current state of peer review at high-profile, fast-turnaround journals.

References for those interested in reading the source material.

A. Johansen, "Probing human response times." Physica A 338 (2004) 286-291.

A.-L. Barabasi, "The origin of bursts and heavy tails in human dynamics." Nature 435 (2005) 207-211.

D. B. Stouffer, R. D. Malmgren and L. A. N. Amaral "Comment on 'The origin of bursts and heavy tails in human dynamics'." e-print (2005).

J.-P. Eckmann, E. Moses and D. Sergi, "Entropy of dialogues creates coherent structures in e-mail traffic." PNAS USA 101 (2004) 14333-14337.

A.-L. Barabasi, K.-I. Goh and A. Vazquez, "Reply to Comment on 'The origin of bursts and heavy tails in human dynamics'." e-print (2005).

posted December 19, 2005 04:32 PM in Scientifically Speaking | permalink | Comments (0)

November 06, 2005

Finding your audience

Some time ago, a discussion erupted on Crooked Timber about the etiquette of interdisciplinary research. This conversation was originally sparked by Eszter Hargittai, a sociologist with a distinct interest in social network analysis, who complained about some physicists working on social networks and failing to appropriately cite previous work in the area. I won't rehash the details, since you can read them for yourself. However, the point of the discussion that is salient for this post is the question of where and how one should publish and promote interdisciplinary work.

Over the better part of this past year, I have had my own journey with doing interdisciplinary research in political science. Long-time readers will know that I'm referring to my work with Maxwell Young on the statistics of terrorism (here, here and here). In our paper (old version via arxiv), we use tools from extremal statistics and physics to think carefully about the nature and evolution of terrorism, and, I think, uncover some interesting properties and trends at the global level. Throughout the process of getting our results published in an appropriate technical venue, I have espoused the belief that it should go either to an interdisciplinary journal or to one that political scientists will read. That is, I felt that it should go to a journal with an audience that would both appreciate the results and understand their implications.

This idea of appropriateness and audience, I think, is a central problem for interdisciplinary researchers. In an ideal world, every piece of novel research would be communicated to exactly that group of people who would get the most out of learning about the new result and who would be able to utilize the advance to further deepen our knowledge of the natural world. Academic journals and conferences are a poor approximation of this ideal, but currently they're the best institutional mechanism we have. To correct for the non-idealness of these institutions, academics have always distributed preprints of their work to their colleagues (who often pass them to their own friends, etc.). Blogs, e-print archives and the world wide web in general constitute interesting new developments in this practice and show how the fundamental need to communicate ideas will co-opt whatever technology is available. Returning to the point, however, what is interesting about interdisciplinary research is that by definition it has multiple target audiences to which it could, or should, be communicated. Choosing that audience becomes a question of deciding which aspects of the work you think are most important to science in general, i.e., which audience has the most potential to further develop your ideas. For physicists working on networks, some of their work can and should be sent to sociology journals, as its main contribution is to our understanding of social structure and its implications, and sociologists are best able to use these discoveries to explain other complex social phenomena and to incorporate them into their existing theoretical frameworks.

In our work on the statistics of terrorism, Maxwell and I have chosen a compromise strategy to address this question: while we selected general-science or interdisciplinary journals to which to send our first manuscript on the topic, we have simultaneously been making contacts and promoting our ideas in political science, so as to try to understand how to further develop these ideas within their framework (and perhaps how to encourage the establishment to engage with these ideas directly). This process has been educational in a number of ways, and recently has begun to bear fruit. For instance, at the end of October, Maxwell and I attended the International Security Annual Conference (in Denver this year), where we presented our work in the second of two panels on terrorism. Although it may have been because we announced ourselves as computer scientists, stood up to speak, used slides and showed lots of colorful figures, the audience (mostly political scientists, with apparently some government folk present as well) was extremely receptive to our presentation (despite the expected questions about statistics, the use of randomness and various other technical points that were unfamiliar to them). This led to several interesting contacts and conversations after the session, and an invitation for the both of us to attend a workshop in Washington DC on predictive analysis for terrorism that will be attended by people from the entire alphabet soup of spook agencies. Also, thanks to the mention of our work in The Economist over the summer, we have similarly been contacted by a handful of political scientists who are doing rigorous quantitative work in a vein similar to ours. We're cautiously optimistic that this may all lead to some fruitful collaborations, and ultimately to communicating our ideas to the people to whom they will matter the most.

Despite the current popularity of the idea of interdisciplinary research (not to be confused with excitement about the work itself, which would take the form of funding), there is, as with many aspects of an academic career, little education about its pitfalls for those interested in pursuing a career in it. The question of etiquette in academic research deserves much more attention in graduate school than it currently receives, as does its subtopic of interdisciplinary etiquette. Essentially, it is this last idea that lies at the heart of Eszter Hargittai's original complaint about physicists working on social networks: because science is a fundamentally social exercise, there are social consequences for not observing the accepted etiquette, and those consequences can be a little unpredictable when the etiquette is still being hammered out, as in the case of interdisciplinary research. For our work on terrorism, our compromise strategy has worked so far, but I fully expect that, as we continue to work in the area, we will need to more fully adopt the modes and conventions of our target audience in order to communicate effectively with them.

posted November 6, 2005 01:15 PM in Simply Academic | permalink | Comments (1)

October 27, 2005

Links, links, links.

The title is perhaps a modern variation on Hamlet's famous "words, words, words" quip to Lord Polonius. Some things I've read recently, with mild amounts of editorializing:

Tim Burke (History professor at Swarthmore College) recently discussed (again) his thoughts on the future of academia - that is, what it would take for college costs to actually decrease. I assume this arises at least partially as a result of the recent New York Times article on the ever-increasing tuition rates for colleges in this country. He argues that modern college costs rise at least partially as a result of pressure from lawsuits and parents to act in loco parentis toward the kids attending. Given the degree of hand-holding I experienced at Haverford, perhaps the closest thing to Swarthmore without actually being Swat, this makes a lot of sense. I suspect, however, that tuition prices will continue to increase apace for the time being, if only because enrollment rates continue to remain high.

Speaking of high enrollment rates, Burke makes the interesting point

... the more highly selective a college or university is in its admission policies, the more useful it is for an employer as a device for identifying potentially valuable employees, even if the employer doesn’t know or care what happened to the potential employee while he or she was a student.

This assertion rests on an assumption whose pervasiveness I can only wonder about. Basically, Burke is claiming that selectivity is an objective measure of something. Indeed, it is. It's an objective measure of the popularity of the school, filtered through the finite size of the freshman class that the school can reasonably admit, and nothing else. A huge institution could catapult itself higher in the selectivity rankings simply by cutting the number of students it admits.

Barabasi's recent promotion of his ideas about the relationship between "bursty behavior" among humans and our managing a queue of tasks to accomplish continues to generate press. New Scientist and Physics Web both picked up the piece of work on the communication patterns of Darwin, Einstein and modern email users. To briefly summarize from Barabasi's own paper:

Here we show that the bursty nature of human behavior is a consequence of a decision based queueing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experience very long waiting times.

A.-L. Barabasi (2005) "The origin of bursts and heavy tails in human dynamics." Nature 435, 207.

That is, the response times are described by a power law with exponent between 1.0 and 1.5. Once again, power laws are everywhere. (NB: In the interest of full disclosure, power laws are one focus of my research, although I've gone on record saying that there's something of an irrational exuberance for them these days.) To those of you experiencing power-law fatigue, it may not come as any surprise that last night in the daily arXiv mailing of new work, a very critical (I am even tempted to say scathing) comment on Barabasi's work appeared. Again, to briefly summarize from the comment:

... we quantitatively demonstrate that the reported power-law distributions are solely an artifact of the analysis of the empirical data and that the proposed model is not representative of e-mail communication patterns.

D. B. Stouffer, R. D. Malmgren and L. A. N. Amaral (2005) "Comment on The origin of bursts and heavy tails in human dynamics." e-print.

There are several interesting threads embedded in this discussion, the main one being about the twin supports of good empirical research: 1) rigorous quantitative tools for data analysis, and 2) a firm basis in empirical and statistical methods to support whatever conclusions you draw with the aforementioned tools. In this case, Stouffer, Malmgren and Amaral utilize Bayesian model selection to eliminate the power law as a model, and instead show that the distributions are better described by a log-normal distribution. This idea of the importance of good tools and good statistics is something I've written about before. Cosma Shalizi is a continual booster of these issues, particularly among physicists working in extremal statistics and social science.
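
To give a flavor of such a comparison (this is not Stouffer et al.'s actual Bayesian calculation, just a crude maximum-likelihood comparison on synthetic data, with every parameter an illustrative assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.lognormal(mean=2.0, sigma=1.5, size=5000)
xmin = 1.0
x = x[x >= xmin]            # the power law below needs a lower cutoff

# Log-normal fit by maximum likelihood, and its log-likelihood on the data.
mu, sigma = np.log(x).mean(), np.log(x).std()
ll_lognormal = stats.lognorm(s=sigma, scale=np.exp(mu)).logpdf(x).sum()

# Continuous power law p(x) = ((alpha-1)/xmin) * (x/xmin)^(-alpha), also by MLE.
alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
ll_powerlaw = np.sum(np.log(alpha - 1) - np.log(xmin) - alpha * np.log(x / xmin))

print("log-likelihood, log-normal:", round(ll_lognormal, 1))
print("log-likelihood, power law: ", round(ll_powerlaw, 1))
```

The model with the higher log-likelihood is favored by the data; a proper Bayesian treatment integrates over the parameters rather than plugging in point estimates, but the spirit is the same.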

And finally, Carl Zimmer, always excellent, on the evolution of language.

[Update: After Cosma linked to my post, I realized it needed a little bit of cleaning up.]

posted October 27, 2005 01:23 AM in Thinking Aloud | permalink | Comments (0)

October 17, 2005

Some assembly required.

While browsing the usual selection of online venues for news about the world, I came across a reference to a recent statistical study of American and European knowledge of science and technology, conducted in part by the National Science Foundation. The results, as you, my dear reader, may guess, were depressing. Here are a few choice excerpts.

Conclusions about technology and science:

Technology has become so user friendly it is largely "invisible." Americans use technology with a minimal comprehension of how or why it works or the implications of its use or even where it comes from. American adults and children have a poor understanding of the essential characteristics of technology, how it influences society, and how people can and do affect its development.

and

NSF surveys have asked respondents to explain in their own words what it means to study something scientifically. Based on their answers, it is possible to conclude that most Americans (two-thirds in 2001) do not have a firm grasp of what is meant by the scientific process. This lack of understanding may explain why a substantial portion of the population believes in various forms of pseudoscience.

Regarding evolution (a topical topic; see also several of my entries):

Response to one of the questions, "human beings, as we know them today, developed from earlier species of animals," may reflect religious beliefs rather than actual knowledge about science. In the United States, 53 percent of respondents answered "true" to that statement in 2001, the highest level ever recorded by the NSF survey. (Before 2001, no more than 45 percent of respondents answered "true.") The 2001 result represented a major change from past surveys and brought the United States more in line with other industrialized countries about the question of evolution.

Yet, there is hope

... the number of people who know that antibiotics do not kill viruses has been increasing. In 2001, for the first time, a majority (51 percent) of U.S. respondents answered this question correctly, up from 40 percent in 1995. In Europe, 40 percent of respondents answered the question correctly in 2001, compared with only 27 percent in 1992.

Also, the survey found that belief in devil possession declined between 1990 and 2001. On the other hand, belief in other paranormal phenomena increased, and

... at least a quarter of the U.S. population believes in astrology, i.e., that the position of the stars and planets can affect people's lives. Although the majority (56 percent) of those queried in the 2001 NSF survey said that astrology is "not at all scientific," 9 percent said it is "very scientific" and 31 percent thought it is "sort of scientific".

In the United States, skepticism about astrology is strongly related to level of education [snip]. In Europe, however, respondents with college degrees were just as likely as others to claim that astrology is scientific.

Aside from being thoroughly depressing for a booster of science and rationalism such as myself, this suggests that not only do Westerners have little conception of what it means to be "scientific" or what "technology" actually is, but that Western life does not require people to have any mastery of scientific or technological principles. That is, one can get along just fine in life while being completely ignorant of why things actually happen or how to rigorously test hypotheses. Of course, this is a bit of a circular problem, since if no one understands how things work, people will design user-friendly things that don't need to be understood in order to function. That is, those who are not ignorant of how the world works provide no incentive to those who are to change their ignorant ways. Then again, aren't we all ignorant of the complicated details of many of the wonders that surround us? Perhaps the crucial difference lies not in ignorance itself, but in being unwilling to seek out the truth (especially when it matters).

The conclusions of the surveys do nothing except bolster my belief that rational thinking and careful curiosity are not the natural mode of human thought, and that the Enlightenment was a weird and unnatural turn of events. Perhaps one of the most frightening bits of the survey was the following statement

there is no evidence to suggest that legislators or their staff are any more technologically literate than the general public.

posted October 17, 2005 08:52 PM in Thinking Aloud | permalink | Comments (0)

September 24, 2005

Measuring technological progress; a primer

I used to think it was just a silly idea that no one really took seriously, but here I am blogging about it. After reading Bill Tozier's rip on Ray Kurzweil's concept of The Singularity, I'm led to record some of my own thoughts. (Disclaimer: I'm not a regular reader of Bill's, although perhaps I should be, having found his blog via Cosma Shalizi.) I would briefly summarize this Singularity business, but best to let its inventor do the deed:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won't experience 100 years of progress in the 21st century -- it will be more like 20,000 years of progress (at today's rate). ... Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history.

Bill contends that it's worthless for technological change to happen at an exponential rate if no one is actually using those ideas. But that misses the point of the Singularity a little - Kurzweil is actually claiming that the technological state of humanity is advancing at an ever-increasing rate, and he frequently employs figures showing exponential trends in certain metrics like CPU speed, number of genes sequenced, etc. Were it merely about the production of ideas, you could argue that it is exponential simply by claiming that it's proportional to the current human population (i.e., if each person has one novel idea to contribute to the world, and the population itself has been growing exponentially), and be done with it. But the idea of the Singularity implies that the technological power of humanity grows exponentially, so it naturally assumes that ideas will be turned into applications.

Unlike Kurzweil, I'm a bad futurist. That is, I am loath to share my vision of the future because I'm pretty sure I'll be wrong; the future will be more interesting and less predictable than I think anyone gives it credit for. So, let me propose that there is at least one much better metric by which to chart the "growth" of technology's impact on human civilization. To be quantitative, let's measure the amount of energy that the average human releases (e.g., through internal combustion engines, jet engines, electricity, etc.) in a given year. Of course, this ignores, like all of economic theory, the environmental cost of such expenditure in the form of drawing down the bank of natural resources available to us on Earth, and it also ignores the fact that energy efficiency is another form of technological advancement. However, my measure, at least, is nicely well-defined and has none of the non-falsifiable overtones of Kurzweil's idea; plus, if it is increasing exponentially, then that has lots of nice implications about technology and perhaps even the stability of civilization.
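
As a toy illustration of how one might track such a metric (the numbers below are made-up placeholders, not real consumption data):

```python
import math

# Per-capita energy release in three hypothetical years; fit an exponential
# trend E(t) = E0 * exp(r * t) through the first and last points and report
# the implied growth rate and doubling time.
years = [1965, 1985, 2005]
energy_per_capita = [40.0, 55.0, 75.0]   # hypothetical gigajoules per person per year

r = math.log(energy_per_capita[-1] / energy_per_capita[0]) / (years[-1] - years[0])
doubling_time = math.log(2) / r

print(f"implied growth rate: {r:.3%} per year")
print(f"implied doubling time: {doubling_time:.0f} years")
```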

Generally, though, you can't fault Kurzweil for his optimism; he truly believes that the future will be a good place to raise our children, and that the Singularity will ultimately bring about wonderful changes to our lives such as immortality (although, it's not settled if immortality will be a Good Thing(tm), for instance), an end to stupidity and computers that do what you want rather than what you tell them to. In his own words:

The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.

Since Kurzweil claims most of us today will be around to witness the Singularity, I suppose I'll just wait to see who's right, him or me (with my own secret and probably way off-base predictions).

posted September 24, 2005 08:05 PM in Thinking Aloud | permalink | Comments (1)

May 14, 2005

The utility of irrationality

I have long been a proponent of rationality, yet this important mode of thinking is not the most natural for the human brain. It is our irrationality that distinguishes us from purely computational beings. Were we perfectly rational thinkers, there would be no impulse buys, no procrastination, no pleasant diversions and no megalomaniacal dictators. Indeed, being perfectly rational is so far from a good approximation of how humans think that it's laughable economists ever considered it a reasonable model for human economic behavior (neoclassical microeconomics assumed this, although lately ideas are becoming more reasonable).

Perfect rationality, or the assumption that someone will always make the most rational choice given the available information, is at least part of what makes it inherently difficult for computers to solve certain kinds of tasks in the complex world we inhabit (e.g., driving cars). That is, in order to make an immediate decision when you have wholly insufficient knowledge about past, present and future, you need something else to drive you toward a particular solution. For humans, these driving forces are emotions, bodily needs and a fundamental failure to be completely rational, and they almost always tip the balance of indecision toward some action. Yet irrationality serves a greater purpose than simply helping us to quickly make up our minds. It is also what gives us the visceral pleasures of art, music and relaxing afternoons in the park. The particularly pathological ways in which we are irrational are what make us human, rather than something else. Perhaps, if we ever encounter an extraterrestrial culture or learn to communicate with dolphins, we will, as a species, come to appreciate the origins of our uniqueness by comparing our irrationalities with theirs.

Being irrational seems to be deeply rooted in the way we operate in the real world. I recall a particularly interesting case study from my freshman psychology course at Swarthmore College: a successful financial investor had a brain lesion on the structure of the brain that is associated with emotion. The removal of this structure resulted in a perfectly normal man who happened to also be horrible at investing. Why? Apparently, because the brain normally stores a great deal of information about past decisions in the form of emotional associations, previous bad investments recalled a subconscious negative emotional response when jogged by similar characteristics of a present situation (and vice versa). Emotion, then, is a fundamental tool for representing the past, i.e., it is the basis of memory, and, as such, is both irrational and mutable. In fact, I could spend the rest of the entry musing on the utility of irrationality and its functional role in the brain (e.g., creativity in young songbirds). However, what is more interesting to me at this moment is the observation that we are first and foremost irrational beings, and only secondarily rational ones. Indeed, being rational is so difficult that it requires a particularly painful kind of conditioning in order to draw it out of the mental darkness that normally obscures it. That is, it requires education that emphasizes the principles of rational inquiry, skepticism and empirical validation. Sadly, I find none of these to be taught with much reliability in undergraduate Computer Science education (a topic about which I will likely blog in the future).

This month's Scientific American "Skeptic" column treats just this topic: the difficulty of being rational. In his usual concise yet edifying style, Shermer describes the tendency of humans to look for patterns in the tidal waves of information constantly washing over us, and that although it is completely natural for the human brain, evolved for this very purpose, to discover correlations in that information, it takes mental rigor to distinguish the true correlations from the false:

We evolved as a social primate species whose language ability facilitated the exchange of such association anecdotes. The problem is that although true pattern recognition helps us survive, false pattern recognition does not necessarily get us killed, and so the overall phenomenon has endured the winnowing process of natural selection. [...] Anecdotal thinking comes naturally; science requires training.

Thinking rationally requires practice, disciplined caution and a willingness to admit to being wrong. These are the things that do not come naturally to us. The human brain is so powerfully engineered to discover correlations and believe in their truth that, for instance, even after years of rigorous training in physics, undergraduates routinely still trust their eyes and youthful assumptions about the conservation of momentum over what their expensive college education has taught them (one of my fellow Haverford graduates, Howard Glasser '00, studied exactly this topic for his honors thesis). That is, we are more likely to trust our intuition than to trust textbooks. The dangers of this kind of behavior taken to the extreme are, unfortunately, well documented.

Yet, despite this hard line against irrationality and our predilection toward finding false correlations in the world, this behavior has a utilitarian purpose beyond those described above, one that is completely determined by the particular characteristics and idiosyncrasies of being human and has implications for the creative process that is Science. For instance, tarot cards, astrology and other metaphysical phenomena (about which I've blogged before) may completely fail the test of scientific validation for predicting the future, yet they serve the utilitarian purpose of stimulating our minds to be introspective. These devices are designed to engage the brain's pattern-recognition centers, encouraging you to think about the prediction's meaning in your life rather than thinking about it objectively. Indeed, this is their only value: with so much information, both about self and others, to consider at each moment and in each decision that must be made, the utility of any such device lies in focusing attention on interesting aspects which have meaning to the considerer.

Naturally, one might use this argument to justify a wide variety of completely irrational behavior, and indeed, anything that stimulates the observer in ways that go beyond their normal modes of thinking has some utility. However, the danger in this line of argument lies in confusing the tool with the mechanism; tools are merely descriptive, while mechanisms have explanatory power. This is the fundamental difference, for instance, between physics and statistics. The former is the pursuit of natural mechanisms that explain the existence of the real structure and regularity observed by the latter; both are essential elements of scientific inquiry. As such, Irrationality, the jester which produces an incessant and uncountable number of interesting correlations, provides the material through which, wielding the scepter of empirical validation according to the writ of scientific inquiry, Rationality sorts in an effort to find Truth. Without the one, the other is unfocused and mired in detail, while without the other, the one is frivolous and false.

posted May 14, 2005 06:07 AM in Thinking Aloud | permalink | Comments (0)

March 13, 2005

The virtues of playing dice

In physics, everyone (well, almost everyone) assumes that true randomness does exist, because so much of modern physics is built on this utilitarian assumption. Despite some people being very determined to do so, physicists have not determined that determinism isn't the rule of the universe; all they have is a bunch of empirical evidence against it (which, for most physicists, is enough). So-called "hidden variable" models have been a popular way (e.g., here, here and here) to probe this question in a principled fashion. They're based on the premise that if, in quantum mechanics for instance, there were some hidden variable that we've just been too stupid to figure out yet, then there must be regularities (correlations) in physical reality that betray its existence. Yet so far, no hidden-variable model has prevailed against quantum mechanics and apparent true randomness. (For an excellent discussion of randomness and physics, see Stephen Hawking's lectures on the subject, especially if you wonder about things like black holes.)

In computer science, everyone knows that there's no way to deterministically create a truly random number. Amusingly, computer scientists often assume that physicists have settled the existence of randomness; yet, why hasn't anyone stuck a black-body radiation device inside each computer (which would be ever so useful)? Perhaps getting true randomness is trickier than they have come to believe. In the meantime, those of us who want randomness in computer programs have to make do with the aptly named pseudo-random number generators (some of which are extremely sophisticated) that create strings of numbers that only appear to be random. (It can be very dangerous to forget that pseudo-random number generators are completely deterministic, as I've blogged about before.) It frequently surprises me that in computer science, most people appear to believe that randomness is Bad in computer programs, but maybe that's just the systems and languages people, who want machines to be predictable. This is a silly idea, really, since with randomness you can beat adversaries that even extremely sophisticated determinism cannot. Also, it's often a lot easier to be random than it is to be deterministic in a complicated fashion. These things seem rather important for topics like, oh, computer security. Perhaps the coolest use for pseudo-random number generators is in the synchronization of wireless devices via frequency hopping.
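
A few lines of code make both the usefulness and the danger plain (the seed values and channel count below are arbitrary illustrative choices):

```python
import random

# A seeded pseudo-random number generator is completely deterministic: the same
# seed always reproduces the same "random" sequence. That is exactly what lets
# two radios sharing a seed agree on a hopping schedule, and exactly what makes
# a leaked or guessable seed a security problem.

def hop_sequence(shared_seed, n_hops=8, n_channels=79):
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

print(hop_sequence(2005))   # device A
print(hop_sequence(2005))   # device B: identical schedule
print(hop_sequence(2006))   # a different seed, a different schedule
```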

There are a couple of interesting points about randomness and determinism that derive from Shannon's information theory. For instance, my esteemed advisor showed that when a technology uses electromagnetic radiation (like radio waves) to transmit information, the signal has the same power spectrum as black-body radiation. This little result apparently ended up in a science-fiction book by Ian Stewart, in which the cosmic microwave background radiation was actually a message from someone-or-other - was it God or an ancient alien race? (Stewart has also written a book on randomness, chaos and mathematics, which I'll have to pick up sometime.)

Here's an interesting gedanken experiment with regard to competition and randomness. Consider the case where you are competing against some adversary (e.g., your next-door neighbor, or, if you like, gay-married terrorists) in a game of dramatic consequence. Let's assume that you both will pursue strategies that are not completely random (that is, you can occasionally rely upon astrology or dice to make a decision, but not all the time). If you both are using sufficiently sophisticated strategies (and perhaps have infinite computational power to analyze your opponent's past behavior and make predictions about future behavior), then your opponent's actions will appear as if drawn from a completely random strategy, as will your own. That is, if you can detect some correlation or pattern in your opponent's strategy, then naturally you can use that to your advantage. But if your opponent knows that you will do this, which is a logical assumption, then your opponent will eliminate that structure. (This point raises an interesting question for stock traders - because we have limited computational power, are we bound to create exploitable structure in the way we buy and sell stocks?)
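
A toy version of this exploitability argument, with the game, the biased opponent and the naive frequency-counting exploiter all being my own illustrative choices:

```python
import random

# Matching pennies: the exploiter wins a round by matching the opponent's choice.
# Because the opponent's play carries a detectable 60/40 bias, simply predicting
# its most frequent choice so far wins about 60% of rounds instead of 50%; only
# a perfectly random opponent leaves nothing to exploit.

rng = random.Random(0)

def biased_opponent():
    return "heads" if rng.random() < 0.6 else "tails"

counts = {"heads": 0, "tails": 0}
wins, rounds = 0, 10_000
for _ in range(rounds):
    guess = "heads" if counts["heads"] >= counts["tails"] else "tails"
    play = biased_opponent()
    counts[play] += 1
    wins += (guess == play)

print(f"exploiter win rate: {wins / rounds:.2%}")
```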

The symmetry between really complicated determinism and apparent randomness is a much more universally useful property than I think it's given credit for, particularly in the world of modeling complex systems (like the stock market, navigation on a network, and avalanches). When faced with such a system that you want to model, the right strategy to pursue is probably something like this: 1) select the simplest mechanisms required to produce the desired behavior, and then 2) use randomness for everything else. You could say that your model then has "zero intelligence", but in light of our gedanken experiment, perhaps that's a misnomer. Ultimately, if the model works, you have successfully demonstrated that a lot of the fine structure of the world that may appear to matter to some people doesn't actually matter at all (at least for the property you modeled), and that the assumed mechanisms are, at most, the necessary set for your modeled property. The success of such random models is very impressive and raises the question: does it make any difference, in the end, whether we believe that human society (or whatever system was modeled) is or is not largely random? This is exactly the kind of idea that Jared Diamond explores in his recent book on the failure of past great civilizations - maybe life really is just random, and we're too stubborn to admit it.
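
In that spirit, here is an entirely made-up "zero-intelligence" sketch (my own toy, not any published model): the only assumed mechanism is that each trade nudges the price up or down by one tick, and randomness does everything else, yet the resulting series still wanders in ways people are tempted to explain with elaborate stories.

```python
import random

rng = random.Random(7)
price, series = 100.0, []
for _ in range(1000):
    price += rng.choice([-1, +1]) * 0.25   # the only mechanism: a random tick
    series.append(price)

print("start:", series[0], "end:", series[-1],
      "high:", max(series), "low:", min(series))
```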

posted March 13, 2005 12:26 AM in Thinking Aloud | permalink | Comments (0)

March 08, 2005

Of Men and Machines

The human concept of self is incredibly flexible. Yet we are often quite attached to the notion that our self is somehow special; e.g., many of us dislike believing that our mind is actually just an emergent property of the mechanics of our neurocircuitry. Because our mind feels so separate from our body, mustn't it be so? If the body is just the thing that the mind inhabits, then augmenting the body with machinery should do nothing to our sense of self or our mind, right? But can the mind withstand the brain itself being augmented? Sure, we've not yet blurred the distinction between man and machine to the extent that Masamune Shirow does in the incredibly elegant universe of Ghost in the Shell (2 manga series, 2 movies, and 1 TV series, so far), but only those who haven't been paying attention to history will deny that we are moving in that direction. Externalized memory, thought-controlled computers, and artificial eyes that see a much wider range of the electromagnetic spectrum are just a few of the things such a future may hold. Or, if you prefer more benevolent applications, fully functional replacement limbs for amputees, replacement wrists for those of us with carpal tunnel syndrome, etc. These are the stuff of science fiction today.

But consider the recent progress in interfacing brains and machines, like a monkey learning to control a robotic arm with neural impulses. Again, if you prefer less extreme examples, consider the everyday task of driving a car, in which you exhibit your extremely flexible sense of spatial self-extent. How is it that you can "sense" how close your car is to the curb when you park? Or, navigate a parking lot with as much ease as you would navigate a crowded room? Consider any video game, in which players inevitably gain an amazing degree of control over their virtual avatar by mentally mapping hand-movements into visual feedback. Or, consider that whenever you pick up an object in your hand, like a pencil, your brain extends its sense of 'self' to encompass the extent of that object. For all practical purposes, that object becomes a part of you while it's in your hand, or at least, your brain treats it as such. Basically, the human brain easily adapts to whatever regularities it perceives in its streams of input, so there seems to be no reason, in principle, why it couldn't learn to use a mechanical body part in lieu of an organic one just as easily as you learn to wield that pencil or a tennis racket.

Although cybernetic limbs may seem an outlandish possibility, for amputees they represent the freedom to participate more fully in society. One of the most interesting and amazing ventures in this domain is the biomechanics lab at the MIT Media Lab, which is itself run by a man with artificial legs. When I was touring the Media Lab back in January, this was the group that I thought was the most interesting, and one of the few that seemed to be doing real science with the potential to dramatically alter the world. But what happens when one can make a prosthetic arm that not only does everything a real arm does, but does it better than the original, and perhaps does more? Won't people then choose to become amputees in order to gain those advantages? For thousands of years, humans have preferred the advantages of tools over the basic abilities granted us by evolution (well, unless you're Amish), so isn't cybernetic enhancement the logical extension of this tendency? The usefulness of machines lies in their extending our own small set of abilities to a much larger set of possibilities. Cars let us go faster and farther than our legs allow; planes let us fly without needing wings; and computers let us (among other things) stay organized at the global level. Machines give us the ability to surpass our humble roots and achieve the things that our imaginations dream up.

This premise of choosing cybernetic body-enhancements over the natural body is the basis for much of the plot of Ghost in the Shell (and much of the cyberpunk literature). But ultimately, I don't think that it will be science that has the most trouble with putting flesh and steel in the proverbial blender, but rather humanity's deeply rooted fear of the unknown. If biomechanical enhancement ever becomes popular, i.e., beyond a medical need, I'm sure there will be hate crimes perpetrated against its participants for betraying humanity or becoming inhuman monsters. The religious right, along with other fearful and conservative groups, will condemn the practice as ungodly and try to make non-medical augmentation illegal. But I doubt the public discourse on augmentation will really address the fundamental question of humanity that Ghost in the Shell explores (as does another of my favorite manga series, Battle Angel Alita). That is, how many body parts are you willing to replace with mechanical versions before you begin to feel less "human"? Will the choice of getting a mechanical part that is visually dissimilar to its organic version actually be a choice to reduce your apparent humanity? (Won't people treat you differently if you don't look human?) Which is more human, a completely human brain encased in a completely mechanical body, or a completely mechanical brain encased in a completely human body? If the mechanical brain is functionally equivalent to the human brain, can it be considered legitimately different? What if we put that completely mechanical brain in a completely mechanical body? What happens when an "artificial" human learns to behave like a real human? (As, for instance, in the exquisite vision of Blade Runner.)

In Ghost in the Shell, the protagonist Major Motoko Kusanagi, equipped with a state-of-the-art cyborg body that contains just her brain/spinal column, is in the midst of wrestling with these questions when a completely digital life form known as The Puppeteer asks her to merge with him to become a new form of life in the sea of information on the Internets. Although the end of the movie may be far-fetched, it's somewhat reassuring that Ghost in the Shell is so popular. It suggests the existence of a large population of people who are thinking about these questions as we move ever closer to a world in which, largely by force of will alone, we are able to sculpt our exteriors to suit our whimsical and shallow interiors.

A few questions to ponder in closing:

- When cyborg bodies of custom design are available, won't we choose to make them all beautiful?

- What are the security implications of everyone having wireless connectivity from inside their skull to the Internet?

- When computers and brains can exchange data seamlessly, what kind of crime will brain-hacking be?

- Will the ability to sculpt our exteriors into machines allow us to circumvent the faster-than-light problem in space colonization?

- If you could have a mechanical arm that did everything your current arm does, but did it better and faster, was stronger, never tired, and so on, would you really give up your own flesh for that enhancement?

(Pictures taken from Ghost in the Shell 2: Innocence; first is from the opening sequence in which a solo-copter is circling Tokyo; the second is Batou and Togusa conversing about their recent harrowing mental battle with a super-hacker.)

Update: Cosma Shalizi points out that Andy Clark explores this topic in great depth in his "Natural-Born Cyborgs" - I will definitely be picking up a copy of this!

posted March 8, 2005 01:50 AM in Thinking Aloud | permalink | Comments (1)

February 20, 2005

On the currency of ideas

Scarce resources. It's one of those things that you know is really important for all sorts of other stuff, but most of the time feels like a distant problem for you, or maybe your kids, to deal with. Sure, everyone agrees that material stuff can be scarce. I mean, there's never enough time in the day, or parking spaces, or money. Those are scarcities that most people worry about, right? But who ever worries about a shortage of ideas?

As is often the case when I'm driving somewhere, I found myself musing tonight about the recent brouhaha over software patents in Europe, in which certain industries are trying very hard to make ideas as ownable as shoes. As preposterous as this idea sounds (after all, if I lend you my pair of shoes, then you have them and I don't; whereas, if I tell you my latest brilliant idea, we both have it), that's what several very wealthy industries believe is the key to their continued profitability. Why do they believe this? For two reasons, basically. On the one hand, they believe that being able to own an idea will protect their investment in the development of said idea by letting them sue the pants off of anyone else who tries to do something similar. On the other hand, if ideas are like shoes, then they can be bought and sold (read: for profit) just like any other commodity. Pharmaceutical companies patent chemical structures, online companies patent user interfaces, and everyone wants a piece of the intellectual property pie before the last piece gets eaten. If these people are successful at redefining what it means to own something, won't the future be full of people being arrested for saying the wrong things, or even thinking the wrong thoughts? Shades of Monsieur Orwell linger darkly these days.

This is all rather abstract, and the drama over software patents will probably play out without a care for little folks like me. But a cousin of this demon is lurking much closer to home. When there are more people than good ideas floating around, ideas become a scarce resource just like anything else. I've had the same conversation a half-dozen times with different people recently about how fast-paced my field is. Why is it this way? Well, sure, there's a lot of really great stuff being done by a lot of great people. But then, the same is true in fields like quantum gravity and econophysics. I suspect that part of what really makes my field move is that people are scared of being scooped. And although it hasn't happened to me yet, I fear it just as much as everyone else. And so, everyone in the race spends a few more hours hunched over the computer and a few more days feverishly crafting an old idea into a finished project, and spends fewer moments admiring the sky and spares fewer thoughts for the people they love.

In academia, ideas are already property. An idea becomes owned when someone publishes it. But the competition doesn't stop there. Then comes the endless self-promotion of your work in an effort to convince other people that your idea is a good one. In the end, an idea is yours only when other people will argue that it's not theirs. The entire system is founded on the premise that, if your idea is truly great, then in the end everyone will acknowledge that it's fabulous and that you're that much cooler for having come up with it. While not exactly a system that encourages healthy lifestyles, especially for women, it has enough merits to outweigh the failings, and it's a lot better than any alternative resembling software patents. The danger comes from the combination of the lag between coming up with an idea and publishing it, and a surplus of researchers relative to ideas. When both hold, you get the fever-pitch mental race to see who gets to make the first splash and who gets water up the nose. I've never liked the sting of a nasal enema of chlorine or salt, and I have at least two projects where this is a fairly serious concern.

When a group decides that an idea is owned by a person, it's an inherently social exchange and can never become a financial exchange without micro-policing that would put any totalitarian to shame. Hypothetically, if ideas were locked up by law, would I have to pay you when you told me your idea? Not with money, not yet anyway; but even now, I do still pay you. Instead of financial capital, I pay you with social capital: with respect, recognition, and reference. When I, in turn, tell my friend about your idea, I tell them that it's yours. If it's a good idea, then we both associate its goodness with you. This is the heart of the exchange: you give us your idea, and we give you back another idea. Normally, this is the best we can do, but modern technology has given us a new way to pay each other for ideas: hyperlinks. When I link to other articles and pages in my posts, I am tithing their owners and authors. After all, hyperlinks (roughly) are how Google decides who owns what. That is, topology becomes a proxy for wealth.
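
To make the "topology becomes a proxy for wealth" idea concrete, here is a minimal sketch of link-based reputation in the spirit of PageRank. It's my own toy example, not Google's actual algorithm or data: the page names and the link graph are invented, and a real search engine has to handle dangling pages, spam and much else besides.

```python
# A toy, PageRank-style reputation score computed purely from link topology.
# (My own illustration, not Google's actual algorithm; the pages and links are invented.)

links = {
    "alice": ["bob", "carol"],  # alice links to (cites) bob and carol
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  ["carol"],         # dave links out, but nobody links back to dave
}

pages = sorted(links)
n = len(pages)
damping = 0.85                  # the usual damping factor
rank = {p: 1.0 / n for p in pages}

# Power iteration: each page repeatedly shares its current rank with the pages it links to.
for _ in range(100):
    new_rank = {p: (1 - damping) / n for p in pages}
    for p, outlinks in links.items():
        share = damping * rank[p] / len(outlinks)
        for q in outlinks:
            new_rank[q] += share
    rank = new_rank

for p in sorted(pages, key=rank.get, reverse=True):
    print(f"{p}: {rank[p]:.3f}")
```

In this little graph, carol ends up "wealthiest" because the most links, including links from well-linked pages, point at her, while dave, whom nobody cites, stays poor. That is the whole sense in which topology stands in for wealth.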

In the social world, reputation and gossip are the currency of exchange. But in the digital world, hyperlinks are the currency of information and ideas. So, who wants some free money?

p.s. Blogs are an intersection of the social world and the digital world. If bloggers and other sites exchange currency in the form of hyperlinks, what do readers and bloggers exchange? Comments. Comments are the currency that readers use to pay their authors for their posts.

posted February 20, 2005 05:09 AM in Thinking Aloud | permalink | Comments (0)

February 15, 2005

End of the Enlightenment; follow-up

I keep stumbling across great pieces on rational thought and scientific inquiry, and also, unfortunately, on the Intelligent Design crowd. A quick round-up of several good ones.

Here is a fantastic piece, written by anthropology professor James Lett, about how to test the evidence for a claim, featured on the site of CSICOP, which aptly stands for the Committee for the Scientific Investigation of Claims of the Paranormal. A brief excerpt:

The rule of falsifiability is essential for this reason: If nothing conceivable could ever disprove the claim, then the evidence that does exist would not matter; it would be pointless to even examine the evidence, because the conclusion is already known -- the claim is invulnerable to any possible evidence. This would not mean, however, that the claim is true; instead it would mean that the claim is meaningless.

I am a big fan of Michael Shermer who writes the Skeptic column for Scientific American. His February 2003 article entitled "Psychic Drift" is an excellent primer on rational thought:

Data and theory. Evidence and mechanism. These are the twin pillars of sound science. Without data and evidence, there is nothing for a theory or mechanism to explain. Without a theory and mechanism, data and evidence drift aimlessly on a boundless sea.

Finally, the Global Consciousness Project (amazingly, run out of Princeton) is the massive exercise in rabbit chasing that I mentioned in the previous post. A sound critique of their claims was made by Claus Larsen after he attended a talk by Dean Radin of the GCP.

Another serious problem with the September 11 result was that during the days before the attacks, there were several instances of the eggs picking up data that showed the same fluctuation as on September 11th. When I asked Radin what had happened on those days, the answer was:
"I don't know."
I then asked him - and I'll admit that I was a bit flabbergasted - why on earth he hadn't gone back to see if similar "global events" had happened there since he got the same fluctuations. He answered that it would be "shoe-horning" - fitting the data to the result.
Checking your hypothesis against seemingly contradictory data is "shoe-horning"?
For once, I was speechless.

Lastly, in subsequent conversations with Leigh, I made an observation that's worth repeating. The Church was having it out with natural philosophers (i.e., proto-natural scientists) over whether the sun went 'round the earth as long ago as c.1500 if you count from Copernicus (c.1600 if you count from Galileo). A rough estimate of the length of that battle is 300 years before the general public agreed with the brave gentlemen who stood against ignorance. Charles Darwin kicked off the battle over whether biological complexity requires design, that is, the "debate" over evolution, roughly 150 years ago. So, if history repeats itself (which it inevitably does), then we have a long way to go before this fight is over. As a side note, Darwin's birthday was February 12th.

posted February 15, 2005 01:53 AM in Thinking Aloud | permalink | Comments (0)

February 12, 2005

End of the Enlightenment

The Enlightenment was a grand party of rationalism, lasting a brief, yet highly productive 300 years. A mere blip in the multi-millennial history of humans. Alas, the wine was good and the fireworks spectacular. Now, the candle is going out, and we are returning to the comfort of darkness, where life isn't so complicated and the unknown more understandable.

"I worry that, especially as the Millennium edges nearer, pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose, or when fanaticism is bubbling up around us-then, habits of thought familiar from ages past reach for the controls." -- Carl Sagan, The Demon-Haunted World: Science As a Candle in the Dark

Primary among the tenets of the Enlightenment was the belief that the world is fundamentally rational, a belief that stood in stark contrast to the dogma that divine intervention is necessary for action and that the supernatural exists. With rationality, however, God was no longer needed to guide an apple from the tree to the ground. With rationality, something odd happened: science became predictive, whereas before it had only been descriptive. Religion (in its many forms) remains the latter.

"I maintain there is much more wonder in science than in pseudoscience. And in addition, to whatever measure this term has any meaning, science has the additional virtue, and it is not an inconsiderable one, of being true." -- Carl Sagan

Before the Enlightenment, people turned to those who had the ear of God for information about the future. But with the emergence of rational thought and its heir, scientific inquiry, prediction became the province of Man. Although George W Bush may claim that it is freedom, I claim that it is instead science that is the most fundamental democratizing force in the world. Science, not freedom, gives both the aristocrat and the peasant access to Truth.

"Many statements about God are confidently made by theologians on grounds that today at least sound specious. Thomas Aquinas claimed to prove that God cannot make another God, or commit suicide, or make a man without a soul, or even make a triangle whose interior angles do not equal 180 degrees. But Bolyai and Lobachevsky were able to accomplish this last feat (on a curved surface) in the nineteenth century, and they were not even approximately gods." -- Carl Sagan, Broca's Brain

However sensationalist it may be to claim that we are in the twilight of the Enlightenment, especially considering the wonders of modern science, there is disturbing evidence that a cultural backlash against rationality is underway. Consider the Bush administration's abuse of science for political ends, including the recent revelation that scientists at the U.S. Fish and Wildlife Service have been instructed to alter their scientific findings for political and pro-business reasons. Scientists apparently censor themselves for fear of political repercussions. And with the faux debate over intelligent design (simply the new version of creationism, and equally inane) apparently persuading many teachers to avoid teaching evolution at all, it seems clear that something significant is happening. (For many excellent critiques of intelligent design, and a fantastic discussion of how evolution is supported by a burgeoning amount of evidence, see Carl Zimmer's eminently readable blog.)

"For most of human history we have searched for our place in the cosmos. Who are we? What are we? We find that we inhabit an insignificant planet of a hum-drum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people. We make our world significant by the courage of our questions, and by the depth of our answers." -- Carl Sagan

During the 300 years of the Enlightenment, science has steadily pushed back the darkness and revealed that the material world, as far as we know it, is completely mechanistic. Although the details may seem arcane or magical to most, the power of this world-view is affirmed by the general public's acceptance of the fruits of science, i.e., technology, as basic, even essential, components of their lives.

"If you want to save your child from polio, you can pray or you can inoculate... Try science." -- Carl Sagan, The Demon-Haunted World

Why, then, is there such a violent reaction among so many to topics like evolution or the possibility of life beyond Earth? If rational thought has been so much more successful than any of the alternatives, why do people persist in believing in psychic powers and the idea that the Earth was created in 168 hours? Why will people accept that electrons follow the laws of physics as they flow through the transistors critical to displaying this text, yet refuse to accept the mountain of evidence supporting evolution?

"If we long to believe that the stars rise and set for us, that we are the reason there is a Universe, does science do us a disservice in deflating our conceits?... For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." -- Carl Sagan, The Demon-Haunted World

It seems reasonable to me that the answer to these questions is that human beings are fundamentally irrational creatures and that rational thought is not a natural mode of thinking for us. Science has never been popular because its study is frustrating, slow and confusing. It often involves math or memorization, and it always involves discipline and persistence. These things do not come easily to most people. It is easier to be utilitarian and dogmatic than it is to be skeptical and careful. Add to that our godly powers of rationalization (a topic about which I will blog soon) and a fundamental laziness of mind (which can only be circumvented by careful training and perpetual vigilance), and it seems somewhat surprising that the Enlightenment ever happened in the first place.

"Think of how many religions attempt to validate themselves with prophecy. Think of how many people rely on these prophecies, however vague, however unfulfilled, to support or prop up their beliefs. Yet has there ever been a religion with the prophetic accuracy and reliability of science?" -- Carl Sagan, The Demon-Haunted World

But since it did happen, the least we can do is enjoy the candle that burns so brightly now, even as the darkness advances menacingly. We can only hope (an irrational and truly human feeling) that the Enlightenment is more resilient than the darkness is persistent.

"Our species needs, and deserves, a citizenry with minds wide awake and a basic understanding of how the world works." -- Carl Sagan, The Demon-Haunted World

Update: The Global Consciousness Project is a prime example of supposedly rational people being very irrational. In it, normally respectable scientists monitor the fluctuations of random number generators in an effort to measure global psychic events. This recent story about it sounds persuasive, but the scientists involved are making a fundamental mistake about agency. Sure, there are some unexplained correlations among the random number generators, but there are also significant correlations between the first letter of your name and your life span. The real question is whether there is a causative relationship between the first letter and your life span. Similarly, these scientists' notion of a causative mechanism linking the fluctuations in the random number generators to apparent "global" events is simply a self-fulfilling hypothesis: one can always explain away the failures and highlight the successes. It's called investigator's bias, and it's a well-documented state of irrationality that seems quite rational to the beholder.
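
To see how easy it is to find "significant" correlations where none exist, here is a small simulation I wrote for this post. It is not the GCP's actual analysis, and all the numbers (20 streams, 50-sample windows) are arbitrary choices of mine: it just generates independent streams of pure noise and then scans every pair for its most correlated window, the way one might scan detector data for interesting moments after the fact.

```python
# My own toy simulation, not the GCP's analysis: pure noise plus a willingness to
# hunt for interesting windows reliably produces impressive-looking correlations.
import random
import statistics

random.seed(1)

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

n_streams, length, window = 20, 1000, 50
streams = [[random.gauss(0, 1) for _ in range(length)] for _ in range(n_streams)]

best = 0.0
for i in range(n_streams):                                    # every pair of streams...
    for j in range(i + 1, n_streams):
        for start in range(0, length - window + 1, window):   # ...in every window
            r = correlation(streams[i][start:start + window],
                            streams[j][start:start + window])
            best = max(best, abs(r))

print(f"Strongest windowed correlation found in pure noise: {best:.2f}")
```

Nothing in these streams is connected to anything else, yet the best window will typically show a correlation of roughly 0.4 or more. Whether such a blip "means" something is decided entirely by the story the investigator tells afterward, which is exactly the trap described above.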

p.s. Thank you to my friend Leigh Fanning for provoking the thoughts that led to this entry over dinner last night at Vivace's.

posted February 12, 2005 05:51 AM in Thinking Aloud | permalink | Comments (0)

February 03, 2005

Our ignorance of intelligence

A recent article in the New York Times, itself a review of a review article that recently appeared in Nature Reviews Neuroscience by the oddly named Avian Brain Nomenclature Consortium, describes the incredible intelligence of certain bird species, and it has prompted me to dump some thoughts about the abstract quality of intelligence and, more importantly, where it comes from. Having also recently finished reading On Intelligence by Jeff Hawkins (yes, that one), I've returned to my once and future fascination with that ephemeral and elusive quality that is "intelligence". We'll return to that shortly, but first let's hear some amazing things, from the NYTimes article, about what smart birds can do.

"Magpies, at an earlier age than any other creature tested, develop an understanding of the fact that when an object disappears behind a curtain, it has not vanished.

At a university campus in Japan, carrion crows line up patiently at the curb waiting for a traffic light to turn red. When cars stop, they hop into the crosswalk, place walnuts from nearby trees onto the road and hop back to the curb. After the light changes and cars run over the nuts, the crows wait until it is safe and hop back out for the food.

Pigeons can memorize up to 725 different visual patterns, and are capable of what looks like deception. Pigeons will pretend to have found a food source, lead other birds to it and then sneak back to the true source.

Parrots, some researchers report, can converse with humans, invent syntax and teach other parrots what they know. Researchers have claimed that Alex, an African gray, can grasp important aspects of number, color concepts, the difference between presence and absence, and physical properties of objects like their shapes and materials. He can sound out letters the same way a child does."

Amazing. What is even more surprising is that the structure of the avian brain is not like the mammalian brain at all. In mammals (and especially so in humans), the so-called lower regions of the brain have been enveloped by a thin sheet of cortical cells called the neocortex. This sheet is the basis of human intelligence and is incredibly plastic. Further, it has assumed much of the control over basic functions like breathing and hunger. The neocortex's pre-eminence is what allows people to consciously starve themselves to death. Arguably, it's the seat of free will (which I will blog about at a later date).

So how is it that birds, without a neocortex, can be so intelligent? Apparently, they have evolved a set of neurological clusters that are functionally equivalent to the mammalian neocortex, and this allows them to learn and predict complex phenomena. The equivalence is an important point in support of the belief that intelligence is independent of the substrate on which it runs; here, we mean specifically the types of supporting neural structures, but this independence is a founding principle of the dream of artificial intelligence (which is itself a bit of a misnomer). If there is more than one way for brains to create intelligent behavior, it is reasonable to wonder whether there is more than one kind of substance from which to build those intelligent structures, e.g., transistors and other silicon parts.

It is this idea of independence that lies at the heart of Hawkins' "On Intelligence", in which he discusses his dream of eventually understanding the algorithm that runs on top of the neurological structures of the neocortex. Once we understand that algorithm, he imagines that humans will coexist with, and cultivate, a new species of intelligent machines that never get cranky, never have to sleep and can take care of mundanities like driving humans around and crunching through data. Certainly a seductive and utopian future, quite unlike the uninteresting, technophobic, dystopian futures that Hollywood dreams up (at some point, I'll blog about popular culture's obsession with technophobia and its connection to the ancient fear of the unknown).

But can we reasonably expect that the engine of science, which has certainly made some astonishing advances in recent years, will eventually unravel the secret of intelligence? Occasionally, my less scientifically-minded friends ask me to make a prediction on this topic (see the previous reference to the fear of the unknown). My response is, and will continue to be, that "intelligence" is, first of all, a completely ill-defined term: whenever we make machines do something surprisingly clever, critics simply change the definition of intelligence. But setting that slipperiness aside, I do not think we will realize Hawkins' dream of intelligent machines within my lifetime, and perhaps not within my children's either. What the human brain does is phenomenally complicated, and we are just now beginning to understand its most basic functions, let alone how they interact or how they adapt over time. Combined with the complicated relationship between genetics and brain structure (another interesting question: how does the genome store the algorithms that allow the brain to learn?), it seems that the quest to understand human intelligence will keep many scientists employed for many, many years. That all being said, I would love to be proved wrong.

Computer: tea; Earl Grey; hot.

Update 3 October 2012: In the news today is a new study in PNAS on precisely this topic, by Dugas-Ford, Rowell, and Ragsdale, "Cell-type homologies and the origins of the neocortex." The authors use a clever molecular-marker approach to show that the cells that become the neocortex in mammals form different, but identifiable, structures in birds and lizards, with all three neural structures performing similar neurological functions. That is, they found convergent evolution in the functional behavior of different neurological architectures in these three groups of species. What seems so exciting about this discovery is that having multiple solutions to the same basic problem should help us identify the underlying symmetries that form the basis for intelligent behavior.

posted February 3, 2005 02:17 AM in Obsession with birds | permalink | Comments (0)