
May 25, 2008

On climate change

Sometimes I'm haunted by the feeling that I'm studying the wrong things in life. That while networks, evolution and terrorism are interesting, they're only peripherally related to the central problems that face our generation. That is, sometimes I wish I worked on climate change and, in particular, on sustainable development and carbon-neutral energy sources (like solar cells). Fortunately, there are a lot of people working on this problem, and there's even a climate change summer school this year, run by the Mathematical Sciences Research Institute (MSRI) in Berkeley CA [1]. If you can't make the event, MSRI recently published an online book that gives a good introduction (in relatively accessible terms) to the science, called Mathematics of Climate Change.

It's hard, of course, to really get your head around how big a problem the energy question is. We all know by now that we should use less oil, that we should buy more fuel-efficient cars, that we should have better-insulated houses, lower-power refrigerators, etc.; there are lots of shoulds floating around in the media. And then there are the sky-is-falling types, who say that if we don't do all these things immediately, then the planet is going to overheat, the oceans will rise 100 feet, and civilization will be cast 4000 years back to the Stone Age. Fear can be a powerful motivator, but only when it's clear what the right reaction is. Unfortunately, for an average person who wants to have a positive impact, to do their part in saving the world, it's not at all clear what can be done, or even how much urgency is really warranted.

Last week, Prof. Nathan Lewis (Caltech) visited SFI as our colloquium speaker. Lewis has been trying to get his head around just how big the problem of sustainable growth is, and then to translate it into understandable terms. I wasn't that thrilled with the style of his presentation, but the content itself was great and the message was rather clear.

First, there's the question of what the consequences of climate change actually are. If the consequences are small, then maybe it's okay to ignore the whole problem. Unfortunately, the last time we know for a fact that carbon dioxide (CO2) levels were close to what they are approaching now, 90% of all life on the Earth became extinct. This catastrophe happened about 251.4 million years ago (for comparison, dinosaurs died out about 65.5 million years ago), and is called the end-Permian extinction event. To put the size of this extinction in clearer terms: it's the only time in all of Earth's history that cockroaches almost became extinct. This is not, of course, to say that 90% of all life on Earth (possibly including us) will become extinct over the next few centuries or millennia because of the increased (and increasing!) CO2 levels we're experiencing, but rather that we have very little experience with, or expectation about, what happens when CO2 levels are this high, and the only data point we do have (the end-Permian event) suggests that things could be very bad. So, it might be useful for us to try to avoid venturing into such unknown territory. We only have one planet to experiment with, after all.

So, if we're resolved to avoid end-Permian-like CO2 levels, what can we do? If you think that human-generated CO2 makes no significant contribution to the global CO2 levels, then you don't have many options that don't involve actively extracting CO2 from the atmosphere (e.g., planting lots and lots of trees). On the other hand, if you, like the vast (vast!) majority of climate scientists, think that human-generated CO2 is the main culprit of rising CO2 concentrations (and temperatures), then we have lots of options, since we theoretically have control over how much CO2 we as humans emit [2]. Unfortunately, one of Lewis's points is that, given the scale of the problem we face and how much time we have left to solve it, simply reducing CO2 output is not going to be enough. That is, being green enough to save the planet as we know it is going to require a major reallocation of our civilization's resources; business-as-usual, or even a half-assed attempt, is not going to make a big enough change in atmospheric CO2 concentrations to prevent the planet from being irrevocably changed (heated) for the next 3000 years or more.

To keep CO2 levels from approaching end-Permian levels, we basically have to eliminate almost all CO2 emissions from human industrial activities, everywhere on Earth, within the next 50 years. That's a huge task, especially considering that China recently became the world's largest emitter of CO2, and, along with the US, shows little interest in reducing its emissions. (Scare tactics go both ways, and the usual argument against doing anything is that it will hurt economic growth, cost jobs, etc. This is ridiculous, of course, since there are huge economic gains to be won by being successful at creating clean, abundant energy.)

Fortunately, there's a good solution at hand: solar energy. Unlike other sources (wind power, tidal power, geothermal power, biofuels, etc.), solar energy is incredibly abundant (1000 times more abundant than wind power), and could satisfy the energy demands of the entire planet using today's technology. Some estimates say that enough sunlight falls on the southeast quarter of New Mexico to power the entire United States. In fact, solar energy is so abundant that covering only something like 1% of the Earth's land with solar panels would give us plentiful power in perpetuity. And, as a bonus, solar power emits basically no carbon dioxide.
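To get a feel for these scales, here's a rough back-of-the-envelope check in Python. The insolation, efficiency, demand, and area figures are assumed round numbers of my own, not figures from Lewis's talk, so treat the output as an order-of-magnitude sanity check only.

    # Rough assumed numbers for an order-of-magnitude check; none come from the talk itself.
    insolation = 200.0          # W/m^2: day/night/weather-averaged sunlight in the US Southwest (assumed)
    nm_quarter = 315_000e6 / 4  # m^2: roughly a quarter of New Mexico's ~315,000 km^2
    us_demand  = 3.3e12         # W: ~3.3 TW average US primary power consumption (assumed, mid-2000s)
    efficiency = 0.10           # assumed conversion efficiency for circa-2008 panels

    raw_sunlight = insolation * nm_quarter
    print(f"Sunlight falling on a quarter of NM: {raw_sunlight / 1e12:.1f} TW "
          f"vs. {us_demand / 1e12:.1f} TW of US demand")

    # How much land would actual panels need, at the assumed efficiency?
    area_needed = us_demand / (insolation * efficiency)
    print(f"Panel area needed: {area_needed / 1e6:,.0f} km^2 "
          f"(~{area_needed / 9.1e12:.1%} of US land area)")

With these (assumed) numbers, the raw sunlight on that patch of desert is several times the country's total demand, and even at modest conversion efficiency the panel area needed works out to a percent or two of US land, which is roughly the flavor of the claims above.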

The hurdles to a solar-powered future are twofold (there are others, too, but these are the big ones). First, there is the political problem of getting all of civilization to embrace this solution now, rather than in 50 years when it's too late (that is, in 50 years, if we've done nothing significant, CO2 levels will already be at their end-Permian levels). The political climate does seem to be changing a little, but the inertia in the direction of ignoring the problem and burning our way back to the end-Permian is very very strong. The second problem is that energy from the sun is still a lot more expensive than energy from oil and coal, so there's not yet an economic incentive to get behind solar power. For the average citizen, then, there's not much to do that won't cost (possibly a lot) more money, and this severely limits the ability of the populace to use their economic leverage to drive the switch to solar power. This last part is where carbon taxes or a cap-and-trade system can change the balance, by making oil and coal more expensive relative to solar. If these systems can be put in place relatively soon, and the political climate continues to become more favorable to large-scale changes to where we get our energy and how we use it, we may be able to avoid end-Permian-level CO2 concentrations. Plus, if we solve the energy problem (and with it the CO2 problem), there are other important problems (e.g., water, food, etc.) that we will, in principle, also be able to solve. It's a bright future, if only we can find it in ourselves to collectively get there.

Update 27 May 2008: In the comments, "diarmuid" points out that David MacKay, a well-known expert on learning algorithms, inference and information theory, comes to basically the same conclusions above about how to solve the energy-climate problem. MacKay has even written a book about it, "Without the Hot Air", for those interested in more. (It looks like a draft of the book is available for free download.)

Update 29 May 2008: Bela Nagy tells me that there's another summer school on climate change, with the impressive-sounding name The International Graduate Summer School on Statistics and Climate Modeling. This one is being run at CU-Boulder by the National Center for Atmospheric Research (NCAR) and the Institute for Mathematics Applied to Geosciences (IMAGe). It runs August 9-13, and they'll be accepting applications up until June 15th. The organizers are Stephan Sain (NCAR), Doug Nychka (IMAGe, NCAR), Claudia Tebaldi (NCAR), Caspar Ammann (NCAR), and Bo Li (NCAR and Purdue).

Update 14 June 2008: Carl Zimmer, science writer and author of a number of best-selling popular science books, now also has an essay on the end-Permian extinction and its relationship to the current warming trend, which says much the same thing about the threat life on Earth faces from increased CO2 levels.

-----

[1] Climate Change Summer School July 14th - August 1st, 2008

Organized By: Chris Jones (UNC Chapel Hill), Inez Fung (U.C. Berkeley), Eric Kostelich (Arizona State University), K.K. Tung (U. Washington), and Mary Lou Zeeman (Bowdoin College).

[2] A nice paraphrasing of what the industrial revolution has done to the atmosphere is this: burning coal and oil in our factories and cars has had a similar effect on the atmosphere as if a massive volcano had been erupting continuously, with ever increasing ferocity, for 200 years or so.

posted May 25, 2008 09:24 AM in Global Warming | permalink | Comments (5)

May 23, 2008

Shaping up to be a good year

Yesterday I heard the good news that my first paper (with Doug Erwin) on biology and evolution was accepted at Science. Unlike my experience with publishing in Nature, the review process for this paper was fast and relatively painless. I think this was partly because the paper's topic, on the evolution of species body masses, is a relatively conventional one in paleobiology / evolutionary biology / ecology. In fact, people have been thinking about this topic for more than 100 years, going all the way back to E. D. Cope, who in 1887 suggested that mammal species had an inherent tendency to become larger over evolutionary timescales (millions of years). This idea went through several reformulations as our understanding of evolution matured over the 20th century. From a modern perspective, we now know from fossil data that changes in species body size are not deterministic, in the sense of species always getting bigger (as Cope thought), but rather stochastic, with both decreases and increases happening with great frequency. The tendency, however, for many kinds of species (including mammals and brachiopods) is that the increases slightly outnumber the decreases (a pattern called Cope's Rule), perhaps because of competitive or robustness advantages from increased size.
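As a toy illustration of that last point (and only a toy; this is not the model in the paper), here is a short Python simulation in which each lineage's log body mass takes a random walk with a small upward drift and a hard lower bound standing in for a minimum viable size. Even a tiny bias toward increases produces a Cope's-Rule-like drift toward larger sizes over many steps.

    import random

    def simulate_body_masses(n_lineages=2000, steps=200, drift=0.02, sigma=0.3,
                             x_min=0.0, seed=7):
        """Toy model: log body mass does a random walk with a small upward drift
        (increases slightly outnumber decreases) and a hard lower bound, standing
        in for a minimum viable size. Not the model in the paper."""
        random.seed(seed)
        final = []
        for _ in range(n_lineages):
            x = x_min + 1.0  # start one log-unit above the minimum
            for _ in range(steps):
                x += random.gauss(drift, sigma)
                x = max(x, x_min)  # can't shrink below the minimum viable size
            final.append(x)
        return final

    masses = simulate_body_masses()
    print("mean log body mass after simulation:", sum(masses) / len(masses))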

Anyway, there's a lot more to say on this topic, but I'll wait until the paper comes out to say it. In general, it's been a lot of fun learning about evolution and ecology, and I hope to do some more work in this area in the future.

posted May 23, 2008 08:20 AM in Self Referential | permalink | Comments (2)

May 17, 2008

A vending machine for crows

I can't say that I was all that impressed with Joshua Klein's TED talk itself, but the idea of being able to use crows' intelligence and tenacity to produce interesting new behavior is a neat one. For instance, I kind of liked the vision of a murder of crows cleaning up the streets in exchange for peanuts... Plus, the footage he shows of clever crow behavior makes the rest of the talk worth watching.

posted May 17, 2008 10:11 PM in Obsession with birds | permalink | Comments (0)

May 08, 2008

GATech Conference: Frontiers in Multi-Scale Systems Biology

Georgia Tech is getting into interdisciplinary science, at least when it comes to biology. Apparently, they're launching a new "institute" called the Integrative BioSystems Institute which is supposed to bring folks together from different biological disciplines to approach the big problems in biology (and by "biology", it seems that they mainly mean molecular and cellular biology, i.e., genes, proteins, metabolites, neurons, etc.). Anyway, to kick off their new center, they're throwing a big party, I mean, a big conference. The upside, of course, is that it should be chock full of speakers on a wide range of biological topics, and potentially a good place to learn about interesting questions.

GA Tech's Frontiers in Multi-Scale Systems Biology

October 18-21, 2008, at the Georgian Terrace Hotel, Atlanta, GA

Organizers: Jeffrey Skolnick (Co-Chair), Eberhard Voit (Co-Chair), David Bader, Lynn Durham, Richard Fujimoto, Jessica Gilmore, Melissa Kemp, Patricia Sobecky, LaDawn Terry, Eric Vigoda.

Description: Frontiers in Multi-Scale Systems Biology will highlight representative topics of multi-scale systems biology including: genomics, proteomics, metabolomics, molecular inventories and databases, modeling and simulation, high-performance computing, enabling experimental and computational technologies, and applications in cancer, neuroscience and the environment.

Conference themes are
1. The creation of key molecular inventories that drive integrative biological systems analyses at all significant levels of biological organization.
2. Enabling experimental technologies for the investigation of multi-level, multi-scale integrative biological systems.
3. Innovation in high-performance computing, modeling and simulation, with applications in multi-scale integrative biology.
4. Applications of enabling experimental and computational technologies and molecular inventories.

posted May 8, 2008 06:18 PM in Conferences and Workshops | permalink | Comments (0)

May 01, 2008

Hierarchical structure of networks

Many scientists believe that complex networks, like those we use to describe the interactions of genes, social relationships, and food webs, have a modular structure, in which genes or people or critters tend to cluster into densely interacting groups that are only loosely connected to each other. This idea is appealing since it agrees with a lot of our everyday experience or beliefs about how the world works. But, within those groups, are interactions uniformly random? Some folks believe that these modules themselves can be decomposed into sub-modules, and they into sub-sub-modules, etc. Similarly, modules may group together into super-modules, etc. This kind of recursive structure is what I mean by hierarchical group structure. [1]
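As a concrete (and entirely made-up) illustration of what I mean, here is a short Python sketch that generates a toy network with two levels of structure: nodes in the same sub-module connect with high probability, nodes in the same module but different sub-modules with a lower probability, and nodes in different modules with a lower probability still. The sizes and probabilities are arbitrary choices, picked only to make the recursive structure visible.

    import random

    def nested_modular_graph(n_sub=4, sub_size=8, p=(0.6, 0.15, 0.02), seed=1):
        """Toy two-level hierarchy: n_sub sub-modules of sub_size nodes, grouped into
        two top-level modules. Connection probability depends on the deepest group
        two nodes share: p[0] within a sub-module, p[1] within a module, p[2] across modules."""
        random.seed(seed)
        n = n_sub * sub_size
        sub = lambda i: i // sub_size           # which sub-module a node belongs to
        mod = lambda i: sub(i) // (n_sub // 2)  # which top-level module
        edges = []
        for i in range(n):
            for j in range(i + 1, n):
                if sub(i) == sub(j):
                    prob = p[0]
                elif mod(i) == mod(j):
                    prob = p[1]
                else:
                    prob = p[2]
                if random.random() < prob:
                    edges.append((i, j))
        return n, edges

    n, edges = nested_modular_graph()
    print(n, "nodes and", len(edges), "edges")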

There's been a lot of interest among both physicists and biologists in methods for extracting either modular or hierarchical structure in networks. In fact, one of my first papers in grad school was a fast algorithm for clustering nodes in very large networks. Many of the methods for getting at the hierarchical structure of networks are rather ad hoc, with the hierarchy produced being largely a byproduct of the particular behavior of the algorithm, rather than something inherent to the network itself. What was missing was a direct model of hierarchy.

Many of you will know (perhaps from here or here) that I've done work in this area with Cris Moore and Mark Newman, and that I care a lot about null models and making appropriate inferences from data. Our first paper on hierarchy is on the arXiv; in it, we showed some fancy things you could do with a model of hierarchy, such as assigning connections a "surprisingness" value based on how unlikely they were under our model. Our second paper, in which we show that hierarchy is a very good predictor of missing connections in networks, appeared today in Nature. [2,3] There's also a very nice accompanying News & Views piece by Sid Redner. Accurately predicting missing connections has many applications, including the obvious one for homeland security, but also for laboratory or field scientists who construct networks laboriously, testing or looking for one or a few edges at a time.
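For the curious, the heart of the model is simple enough to sketch in a few lines. A dendrogram places a probability p_r at each internal node r, and the likelihood of the observed network is the product over internal nodes of p_r^E_r (1 - p_r)^(L_r R_r - E_r), where L_r and R_r count the leaves in r's two subtrees and E_r counts the observed edges whose endpoints sit on opposite sides of r; the maximum-likelihood choice is p_r = E_r / (L_r R_r). The Python below is my own illustrative reimplementation of that likelihood, not the C++ code linked in note [2], and it uses a brute-force edge count that would be far too slow for large networks.

    import math

    def leaves(t):
        """Leaf labels under a dendrogram node, written as nested 2-tuples."""
        return {t} if not isinstance(t, tuple) else leaves(t[0]) | leaves(t[1])

    def hrg_log_likelihood(dendro, edges):
        """Log-likelihood of a graph under a dendrogram, with the maximum-likelihood
        p_r = E_r / (L_r * R_r) plugged in at each internal node r."""
        edge_set = {frozenset(e) for e in edges}
        logL = 0.0

        def visit(node):
            nonlocal logL
            if not isinstance(node, tuple):
                return
            left, right = leaves(node[0]), leaves(node[1])
            pairs = len(left) * len(right)
            # E_r: observed edges whose endpoints straddle this internal node
            E = sum(1 for u in left for v in right if frozenset((u, v)) in edge_set)
            p = E / pairs
            if 0 < p < 1:  # terms with p = 0 or 1 contribute zero (0 log 0 = 0)
                logL += E * math.log(p) + (pairs - E) * math.log(1 - p)
            visit(node[0])
            visit(node[1])

        visit(dendro)
        return logL

    # Tiny example: two tight pairs joined loosely at the root
    dendro = (("a", "b"), ("c", "d"))
    edges = [("a", "b"), ("c", "d"), ("a", "c")]
    print(hrg_log_likelihood(dendro, edges))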

Another nice thing that came out of this work is that the hierarchy we extract from real networks seems to be extremely good at simultaneously reproducing many other commonly measured statistical features of networks, including things like a right-skewed degree distribution, high (or low) clustering coefficients, etc. In some sense, this suggests that hierarchy may be a fundamental principle of organization for these networks. That is, it may turn out that different kinds of hierarchies of modules are partly what cause real-world networks to look the way they do. General principles like this are wonderful (but not easy) to find, as they suggest we're on the right track to boiling a complex system down to its fundamental parts.
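If you want to eyeball those statistics on a network of your own, the standard tools do the job; here is a minimal sketch using the networkx Python library (an illustration only, not the Matlab or C++ code mentioned in note [2], and the random graph is just a stand-in for real data).

    import networkx as nx

    # Any network will do; a small random graph stands in for real data here.
    G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)

    degrees = [d for _, d in G.degree()]
    print("max degree:", max(degrees), " mean degree:", sum(degrees) / len(degrees))
    print("average clustering coefficient:", nx.average_clustering(G))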

Of course, there are several important missing pieces from this picture, one of which is that real networks are often functional, while the hierarchical model may not completely circumscribe the networks that accomplish the necessary functions for the biological or social context they exist in. In that sense, we still have a long way to go before we understand why things like genetic regulatory networks are shaped the way they are, but hierarchy at least gives us a reasonable way to think about the large-scale organization of these fantastically complex systems.

Update 5 May 2008: Coverage of our results has appeared on Roland Piquepaille's Technology Trends, and also on Slashdot. Now I can live my days out in peace knowing that something I did made it on /. ...

-----

[1] Hierarchical group structure is different from a hierarchy on the nodes themselves, which is more like a military hierarchy or an org-chart, where control or information flows from individuals higher in the hierarchy to other individuals lower in the hierarchy. For gene networks, there is probably some of both kinds of hierarchy, as there are certainly genes that control the behavior of large numbers of other genes. For instance, see

G. Halder, P. Callaerts and W. J. Gehring. "Induction of ectopic eyes by targeted expression of the eyeless gene in Drosophila". Science 267, 1788–1792 (1995).

[2] "Hierarchical structure and the prediction of missing links in networks." A. Clauset, C. Moore and M. E. J. Newman. Nature 453, 98 - 101 (2008).

The code for fitting the model to network data (C++), for predicting missing connections in networks (C++), and for visualizing the inferred hierarchical structure (Matlab) is available on my website.

[3] It's especially nice to have this paper in print now as it was the last remaining unpublished chapter of my dissertation. Time for new projects!

posted May 1, 2008 08:58 AM in Networks | permalink | Comments (5)