November 24, 2006
Unreasonable effectiveness (part 2)
In keeping with the theme [1], twenty years after Wigner's essay on The Unreasonable Effectiveness of Mathematics in the Natural Sciences, Richard Hamming (who has graced this blog previously) wrote a similarly titled piece for The American Mathematical Monthly (87 (2), 1980). Hamming takes issue with Wigner's essay, suggesting that the physicist dodged the central question of why mathematics has been so effective. Hamming offers a few new thoughts on the matter: primarily, he suggests that mathematics has been successful in physics because much of it is logically deducible, and that we often change mathematics (i.e., we change our assumptions or our framework) to fit the reality we wish to describe. His conclusion, however, puts the matter best.
From all of this I am forced to conclude both that mathematics is unreasonably effective and that all of the explanations I have given when added together simply are not enough to explain what I set out to account for. I think that we -- meaning you, mainly -- must continue to try to explain why the logical side of science -- meaning mathematics, mainly -- is the proper tool for exploring the universe as we perceive it at present. I suspect that my explanations are hardly as good as those of the early Greeks, who said for the material side of the question that the nature of the universe is earth, fire, water, and air. The logical side of the nature of the universe requires further exploration.
Hamming, it seems, has dodged the question as well. But his point that we have changed mathematics to suit our needs is important. Let's return to the idea that computer science and the algorithm offer a path toward capturing the regularity of complex systems, e.g., social and biological ones. Historically, we've demanded that algorithms yield guarantees on their results, and that they don't take too long to return them. For example, we want to know that our sorting algorithm will actually sort a list of numbers, and that it will do so in the time we allow. Essentially, our formalisms and methods of analysis in computer science have been driven by engineering needs, and our entire field reflects that bias.
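To make that engineering bias concrete, here's a minimal sketch in Python (purely illustrative) of the kind of promise I mean: merge sort provably sorts any list it's handed, and provably does so in O(n log n) time.

```python
# Merge sort: the canonical example of an algorithm with guarantees --
# it always returns a sorted list, and always in O(n log n) time.

def merge_sort(xs):
    """Return a sorted copy of xs, in guaranteed O(n log n) time."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves in a single linear pass.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([5, 2, 9, 1]) == [1, 2, 5, 9]  # the guarantee we demanded
```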
But if we want to use algorithms to accurately model complex systems, it stands to reason that we should orient ourselves toward constraints that are more suitable for the kinds of behaviors those systems exhibit. In mathematics, it's relatively easy to write down an intractable system of equations; similarly, it's easy to write down an algorithm whose behavior is impossible to predict. The trick, it seems, will be to develop simple algorithmic formalisms for modeling complex systems that we can analyze and understand in much the same way that we do for mathematical equations.
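To see just how easy it is, consider the Collatz iteration -- a loop you can write in a few lines of Python, yet whether it halts for every starting value remains a famous open problem. (The code is only an illustration; nothing here depends on its details.)

```python
# The Collatz iteration: halve n if it's even, else map n to 3n + 1.
# Nobody has proved this loop terminates for every starting value.

def collatz_steps(n):
    """Count the iterations until n reaches 1 (assuming it ever does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny starting value
```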
I don't believe that one set of formalisms will be suitable for all complex systems, but perhaps biological systems are consistent enough that we could use one set for them, and perhaps another for social systems. For instance, biological systems are all driven by metabolic needs, and by a need to maintain structure in the face of degradation. Similarly, social systems are driven by, at least, competitive forces and asymmetries in knowledge. These are needs that things like sorting algorithms have no concept of.
Note: See also part 1 and part 3 of this series of posts.
[1] A common theme, it seems. What topic would be complete without its own Wikipedia article?
posted November 24, 2006 12:21 PM in Things to Read
Comments
"social systems are driven by, at least, competitive forces and asymmetries in knowledge. These are needs that things like sorting algorithms have no concept of."
Aha, but game-theoretic algorithms, auctions, and mechanism design *do* understand such things. One of the more recent pushes in theoretical computer science has been the game-theoretic frame for understanding algorithms, and that has direct impact on social systems.
I'm not sure I understand what "structure in the face of degradation" means though.
Posted by: Suresh at November 25, 2006 03:58 PM
You're absolutely right -- TCS's recent interest in mechanism design does account for social competition, and I think it's a great development in computer science. That being said, those results have a strong engineering flavor, since they're primarily interested in making guarantees about the system's results in spite of competition. Valuable and interesting, absolutely, but it still doesn't quite feel like a framework through which we can accurately model observed social behavior in complex systems.
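To make the contrast concrete, here's a toy Python sketch (purely illustrative, not any particular result from the literature) of the kind of guarantee mechanism design delivers: in a Vickrey second-price auction, the winner pays the second-highest bid, which provably makes truthful bidding each bidder's best strategy -- a guarantee that holds in spite of competition, but says nothing about how real agents actually behave.

```python
# Vickrey (second-price) auction: the winner pays the runner-up's bid,
# which makes bidding one's true value a dominant strategy.

def vickrey_auction(bids):
    """bids: dict mapping bidder -> bid (needs at least two bidders).
    Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

print(vickrey_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```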
As for what I meant by "structure in the face of degradation": I was alluding to the fact that living organisms have to work against both the 2nd law of thermodynamics and other organisms, such as parasites, in order to maintain themselves. Our bodies are constantly replacing cells that are lost for various reasons -- if we ever stopped doing this, we'd eventually die.
Posted by: Aaron at November 25, 2006 06:52 PM