August 30, 2005
Reliability in the currency of ideas
The grist of the scientific mill is publications: these are the currency that academics use to prove their worth and their contributions to society. When I first dreamt of becoming a scientist, I rationalized that while I would gain less materially than in certain other careers, I would be contributing to society in a noble way. But what happens to the currency when its reliability is questionable, when the noblesse is in doubt?
A recent paper in the Public Library of Science (PLoS) Medicine by John Ioannidis discusses "Why most published research findings are false" (New Scientist has a lay-person summary available). While Ioannidis is primarily concerned with results in medicine and biochemistry, his criticisms of experimental design, experimenter bias and scientific accuracy likely apply to a broad range of disciplines. In his own words,
The probability that a research claim is true may depend on the study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field.
Ioannidis argues that the current reliance upon the p-value alone, i.e., asking only whether the probability of observing the data when the null hypothesis is true falls below some threshold (typically, a chance of less than 1 in 20), sets a dangerous precedent, as it ignores the influence of research bias (from things such as finite-size effects, hypothesis and test flexibility, pressure to publish significant findings, etc.). Ioannidis goes on to argue that scientists are often careless in ruling out potential biases in their data, methodology and even the hypotheses they test, and that replication by independent research groups is the best way to validate research findings, as such replications constitute the most independent kind of trial possible. That is, confirming an already published result is at least as important as the original finding itself. Yet he also argues that even then, significance may simply reflect broadly shared assumptions.
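Ioannidis's argument is easy to make concrete. In his framework, the post-study probability that a claimed finding is actually true, the "positive predictive value" (PPV), depends on the prior odds R that a probed relationship is real, the significance threshold alpha and the power 1 - beta. A minimal sketch in Python (the sample values of R below are my own illustrations, not numbers from the paper):

```python
# A toy calculation of Ioannidis's positive predictive value (PPV):
# the post-study probability that a claimed research finding is true.
def ppv(R, alpha=0.05, beta=0.20):
    """PPV = (1 - beta) * R / (R - beta * R + alpha)

    R     -- prior odds that a probed relationship is true (true : false)
    alpha -- Type I error rate, i.e., the p-value threshold
    beta  -- Type II error rate, i.e., 1 minus the statistical power
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# Illustrative prior odds (my choices, not values from the paper):
for R in (1.0, 0.1, 0.02):
    print(f"R = {R:4.2f}  ->  PPV = {ppv(R):.2f}")
```

Even with the conventional alpha = 0.05 and 80% power, a field probing mostly long-shot hypotheses (small R) will publish "significant" findings that are more often false than true.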
... most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve.
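The many-teams point also yields to back-of-the-envelope arithmetic. If n independent teams each test the same truly null hypothesis at alpha = 0.05, the chance that at least one of them obtains a publishable "significant" result is 1 - (1 - alpha)^n, which grows quickly with n (again, my illustration rather than a calculation from the paper):

```python
# If n independent teams each test the same truly null hypothesis at
# alpha = 0.05, the chance that at least one reports a "significant"
# (and hence publishable) result grows quickly with n.
alpha = 0.05
for n in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** n
    print(f"{n:2d} team(s) -> P(at least one false positive) = {p_any:.2f}")
```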
In the field of complex systems, where arguably there is a non-trivial amount of pressure to produce interesting and, pardon the expression, universal results, Ioannidis's concerns seem particularly relevant. Without beating the dead horse of finding power laws everywhere you look, shouldn't we who seek to explain the complexity of the natural (and man-made) world through simple organizing principles be held to exacting standards of rigor and significance? My work as a referee leads me to believe that my chosen field has insufficiently indoctrinated its practitioners in the importance of experimental and methodological rigor, and of not over-generalizing or over-stating the importance of one's results.
Ioannidis, J. P. A. (2005). "Why most published research findings are false." PLoS Medicine 2(8): e124.
posted August 30, 2005 10:13 AM in Simply Academic | permalink | Comments (0)
August 29, 2005
A return to base.
I returned to New Mexico about two weeks ago, and have, I think, almost gotten my loose ends from the summer tied up to the point that I can consider blogging again on a regular basis. I will certainly be blogging about my newfound insight into the dark world of the credit card industry, the similarities between academia and consulting, and other edifying topics.
Also, as a slight update, the SIAM News article on my work with Cristopher Moore, and in turn with David Kempe and Dimitris Achlioptas, on analyzing the bias of the tools that we use to map the Internet has finally appeared online.
Additionally, Philip Ball, who has written about my work on the statistics of terrorism before (here and here, both for Nature News), has penned another article for The Guardian that discusses Neil Johnson's recent preprint and, again, Maxwell Young's work with me on terrorism.
posted August 29, 2005 12:54 PM in Blog Maintenance | permalink | Comments (0)