Tuesday, November 30, 2010

Science Reporting and Evidence-Based Journalism

I tore an article out of The American Scholar about a year ago and crumpled it into my pocket to think about later. The pages turned up again this weekend when my wife was sorting through one of the rat’s nests of loose papers I maintain around the house. The article is an extract of a speech by a Washington Post reporter named David Brown. The speech is available here:

http://www.theamericanscholar.org/science-reporting-and-evidence-based-journalism/

Brown talks about ways in which science journalism could be a lot better than it is today. He argues that science journalism, done better, could be a model for improving journalism as a whole. The implication is that our democracy could certainly use some better journalism.

You'd probably be using your time more wisely if you were to read the speech instead of reading this blog post, but I'll carry on for a moment under the assumption that you have time for both.


According to Brown, the most important thing science journalism has that makes it a good model for journalism as a whole is evidence. When you read a story, says Brown, "[N]otice how much space is devoted to describing the evidence for what is purportedly new in this news, and how much is devoted to someone telling you what to think about it. Ask yourself whether there is enough information in the story to permit you to reach your own opinion about its newsworthiness. I think you’ll be surprised. If there isn’t enough information to give you, the reader, a fighting chance to decide for yourself whether something is important, then somebody isn’t doing his or her job."

I decided to apply this test to a short piece of science reporting, which I reproduce here in its entirety:

Lithium, Water, and Suicide

Lithium is a staple prescription for bipolar depression and suicidal tendencies. But it is also a naturally occurring element, with traces found in most of the world's drinking water, and that raises a question: Do levels of lithium in tap water correlate with suicide rates? A recent study by a team of Japanese doctors says yes.

Published May 1 in the British Journal of Psychiatry, the study reported that in 18 municipalities in southwest Japan, towns with relatively low levels of lithium saw higher suicide rates than towns with relatively high levels. The lithium content ranged from 0.7 to 59 micrograms, much lower than the 200 to 400 milligrams usually prescribed to bipolar patients (and much, much lower than the toxicity threshold). Nevertheless, the researchers speculated that even very low levels of lithium can have a cumulative, prophylactic effect on mood swings that might induce suicidal thoughts, completely separate from the effect large doses have on mood disorders. The implication, says lead researcher Takeshi Terao in response to an e-mail inquiry, is that "adding lithium to drinking water may be useful to prevent suicide."

But even Terao admits that further study is needed. Among other concerns, not enough is known about the long-term effects of even low levels of lithium, according to Dr. Allan H. Young, the director of the Institute of Mental Health at the University of British Columbia. Still, given the immense social costs of suicide, the team in Japan concluded in a follow-up paper that adding lithium to drinking water offers "an easy, cheap and substantial strategy for worldwide suicide prevention."

Although this article is better than most in that it includes several pieces of numerical information, it still leaves us pretty hungry for the one piece of information we need: namely, how big was the observed effect?
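
As an aside, the numbers the article does include let us work out one thing for ourselves: how far below a therapeutic dose the tap-water exposure is. Here is a rough sketch of that arithmetic. Two assumptions are mine, not the article's: that the quoted "0.7 to 59 micrograms" is a concentration per litre of water, and that a person drinks about two litres of tap water a day.

```python
# Daily lithium from tap water vs. a prescription, using the article's figures.
# Assumptions of mine: the "0.7 to 59 micrograms" range is per litre of water,
# and a person drinks about 2 litres of tap water a day.

litres_per_day = 2.0
water_ug_per_litre = (0.7, 59.0)       # range quoted in the article
prescribed_mg = (200.0, 400.0)         # therapeutic dose quoted in the article

water_mg_low  = water_ug_per_litre[0] * litres_per_day / 1000.0   # mg/day, low end
water_mg_high = water_ug_per_litre[1] * litres_per_day / 1000.0   # mg/day, high end

print(f"From tap water:      {water_mg_low:.4f} to {water_mg_high:.3f} mg/day")
print(f"From a prescription: {prescribed_mg[0]:.0f} to {prescribed_mg[1]:.0f} mg/day")
print(f"A prescription is at least {prescribed_mg[0] / water_mg_high:,.0f} times larger")
```

Even at the highest level measured, a heavy water drinker takes in roughly a thousandth of a clinical dose, which is what makes the researchers' speculation about a cumulative, prophylactic effect the striking part of the story.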

Reporters might not always realize that publication in a prestigious journal is no guarantee that the study in question matters. Sometimes a study establishes an effect that is real but far too small to matter in practice. Sure, maybe the study offers convincing evidence that eating more X will raise your risk of getting Y. But what if it only raises the risk from one in a million to one in a thousand? That is hardly newsworthy. Yet it might very well make the newspapers anyway. The headline? "Eating X Makes You A Thousand Times More Likely to Get Y!"
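
To see how far apart those two framings can be, here is a toy calculation using the made-up numbers from my hypothetical above (they come from no actual study):

```python
# Relative vs. absolute risk, using the hypothetical numbers above.
baseline_risk = 1 / 1_000_000   # chance of getting Y if you don't eat X
exposed_risk  = 1 / 1_000       # chance of getting Y if you do

relative_risk     = exposed_risk / baseline_risk   # 1000x -- the headline number
absolute_increase = exposed_risk - baseline_risk   # ~0.001 -- the number that matters

print(f"Relative risk:      {relative_risk:,.0f}x")
print(f"Absolute increase:  {absolute_increase:.6f}")
print(f"Extra cases per 100,000 people: {absolute_increase * 100_000:.0f}")
```

Both numbers describe the same finding; only one of them fits in a headline. A story that reports the relative figure without the absolute one has left out the evidence the reader needs.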

Brown makes this same point when he says,

Science stories, and especially medical stories, have a really good shot of getting out on Page 1. They are inherently interesting and they appeal to what might be termed, somewhat cynically, as the narcissism of the reader. But that often isn’t enough to get them on the front page. To get there, the story must emphasize novelty, potency, and certainty in a way that, as a general rule, rarely exists in a piece of scientific research. That truth is why so many medical stories only mention the [relative] magnitude of change that occurs with a new diagnostic test or treatment, and not the absolute change it brings about.

To be fair, reporters are not entirely to blame. They don't manufacture all of the buzz. The research university itself, ever conscious of building its brand, might even be suggesting the overblown headline in its own press release. More fundamentally, a scientific journal article is itself a rarefied form of reporting, and the scientist's job security depends no less than the journalist's upon having something sensational to report. The scientist certainly has no incentive to downplay his own results - not in the research write-up itself, nor later when the reporter calls for a quote.

In any event, by withholding the evidence about the magnitude of the effect observed, "Lithium, Water, and Suicide" fails Brown's basic test of science journalism. It doesn't give us the information we need to decide whether the research in question matters or not. The closest the reporter gets to the notion of effect size or importance is the final "kicker" that adding lithium to drinking water offers "an easy, cheap and substantial strategy for worldwide suicide prevention." But that quote by itself isn't evidence.

I liked Brown's speech, but giving a speech about how science journalism should be better is not, by itself, going to make it any better. Or so I concluded when I read "Lithium, Water, and Suicide": it appeared in the very same issue of The American Scholar as Brown's speech itself.


Postscript

In case you're curious to find out more about the lithium research, the article is available online. Also, in a later issue of the journal, there is some interesting discussion of the shortcomings of the study. The authors do not dispute the shortcomings, rightly pointing out that the study was meant to be suggestive, and that a lot of work remains to be done before the link between suicide and lithium in tap water can be taken as established. (Note in particular that lithium intake from food is not negligible in comparison with lithium intake from tap water.) And even if the link were established as a matter of basic science, a host of other questions arise when we consider adding lithium to tap water as a public health strategy.

I wanted to find out the rationale for the statement that lithium offers an "easy, cheap and substantial strategy for worldwide suicide prevention," so I did a little digging. If you Google this phrase, you'll see what a buzz the lithium story created a year ago. It even made it onto the 9th annual New York Times "Year in Ideas" list. What you won't find so easily is the follow-up scientific paper that contains this phrase. But I finally tracked it down and paid $31.50 for a copy. The citation is Terao et al., "Even very low but sustained lithium intake can prevent suicide in the general population?" Medical Hypotheses 73 (2009), 811-812.

The striking thing about this paper is that it does little better than the American Scholar article did in explaining the absolute magnitude of the effect in question. Yes, it quotes the slope of the regression line in the original research. But the raw data for that regression were massaged in various ways, and the y-axis of the regression model is a confusing measure called the "suicide standardized mortality ratio" (roughly, the ratio of observed suicides to the number expected from national rates). So we are never actually told how many suicides might be prevented each year under a given treatment scheme. We never find out what that scheme would cost to implement. We never find out what the public health benefit would be in terms of years of life saved or lost wages regained. Even order-of-magnitude estimates would have helped. But as it is, if we want to decide for ourselves whether the results matter, there's almost nothing to go on. And after all, if the scientists themselves aren't providing the necessary information, then how can we expect the reporters to give it to us?
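
For what it's worth, the missing translation is not hard to sketch once you pick the inputs. Below is the shape of the calculation I was hoping to find in the paper. Every number in it is a placeholder of my own invention, not a figure from the study; the point is only that the arithmetic itself is short.

```python
# The kind of back-of-the-envelope translation the paper never provides.
# Every number below is a placeholder of mine, not a figure from the study;
# the point is only that going from "change in standardized mortality ratio"
# to "suicides averted per year" takes a few lines of arithmetic.

population    = 1_000_000     # placeholder: people served by the water supply
expected_rate = 20 / 100_000  # placeholder: expected suicides per person per year
smr_reduction = 0.05          # placeholder: hypothesized drop in the SMR
                              # (observed / expected suicides) credited to lithium

expected_suicides = population * expected_rate          # "expected" suicides per year
suicides_averted  = expected_suicides * smr_reduction   # implied reduction per year

print(f"Expected suicides per year: {expected_suicides:.0f}")
print(f"Implied suicides averted:   {suicides_averted:.0f}")
```

Plug in the study's actual regression slope, the measured lithium levels, and real population figures in place of the placeholders, and you would have a number a reader could weigh against the cost of adding lithium to the water supply. That number never appears.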