Of Death and Data

What do death rates in New Orleans and the availability of unused embryos for medical research have in common? Both were the subject of prominently reported news stories in the last few weeks – stories we'd advised our own people here at ABC News to avoid.

It happens much more often than I'd like. We’ve been at the forefront in establishing standards for validity and reliability in reporting survey research; too many polls are manufactured, ill-supported or misanalyzed for us to do any less. But while a few news organizations are now doing similar work, many still don’t dig into the research they’re reporting in order to establish first what's really there.

Some of these evaluative efforts are easy – the P.R. polls done on the cheap with compromised and unreliable methodology, or the advocacy and partisan surveys packed with loaded questions and cherry-picked analysis. But others – including academic and health-related studies – can be a good sight more complicated.

A day after we'd steered our radio correspondent Aaron Katersky away from the New Orleans study, for instance, there it was on page one of USA Today: Researchers had found a 47 percent increase in New Orleans' death rate after Hurricane Katrina.

Not a bad headline for a slow news day, and the researchers did have an innovative approach. Expressing concern about the reliability of official data, they tallied death notices in the local newspaper, the Times-Picayune. They found an average of 1,317 death notices a month from January to June 2006, up from an average of 924 in 2002 and 2003. On a per-capita basis, it was a 47 percent increase.
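For readers who want to follow the arithmetic, here's a rough sketch of how a per-capita comparison like this works. Only the death-notice averages come from the figures above; the population numbers are placeholders, not the study's estimates, so the output won't reproduce the 47 percent figure.

```python
# Rough sketch of a per-capita death-rate comparison.
# Only the death-notice averages below come from the figures cited above;
# the population numbers are placeholders, NOT the study's estimates.

def monthly_rate_per_100k(deaths_per_month, population):
    """Death notices per 100,000 residents per month."""
    return deaths_per_month / population * 100_000

baseline_notices = 924      # avg. monthly death notices, 2002-2003
post_notices = 1_317        # avg. monthly death notices, Jan.-June 2006

baseline_population = 1_300_000   # placeholder pre-Katrina population
post_population = 1_200_000       # placeholder post-Katrina population

baseline_rate = monthly_rate_per_100k(baseline_notices, baseline_population)
post_rate = monthly_rate_per_100k(post_notices, post_population)
pct_change = (post_rate / baseline_rate - 1) * 100

print(f"baseline: {baseline_rate:.1f} per 100,000/month")
print(f"post-Katrina: {post_rate:.1f} per 100,000/month")
print(f"per-capita change: {pct_change:+.0f} percent")
```

The point of the exercise is that the headline number depends as much on the population estimates as on the death-notice counts themselves.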

The report, produced by the New Orleans Health Department and published in the journal "Disaster Medicine and Public Health Preparedness," described this change as statistically significant. Therein lies the problem: When our senior analyst Pat Moynihan read beyond the news release to the report itself, the data tables he saw back on page 18 indicated that the difference the researchers found wasn't statistically significant at all.

Pat then engaged in a fairly lengthy phone conversation with the report's lead author and an e-mail exchange with its second author. Final outcome: Indeed, the differences in death notices reported by the study were not statistically significant. You can call us a stick in the mud, but as we see it, if a change in trend data isn't statistically significant, it's going to have a darned hard time being newsworthy.
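For those curious what such a check can look like, here's a bare-bones sketch of one common approach: a two-sample t-test on average monthly counts. We don't have the study's raw month-by-month data, so the standard deviations below are assumed values swept over a range; the point is simply that the same gap in averages can be significant or not depending on how much the monthly counts bounce around.

```python
# Sketch: whether a gap in average monthly counts is statistically
# significant depends on month-to-month volatility. The averages and
# month counts come from the figures cited above; the standard
# deviations are ASSUMED values, because we don't have the raw data.
import math
from scipy import stats

m1, n1 = 924, 24    # avg. monthly death notices, 2002-2003 (24 months)
m2, n2 = 1317, 6    # avg. monthly death notices, Jan.-June 2006 (6 months)

for sd in (100, 200, 300, 400):   # assumed month-to-month standard deviation
    se = math.sqrt(sd**2 / n1 + sd**2 / n2)   # standard error of the gap
    t = (m2 - m1) / se
    # Welch-Satterthwaite degrees of freedom
    df = (sd**2/n1 + sd**2/n2)**2 / ((sd**2/n1)**2/(n1-1) + (sd**2/n2)**2/(n2-1))
    p = 2 * stats.t.sf(abs(t), df)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"assumed SD = {sd}: t = {t:.2f}, p = {p:.3f} -> {verdict}")
```

The wider the monthly swings, the harder it is for even a sizable-looking gap to clear the significance bar; that's the kind of thing you learn only by reading past the news release.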

That same week we looked at the embryo study, published in the online edition of the journal Science and released (presumably not by coincidence) the day George W. Bush vetoed the latest stem-cell research legislation. The survey reported that a substantial number of people who’ve undergone in-vitro fertilization would donate their unused embryos to stem-cell research, potentially vastly increasing the number of available lines. The AP – which has been much better lately at checking out survey data before jumping in – picked this one up.

We saw problems. The study was described as a "national survey," yet no sampling detail was provided beyond the fact that it'd been conducted among patients at nine fertility clinics. The phrase "national survey" implies (to us, at least) that it's a nationally representative survey. Turns out this is not the case: In conversations with us, the authors allowed that theirs was a so-called convenience sample of clinics, assembled without the benefit of random sampling procedures.

Despite this lack of representative sampling, the researchers extrapolated their results to the estimated full national population of IVF patients. That's hard for us to see as justified for attitudinal results obtained via a convenience sample, certainly not without very substantial qualifications. (We also thought the study used an awfully broad definition of propensity to donate.)
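To make the convenience-sample point concrete, here's a toy simulation with entirely made-up numbers. If the handful of clinics that happened to participate skew toward donation-friendly patients, the extrapolated national figure inherits that skew, no matter how many patients get interviewed.

```python
# Toy simulation (made-up numbers): extrapolating from a convenience
# sample of clinics vs. a random sample of clinics.
import random

random.seed(0)

# Hypothetical "truth": 200 clinics whose patients' willingness to donate
# unused embryos to research varies a lot from clinic to clinic.
clinic_rates = [random.uniform(0.2, 0.8) for _ in range(200)]
true_rate = sum(clinic_rates) / len(clinic_rates)

# Convenience sample: suppose the nine participating clinics happen to sit
# at the donation-friendly end of the spectrum (self-selection).
convenience = sorted(clinic_rates, reverse=True)[:9]
convenience_rate = sum(convenience) / len(convenience)

# Nine clinics drawn at random, for comparison.
random_pick = random.sample(clinic_rates, 9)
random_rate = sum(random_pick) / len(random_pick)

print(f"true rate across all clinics: {true_rate:.0%}")
print(f"convenience-sample estimate:  {convenience_rate:.0%}")
print(f"random-sample estimate:       {random_rate:.0%}")
```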

These aren't mere quibbles. I've reported previously on misreporting of scientific data in studies on subjects as disparate as autism, parenting and even naps. Whatever the subject, as studies like these enter the discourse through credible media sources they can influence our thinking and ultimately inform public policy. In deciding whether and how to report them, we need first to check them out in detail, as we do any other alleged news that comes in over the transom. As newspeople, that's our job. Or should be.
