Saturday, July 27, 2024

Should publishers flag replication failures in the same way they flag retractions?

In psychology, published replication attempts have no detectable effect on the rate at which the original paper is cited, whether or not the original result replicated. That’s at least in part because the replications themselves are hardly cited. Andrew Gelman discusses this result, and similar results in other fields. His discussion includes a radical proposal: studies that fail to replicate should be flagged by publishers, just as retractions are! The idea is analogous to how the WestLaw database doesn’t just show lawyers all the court cases on issue X, but also indicates which cases have been cited approvingly, affirmed, or overturned in appeals and subsequent cases.

To which: I dunno man.

The analogy to WestLaw is an interesting one that I need to think more about. But right this second, I feel like scientists have to do their own jobs, which include knowing the literature. It’s not, and shouldn’t be, up to publishers to know the literature for them.

Plus, are we sure that not knowing the literature is really the problem here? Is “lack of awareness of failed replications” the reason why the replications are hardly cited, and the reason why the original studies continue to be cited? I kind of doubt it.

Which brings me to my next thought: I want to know more about how the original studies are cited, before I get too worked up about the lack of effect of replication failures on their citation rates. For instance, in ecology, Joe Connell’s original 1978 Science paper coining the intermediate disturbance hypothesis still gets cited a ton, even though the IDH hasn’t stood up to empirical or theoretical scrutiny (Fox 2013). But almost all of those citations these days are either from obscure journals, or else they’re throwaway, “fill in the box” citations from papers that often aren’t even about the intermediate disturbance hypothesis. I’m not too bothered that refutations of the IDH haven’t filtered down to obscure journals, and haven’t stopped people from using Connell (1978) as a throwaway citation. This is very much in contrast to citations in court decisions, where AFAIK there’s no equivalent of the sorts of throwaway citations that comprise a decent fraction of all the citations in any given ecology paper.

Finally, what exactly counts as a “replication attempt” that publishers ought to flag? Do “conceptual replications” count? Effect sizes in ecology are so heterogeneous (Senior et al. 2016) that I question whether ecology ever has replication attempts exact enough to be flagged by publishers in the same way that retractions are. “This study failed to replicate, so you should consider it refuted and not cite it, just as you wouldn’t cite a retracted study” just doesn’t seem like the kind of thing ecologists ever ought to say. You’d end up flagging, and refusing to cite, every study in ecology, including the replications. Again, this seems like a contrast to court cases on WestLaw: there’s no ambiguity as to whether a court decision was overturned or upheld on appeal.

To be clear, I do think that ecologists can and should get better and quicker at abandoning scientific ideas that haven’t panned out, or aren’t likely to pan out. I just don’t think that we’re going to improve on that front by building a search engine or database that automatically flags studies that failed to replicate, however “replicate” is defined. But I could be wrong! Looking forward to hearing what you think.
