Deciding What Studies to Replicate: The Path of Least Resistance

I support the efforts of researchers in our field to conduct direct/close replication attempts of published research. It is something I do myself. It is wonderful to see journal policies starting to embrace replication studies, including special issues in different journals devoted to replication studies in a given area of study. But… and this is not a "but" in the sense that I will now soften my support for replication research; it is a "but" in the sense that, because less labor-, time-, and cost-intensive studies will likely attract the majority of these replication attempts, it seems inevitable that an asymmetry in the precision of effect size estimates will develop over time.

Imagine that a researcher who has decided to devote some research effort to replication studies comes across three published studies that pique his interest:

  • Publication 1: recruited a large number of participants from either a University subject pool (i.e., undergraduate students who need to participate in research studies for course credit) or online (e.g., Amazon’s Mechanical Turk).
  • Publication 2: recruited undergraduate students from the University subject pool, but had each participant come into the lab individually because of complex experimental manipulations as well as the collection of biological samples.
  • Publication 3: recruited participants from the community and followed them over a period of two years with multiple testing sessions (both in the lab and online).

I do not need to provide more details for these fictional studies to make the point that the labor, time, and cost needed to directly replicate the methods of study 3 are much greater than for the other two studies, and greater for study 2 than for study 1. Given that researchers do not have access to unlimited resources over a prolonged period of time to conduct their own research, let alone direct replications of the research of others (if you do, call me), it is reasonable to conclude that, of the fictional studies presented, more replication attempts would be made for study 1 than for the other two. Over time, therefore, more precise estimates of the effect sizes obtained in “easy to run” studies will accumulate compared to “difficult to run” studies. Put another way, one-shot correlational and experimental studies involving University students or MTurkers will be the focus of the bulk of replication attempts; studies with special populations (e.g., cross-cultural samples, married couples, parent-child interactions, and many, many others), studies collecting “expensive” data (e.g., brain scans, hormonal assays), and studies using longitudinal designs (e.g., daily-diary studies, the early years of marriage, personality development across time, and so on) will be the focus of few direct replication attempts, if any.

I cannot imagine, for example, obtaining the grant funds necessary to directly replicate a multi-wave study of newly married couples over a period of two or more years [but see the comment below: Brent Roberts did receive grant funding along these lines]. Even if funds were on hand to directly replicate a two-week diary study that included pre- and post-diary assessments, the amount of time needed to run the study, and the research assistants needed to run it, would likely dissuade most researchers from endeavouring to replicate this research.
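
To put a rough number on that precision asymmetry, here is a minimal sketch in Python. The sample sizes and replication counts are hypothetical, and it uses the crude 1/√N approximation for the standard error of a correlation-type effect rather than any particular meta-analytic model; the point is simply that each additional replication shrinks the standard error of the pooled estimate, so a literature that attracts ten replications ends up with a far tighter estimate than one that attracts none.

```python
import math

def pooled_se(n_per_study: int, k_studies: int) -> float:
    """Rough standard error of a pooled correlation-type effect size.

    Uses the approximation SE ~ 1 / sqrt(total N), which ignores
    between-study heterogeneity but is enough to show how precision
    scales with the number of accumulated replications.
    """
    return 1.0 / math.sqrt(n_per_study * k_studies)

# Hypothetical numbers: an "easy" online study replicated ten times
# versus a "hard" longitudinal study that is never replicated.
easy = pooled_se(n_per_study=200, k_studies=10)  # original + 9 replications
hard = pooled_se(n_per_study=200, k_studies=1)   # original study only

print(f"easy-to-run literature: SE ~ {easy:.3f}")  # ~0.022
print(f"hard-to-run literature: SE ~ {hard:.3f}")  # ~0.071
```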

Now that the value of direct/close replication studies is generally recognized, perhaps we need to find ways of incentivizing replication attempts of studies that otherwise would be ignored by most replicators.

2 thoughts on “Deciding What Studies to Replicate: The Path of Least Resistance”

  1. Hi Lorne, no need to imagine that a granting agency would fund a multi-wave longitudinal replication study, because NIA already did so for my studies. These 3 longitudinal studies are not direct replications of each other, but they are close enough in structure to confirm findings across data sets. Moreover, we just published a close replication of a multi-wave longitudinal study with another multi-wave longitudinal study in JPSP. Yes, even longitudinal research can be directly replicated.

    So, while I agree with the sentiment of your post, I think it all boils down to how much we care about a phenomenon. If the topic matters enough, we can and should do the work to show that the phenomenon is robust.
