
My 2016 Open Science Tour

I was asked to discuss my views on open science and replication, particularly in my field of social psychology, nine times in 2016 (see my “Open Science Tour” dates below). During these talks, and in the discussions that followed, people wanted to know what exactly open science is and how a researcher might go about employing open science practices.

Overall, faculty and students asked me many similar questions, so I thought I would compile a list of these frequently asked questions. I do not provide a summary of my responses; instead, I want readers to consider how they would respond. So, how would you answer these questions? (public Google Doc for posting answers)

  1. Given that many findings are not, and in many cases cannot be, predicted in advance, how can I pre-register my hypotheses?
  2. If my research is not confirmatory, do I need to use open science practices? Isn’t open science only “needed” when very clear hypotheses are being tested?
  3. How can I share data?
    • What data do I “need” to share? (All of it? Raw data? Aggregated data?)
    • What platforms are available for data sharing? (and what is the “best” one?)
    • What format/software should be used?
    • Is this really necessary?
    • How should I present this to my research ethics board?
  4. Can I publicly share materials that are copyrighted?
  5. What is a data analytic plan?
  6. Is it really important to share code/syntax from my analyses?
  7. Can’t researchers simply “game the system”? That is, conduct research first, then pre-register after results are known (PRARKing), and submit for publication?
  8. Can shared data, or even methods/procedures, be treated as unique “citable units”?
  9. If I pilot test a procedure in order to obtain the desired effects, should the “failed” pilot studies be reported?
    • If so, won’t this bias the literature by diluting the evidence in favor of the desired/predicted effect obtained in later studies?
  10. How much importance should I place on statistical power?
    • Given that effect sizes are not necessarily knowable in advance, and straightforward procedures are not available for more complex designs, is it reasonable to expect a power analysis for every study/every analysis?
  11. If I use open science practices but others do not, can they benefit by publishing more papers because they face fewer “restrictions”?
    • If yes, how is this fair?

Unique questions from students:

  1. Could adopting open science practices result in fewer publications?
  2. Might hiring committees be biased against applicants who are pro open science?
  3. If a student wants to engage in open science practices, but his/her advisor is against this, what should this student do?
  4. If a student wants to publish studies with null findings, but his/her advisor is against this, what should this student do?
  5. Will I “need” to start engaging in open science practices soon?
  6. Will it look good, or bad, to have a replication study (studies) on my CV?
  7. What is the web address for the Open Science Framework? How do I get started?

My Open Science Tour dates in 2016 (links to slides provided):

  • January 28, Pre-Conference of the Society of Personality and Social Psychology (SPSP), San Diego, USA
  • June 10, Conference of the Canadian Psychological Association, Victoria, Canada
  • October 3, York University (Psychology), Canada (audio recording)
  • October 11, University of Toronto (Psychology), Canada
  • October 19, University of Guelph (Family Relations and Applied Nutrition), Canada
  • October 21, Illinois State University (Psychology), USA
  • November 11, Victoria University of Wellington (Psychology), New Zealand
  • November 24, University of Western Ontario (Clinical Area), Canada
  • December 2, University of Western Ontario (Developmental Area), Canada

An Inside Perspective of a Registered Replication Report (RRR)

Update: Dan Simons and Bobbie Spellman discuss this Registered Replication Report, and others, on NPR’s “Science Friday.”

In the spring of 2014, we (i.e., Irene Cheung, Lorne Campbell, and Etienne LeBel) decided to submit a proposal to Perspectives on Psychological Science for a Registered Replication Report (RRR) focusing on Study 1 of Finkel, Rusbult, Kumashiro, and Hannon’s (2002) paper testing the causal association between commitment and forgiveness. The product of over 2 years of work by many people, including us, the tireless Dan Simons (Editor of the RRR series), a cooperative and always responsive Eli Finkel (the lead author of the research to be replicated), and researchers from 15 other labs all over the world, is now finally published online (http://www.psychologicalscience.org/pdf/Finkel_RRR_FINAL.pdf). Here is our inside perspective on how the process unfolded for this RRR.

The initial vetting stage for the RRR was fairly straightforward. We answered some simple questions on the Replication Pre-Proposal Form and provided the rationale for why we believed Study 1 of Finkel et al.’s (2002) manuscript was a good candidate for an RRR (e.g., the paper is highly cited, it is theoretically important, and no prior direct replications have been published). After receiving positive feedback, we were asked to provide a more thorough breakdown of the original study and of the feasibility of having multiple labs all over the world conduct the same project independently. In a Replication Proposal and Review Form totaling 47 pages, we provided information regarding (a) the original study and effect(s) of interest, (b) sample characteristics of the original and proposed replication studies (including a power analysis), (c) researcher characteristics (including relevant training of the researcher collecting data from participants), (d) the experimental design of the original and proposed studies, (e) data collection (including any proposed differences from the original study), and (f) the target data analysis (of both the original and planned replication studies). After receiving excellent feedback and making many edits, a draft of this document was sent to the original corresponding author (Eli Finkel). Eli very quickly provided thorough feedback and forwarded copies of the original study materials. He also provided thoughtful feedback throughout the process as we made many decisions about how to conduct the replication study, and he ultimately vetted the final protocol. The RRR editors eventually gave us the green light to go forward with the project.
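
For readers unfamiliar with this step, a power analysis of the general kind requested in the proposal form might look something like the following minimal sketch in R, using the pwr package. The effect size, alpha, and power values here are illustrative placeholders, not the values from the registered protocol.

    # Illustrative only: per-lab sample size needed to detect a correlation of
    # r = .20 (a placeholder, not the registered target effect) with 95% power
    # at a two-tailed alpha of .05.
    library(pwr)
    pwr.r.test(r = 0.20, sig.level = 0.05, power = 0.95, alternative = "two.sided")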

We were then required to organize the project. The study was programmed in Qualtrics, the protocol requirements were created, the project page on the Open Science Framework (OSF) was established, and eventually a call went out for interested researchers to submit a proposal to independently run the study and contribute data. It is nearly impossible to estimate the number of emails sent around between Dan, our team, and Eli during this time, or the number of small changes made to all of the materials along the way. By the fall of 2015, all participating labs were ready to start collecting data. Participating labs simply needed to download the necessary materials from the OSF project page, and Dan provided support to many of the labs throughout the process. Prior to data collection, the study was pre-registered on the OSF.

Data collection was complete by January 2016, and it was time to prepare the R code needed to analyze the data from each lab and to conduct the planned meta-analyses (a simplified sketch of this step appears below). Our team helped test the code, which Edison Choe (working for APS) wrote in full and Courtney Soderberg (from the OSF) verified; the code needed many small tweaks and adjustments along the way. All labs then ran the code on their own data and submitted the data and results to their own OSF pages, while our team wrote the manuscript before seeing the full set of results from all labs. Dan and Eli provided feedback on numerous occasions, and the full set of results was not released to us until the manuscript was considered acceptable by all parties. After the results were released, we incorporated them into the manuscript and wrote a discussion section. Eli then wrote a response. After many more small edits, and copious amounts of email back and forth with Dan and Eli, the manuscript was complete. All participating labs were then provided a copy of the manuscript to review for any required edits, and they were asked not to discuss the results with anyone not associated with the RRR until the paper was published online. Not surprisingly, a few more edits were indeed required. When those were completed, the manuscript was sent to the publisher and appeared online first within a week.
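
The simplified sketch below illustrates roughly what the meta-analytic step of such a project involves: combining one effect size per lab with a random-effects model and displaying the per-lab and pooled estimates in a forest plot. It is written in R with the metafor package; the file name, column names, and effect-size metric are assumptions for illustration, and this is not the verified code prepared for the RRR.

    # Rough sketch only (not the actual RRR analysis code): combine one effect
    # size per lab with a random-effects meta-analysis and draw a forest plot.
    # The file and its columns (lab, yi = effect size, vi = sampling variance)
    # are hypothetical names.
    library(metafor)
    dat <- read.csv("lab_effects.csv")
    res <- rma(yi = yi, vi = vi, data = dat, method = "REML", slab = dat$lab)
    summary(res)  # pooled estimate and heterogeneity statistics
    forest(res)   # per-lab estimates plus the pooled effect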

Overall, this was a monumental task. The manuscript can be read in minutes, the results digested in a few quick glances at the forest plots. Getting to this point, however, required the time, attention and effort of many individuals over 2 years. Seeing an RRR through to completion requires a lot of dedication, hard work, and painstaking attention to detail; it is not to be entered into lightly. But the process itself, in our opinion, represents the best of what Science can be—researchers working together in an open and transparent manner and sharing the outcome of the research process regardless of the outcome. And the outcome of this process is a wonderful set of publicly available data that helps provide more accurate estimates of the originally reported effect size(s). It is a model of what the scientific process should be, and is slowly becoming.

From the proposing authors of this RRR:

Irene Cheung

Lorne Campbell

Etienne LeBel