I have been asked to discuss my views on open science and replication, particularly in my field of social psychology, nine times in 2016 (see my “Open Science Tour” dates below). During these talks, and in the discussions that followed, people wanted to know what exactly open science is, and how a researcher might go about employing open science practices.
Overall, faculty and students asked me many similar questions, so I thought I would compile a list of these frequently asked questions. I do not provide a summary of my responses to these questions; instead, I want readers to consider how they would respond. So, how would you answer these questions? (public google doc for posting answers)
- Given that many findings are not, and in many cases cannot be, predicted in advance, how can I pre-register my hypotheses?
- If my research is not confirmatory, do I need to use open science practices? Isn’t open science only “needed” when very clear hypotheses are being tested?
- How can I share data?
- What data do I “need” to share? (All of it? Raw data? Aggregated data?)
- What platforms are available for data sharing? (and what is the “best” one?)
- What format/software should be used?
- Is this really necessary?
- How should I present this to my research ethics board?
- Can I publicly share materials that are copyrighted?
- What is a data analytic plan?
- Is it really important to share code/syntax from my analyses?
- Can’t researchers simply “game the system”? That is, conduct research first, then pre-register after results are known (PRARKing), and submit for publication?
- Can shared data, or even methods/procedures, be treated as unique “citable units”?
- If I pilot test a procedure in order to obtain the desired effects, should the “failed” pilot studies be reported?
- If so, won’t this bias the literature by diluting the evidence in favor of the desired/predicted effect obtained in later studies?
- How much importance should I place on statistical power?
- Given that effect sizes are not necessarily knowable in advance, and straightforward procedures are not available for more complex designs, is it reasonable to expect a power analysis for every study/every analysis?
- If I use open science practices but others do not, can they benefit more in terms of publishing more papers because of fewer “restrictions” on them?
- If yes, how is this fair?
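The power-analysis questions above come up often enough that a concrete illustration may help. Below is a minimal sketch of the standard normal-approximation formula for the per-group sample size of a two-sided, two-sample t-test; the function name and defaults (alpha = .05, power = .80) are my own choices for illustration, not a prescription.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample t-test,
    using the normal approximation (effect_size is Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at alpha = .05 and 80% power
print(n_per_group(0.5))  # 63 per group under the normal approximation
```

Note that this approximation slightly understates the exact t-test requirement (about 64 per group here), and, as the questions above point out, the real difficulty is usually justifying the effect size you plug in, not the arithmetic.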
Unique questions from students:
- Could adopting open science practices result in fewer publications?
- Might hiring committees be biased against applicants who are pro open science?
- If a student wants to engage in open science practices, but his/her advisor is against this, what should this student do?
- If a student wants to publish studies with null findings, but his/her advisor is against this, what should this student do?
- Will I “need” to start engaging in open science practices soon?
- Will it look good, or bad, to have a replication study (studies) on my CV?
- What is the web address for the open science framework? How do I get started?
My Open Science Tour dates in 2016 (links to slides provided):
- January 28, Pre-Conference of the Society for Personality and Social Psychology (SPSP), San Diego, USA
- June 10, Conference of the Canadian Psychological Association, Victoria, Canada
- October 3, York University (Psychology), Canada (audio recording)
- October 11, University of Toronto (Psychology), Canada
- October 19, University of Guelph (Family Relations and Applied Nutrition), Canada
- October 21, Illinois State University (Psychology), USA
- November 11, Victoria University of Wellington (Psychology), New Zealand
- November 24, University of Western Ontario (Clinical Area), Canada
- December 2, University of Western Ontario (Developmental Area), Canada