
A Commitment to Better Research Practices (BRPs) in Psychological Science

Scientific research is an attempt to identify a working truth about the world that is as independent of ideology as possible.  As we appear to be entering a time of heightened skepticism about the value of scientific information, we feel it is important to emphasize and foster research practices that enhance the integrity of scientific data and thus scientific information. We have therefore created a list of better research practices that we believe, if followed, would enhance the reproducibility and reliability of psychological science. The proposed methodological practices are applicable for exploratory or confirmatory research, and for observational or experimental methods.

  1. If testing a specific hypothesis, pre-register your research[1], so others can know that the forthcoming tests are informative. Report the planned analyses as confirmatory, and report any other analyses or any deviations from the planned analyses as exploratory.
  2. If conducting exploratory research, present it as exploratory. Then, document the research by posting materials, such as measures, procedures, and analytical code, so future researchers can benefit from them. Also, record your research expectations and plans in advance of analyses—little, if any, research is truly exploratory. State the goals and parameters of your study as clearly as possible before beginning data analysis.
  3. Consider data sharing options prior to data collection (e.g., complete a data management plan; include necessary language in the consent form), and make data and associated meta-data needed to reproduce results available to others, preferably in a trusted and stable repository. Note that this does not imply full public disclosure of all data. If there are reasons why data can’t be made available (e.g., containing clinically sensitive information), clarify that up-front and delineate the path available for others to acquire your data in order to reproduce your analyses.
  4. If you are using some form of hypothesis testing, or are attempting to accurately estimate an effect size, use power analysis to plan the research before conducting it so that it is maximally informative (see the power-analysis sketch after this list).
  5. To the best of your ability, give your research the power necessary to detect the smallest effect size you are interested in testing (e.g., increase sample size, use within-subjects designs, use better, more precise measures, use stronger manipulations). Also, to increase the power of your research, consider collaborating with other labs, for example via StudySwap (https://osf.io/view/studyswap/). Be open to sharing existing data with other labs in order to pool data for a more robust study.
  6. If you find a result that you believe to be informative, make sure the result is robust. For smaller lab studies this means directly replicating your own work or, even better, having another lab replicate your finding, again via something like StudySwap. For larger studies, this may mean finding highly similar data, archival or otherwise, to replicate results. When other large studies are known in advance, seek to pool data before analysis. If the samples are large enough, consider employing cross-validation techniques, such as splitting samples into random halves, to confirm results (see the split-half sketch after this list). For unique studies, checking robustness may mean testing multiple alternative models and/or statistical controls to see whether the effect is robust to alternative hypotheses, confounds, and analytical approaches.
  7. Avoid performing conceptual replications of your own research in the absence of evidence that the original result is robust and/or without pre-registering the study. A pre-registered direct replication is the best evidence that an original result is robust.
  8. Once some level of evidence has been achieved that the effect is robust (e.g., a successful direct replication), by all means do conceptual replications, as conceptual replications can provide important evidence for the generalizability of a finding and the robustness of a theory.
  9. To the extent possible, report null findings. In science, null news from reasonably powered studies is informative news.
  10. To the extent possible, report small effects. Given the uncertainty about the robustness of results across psychological science, we do not have a clear understanding of when effect sizes are “too small” to matter. As many effects previously thought to be large have turned out to be small, be open to finding evidence of effects of many sizes, particularly under conditions of large N and sound measurement.
  11. When others are interested in replicating your work be cooperative if they ask for input. Of course, one of the benefits of pre-registration is that there may be less of a need to interact with those interested in replicating your work.
  12. If researchers fail to replicate your work continue to be cooperative. Even in an ideal world where all studies are appropriately powered, there will still be failures to replicate because of sampling variance alone. If the failed replication was done well and had high power to detect the effect, at least consider the possibility that your original result could be a false positive. Given this inevitability, and the possibility of true moderators of an effect, aspire to work with researchers who fail to find your effect so as to provide more data and information to the larger scientific community that is heavily invested in knowing what is true or not about your findings.
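
Point 4's power analysis can be run before any data are collected using standard tools. The following is a minimal sketch in Python using statsmodels; the smallest effect size of interest (d = 0.20), the alpha level, and the target power are illustrative assumptions, not recommendations for any particular study.

```python
# Power-analysis sketch for a two-group design (statsmodels).
# d = 0.20, alpha = .05, and power = .80 are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.20 at 80% power.
n_per_group = analysis.solve_power(effect_size=0.20, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"n per group: {n_per_group:.0f}")  # about 394 per group

# Conversely, the power achieved with 50 participants per group.
achieved = analysis.solve_power(effect_size=0.20, nobs1=50, alpha=0.05)
print(f"power at n = 50 per group: {achieved:.2f}")  # about 0.17
```

The same solve_power call can also be turned around to address point 5: fix the sample size you can realistically collect and solve for effect_size to find the smallest effect your design can reliably detect.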
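Point 6's split-half check is likewise mechanical once the sample is large enough. The sketch below simulates a simple two-group comparison and runs the same analysis in each random half of the sample; the simulated data and the t-test stand in for whatever data and analysis you actually have.

```python
# Split-half robustness sketch (NumPy/SciPy).
# The simulated data and two-group t-test are placeholders; a robust
# effect should appear, at a similar magnitude, in both halves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

# Placeholder data: a binary group indicator and an outcome, N = 2000.
group = rng.integers(0, 2, size=2000)
outcome = 0.2 * group + rng.normal(size=2000)

# Randomly split the sample into two halves and analyze each.
idx = rng.permutation(2000)
for name, half in [("half A", idx[:1000]), ("half B", idx[1000:])]:
    g, y = group[half], outcome[half]
    pooled_sd = np.sqrt((y[g == 1].var(ddof=1) + y[g == 0].var(ddof=1)) / 2)
    d = (y[g == 1].mean() - y[g == 0].mean()) / pooled_sd  # Cohen's d
    t, p = stats.ttest_ind(y[g == 1], y[g == 0])
    print(f"{name}: d = {d:.2f}, t = {t:.2f}, p = {p:.4f}")
```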

We should note that these proposed practices are complementary to other statements of commitment, such as the commitment to research transparency (http://www.researchtransparency.org/). We would also note that the proposed practices are aspirational. Ideally, our field will adopt many, if not all, of these practices. But we also understand that change is difficult and takes time. In the interim, it would be ideal to reward any movement toward better research practices.

Brent W. Roberts, Rolf A. Zwaan, Lorne Campbell

[1] van ’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. doi:10.1016/j.jesp.2016.03.004

My 2016 Open Science Tour

I have been asked to discuss my views on open science and replication, particularly in my field of social psychology, nine times in 2016 (see my “Open Science Tour” dates below). During these talks, and in discussions that followed, people wanted to know what exactly is open science, and how might a researcher go about employing open science practices?

Overall, faculty and students asked many of the same questions, so I thought I would compile a list of these frequently asked questions. I do not provide a summary of my responses; instead, I want readers to consider how they would respond. So, how would you answer these questions? (public google doc for posting answers)

  1. Given that many findings are not, and in many cases cannot be, predicted in advance, how can I pre-register my hypotheses?
  2. If my research is not confirmatory, do I need to use open science practices? Isn’t open science only “needed” when very clear hypotheses are being tested?
  3. How can I share data?
    • What data do I “need” to share? (All of it? Raw data? Aggregated data?)
    • What platforms are available for data sharing? (and what is the “best” one?)
    • What format/software should be used?
    • Is this really necessary?
    • How should I present this to my research ethics board?
  4. Can I publicly share materials that are copyrighted?
  5. What is a data analytic plan?
  6. Is it really important to share code/syntax from my analyses?
  7. Can’t researchers simply “game the system”? That is, conduct research first, then pre-register after results are known (PRARKing), and submit for publication?
  8. Can shared data, or even methods/procedures, be treated as unique “citable units”?
  9. If I pilot test a procedure in order to obtain the desired effects, should the “failed” pilot studies be reported?
    • If so won’t this bias the literature by diluting the evidence in favor of the desired/predicted effect obtained in later studies?
  10. How much importance should I place on statistical power?
    • Given that effect sizes are not necessarily knowable in advance, and straightforward procedures are not available for more complex designs, is it reasonable to expect a power analysis for every study/every analysis?
  11. If I use open science practices but others do not, can they benefit more in terms of publishing more papers because of fewer “restrictions” on them?
    • If yes, how is this fair?

Unique questions from students:

  1. Could adopting open science practices result in fewer publications?
  2. Might hiring committees be biased against applicants that are pro open science?
  3. If a student wants to engage in open science practices, but his/her advisor is against this, what should this student do?
  4. If a student wants to publish studies with null findings, but his/her advisor is against this, what should this student do?
  5. Will I “need” to start engaging in open science practices soon?
  6. Will it look good, or bad, to have a replication study (studies) on my CV?
  7. What is the web address for the Open Science Framework? How do I get started?

My Open Science tour dates in 2016 (links to slides provided):

  • January 28, Pre-Conference of the Society for Personality and Social Psychology (SPSP), San Diego, USA
  • June 10, Conference of the Canadian Psychological Association, Victoria, Canada
  • October 3, York University (Psychology), Canada (audio recording)
  • October 11, University of Toronto (Psychology), Canada
  • October 19, University of Guelph (Family Relations and Applied Nutrition), Canada
  • October 21, Illinois State University, (Psychology), USA
  • November 11, Victoria University of Wellington (Psychology), New Zealand
  • November 24, University of Western Ontario (Clinical Area), Canada
  • December 2, University of Western Ontario (Developmental Area), Canada

How to Publish an Open Access Edited Volume on the Open Science Framework (OSF)

Edited volumes are collections of chapters on a particular topic by various experts. In my own experience as a co-editor of three edited volumes, the editors select the topic, select and invite the experts (or authors), and identify a publisher. Once a publisher is secured, it typically offers a cash advance to the editor(s) along with a small percentage of sales going forward in the form of royalties. The publisher may also provide reviewing services for the collection of chapters, and will advertise the edited volume when it is released. The two primary ways for consumers to access the chapters are to (a) purchase the book, or (b) obtain a copy of the book from a library.

With technological advances it is now possible to publish edited volumes without a professional publishing company. Why would someone choose not to use a publishing company? After all, publishers are the experts at publication. Perhaps the biggest reason is that the resulting volume will be open access: available to anyone with an internet connection, free of charge. There are also some career advantages to sharing knowledge open access. And a publishing company is simply not needed for every publication project.

There are very likely many different ways to publish an edited volume without using a professional publishing company. Below, I outline one possibility that involves using the Open Science Framework (OSF). Suggestions for improving these steps are welcome.

Steps to Using the OSF to Publish an Open Access Edited Volume

  1. Identify a topic for the edited volume, and then identify a list of experts that you would like to invite to contribute chapters.
  2. If you do not have an OSF account, create one (it is free). Create a new project page for your edited volume, and give it the title of the proposed edited volume. Select one of the licensing options for your project to grant copyright permission for this work.
  3. Draft a proposal for your edited volume (e.g., the need for this particular collection of chapters, goals of the volume, target audience, and so on). Add this file to the project page.
  4. Send an email inviting potential authors, providing a link to your OSF project page so they can read your proposal.
    • You can make the project page public from the start and simply share the link, or,
    • You can keep the project page private during the development of the edited volume and “share” a read-only link to the project page with prospective authors only.
  5. Ask all authors who accepted the invitation to create an OSF account. Then create a component for each individual chapter; components are part of the parent project, but are treated as independent entities in the OSF. Use the proposed title for each chapter as the title of the component. Add the author(s) as administrators for the relevant component (e.g., A. Smith has agreed to author chapter #4; add A. Smith as an administrator of component #4). This step, and step 2, can also be scripted; see the sketch after this list.
  6. Ask authors to upload a copy of their first draft by the selected deadline. Provide feedback on every chapter.
    • One option is to download a copy of the chapter, make edits using the track changes option, and then upload a copy of the edited chapter using the same title as the original in order to take advantage of the “version control” function of the OSF (i.e., all versions of the chapter will be available on the project page in chronological order, with the most recent version at the top of the list).
  7. Ask authors to upload their revised chapter using the same title (again to take advantage of the “version control” function of the OSF).
  8. When the chapters are completed, “register” the project and all components. This will “freeze” all of the files, meaning changes can no longer be made. The registered components, or chapters, represent the final version of the edited volume. Then…
    • Make all of the components, as well as the main project registration, public;
    • Enable the “comments” option so that anyone can post comments within each component (e.g., to discuss the material presented in the chapter);
    • Click the link to obtain a Digital Object Identifier (DOI) for each component (i.e., chapter).
  9. Advertise the edited volume
    • Use social media, including Facebook discussion groups and Twitter (among others). Encourage readers to leave comments for each chapter on the OSF pages;
    • Ask your University to issue a press release;
    • Ask your librarian for tips on how to advertise your new Open Access edited volume (librarians are an excellent resource!!).
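
For editors comfortable with a bit of scripting, steps 2 and 5 can also be automated through the OSF's public API (https://api.osf.io/v2/). The sketch below is a minimal illustration rather than a definitive recipe: it assumes you have generated a personal access token in your OSF profile settings, and the volume and chapter titles are placeholders.

```python
# Sketch of automating steps 2 and 5 with the OSF v2 API.
# Assumes a personal access token; all titles below are placeholders.
import requests

API = "https://api.osf.io/v2"
HEADERS = {
    "Authorization": "Bearer YOUR_PERSONAL_ACCESS_TOKEN",  # placeholder token
    "Content-Type": "application/vnd.api+json",
}

def create_node(title, parent_id=None):
    """Create a project (no parent) or a child component (with a parent)."""
    url = f"{API}/nodes/{parent_id}/children/" if parent_id else f"{API}/nodes/"
    payload = {"data": {"type": "nodes",
                        "attributes": {"title": title, "category": "project"}}}
    resp = requests.post(url, json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]["id"]

# Step 2: create the project page for the volume (private by default).
volume_id = create_node("Title of the Proposed Edited Volume")

# Step 5: create one component per chapter.
for title in ["Chapter 1: ...", "Chapter 2: ...", "Chapter 3: ..."]:
    chapter_id = create_node(title, parent_id=volume_id)
    print(f"{title} -> https://osf.io/{chapter_id}/")
```

Adding each author as an administrator (step 5) can then be done by hand on each component page, or through the API's contributors endpoint.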

Prior to following these steps to create your own Open Access edited volume on the OSF (or by using a different approach), there are some pros and cons to consider:

Pros

  • You have created an edited volume that is completely Open Access
  • The volume costs no money to create, no money to advertise, and no money to purchase
  • Because the chapters are available to anyone with an internet connection, rather than only to those who buy the book or borrow it from a library, they are likely to reach a wider audience than a traditional edited volume released by a for-profit publishing company, and to have a greater scientific impact

Cons

  • You do not receive a cash advance or royalties
  • You do not receive any assistance from a publisher for reviewing or advertising
  • This approach is new compared to traditional publishing, and therefore you may be concerned that you will not receive proper credit from others (e.g., people evaluating your contributions to science when deciding to hand out grant funds, jobs, promotions, and so on)

Final Thoughts

There is usually more than one way to achieve the same aim. Professional publishing companies work with academics to create many edited volumes every year, but creating an edited volume does not inherently require the assistance of a professional publishing company. The purpose of this post was to present one alternative using the functionality of the Open Science Framework to publish an edited volume that is Open Access. I am sure there are even more ways to achieve this aim.

Teaching Open Science

In November 2015 I gave a workshop at the University of Toronto Mississauga on “Doing Open Science” (slides: https://osf.io/kz2u5/). During, and following, the workshop I spoke with attendees and heard two particular responses from this audience of graduate students and post-docs. First, they all believed that open science is becoming more important in our field. Second, most of them were unsure how to get started with open science in their own research. In fact, these are the two responses I hear most from others when discussing open science—it seems important, but how do I do it in my own lab?

More resources are now becoming available, including a manual of best practices offered by BITSS and a list of course syllabi on the topic hosted on the Open Science Framework (OSF). My recent blog post on organizing my own open science offered some suggestions for how to adopt open science practices (see also this paper). A Facebook post to the Psychology Methods Discussion Group asking how to pre-register study details also generated some useful feedback. Perusing public registrations of research projects on the OSF can also provide many examples of how to share details of the research process. And the newly introduced AsPredicted.org is a site devoted to making pre-registration straightforward and simple. Information is therefore becoming more available if one is motivated to look for it.

Psychology graduate programs typically have students take courses on statistical approaches to data analysis as well as on research methods. In these courses students read texts and papers, and learn where to find additional information. They also learn the values of their academic elders regarding the scientific process (e.g., predicting outcomes using statistical analyses with particular methodological designs). It seems to me, however, that going forward it is critical that we start routinely teaching open science practices to our students so (a) they know where to find information on open science, and (b) they learn that the research community training them values open science. It also seems practical to introduce material (or courses) on open science given that many journals are beginning to incentivize open science practices. Graduate students who adopt open science practices (as part of science 2.0) may therefore have an advantage in the job market compared to students who maintain traditional closed science practices. As one final incentive to embrace the teaching of open science to your students, there are now awards available for doing it!