The December 2014 issue of Personal Relationships contains a paper by my two co-authors (Timothy Loving and Etienne LeBel) and me in which we discuss how relationship scientists can transition to greater transparency in the research process. 2014 is also the year that my lab began this transition with our own research projects (e.g., making study materials/procedures/hypotheses available on the Open Science Framework: https://osf.io/sa9im/). Making this transition involves many challenges, and in my lab we have regularly discussed how to overcome them, keeping in mind one of John Lennon’s song lyrics: “Well I tell them there’s no problem, only solutions”.
We did not arrive at this point overnight. When I was in graduate school, the culture of psychological science was to share relevant aspects of the research process only when submitting manuscripts for peer review. Of course, this meant that research conducted but not included in those manuscripts was never publicly shared, nor was research reported in rejected manuscripts. I began graduate school in 1996, when the internet was just beginning to take off (i.e., the beginning of the “dot-com” era). New research articles, however, were still distributed primarily in print. At that time (not all that long ago, but long enough), most journals still required multiple copies of manuscripts to be submitted via regular mail, and decision letters arrived by regular mail as well. The results of research were therefore shared primarily via print publications, and with limited page space in journals it was not prudent to devote much of it to sharing all details of the research process.
A lot has changed since my graduate school days, including:
- The internet. Academic journals have historically been limited by the number of pages available per volume. The internet smashes through this barrier, making print page space largely irrelevant. Many print journals now make supplementary material available online, and many other journals exist solely online. It actually seems odd now that journals have page limitations at all, given a technology that renders them inconsequential. And websites such as the Open Science Framework (https://osf.io/) allow researchers to post a great deal of material about their research projects at any time during the life of a given project (i.e., before, during, and after running a study).
- The publication of articles such as Ioannidis (2005) and Simmons, Nelson, and Simonsohn (2011), along with the entire 2012 special issue of Perspectives on Psychological Science, highlighted the importance of enhancing the transparency of the research process. Reading these articles reminded me of Lykken’s (1991) excellent piece asking “What’s wrong with psychology anyway?”, as well as articles by Meehl (any year) focusing on doing “good” science, and Kerr (1998) on hypothesizing after the results are known (or HARKing). These readings also introduced me to the writings and lectures of theoretical physicist Richard Feynman, who did not pussyfoot around in his discussions of how to do science: total scientific honesty, while doing your best to prove yourself wrong.
After reading these articles, as well as many others on the topic, I found myself in agreement with the argument that greater transparency in the research process is a good thing for the progress of science (i.e., for accumulating an accurate knowledge base of how the world works), and that we now have the technology to make this happen. When I came to this realization I did not yet have an account on the OSF, had not yet posted any study details for any of my original research projects, and was not exactly certain what these posts should include going forward. After thinking it through for a few months and discussing the issues with Tim and Etienne (culminating in our paper on the topic), I am now committed to following our own suggestions. It takes time to adjust, but my lab is working every day to make these adjustments. We (Campbell et al., 2014) explicitly state in our paper that our suggestions are just that—suggestions. My only strong recommendation to researchers is to do what you think is best for advancing scientific discovery.
If you feel that scientific discovery benefits from:
- Researchers sharing their carefully crafted hypotheses prior to data analyses, then do it.
- Researchers sharing all study procedures and methods, then do it.
- Researchers sharing all study materials, then do it.
- Researchers sharing their data analytic plans prior to data analysis, then do it.
- Researchers sharing the differences between the planned confirmatory analyses and subsequent exploratory analyses, then do it.
I accept that not everyone will feel that scientific discovery benefits from greater transparency in the research process, and I encourage researchers who feel this way to share and argue their doubts. In the not-too-distant future it may be possible to empirically compare the robustness of published findings from research projects that did and did not make materials/procedures/hypotheses publicly available, obviating the need for what are, so far, philosophical discussions on the topic.
But if you do feel that greater transparency will benefit scientific discovery, keep in mind these words of wisdom from Aristotle: “We do not act rightly because we have virtue or excellence, but we rather have those because we have acted rightly. We are what we repeatedly do. Excellence, then, is not an act but a habit.” In other words, saying that transparency is a good thing is nice, but in the end our actions speak louder than our words.
Researchers have to make decisions on a regular basis, and one additional decision to make now is considering the transition to transparency. I respect the choices of my fellow researchers as they consider the merits of adopting more transparent research practices. Personally, for the reasons discussed, I have decided to make the transition to greater transparency in the research process in my own lab. I can report that it has been a wonderful experience so far.
Campbell, L., Loving, T.J., & LeBel, E.P. (2014). Enhancing transparency of the research process to increase accuracy of findings: A guide for relationship researchers. Personal Relationships, 21, 531-545.
Ioannidis, J.P.A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124.
Kerr, N.L. (1998). HARKing: Hypothesizing after results are known. Personality and Social Psychology Review, 2, 196-217.
Lykken, D.T. (1991). What’s wrong with psychology anyway? In D. Cicchetti & W.M. Grove (Eds.), Thinking clearly about psychology, Volume 1: Matters of public interest (pp. 3-39). Minneapolis: University of Minnesota Press.
Meehl, P.E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115.
Meehl, P.E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.
Special issue. (2012). Perspectives on Psychological Science, 7.