Week 12: Transforming Discovery

In this final week of the class I discussed the idea of transforming the process of discovery in our own labs. When I started my undergraduate program in 1992 I learned about the discoveries of others in my courses and textbooks. When I started my graduate program in 1996 I also started to implement what I had learned about the process of discovery in my own research. During this time I often asked myself, and others, “what is typically done by other researchers in this area of research?” I wanted to make sure I was doing what was considered acceptable at the time. For example, using a particular measure of attachment orientations because it seemed to be widely used and thus would not be questioned by reviewers. Or using a particular analytic method because it was frequently used and thus seemingly defensible during the review process. And when it came time to analyze data I learned to run a lot of different models with a number of different combinations of responses across multiple measures to find results that were consistent with our original expectations. Then when it came time to write manuscripts, I learned not only which results to include but also which to exclude. It simply felt to me that “this is how science is done” in our field, so I did it that way.

But really it did not have to be done in the manner briefly described above. I was searching for what I thought was the best way to be a scientist and in the process also discovered the existing norms for how to be a scientist. Sometimes existing norms overlap with what is best for scientific discovery and dissemination, but sometimes they don’t. I don’t claim to know exactly how scientists should go about making their discoveries, but I do feel strongly that when we feel confident enough to share our results publicly we need to also share, to the best of our abilities at the time, how exactly we obtained those results. For me that is like a latent construct of open science practices–be open and transparent–that influences what I do throughout the research process. In the early days of the open science movement, I felt that actions could speak just as loud as all the words that were flying around. When others were wondering out loud if sharing details of the research process was worth it, might be costly to the researcher, etc., I felt that I could simply point to our own experiences as examples of how it could be done. Yes, you can preregister hypotheses and *still* conduct analyses not planned in advance. Yes, you can even preregister exploratory research, and there is value in doing so. Yes, you can share the measures you used in your study. Yes, you can be open and transparent with longitudinal research designs. Yes, you can share data all the time (sometimes publicly, sometimes via other means). Yes, you can share the syntax you used to produce the results that you presented in your manuscript. Yes, you can preprint your work. And on and on. Debates are fun and all, but when many researchers pondered whether they could/should do these things, we simply did them.
One of my hopes was that when new graduate students were learning about the process of discovery, they might stumble across some of our open and transparent research practices and think it was something they could do, something that was becoming normative in the field. Whereas some colleagues saw open science practices as warning flags for the process of discovery, I encouraged my students to see them as challenges for which we have the opportunity to develop solutions in our own process of discovery. With the existence of today’s technologies there are numerous ways to share our research process and make it available for scrutiny, and no solid arguments for keeping this process “available upon request”. That is also the reason why I wanted to teach a course on open and reproducible science. I wanted to do what I could to share these tools with early career researchers in hopes that they would see value in adopting them in their own research.

My final take home message here: when sharing the results of your research also share how you obtained those results as openly and transparently as possible.

I feel relieved to finally finish this series of posts to accompany the weekly lectures for my course on open and reproducible science. It seemed obvious to me that being open and transparent could also apply to the courses we teach, meaning we could share syllabi and course notes. This series of posts serves as my own personal lecture notes for each class. If you have read them I hope they have been of some value.

Week 11: Extended Research Networks

In this class I introduced to students the idea of scaling up open science practices for use in extended research networks. When I first taught this course many of these initiatives were relatively new and some were untested, and the students were excited about the possibility of these large scale collaborations. I will only discuss a few of the current extended research networks in this post.

One of the earlier extended research networks that incorporated open science practices was the Registered Replication Reports (RRR) initiative, offered by Perspectives on Psychological Science and originally headed up by Daniel Simons. The basic idea was that individuals or groups of researchers would propose a study that they would like to re-run on a large scale with a number of independent labs after input from the original author(s). When a proposal was approved the submitting researchers worked closely with Dan and the original author(s) to reproduce the methods of the original study as closely as possible (or in some cases settle on a particular method/approach they felt was optimal to assess the effect of interest). A call would then go out to the research community, asking others to use the agreed upon methods/measures and collect a given amount of data to contribute to the project. All study details were shared with this extended group on the Open Science Framework. When the data were collected, members of the extended research team submitted their data to the person in charge of overseeing the statistical analyses. This person was not part of any of the research groups, and the statistical analyses as well as the syntax used to run these analyses were agreed upon in advance. The team that submitted the original proposal for the replication project worked with Dan and the original author(s) to draft a methods/results section in advance of knowing the results; the goal was to be able to drop the results into the already prepared manuscript. When all was ready the results would be revealed. I participated in one of these projects (I wrote about it here). Here is a link to the final product. Overall these projects were focused on large scale replication research, and RRRs are now offered via the journal Advances in Methods and Practices in Psychological Science.

Another successful extended research network is the Many Babies initiative. From their website, Many Babies “is a collaborative project for replication and best practices in developmental psychology research. Our goal is to bring researchers together to address difficult outstanding theoretical and methodological questions about the nature of early development and how it is studied.” The basic idea here is to enhance collaboration between labs all over the world that collect data from babies in an open and transparent manner. This also helps with increasing sample sizes, given that any individual lab faces challenges collecting data from large samples of babies. Check it out.

Lastly I will mention the Psychological Science Accelerator. From their website: “The Psychological Science Accelerator is a globally distributed network of psychological science laboratories with 1328 members representing 84 countries on all six populated continents, that coordinates data collection for democratically selected studies.” They have many committees to assist with every aspect of the research process for everyone involved (e.g., translation, ethics review, statistical analyses, and so on), and the entire process is guided by open and transparent research interests. I was part of some of the early discussions of this initiative and am very impressed with the leadership team during the handful of years it has existed. They are truly inspirational. This type of large scale extended research network seems to be an ideal manner to test ideas with lots of data, but more importantly data from all over the world. This allows for testing group/cultural differences in the effects of interest. Check out the results from the first project of this initiative here.

I have not gone into any detail on the Many Labs projects that sparked a lot of discussion, or other initiatives that sought to bring together researchers from different universities and countries to collectively test hypotheses in an open and transparent manner. Overall, there are many exciting options available to researchers at all stages of their careers to get involved in these extended research networks.

Week 10: Openly Sharing Research Reports/Manuscripts

When I first taught this course, preprint servers, and other online resources for sharing research reports and manuscripts, were not as popular or well known as they are today (Spring 2023). My goal with this class was to introduce the idea of sharing manuscripts prior to or after publication, as well as in lieu of publication in a peer reviewed journal. I showed the students a few different options available at the time, including the one hosted by the library system at Western University (where we are located).

Overall the students seemed concerned about how it would be perceived to share a manuscript publicly before it was accepted for publication at a peer reviewed journal (e.g., “will the journal want to publish my paper if I have already ‘published’ it?”). As part of this discussion I showed them Sherpa Romeo, a site that allows one to view the open access policies of many journals and thus helps one determine if they can/should share a preprint of a manuscript. The students were also concerned, however, about sharing a copy of the paper that was accepted for publication in case the journal would forbid this practice (and maybe even revoke acceptance of a manuscript); Sherpa Romeo is helpful here as well. A lot of fear associated with sharing outside the mainstream publication system! Fair enough, that is why I teach this material in the class and have an open discussion where I make sure to listen to the concerns of the students.

In this class I also discussed thinking beyond the typical research report as material worthy of sharing publicly. For example, stimuli used in the research that will not be part of the manuscript but others may want to use for their own research. I discussed how they could share this material in such a way that it could be both used and cited. It was appealing to the students to think that aspects of their research beyond the manuscript itself could appear in, for example, Google Scholar and also be cited. The same goes for unique methods as well as data sets. Lastly, we discussed the idea of open peer review and its associated pros and cons.

I have been sharing preprints for many years now, mostly (but not exclusively) on PsyArXiv. Most of the manuscripts shared there are now published in peer reviewed journals, but some are not. For example, here is a brief paper now published at the Journal of Research in Personality that is also on PsyArXiv. Google Scholar tells me the published paper has been cited a whopping 4 times. But as you can see on PsyArXiv it has been downloaded over 2000 times to date. This may mean absolutely nothing, but perhaps it means that the paper is having an impact not measured by citations alone. Also, you can see on PsyArXiv that after the paper is published in a peer reviewed journal the author(s) can update the preprint with the published DOI. One example of a manuscript that exists only as a preprint focuses on a qualitative analysis of “ghosting” (in this case relationship dissolution by ending all contact with a partner) that was led by former awesome graduate student Rebecca Koessler. This paper has been downloaded over 3000 times, suggesting it has been helpful in some way to others; if it had remained tucked away on our hard drives it would obviously not have had this level of attention. Interestingly enough this preprint has also been cited 11 times according to Google Scholar. From this perspective it was therefore of value to share this research as a preprint even though it was not published in a peer reviewed journal. My approach to open science practices has been to lead by example, so I appreciate that my own experiences with sharing preprints have resulted in noticeable attention to the research whether or not the paper is published in a peer reviewed journal. I will likely use these papers, and others, as examples of sharing preprints if I teach this course again.