Week 6: Sharing Materials, Procedures, and Data Analytic Plans

In this class we discussed the importance of sharing study materials, procedures, and as many decisions as possible regarding planned (and exploratory) analyses. Sharing materials and detailed procedures allows other researchers to reproduce the study in their own labs without needing to contact the original researcher(s). This is important because the original researcher(s) may no longer have access to old study materials (e.g., files were lost or thrown out, the researcher changed jobs or left academia without taking the materials along), and eventually all researchers die. Publicly sharing research materials and procedures helps ensure that your science does not die with you.

A few years ago my lab decided to conduct a close replication of an important study in the field of relationship science—Study 3 of Murray et al. (2002). In this study, both partners from 67 heterosexual couples participated in a laboratory experiment to see if individuals relatively high or low in self-esteem responded differently in the face of a relationship threat. Partners were seated in the same room, but with their backs to each other, to answer a number of paper-and-pencil questionnaires. The researchers manipulated relationship threat in half of the couples by leading one partner to believe that his or her partner perceived there to be many problems in the relationship (a very clever manipulation, to be honest). The predicted two-way interaction between self-reported self-esteem and experimental condition emerged for some of the self-reported dependent variables, showing that in the “relationship threat” condition individuals with low self-esteem minimized the importance of the relationship whereas those with high self-esteem affirmed their love for their partners. After reading the paper closely to assemble the materials and scales needed to reproduce the study procedures, we realized that many of these scales were created by the original researchers and that we would need to ask the corresponding author (Dr. Sandra Murray) for copies; otherwise we could not have run the planned replication study. Dr. Murray responded positively to our request, and luckily still had copies of the study materials, which she forwarded to us. We were able to run our replication study, and a pre-print of the article containing all of the study details and links to OSF (for materials, data, and code) can be found here.

The moral of this story is that if we (or others, of course) had not attempted to replicate this study, and if Dr. Murray no longer had access to these study materials, it simply would not have been possible for anyone to closely reproduce the full set of procedures for this study going forward. This seems to be the case for the majority of published research in psychology—the reported results remain in print, but the study materials and lab notes discussing how to, for example, properly administer manipulations are lost to future generations of researchers. But it does not have to be this way anymore.

The New Normal

Ideally, for every research project the researcher(s) should publicly share the following on a site such as the Open Science Framework (OSF) (go here to learn more about how to use the OSF to start sharing with the research community):

  • A list of all measures, with citations and web links where available, used in the study. List in order of presentation to participants, or discuss procedures used to randomize the order of presentation
  • A copy of all measures used in the study, except for copyrighted material. This is particularly important for measures created in-house given that the items are not readily available to others
  • Instructions for how to score all scales used in the study (e.g., indicate items to be reverse coded, whether scores are computed by averaging all items for each participant, and so on)
  • Detailed description of any manipulations, including whether single or double blinding was used
  • Detailed description of the interactions between researchers and participants in lab settings
  • Description of the study design
  • Consider creating a flow-chart of the study design, discussing what happened at each stage of the study from beginning to end
  • Post pictures of the study setting (e.g., lab rooms) where appropriate
  • Consider creating a methods video (i.e., record a typical run of the experimental protocol with mock participants)

Sharing data analytic plans created prior to conducting analyses is also important to help clarify the difference between what was expected up front and what was assessed after getting started with the planned analyses. This plan should be included in the study pre-registration (see here for more information on pre-registration).

Here are some things to consider including in a data analytic plan:

  • If not included elsewhere, discuss the stopping rule to be used to terminate data collection (e.g., data collection will cease after a minimum of x participants has been recruited)
  • Indicate rules for removing participants from the data set (e.g., failed attention checks, scores considered outliers based on some pre-determined criterion, participants who did not meet pre-determined inclusion and/or exclusion criteria)
  • Consider what types of descriptive data are important to present for your study (e.g., correlations, means, and standard deviations of study variables, frequency of responses on key variables)
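Exclusion rules are applied most consistently, and are easiest for others to audit, when they are written as code in the analytic plan rather than prose alone. A minimal sketch, using made-up participant records and assumed field names and thresholds:

```python
# Illustrative pre-registered exclusion rules. The record fields and the
# z-score cutoff are assumptions invented for this example.

participants = [
    {"id": 1, "attention_pass": True,  "zscore": 0.3},
    {"id": 2, "attention_pass": False, "zscore": 1.1},  # failed attention check
    {"id": 3, "attention_pass": True,  "zscore": 3.5},  # outlier
]

Z_CUTOFF = 3.0  # pre-determined outlier criterion

def passes_exclusions(p):
    """Keep participants who passed attention checks and are not outliers."""
    return p["attention_pass"] and abs(p["zscore"]) <= Z_CUTOFF

analysis_sample = [p for p in participants if passes_exclusions(p)]
print([p["id"] for p in analysis_sample])  # [1]
```

Because the rules are stated before seeing the data, readers can re-run them on the shared data set and confirm that the analysis sample was assembled exactly as pre-registered.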

Confirmatory Analyses

  • Define your alpha level (e.g., .05, .01, .005)
  • Do you plan to use any type of correction for your alpha level given the number of planned tests?
  • Given your hypotheses and the methods used to test your hypotheses, what types of statistical analyses are appropriate to use? Discuss what tests you plan to use, what variables will be used as predictor and outcome variables, and the critical effect(s) for each planned model.
  • What options will you consider if your planned models violate assumptions of that particular test?
  • How do you intend to conduct simple effects analyses (if applicable)?
  • Where appropriate, consider providing a figure of expected outcomes for each planned model
  • What type of effect size will you be reporting?
  • Will you present 95% confidence intervals for effects? Or some other indicator of the precision of your estimates?
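If you plan to correct your alpha level for the number of planned tests, spelling out the arithmetic in the analytic plan removes any ambiguity later. A sketch of a simple Bonferroni correction, where the base alpha and number of tests are illustrative:

```python
# Illustrative Bonferroni correction for a set of planned tests.
# The base alpha and the number of tests are assumptions for this example.

ALPHA = 0.05
N_PLANNED_TESTS = 4

corrected_alpha = ALPHA / N_PLANNED_TESTS
print(corrected_alpha)  # 0.0125

def significant(p_value, alpha=corrected_alpha):
    """Compare an observed p-value against the corrected threshold."""
    return p_value < alpha

print(significant(0.02))   # False: would pass at .05, not at .0125
print(significant(0.001))  # True
```

Other corrections (e.g., Holm or false discovery rate procedures) can be pre-registered the same way; the point is that the threshold is fixed before the results are known.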

Exploratory Analyses

  • Can include many of the same elements relevant for confirmatory analyses (e.g., define your alpha, consideration of assumptions of tests to be used)
  • Provide a guiding framework for how exploratory analyses will be approached (an example of this can be found here)

Remember, sharing is caring.

Reference:

Murray, S. L., Rose, P., Bellavia, G., Holmes, J. G., & Kusche, A. (2002). When rejection stings: How self-esteem constrains relationship-enhancement processes. Journal of Personality and Social Psychology, 83, 556–573.