All posts by Lorne Campbell

Professor of social psychology at the University of Western Ontario. My research interests focus on romantic relationship processes, interpersonal attraction, individual differences, evolutionary psychology, and meta-science.

What if I can’t do Open Science?

Note: This post was written for the March 2016 newsletter of the Australian Psychological Society’s Psychology of Relationships Interest Group (PORIG). I have made a few small changes, and now include the following link to a talk I gave discussing issues similar to this post at the “Navigating the New Era of Social & Personality Psychology” preconference of the 2016 SPSP main conference: https://www.youtube.com/watch?v=QdUtnA8vUn8 (audio is a bit wonky).

This is a question I hear from time to time, particularly from relationship science scholars. Although the bulk of extant relationship research has involved data collected from one individual in a relationship, typically at one point in time (see Kashy, Campbell & Harris, 2006), a sizeable minority of the field’s work involves data that are much more complex (e.g., longitudinal, self-report, and observational dyadic data). Adding to this complexity is the time and expense associated with recruiting dating and/or married couples for these studies, the difficulty of obtaining and coding behavioural/interactive data, and in some cases the expense associated with obtaining particular measures (e.g., hormonal assays of saliva and/or blood). It is often challenging to obtain the large samples needed to increase statistical power for these complex studies, and thus to reduce the probability of Type II errors and the false-discovery rate. And studies of this magnitude rarely set out to test one pre-specified hypothesis; instead, these projects collect a large amount of data across a number of measures/constructs with the goal of testing many hypotheses, some of which will be forged in the future. It is these types of projects (ones in development, those being run presently, as well as those already completed that offer a large amount of available data) that researchers often have in mind when they inquire, “What if I can’t do open science?”.

What is open science? Briefly, “open science” refers to the public sharing of all aspects of the research process (for more details see: https://en.wikipedia.org/wiki/Open_science; see also https://osf.io/3swkp/). This sharing involves, for example, (a) publicly disclosing study hypotheses prior to actually testing them, (b) making available all of the study materials and procedures (e.g., on the Open Science Framework, https://osf.io/), and (c) publicly disclosing a data analytic plan (i.e., how you plan to test your hypotheses given the measures/procedures of your study). Is it possible to “do” open science when implementing the kinds of complex study designs, discussed above, that many relationship scholars employ?

In my opinion, yes; not only is it possible, it is being done. The complex designs employed by relationship scholars (and others), however, do require tailored open science solutions. My colleagues Timothy Loving and Etienne LeBel and I suggested a number of such solutions in a paper we published in Personal Relationships in 2014. Instead of reiterating the points we made in that paper, I want to briefly share how my research team has engaged in open science practices for different types of research projects in the field of relationship science.

(1) If my research is largely exploratory, how can I publicly disclose hypotheses and data analytic plans?

When research projects are largely exploratory, you can share your study materials just as you would with confirmatory research projects. You can also briefly state that your research project is meant to explore possible associations among certain study variables, and the reason(s) for the exploratory nature of the study. Where appropriate, you can further provide a set of guidelines for how you plan to explore the data collected (templates for different types of disclosures, for both confirmatory and exploratory research, can be found here: https://osf.io/m7f8d/).

For example, Kiersten Dobson (a graduate student in my lab) has posted the following information on the OSF (link here: https://osf.io/4xcpy/): a description of the study (including the planned sample and analytic goals for the exploratory analyses), study materials, and methods. She then posted a copy of the obtained data set, and discussed the follow-up research currently being conducted that followed from the results of the initial exploratory study.

(2) What if the data I am using to test my hypotheses comes from a large dataset that already exists?

In this instance, if the dataset is not your own it may not be possible to publicly post all of the study materials and methods. You can, however, post a document that outlines all of the measures you plan to use from this dataset to test your hypotheses (note that it’s not necessary to include this information in your manuscript; rather, post this information on the OSF and simply link to it in your manuscript). You can also disclose the hypotheses you plan to test and the proposed data analytic plan. If the dataset is your own, you can also post a copy of all study materials.

Early in 2015 a graduate student from the VU Amsterdam (Asuman Buyukcan-Tetik; now a Professor at Sabanci University, Istanbul) visited my lab for three months. We proposed testing new hypotheses by analyzing already existing data—a large dataset collected under the supervision of the PI Catrin Finkenauer. Prior to conducting analyses, Asuman publicly posted, and pre-registered (i.e., the files are “frozen” and cannot be edited or deleted), information about (a) the project and hypotheses, (b) the method, and (c) the strategy of planned analyses. I want to point out that the strategy of analysis contained a few different options that were dependent to some degree on the outcome of the initial analyses planned (i.e., we were only partly certain of what we expected to find; link here: https://osf.io/d7x2p/).

Also in 2014, Rhonda Balzarini, a graduate student in my lab, was given the opportunity to use a large dataset (over 3000 participants) collected by a group of researchers based at universities throughout the USA (PIs: Bjarne Holmes, Justin Lehmiller, Jennifer Harman, and Nicole Atkins). Rhonda and I met regularly for about two months in the fall of 2014 to discuss our research interests with respect to the dataset and to derive specific hypotheses prior to looking at any of the actual data (i.e., no peeking). Rhonda then publicly disclosed (and pre-registered), prior to analyses, our hypotheses and the methods and measures to be used in our analyses (link here: https://osf.io/vs574/).

(3) What if in my study I plan to test more than one set of hypotheses? Also, maybe I don’t know what the other hypotheses are yet, so how could I possibly “pre-register” them?

Big, complex dyadic studies, as mentioned above, rarely set out to test one set of hypotheses. It is entirely possible, however, to make all study materials/procedures publicly available, and to disclose the first set of planned hypotheses along with a data analytic plan. This is the approach taken by Taylor Kohut, a post-doc in my lab, for a large-scale longitudinal dyadic study with an experimental intervention (3 conditions) initiated at the midpoint of the study. Instead of trying to explain the study here, I will simply refer you to all of the study information (and I do mean ALL of it) posted on the OSF: https://osf.io/yksxt/ (including study rationale, methods/measures, analytic strategy, and our recruitment plan). It is guaranteed that we will develop new hypotheses in the future, hypotheses that have not yet been considered or discussed. When that time comes we will simply add a new component to the OSF project that discusses our new hypotheses and data analytic plan, and what variables from the original study we plan to use.

(4) What if I can’t make data available? Or code?

There are at least a few concerns with sharing data: (1) that other people can use it and benefit from your efforts, and (2) that participants might be able to identify their own, or their partner’s, sensitive data. With respect to the first concern, the OSF allows users to create a DOI for all files, including datasets, so that if someone does choose to use your data they can properly cite the dataset. Additionally, users on the OSF can license their datasets in seconds, making proper citation an explicit condition of reuse. With respect to the second concern, there are many ways to de-identify datasets, and ways to restrict access so that only researchers who request it can use the data (e.g., post the data on the OSF to a private project page or component, and then grant access to that page or component when asked by other researchers). This is admittedly a big issue that warrants a much longer discussion, and is something we discuss in more detail in a paper in press at the Journal of Personality and Social Psychology (LeBel, Campbell, & Loving, in press; for a pre-accepted draft of this manuscript click here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2616384).
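
As one small illustration of the de-identification piece, here is a minimal sketch (in Python, with entirely hypothetical file and variable names) of the kind of processing a dyadic data file might go through before being shared: direct identifiers are dropped, couple IDs are replaced with arbitrary codes, and potentially identifying variables are coarsened. It is an example of the general idea, not a complete or prescribed procedure.

```python
# Minimal de-identification sketch for a dyadic dataset; all file and column
# names are hypothetical, and the steps shown are examples rather than a
# complete de-identification procedure.
import pandas as pd
import numpy as np

df = pd.read_csv("couples_raw.csv")  # hypothetical raw data file

# 1. Drop direct identifiers collected only for scheduling/compensation.
df = df.drop(columns=["name", "email", "phone", "postal_code"])

# 2. Replace couple IDs with arbitrary codes so records cannot be linked back.
rng = np.random.default_rng()
shuffled = rng.permutation(df["couple_id"].unique())
new_ids = {old: f"C{idx:04d}" for idx, old in enumerate(shuffled)}
df["couple_id"] = df["couple_id"].map(new_ids)

# 3. Coarsen variables that could identify a participant in combination.
df["age"] = (df["age"] // 5) * 5                                   # 5-year bands
df["relationship_length_months"] = df["relationship_length_months"].round(-1)

df.to_csv("couples_shareable.csv", index=False)
```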

In my lab we are moving toward posting the data needed to reproduce the results of any analyses reported in our manuscripts, along with the code/syntax that we used to run our models. At this link https://osf.io/ryfse/ you can find the data sets and code needed to reproduce the analyses in a paper in press at the Journal of Experimental Social Psychology, and at this link https://osf.io/me7jp/ you can find the data files (available upon request; they sit in a component linked to this OSF page) and code (for SAS) needed to reproduce the analyses presented in this publication: http://www.collabra.org/articles/10.1525/collabra.24/. One benefit of this practice is that we independently re-run all study analyses to ensure we can reproduce the results reported in the manuscript prior to submitting it for peer review.
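
That last check can be very simple in practice. Below is a toy sketch of what I mean; the file name, variable names, and “reported” value are hypothetical, and the posted code for the Collabra paper is SAS syntax rather than a single correlation, so this only shows the shape of the check: re-run the posted analysis on the posted data and confirm that the recomputed statistic matches the value reported in the manuscript to the reported precision.

```python
# Toy reproducibility check: recompute a statistic from the posted data file
# and compare it to the value reported in the manuscript.
# The file name, variable names, and reported value are all hypothetical.
import pandas as pd
from scipy.stats import pearsonr

REPORTED_R = 0.32                          # value as reported in the manuscript
df = pd.read_csv("posted_dataset.csv")     # the publicly posted data file

r, _ = pearsonr(df["satisfaction_self"], df["satisfaction_partner"])
assert round(r, 2) == REPORTED_R, f"reported r = {REPORTED_R}, recomputed r = {r:.2f}"
print("Reported result reproduced from the posted data.")
```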

There are undoubtedly other scenarios not discussed here that require novel open science solutions. My goal was to share a few of the questions I have heard most often, and show some of the answers we have come up with in my lab. Open science can be done with complex study designs within the field of relationship science, because it is being done. I therefore suggest that the question I posed at the outset of this piece be changed from “What if I can’t do open science?” to “How can I do open science with this study?”.

We now have the tools available to move away from the more “closed science” practices that have been typical of our field to date. Using these tools is of course a choice, but not using them is also a choice. I have chosen to engage in open science practices for my own research going forward. My experiences so far suggest to me that open science practices have not stifled my creativity, limited what I choose to study, limited the exploration of ideas, or otherwise burdened my ability to discover new things (such as they were).

References

Campbell, L., Loving, T. J., & LeBel, E. P. (2014). Enhancing transparency of the research process to increase accuracy of findings: A guide for relationship researchers. Personal Relationships, 21(4), 531-545.

Kashy, D. A., Campbell, L., & Harris, D. W. (2006). Advances in data analytic approaches for relationships research: The broad utility of hierarchical linear modeling. In A. Vangelisti & D. Perlman (Eds.), The Cambridge Handbook of Personal Relationships (pp. 73-90). New York: Cambridge University Press.

Why are Top Journals Top Journals?

Researchers need to publish manuscripts to advance both science and their careers. This latter fact was made clear to me recently when I served on a departmental committee tasked with evaluating the performance of each faculty member. When it came time to discuss publications there were two themes: (1) how many publications did this person have in the given time frame (we “expect” approximately 3 per year), and (2) were they published in top journals? There was no mention of the actual research conducted, or of the strength of the methods employed, because the committee was not asked to read any of these publications. The number of publications and the prestige of the publication outlet therefore served as proxies for research performance.

I want to focus this discussion on the prestige of academic journals. What makes a journal a Top Journal? We all seem to know one when we see one, but specifically defining why one journal is better than another is tricky. When I ask this question of others, a variety of factors are discussed, such as (a) rejection rate (a higher rejection rate seems to equal a better journal), (b) being a place where successful academics tend to publish their work, (c) the impact of the research published in a journal on the rest of the field, (d) visibility of the journal (i.e., are most people in the field familiar with the journal?), (e) the perception that research needs to be particularly novel/ground-breaking to be published in the journal, and so on. These all seem like reasonable points, but together they suggest a fairly static hierarchy of journal prestige, with the quality of the research reported in any particular manuscript inferred from the prestige of the journal that published it.

I then conducted some Internet searches to see how journal prestige is assessed. Most ranking systems that I could find rely largely on citation counts of articles published within a journal (e.g., the much loved, and loathed, impact factor). For example, the SCImago journal ranking provides information on thousands of journals, including a long list of psychology journals that can be ranked on a few different factors, including: SJR (a measure of “prestige”), H-index (the largest number h such that h of the journal’s articles have each been cited at least h times), and Impact Factor. A great feature of this site is that you can download the rankings in an Excel file. I downloaded the file, and decided to add some ranking information recently put together by Uli Schimmack: the R-index of the journal.* I direct readers to Uli’s blog to learn more about the R-index and how it is calculated, but briefly, it uses estimates of post-hoc power calculated for each published article, and R-index scores increase when power is higher and decrease when publication bias is present (according to Uli, and awaiting verification). So, it is calculated based on information presented in each paper published in a given journal (i.e., based on p-values calculated from reported statistical tests that are then used to estimate post-hoc power; see also the N-pact factor), rather than on how many times each paper published in a given journal is cited by others.
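
To make the logic of a power-based index a little more concrete, here is a minimal sketch (in Python; this is not Uli’s code) of how observed post-hoc power can be estimated from reported two-tailed p-values and combined into an R-index-style score. The inflation correction shown follows my reading of Uli’s description (median observed power minus the gap between the significance success rate and that median), but both the formula as implemented here and the toy input values are illustrative assumptions, not a reproduction of his rankings.

```python
# Illustrative sketch only: estimate observed (post-hoc) power from reported
# two-tailed p-values and combine the estimates into an R-index-style score.
# The exact R-index procedure is Uli Schimmack's; treat this as a simplified stand-in.
from statistics import median
from scipy.stats import norm

ALPHA = 0.05
Z_CRIT = norm.ppf(1 - ALPHA / 2)  # ~1.96 for a two-tailed test at alpha = .05

def observed_power(p_value: float) -> float:
    """Post-hoc power implied by a reported two-tailed p-value."""
    z_obs = norm.ppf(1 - p_value / 2)      # convert the p-value back to a z-score
    return 1 - norm.cdf(Z_CRIT - z_obs)    # chance of significance given that effect

def r_index(p_values):
    """Median observed power minus an inflation penalty (success rate - median power)."""
    powers = [observed_power(p) for p in p_values]
    success_rate = sum(p < ALPHA for p in p_values) / len(p_values)
    mop = median(powers)
    return mop - (success_rate - mop)

# Hypothetical p-values from one journal's published significance tests:
print(round(r_index([0.04, 0.03, 0.01, 0.049, 0.002]), 2))
```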

So, do indices based on citation counts correlate with an index based on the post-hoc power of published articles? No. The scatterplot below indicates a slightly negative relation between the R-index and the IF of a journal. There is R code available here to recreate this scatterplot and to calculate the correlations with the SJR and H-index as well (spoiler alert—the R-index does not significantly correlate with them either).
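
For readers who prefer Python to R, a rough equivalent of that check might look like the sketch below. The merged data file and its column names (“journal”, “impact_factor”, “r_index”) are hypothetical placeholders, not the actual file described above.

```python
# Rough Python analogue of the R script mentioned above; the file name and
# column names are hypothetical placeholders for the merged SCImago/R-index data.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

df = pd.read_excel("journal_rankings.xlsx")   # columns: journal, impact_factor, r_index

r, p = pearsonr(df["impact_factor"], df["r_index"])
print(f"IF vs. R-index: r = {r:.2f}, p = {p:.3f}")

plt.scatter(df["impact_factor"], df["r_index"])
plt.xlabel("Impact Factor (SCImago, 2014)")
plt.ylabel("R-index (2010-2014)")
plt.title("Journal Impact Factor and R-index")
plt.show()
```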

So, Top Journals do not seem to publish studies with relatively more post-hoc power, and thus results more likely to replicate, compared to lower-tier (dare I say “specialty”) journals (at least according to the R-index). Is journal prestige therefore merely popularity?

* The R-index currently ranks 54 psychology journals (more to come, I believe). The R-index was calculated for each section of JPSP, whereas the SCImago rankings give scores for the entire journal; I therefore selected the highest R-index score for JPSP. A few journals listed in the R-index ranking were not included in the SCImago rankings. Overall, a total of 50 R-indices were entered into the data file. Also, I used the R-index calculated for articles published in journals between 2010-2014 to be more consistent with the time frame of the SCImago rankings (based on 2014 numbers).

[Figure: scatterplot of journal Impact Factor and R-index]

Teaching Open Science

In November 2015 I gave a workshop at the University of Toronto Mississauga on “Doing Open Science” (slides: https://osf.io/kz2u5/). During, and following, the workshop I spoke with attendees and heard two particular responses from this audience of graduate students and post-docs. First, they all believed that open science is becoming more important in our field. Second, most of them were unsure how to get started with open science in their own research. In fact, these are the two responses I hear most from others when discussing open science—it seems important, but how do I do it in my own lab?

More resources are now becoming available, including a manual of best practices offered by BITSS and a list of course syllabi on the topic hosted on the Open Science Framework (OSF). My recent blog post on organizing my own open science offered some suggestions for how to adopt open science practices (see also this paper). A Facebook post to the Psychology Methods Discussion Group asking how to pre-register study details also generated some useful feedback. Perusing public registrations of research projects on the OSF can also provide many examples of how to share details of the research process. And the newly introduced AsPredicted.org is a site devoted to making pre-registration straightforward and simple. Information is therefore becoming more available if one is motivated to look for it.

Psychology graduate programs typically have students take courses on statistical approaches to data analysis as well as on research methods. In these courses students read texts and papers, and learn where to find additional information. They also learn the values of their academic elders regarding the scientific process (e.g., predicting outcomes and testing them using statistical analyses with particular methodological designs). It seems to me, however, that going forward it is critical that we start routinely teaching open science practices to our students so that (a) they know where to find information on open science, and (b) they learn that the research community training them values open science. It also seems practical to introduce material (or courses) on open science given that many journals are beginning to incentivize open science practices. Graduate students who adopt open science practices (as part of science 2.0) may therefore have an advantage in the job market compared to students who maintain the traditional closed science practices. As one final incentive to embrace the teaching of open science to your students, there are now awards available for doing it!

Organized Open Science

Over a year ago I committed to adopting more transparent research practices. Since then I have been adding projects and registrations to my Open Science Framework account (https://osf.io/sa9im/). Over time, and with new students joining the lab and new collaborations with colleagues being established, many of these research projects are at different stages of completion. I have also been asked, many times, variations of two questions: “What information should I include in the files I put on the OSF?” and “When should I put this information on the OSF?”. Answering these questions has helped create informal guidelines for how we do open science in our lab, but I realized recently that there was in fact a lot of variation within the lab regarding what information was included in the files posted to each project page, the number of files posted, when they were being posted, the timing of registering projects, and if/when projects were being made public. I felt the need to get my open science organized.

I decided to use my OSF account to create an organizational system that is open and transparent. Check it out: https://osf.io/jrd8f/. The public project page includes some templates for different types of disclosure forms for research projects that all lab members and other collaborators can easily access. These templates indicate the types of information I typically prefer to be made available for my research projects, but of course not all of these disclosures are needed for every research project; these are guidelines, not inflexible rules. The page also includes an Excel sheet to keep track of different OSF milestones for each research project. This master file lists all of our research projects, includes links to the project and registration OSF pages, and asks the person taking the lead on a research project to indicate whether she or he has put the information in question together into disclosure files and uploaded these files to the OSF. All lab members can then check the OSF status of each project, and quickly link to other lab members’ projects/registrations for tips on how to create the disclosure files (e.g., formatting, organization of the information, how much information to present, and so on). The Excel file is empty right now—our lab is just getting started with this new organizational system. It will start to fill up soon.*

My primary motivation for creating this new organizational system was to standardize the process of doing open science in my lab. I am open to suggestions for improvements, additions, or other changes.

* A few studies have been added now, taking a total of a few minutes. As time goes on, there will be a noticeable increase in the consistency of disclosure statements for our studies.

The Space Between Theory and Hypotheses in Social Psychology

This summer I looked closely at the books on my bookshelf to rekindle my affection for these old flames. One book that stood out was by Morton Deutsch and Robert M. Krauss (1965), simply titled “Theories in Social Psychology”. The first chapter has a section labelled “The Nature of Theory”. In my opinion it is still very relevant today and is a must-read for social and personality psychologists. I will quote from this section a little as I paraphrase the main points:

1. Theories in the physical sciences often make definitive hypotheses that are logical extensions of the theory in question.

– For example (my example, not one used by Deutsch & Krauss), Einstein’s General Theory of Relativity made a specific hypothesis regarding how much light should be observed to bend around large objects (among other hypotheses of course). Everyone who understood the theory could derive the same specific hypothesis regarding the effects of gravity on the bending of light.*
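
To give a sense of how definitive that prediction is (this is the standard textbook statement of the result, my gloss rather than anything from Deutsch and Krauss), the theory pins the deflection down to a single number: θ = 4GM/(c²b), roughly 1.75 arcseconds for starlight grazing the Sun, about twice the value implied by a Newtonian treatment, and the quantity the 1919 eclipse observations set out to measure.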

2. Theories in Social Psychology do not typically make definitive hypotheses that are logical extensions of the theory in question. Instead, “…the ‘derivations’ from most of the theories in social psychology are usually not unequivocal, or strictly logical, for they skip steps, they depend on unexpressed assumptions, and they rest on the criterion of intuitive reasonableness or plausibility rather than on formal logical criteria of consistency.” (p. 7). As they go on to say on page 11, most theoretically derived hypotheses in social psychology are plausible inferences that can be made from an understanding of the theory rather than logical deductions dictated by the theory itself.

– This was not meant as a slight to social psychological theories, but rather as a reflection of the nascent state of our theories, which does not yet allow specific, definitive hypotheses to be derived. As theories are rigorously tested over time, some hypotheses should (hopefully) consistently replicate, others not so much, and a better understanding of the nomological network will emerge (Cronbach & Meehl, 1955).

As Feynman discusses in his Cornell lectures (https://www.youtube.com/watch?v=EYPapE-3FRw), using the scientific method we compute the consequences of new laws/theories, and then compare these consequences with observations. If the observations are not as predicted, then the theory is not confirmed (or, as he says in this lecture, it is “wrong”). But as Deutsch and Krauss point out, computing the consequences of social psychological theories is not as straightforward as it is for theories in the physical sciences. Given this fact, it is likely that at times experts could derive discordant, yet plausible, hypotheses from the same social psychological theory. Theory testing and theory building in social psychology are therefore challenging.

Then I ran this simple thought experiment (N = 1): imagine asking 20 research teams with expertise on, say, adult attachment theory to independently make specific hypotheses regarding the associations of, for example, scores on anxious attachment and avoidant attachment with feelings of acceptance from a romantic partner across three experimental conditions: (1) control (no intervention), (2) relationship threat (e.g., when one partner does not want to spend time with the target participant), and (3) relationship boost (e.g., when one partner does want to spend time with the target participant). Next, have a team of coders rate the similarity of the generated hypotheses. How similar would they be? How many different, plausible hypotheses would be put forward? My guess is that there would be some consistency across research teams, and some variability as well. I can generate what I feel are plausible, yet discordant, hypotheses all on my own. Feel free to conduct this thought experiment with your theory of choice. Not every expert, considering the same body of information, will generate the same hypotheses.

I am not blaming the theory for not being able to make definitive hypotheses at all times. But it does seem that building theory would benefit from more truly confirmatory research being conducted that tests specific, plausible, and pre-registered hypotheses, with the results published regardless of the outcomes. Because, as Feynman also says in the lecture linked to above, vague theories can accommodate almost any result, making them rather useless as theories.

* at this point it is obvious I am not a physicist, p < .00000000000001.

Cronbach, L.J., & Meehl, P.E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Deutsch, M., & Krauss, R. M. (1965). Theories in social psychology. New York: Basic Books, Inc.

The Power of Peer Review

Below I make two observations, after first stating an assumption about pre-publication peer review:

Assumption:

Pre-publication peer review is viewed as an essential step in the publication process. It represents a “stamp of approval”—colleagues with expertise in this area of research agree that the research presented in the published manuscript properly tests an original research question/hypothesis (or is at least super cool and clever). Without pre-publication peer review a published article is not taken seriously by the field at large (e.g., an article published on someone’s website but not yet published by a journal after one or more rounds of pre-publication peer review is not really considered a “publication” by most of my colleagues, or is at least considered less of one).

Observations:

(1) Statistical power, generally obtained by recruiting high numbers of participants in our studies (yes, it is more complex than high N), is becoming more valued in the psychological sciences (no citations needed—general knowledge). Having 30% power was so pre one year ago.

(2) Pre-publication peer review typically involves the opinions of two “expert” reviewers (sometimes 3, maybe even 4), and one editor, on only one manuscript.

Now, if we consider reviewers as participants in a study designed to answer questions about the value of a given article, what kind of inter-rater agreement would we expect from 2-4 participants for 1 article? What kind of power would this study have to identify true scientific merit? This is essentially a poorly designed mixed quantitative/qualitative case study. If this truly were a study it would be rejected by reviewers, if the editor did not already issue a desk reject—the methods are atrocious. I know, it is like comparing apples and oranges. That said, it seems to me that a standard we are beginning to apply with much more rigour when conducting our research (i.e., increasing statistical power) is one that is woefully absent from the process of how we as a field evaluate the merits of this research after it is conducted.
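
To put a rough number on that intuition, the Spearman-Brown formula gives the reliability of a verdict averaged over k reviewers from the reliability of a single reviewer. The single-reviewer reliability of .30 used below is an assumed, illustrative value rather than an estimate from any particular dataset, but under that assumption even four reviewers produce a fairly noisy composite judgment.

```python
# Spearman-Brown prophecy formula: reliability of the average of k raters,
# given the reliability of a single rater. The single-reviewer reliability
# of .30 is an assumed value for illustration, not an empirical estimate.
def composite_reliability(single_rater_r: float, k: int) -> float:
    return (k * single_rater_r) / (1 + (k - 1) * single_rater_r)

SINGLE_REVIEWER_R = 0.30
for k in (2, 3, 4, 10):
    print(f"{k:2d} reviewers -> reliability of the combined verdict: "
          f"{composite_reliability(SINGLE_REVIEWER_R, k):.2f}")
# ~.46 with 2 reviewers, ~.56 with 3, ~.63 with 4, ~.81 with 10
```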

And as it turns out, there really is no evidence that pre-publication peer review is good for science: https://www.timeshighereducation.co.uk/content/the-peer-review-drugs-dont-work. In this piece, Richard Smith offers up an alternative to pre-publication peer review: “With the World Wide Web everything can be published, and the world can decide what’s important and what isn’t.” In other words, publish what you like and rely on post-publication peer review to separate the wheat from the chaff. With post-publication peer review there is no limit to the number of people who can provide expert commentary, and such commentary can be made at any time after the manuscript is published (not only during the “review” process).

In fact, all of us could follow this advice right now if we truly wanted to. How? Here is one example (and of course there are other options available): publicly post all study materials, data sets, and data analytic code on the Open Science Framework (OSF). Write a manuscript to put the results in context. Publicly post that manuscript on the OSF, and obtain a DOI by clicking that option when posting—your manuscript is now published with no pre-publication peer review, no Open Access publication fees, and no waiting months and months for reviews from a traditional journal with high rejection rates driven largely by how many paper pages the publisher has contractually agreed to print per year. Click the option to allow comments to be posted by anyone, advertise your paper on social media and other outlets, and encourage colleagues to provide critical commentary. Revise your manuscript based on the comments. Repeat with other projects.

This suggestion may sound radical, but consider that in the above scenario ANY consumer of the research can look at the study scales, re-analyze the data if desired, and make suggestions to the author on additional data analyses or follow-up studies. A dialogue between people posting comments about the manuscript could develop that is publicly available for other consumers of the research to read through. Just, wow. Now think of the last paper you read in the “top” journal in your field—odds are you will not have access to the study materials, data, data analytic code, reviewers’ comments, or the editor’s comments. Both papers are published. If these two hypothetical papers were on the same topic, a topic that you wanted to follow up with your own research, which paper would you rely on more heavily when developing your study?

Herbert Stein once said, “If something cannot go on forever, it will stop.” Given the existence of the World Wide Web, the inefficiencies of pre-publication peer review, and let’s also throw in a tired system of privately controlled academic publication that is still largely ruled by paper and huge subscription costs, it seems that how we have gone about publishing our research for many years now cannot go on forever, and it will stop. Other ways of publishing our research will take its place. Or I am completely wrong.

Deciding What Studies to Replicate: The Path of Least Resistance

I support the efforts of researchers in our field to conduct direct/close replication attempts of published research. It is something I do myself. It is wonderful to see journal policies starting to embrace replication studies, including special issues in different journals devoted to replication studies in a given area of study. But…and this is not really a “but” in the sense that I will now soften my support for replication research. It is a “but” in the sense that less labor-, time-, and cost-intensive studies will likely be the focus of the majority of these replication attempts compared to more labor-, time-, and cost-intensive studies, and it therefore seems inevitable that an asymmetry in the precision of effect size estimates will develop over time.

Imagine that a researcher who has decided to devote some research effort to replication studies comes across three published studies that pique his interest:

  • Publication 1: recruited a large number of participants from either a university subject pool (i.e., undergraduate students who need to participate in research studies for course credit) or online (e.g., Amazon’s Mechanical Turk).
  • Publication 2: recruited undergraduate students from a university subject pool, but had each participant come into the lab individually because of complex experimental manipulations as well as the collection of biological samples.
  • Publication 3: recruited participants from the community, following them over a period of two years with multiple testing sessions (both in lab and online).

I do not need to provide more details for these fictional studies to make the point that the labor, time, and cost needed to directly replicate the methods of study 3 are much greater than for the other two studies, and are greater for study 2 than for study 1. Given that researchers do not have access to unlimited resources over a prolonged period of time to conduct their own research, let alone direct replications of the research of others (if you do, call me), it is reasonable to conclude that, of the fictional studies presented, more replication attempts would be made for study 1 than for the other two. Over time, therefore, more precise estimates of the effect sizes obtained in “easy to run” studies will accumulate compared to “difficult to run” studies. Put another way, one-shot correlational and experimental studies involving university students or MTurkers will be the focus of the bulk of replication attempts; studies with special populations (e.g., cross-cultural samples, married couples, parent-child interactions, and many, many others), those collecting “expensive” data (e.g., brain scans, hormonal assays), and studies using longitudinal designs (e.g., daily-diary studies, the early years of marriage, personality development across time, and so on) will be the focus of few, if any, direct replication attempts. I cannot imagine, for example, obtaining the grant funds necessary to directly replicate a multi-wave study of newly married couples over a period of two or more years [but see comment below–Brent Roberts did receive grant funding along these lines]. Even if funds were on hand to directly replicate a two-week diary study that included pre- and post-diary assessments, the amount of time needed to run the study, and the research assistants needed, would likely dissuade most researchers from endeavouring to replicate this research.

Now that the value of direct/close replication studies is generally recognized, perhaps we need to find ways of incentivizing replication attempts of studies that otherwise would be ignored by most replicators.

“Where Should we Try to Publish this Paper?”

At some point during the research process, someone on the team asks, “Where should we try to publish this paper?”. One common strategy is to make a list of possible journals, starting with the “best” possible outlet, then the next “best”, and so on. The definition of “best” in this context typically refers to the perceived prestige of the journal, and the belief that publishing in some outlets can boost one’s career more strongly than publishing in other outlets. In practice, therefore, the title of the journal where a study is published serves as a heuristic for evaluating the quality and significance of the research without ever having to read the actual paper. For example, if Pat’s research was published in what is considered a top-tier journal, someone may think, “Pat’s research was published in the Journal of GREAT, so Pat must have conducted theoretically guided novel research, across a few studies, that is likely to significantly change our thinking on the topic of the research. And Pat clearly knows how to package the research to ultimately be published in this journal. Pat is probably going to get a good job/tenure/hired at a Business School.” If Pat’s research was published in a less prestigious journal, someone may think, “Pat’s research was only published in the Journal of ORDINARY, a place I may only consider sending a paper if I cannot get it published at a better journal. I wonder if Pat is going to get a job/tenure/good H-index over time?” For many researchers I have talked with, publication outlet can play an important role in the research process, guiding their decisions for what topics to study and how to study them. After sitting on my department’s annual performance and evaluation committee many times, I understand the desire to publish in particular outlets given the weight these types of committees place on journal impact factors and other indicators of journal prestige. Journal “quality” is valued broadly.

But judging a research paper based on where it is published can have important unintended consequences (Brembs, Button & Munafò, 2013), and to me does not make as much sense as judging a research paper based on the actual research it presents*. Anyway, the academic publishing landscape that has existed for so long, and that allowed a fairly stable hierarchy of more to less desirable publication outlets to develop, is now changing. For a long time, publishing the findings of research meant printing words on paper, collating the printed pages of multiple manuscripts until a page restriction was reached, binding the pages together to make multiple copies of a journal issue, and then mailing these copies to paid subscribers. This process takes a lot of expertise, equipment, and money (hence the page restrictions). There is therefore a lot of competition among researchers to publish their research in the limited number of pages available, with publications serving as currency for career advancement. There are simply not enough print journal pages available each year for all researchers to publish their research. In this system, therefore, a publication represents more than a report communicating the results of one’s research; it represents the ability to navigate the publication process in order to secure a portion of the limited pages available for publication for oneself rather than others.

Today, there are many more options for making the results of research publicly available (i.e., publishing), meaning that printing words on a restricted number of pages of paper that are mailed to paid subscribers is no longer the only game in town. Digital technology, combined with the rapid growth of the internet and internet connectivity all over the world, makes page restrictions for publication meaningless—space is no longer a real restriction. It also has the potential to make journals themselves somewhat redundant—instead of searching for research papers by perusing the tables of contents of various specialized journals sponsored by different societies teamed up with private publishing houses, search engines can locate anything on the web on a topic of interest very quickly. Indeed, when recently searching the internet for material on a particular topic, I came across papers published in traditional academic journals, chapters in edited volumes, blog posts, news and magazine articles, conference abstracts, papers currently under review but made available on various sites (e.g., arXiv.org, osf.io, ssrn.com), theses/dissertations, porn sites (let’s face it, all internet searches can lead to porn), youtube videos/lectures, as well as graduate and undergraduate research papers. Also popping up a lot more in these searches are papers published in what are currently considered non-traditional journals, such as the open access journals Frontiers in Psychology and PLOS One (among others of course). A new open access journal called Collabra recently opened its digital doors for business. Open access papers can be downloaded by anyone in the world with an internet connection (not the case for most traditional journals). It is therefore much easier to publish something today than it was, say, when I started graduate school in 1996. This blog post is publicly available, will pop up in web searches, and I can obtain a DOI for this post with ease.

The restrictions on publishing are crumbling. Going forward, the challenge for researchers will likely not lie in getting papers published—there is always space on the internet to present results, and this demand (every academic needs to publish their research) is being met creatively with publishing capacity outside the traditional academic publishing establishment. The new challenge will instead be getting our ideas and our data noticed. New ways of bestowing status and prestige on researchers will undoubtedly develop that do not include the ability to publish in select outlets. What will they include? Good ideas are always good ideas, but maybe the future rock stars of academia will be the ones with ideas that consistently replicate. Or maybe they will simply be the best looking researchers. Hard to tell really.

So how do I respond to the question of where our team should try to publish new papers? Having shared my thoughts on the future of publishing, I suspect it is not surprising to hear that I now find it hard to get excited thinking of how to “craft” a paper to be viewed favourably by reviewers of particular journals (will it be theoretical enough? Are there enough studies? Do the results tell a coherent story?). I now get more excited thinking of the non-traditional options available, and the ones yet to come. More steak (I love steak; seriously, ask my friends), less sizzle.

* Yes, there are a lot of predatory journals that publish almost anything sent to them for a fee, and I receive a lot of invitations from these journals to quickly publish my most recent research on cancer/physics/post-modern reflections on post-modernism/any other topic you can think of, but I am not referring to these journals in this discussion.

Reference

Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7, 291. doi: 10.3389/fnhum.2013.00291

Confessions of a Replicator

I am a replicator. I have undertaken, and continue to undertake, close replications of the research of others, as well as my own. What is my motivation? (see https://traitstate.wordpress.com/2015/03/13/how-do-you-feel-when-something-fails-to-replicate/, for a discussion of the perceived “psychology of the replicators”). I warn readers up front that my story is rather simple and boring.

I have been attending academic conferences since 1998, and one of the things I enjoy most is sitting down with colleagues for cocktails and lively discussions. One topic of discussion always seems to be recently published, or about to be published, research. I have often asked, and been asked: “So, what do you think of that study?” That question can chart the course for many hours of debate. During one such discussion in Halifax (2012) I was talking with two colleagues about research on attachment theory and preferences for warm food options published in Psychological Science (Vess, 2012). We generally agreed with the notion that activating the attachment concerns of more anxiously attached individuals could make them feel “alone”, motivating them to seek out comfort (or warmth). At the same time I expressed some uncertainty regarding the methodological approach taken in the research. We all expressed our views, moved on to other topics, and eventually went our separate ways.

A little while later, when reflecting on this discussion, I felt that it would be better for me to run the study to see if I could obtain the same pattern of results rather than simply voice my uncertainty regarding the manner in which the study was run. Why? My colleagues and I were essentially raising empirical questions about this research during our discussion, questions that should therefore be answered with data and not simply words. As the person raising many of these questions, I felt obliged to help answer them as best I could. I truly wanted to know if the results would replicate using the methods described by Vess, and the only way to find out was to collect additional data using these methods. With these data, I would be able to help answer, instead of only ask, this question: does the effect replicate? Working with Etienne LeBel, and with the input of the original researcher (Vess), we ran two large-scale close replications of the original Study 1 and published the results in Psychological Science (LeBel & Campbell, 2013).

So, what is my motivation as a replicator? When I feel some uncertainty about an idea, or methodological approach used to test an idea, I feel that data trump opinions. So I now devote some research effort to closely replicating published research findings, at the same time attempting to directly replicate my own research findings going forward when feasible.

Oh, and one other thing—somebody needs to do it, so why not me? For that matter, why not you?

References

LeBel, E. P., & Campbell, L. (2013). Heightened sensitivity to temperature cues in highly anxiously attached individuals: Real or elusive phenomenon? Psychological Science, 24, 2128-2130.

Vess, M. (2012). Warm thoughts: Attachment anxiety and sensitivity to temperature cues. Psychological Science, 23, 472-474.

Opening Statement at “Transparency/replicability” Roundtable #RRIG2015

At the close relationships pre-conference (#RRIG2015), taking place on February 26th prior to the conference of the Society for Personality and Social Psychology (SPSP: http://spspmeeting.org/2015/General-Info.aspx), there is a roundtable discussion on “methodological and replication issues for relationship science”. Discussants include Jeff Simpson, Shelley Gable, Eli Finkel, and Tim Loving (one of my co-authors on a recent paper on the very topic of the roundtable: http://onlinelibrary.wiley.com/doi/10.1111/pere.12053/abstract). Each discussant has a few minutes at the beginning of the roundtable to make an opening statement. Tim’s opening statement, or at least a very close approximation of what he plans to say, appears below.

Tim Loving’s Opening Statement:

“As a relationship scientist — with emphasis on ‘scientist’ — I believe strongly that it’s important for us to regularly take stock of what it is we as a field are trying to achieve and give careful thought to the best way of getting there. In my view, and if I may speak for my colleagues Lorne and Etienne — and this is not unique to us by any means — we view our job as one of trying to provide as accurate an explanation of the ‘real world’ as is possible. One way we can increase accuracy in that explanation is by being fully transparent in how we do science. The conclusions we draw are the pinnacle of the research process, but they can only be interpreted meaningfully when there is a clear accounting of how these conclusions were achieved. Yet it is our results and conclusions that make up the bulk of published manuscripts. Transparency in the research process has typically been taken for granted, as something that is available upon request because there is not enough room to put these details in print. This quirk of academic publishing, of being limited by how many print pages are available to a journal, has therefore had the indirect effect of shining a brighter light on the final destination of the research process while casting a shadow on the journey.

We echo the suggestions of scholars across disciplines, including many within our own, and across many decades, to shine the light brightly on the entire research journey, to share more openly how we obtained our results. To be clear, these issues have been discussed for centuries. Indeed, when the Royal Society was established in England in 1660, essentially creating what we now refer to as science, such was the importance placed on transparency in the research process that in the meetings they would witness each other conduct their experiments. This principle applies to all scientific disciplines – and we are no exception. In fact, given the complexity of our subject matter, where boundary conditions are the rule rather than the exception, I’d say we’re primed to take the lead in the call for research transparency and to serve as a model for other disciplines.

Unfortunately, discussions of ‘best practices’ in our field have come along at the same time as replication issues and outright fraud have publicly plagued other subdisciplines in our broader field, social psychology. But it’s important to remember that issues such as statistical power, sample size, transparency, and so on were being discussed well prior to the last few years. These issues may have served as a catalyst in our field to start having this discussion — but a quick look at writings in other disciplines makes it very clear we’d be having this discussion at some point anyway — the train was coming one way or another.

Finally, I want to say a few words about fears that becoming more transparent will place an undue burden on researchers. I’ll leave aside for now the fact that burdens are irrelevant if we care about truly providing accurate explanations of what happens in the real world; rather, let’s talk more broadly about change. As a new graduate student, I initially learned that the best way to deal with the dependency in dyadic data was to separate males and females and run the analyses separately. Then, lo and behold — APIM and multi-level modeling, and other techniques, came about to help us deal with the dependency statistically. Guess what? Those techniques were new, and dare I say ‘hard’ to learn and do, relative to the old standard of just splitting our samples. But we did it. And we did it because it was the best way of helping us understand what was really going on.

This is just one example – there are countless others – of how change advanced our field. And now we sit here on the edge of another change in our field — the question is whether we want to fight the change kicking and screaming or embrace it because it’s the right thing to do. We as a group have the ability to start the change now, and it will only take one academic generation. Each of us can take the time to set up an OSF account — or mechanism of choice — to share our studies, from conception to conclusion and beyond — because it will make us slow down a bit and be deliberate about what we’re doing and help others carefully evaluate what we do as well – not because we’re after each other, but because we’re all contributing to the same knowledge base and care about our subject matter above and beyond our CVs. I’m making the shift in my own lab – yes, this somewhat old dog can learn new tricks – and I’m no worse for the wear. And, more importantly, it only took a few minutes.

Thanks – and I look forward to what I’m sure will be a lively discussion.”