Opening Statement at “Transparency/replicability” Roundtable #RRIG2015

At the close relationships pre-conference (#RRIG2015), taking place on February 26th prior to the conference of the Society for Personality and Social Psychology (SPSP: http://spspmeeting.org/2015/General-Info.aspx), there will be a roundtable discussion on “methodological and replication issues for relationship science.” Discussants include Jeff Simpson, Shelley Gable, Eli Finkel, and Tim Loving (one of my co-authors on a recent paper on the very topic of the roundtable: http://onlinelibrary.wiley.com/doi/10.1111/pere.12053/abstract). Each discussant has a few minutes at the start of the roundtable to make an opening statement. Tim’s opening statement, or at least a very close approximation of what he plans to say, appears below.

Tim Loving’s Opening Statement:

“As a relationship scientist, with emphasis on ‘scientist,’ I believe strongly that it’s important for us to regularly take stock of what it is we as a field are trying to achieve and to give careful thought to the best way of getting there. In my view, and if I may speak for my colleagues Lorne and Etienne, and this is not unique to us by any means, we view our job as one of trying to provide as accurate an explanation of the ‘real world’ as is possible. One way we can increase the accuracy of that explanation is by being fully transparent in how we do science. The conclusions we draw are the pinnacle of the research process, but they can only be interpreted meaningfully when there is a clear accounting of how those conclusions were reached. Yet it is our results and conclusions that make up the bulk of published manuscripts. Transparency in the research process has typically been taken for granted, treated as something available upon request because there is not enough room for these details in print. This quirk of academic publishing, of being limited by how many print pages are available to a journal, has had the indirect effect of shining a brighter light on the final destination of the research process while casting a shadow on the journey.

We echo the suggestions of scholars across disciplines, including many within our own, and across many decades: shine the light brightly on the entire research journey, and share more openly how we obtained our results. To be clear, these issues have been discussed for centuries. Indeed, when the Royal Society was established in England in 1660, essentially creating what we now refer to as science, such importance was placed on transparency in the research process that members would witness one another conduct their experiments at meetings. This principle applies to all scientific disciplines, and we are no exception. In fact, given the complexity of our subject matter, where boundary conditions are the rule rather than the exception, I’d say we’re primed to take the lead in the call for research transparency and to serve as a model for other disciplines.

Unfortunately, discussions of ‘best practices’ in our field have come along at the same time that replication issues and outright fraud have publicly plagued other subdisciplines within our broader field of social psychology. But it’s important to remember that issues such as statistical power, sample size, transparency, and so on were being discussed well before the last few years. These issues may have served as a catalyst for our field to start having this discussion, but a quick look at writings in other disciplines makes it very clear that we’d have been having this discussion at some point anyway; the train was coming one way or another.

Finally, I want to say a few words about fears that becoming more transparent will place an undue burden on researchers. I’ll leave aside for now the fact that burdens are irrelevant if we truly care about providing accurate explanations of what happens in the real world; rather, let’s talk more broadly about change. As a new graduate student, I initially learned that the best way to deal with the dependency in dyadic data was to separate males and females and run the analyses separately. Then, lo and behold, the Actor-Partner Interdependence Model (APIM), multilevel modeling, and other techniques came about to help us deal with the dependency statistically. Guess what? Those techniques were new, and dare I say ‘hard’ to learn and do, relative to the old standard of just splitting our samples. But we did it. And we did it because it was the best way of helping us understand what was really going on.

This is just one example, and there are countless others, of how change advanced our field. And now we sit on the edge of another change, and the question is whether we want to fight it kicking and screaming or embrace it because it’s the right thing to do. We as a group have the ability to start the change now, and it will only take one academic generation. Each of us can take the time to set up an OSF account, or a mechanism of choice, to share our studies from conception to conclusion and beyond, because it will make us slow down a bit and be deliberate about what we’re doing, and it will help others carefully evaluate what we do as well; not because we’re out to get each other, but because we’re all contributing to the same knowledge base and care about our subject matter above and beyond our CVs. I’m making the shift in my own lab, yes, this somewhat old dog can learn new tricks, and I’m none the worse for wear. And, more importantly, it only took a few minutes.

Thanks, and I look forward to what I’m sure will be a lively discussion.”
