The Rules of Replication: Part II – Rich Lucas (The Desk Reject)

[Photo caption: Cece doesn’t understand the rules of the couch.]

Do replication studies need special rules? In my previous post I focused on the question of whether replicators need to work with original authors when conducting their replication studies. I argued that this rule is based on the problematic idea that original authors somehow own an effect and that their reputations will be harmed if that effect turns out to be fragile or to have been a false positive.

be your own a**hole – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]


do you feel frustrated by all the different opinions about what good science looks like?  do you wish there were some concrete guidelines to help you know when to trust your results?  well don't despair! it's true that many of the most hotly debated topics in replicability don't have neat answers.  we could go around and around forever.  so in these tumultuous times, i like to look for things i can hold on to - things that have mathematical answers. here's one: what should we expect p-values for real effects to look like?  everyone's heard a lot about this,* thanks to p-curve, but every time i run the numbers i find that my intuitions are off. way off.  so i decided to write a blog post to try to make these probabilities really sink in.
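The "running the numbers" she describes can be checked with a short simulation. The sketch below is mine, not code from her post, and the specific numbers (a true effect of d = 0.5, n = 50 per group, a known-variance z-test as a simplified stand-in for the t-test) are illustrative assumptions. Under a real effect, p-values pile up near zero, the right-skew that p-curve looks for, yet a sizable fraction still land above .05:

```python
import random
from statistics import NormalDist

# Illustrative assumptions (not from the post): true effect d = 0.5,
# n = 50 per group, known SD = 1, two-tailed z-test.
random.seed(1)
n, d, sims = 50, 0.5, 10_000
norm = NormalDist()

def one_p_value():
    """Simulate one two-group experiment and return its two-tailed p-value."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(d, 1.0) for _ in range(n)]
    # SD of the difference between two means, each with variance 1/n
    z = (sum(b) / n - sum(a) / n) / ((2 / n) ** 0.5)
    return 2 * (1 - norm.cdf(abs(z)))

ps = [one_p_value() for _ in range(sims)]
sig = sum(p < 0.05 for p in ps) / sims   # power: roughly 0.70 for these numbers
tiny = sum(p < 0.01 for p in ps) / sims  # noticeably smaller than power at .05
print(f"p < .05: {sig:.2f}   p < .01: {tiny:.2f}")
```

Even with respectable power, roughly three in ten of these simulated studies of a perfectly real effect come out non-significant, which is the kind of counterintuitive probability the post digs into.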

Making good on a promise – Brent Roberts (pigee)

At the end of my previous blog “Because, change is hard“, I said, and I quote: “So, send me your huddled, tired essays repeating the same messages about improving our approach to science that we’ve been making for years and I’ll post, repost, and blog about them every time.” Well, someone asked me to repost theirs. So here it is: http://www.nature.com/news/no-researcher-is-too-junior-to-fix-science-1.21928. It is a nice piece by John Tregoning. Speaking of which, there were two related blogs posted right after the change is hard piece that are both worth reading. The first, by Dorothy Bishop, is brilliant and counters my pessimism so effectively I’m almost tempted to call her Simine Vazire: http://deevybee.blogspot.co.uk/2017/05/reproducible-practices-are-future-for.html And if you missed it, James Heathers has a spot-on post about the New Bad People: https://medium.com/@jamesheathers/meet-the-new-bad-people-4922137949a1

The Rules of Replication – Rich Lucas (The Desk Reject)

Recently I traveled to Vienna for APS’s International Conference on Psychological Science, where I gave a talk on “The Rules of Replication.” Thanks to the other great talks in the session, it was well attended. But as anyone who goes to academic conferences knows, “well attended” typically means that at best, there may have been a couple hundred people in the room. And it seems like kind of a waste to prepare a talk—one that I will probably only give once—for such a limited audience.

Because, change is hard – Brent Roberts (pigee)

I reposted a quote from a paper on twitter this morning entitled “The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research.” The quote, which is worth repeating, was “reliable conclusions on replicability…of a finding can only be drawn using cumulative evidence from multiple independent studies.” An esteemed colleague (Daniël Lakens @lakens) responded “I just reviewed this paper for PeerJ. I didn’t think it was publishable. Lacks structure, nothing new.” Setting aside the typical bromide that I mostly curate information on twitter so that I can file and read things later, the last clause “nothing new” struck a nerve. It reminded me of some unappealing conclusions that I’ve arrived at about the reproducibility movement that lead to a different conclusion—that it is very, very important that we post and repost papers like this if we hope to move psychological science towards a more robust future. From my current vantage, producing new and innovative insights about reproducibility is not the point. There has been almost nothing new in the entire reproducibility discussion. And, that is okay. I mean, the methodologists (whether terroristic or not) have been telling us for decades that our typical approach to evaluating our research findings is problematic.

Perspectives You Won’t Read in Perspectives: Thoughts on Gender, Power, & Eminence – Simine Vazire (sometimes i'm wrong)


This is a guest post by Katie Corker on behalf of a group of us

Rejection hurts. No amount of Netflix binge watching, nor ice cream eating, nor crying to one's dog* really takes the sting out of feeling rejected. Yet, as scientific researchers, we have to deal with an almost constant stream of rejection - there's never enough grant money or journal space to go around.
Which brings us to today's topic. All six of us were recently rejected** from the Perspectives in Psychological Science special issue featuring commentaries on scientific eminence. The new call for submissions was a follow-up to an earlier symposium entitled "Am I Famous Yet?", which featured commentaries on fame and merit in psychological research from seven eminent white men and Alice Eagly.*** The new call was issued in response to a chorus of nasty women and other dissidents who insisted that their viewpoints hadn't been represented by the scholars in the original special issue. The new call explicitly invited these "diverse perspectives" to speak up (in 1,500 words or less****).
Each of the six of us independently rose to the challenge and submitted comments. None of us were particularly surprised to receive rejections - after all, getting rejected is just about the most ordinary thing that can happen to a practicing researcher.

Learning exactly the wrong lesson – Sanjay Srivastava (The Hardest Science)

For several years now I have heard fellow scientists worry that the dialogue around open and reproducible science could be used against science – to discredit results that people find inconvenient and even to de-fund science. And this has not just been fretting around the periphery. I have heard these concerns raised by scientists who hold policymaking positions in societies and journals. A recent article by Ed Yong talks about this concern in the present political climate.

In this environment, many are concerned that attempts to improve science could be judo-flipped into ways of decrying or defunding it. “It’s been on our minds since the first week of November,” says Stuart Buck, Vice President of Research Integrity at the Laura and John Arnold Foundation, which funds attempts to improve reproducibility.

The worry is that policy-makers might ask why so much money should be poured into science if so many studies are weak or wrong? Or why should studies be allowed into the policy-making process if they’re inaccessible to public scrutiny? At a recent conference on reproducibility run by the National Academies of Sciences, clinical epidemiologist Hilda Bastian says that she and other speakers were told to consider these dangers when preparing their talks.

One possible conclusion is that this means we should slow down science’s movement toward greater openness and reproducibility. As Yong writes, “Everyone I spoke to felt that this is the wrong approach.”