The Rules of Replication – Rich Lucas (The Desk Reject)

Recently I traveled to Vienna for APS’s International Conference on Psychological Science, where I gave a talk on “The Rules of Replication.” Thanks to the other great talks in the session, it was well attended. But as anyone who goes to academic conferences knows, “well attended” typically means that at best, there may have been a couple hundred people in the room. And it seems like kind of a waste to prepare a talk—one that I will probably only give once—for such a limited audience.

Because, change is hard – Brent Roberts (pigee)

This morning I reposted on twitter a quote from a paper entitled “The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research.” The quote, which is worth repeating, was “reliable conclusions on replicability…of a finding can only be drawn using cumulative evidence from multiple independent studies.” An esteemed colleague (Daniël Lakens @lakens) responded “I just reviewed this paper for PeerJ. I didn’t think it was publishable. Lacks structure, nothing new.” Setting aside the typical bromide that I mostly curate information on twitter so that I can file and read things later, the last clause “nothing new” struck a nerve. It reminded me of some unappealing conclusions that I’ve arrived at about the reproducibility movement, conclusions that led me somewhere different: that it is very, very important that we post and repost papers like this if we hope to move psychological science towards a more robust future. From my current vantage, producing new and innovative insights about reproducibility is not the point. There has been almost nothing new in the entire reproducibility discussion. And that is okay. I mean, the methodologists (whether terroristic or not) have been telling us for decades that our typical approach to evaluating our research findings is problematic. Continue reading
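The quoted point is easy to make concrete. Below is a minimal sketch in Python of inverse-variance (fixed-effect) pooling, the standard way cumulative evidence from independent studies gets combined; the effect sizes and standard errors are invented for illustration, not drawn from any real literature:

```python
# Toy fixed-effect meta-analysis: replicability is judged from cumulative
# evidence across independent studies, not from any single result.
# The effect sizes and SEs below are made up for illustration.
import numpy as np

d  = np.array([0.45, 0.10, 0.30, 0.05, 0.25])   # per-study effect sizes (Cohen's d)
se = np.array([0.20, 0.15, 0.25, 0.12, 0.18])   # per-study standard errors

w = 1 / se**2                                    # inverse-variance weights
d_pooled  = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))

print(f"pooled d = {d_pooled:.2f}, 95% CI "
      f"[{d_pooled - 1.96*se_pooled:.2f}, {d_pooled + 1.96*se_pooled:.2f}]")
# No single study here is decisive, but the pooled estimate is far more
# precise than any individual one; that is the "cumulative evidence" the
# quote refers to.
```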

Perspectives You Won’t Read in Perspectives: Thoughts on Gender, Power, & Eminence – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

This is a guest post by Katie Corker on behalf of a group of us.

Rejection hurts. No amount of Netflix binge watching, nor ice cream eating, nor crying to one's dog* really takes the sting out of feeling rejected. Yet, as scientific researchers, we have to deal with an almost constant stream of rejection - there's never enough grant money or journal space to go around.
Which brings us to today's topic. All six of us were recently rejected** from the Perspectives in Psychological Science special issue featuring commentaries on scientific eminence. The new call for submissions was a follow-up to an earlier symposium entitled "Am I Famous Yet?", which featured commentaries on fame and merit in psychological research from seven eminent white men and Alice Eagly.*** The new call was issued in response to a chorus of nasty women and other dissidents who insisted that their viewpoints hadn't been represented by the scholars in the original special issue. The new call explicitly invited these "diverse perspectives" to speak up (in 1,500 words or less****).
Each of the six of us independently rose to the challenge and submitted comments. None of us were particularly surprised to receive rejections - after all, getting rejected is just about the most ordinary thing that can happen to a practicing researcher. Continue reading

Learning exactly the wrong lesson – Sanjay Srivastava (The Hardest Science)

For several years now I have heard fellow scientists worry that the dialogue around open and reproducible science could be used against science – to discredit results that people find inconvenient and even to de-fund science. And this has not just been fretting around the periphery. I have heard these concerns raised by scientists who hold policymaking positions in societies and journals. A recent article by Ed Yong talks about this concern in the present political climate.

In this environment, many are concerned that attempts to improve science could be judo-flipped into ways of decrying or defunding it. “It’s been on our minds since the first week of November,” says Stuart Buck, Vice President of Research Integrity at the Laura and John Arnold Foundation, which funds attempts to improve reproducibility.

The worry is that policy-makers might ask why so much money should be poured into science if so many studies are weak or wrong, or why studies should be allowed into the policy-making process if they’re inaccessible to public scrutiny. At a recent conference on reproducibility run by the National Academies of Sciences, clinical epidemiologist Hilda Bastian says that she and other speakers were told to consider these dangers when preparing their talks.

One possible conclusion is that this means we should slow down science’s movement toward greater openness and reproducibility. As Yong writes, “Everyone I spoke to felt that this is the wrong approach.” Continue reading

False-positive psychology five years later – Sanjay Srivastava (The Hardest Science)

Joe Simmons, Leif Nelson, and Uri Simonsohn have written a 5-years-later[1] retrospective on their “false-positive psychology” paper. It is for an upcoming issue of Perspectives on Psychological Science dedicated to the most-cited articles from APS publications. A preprint is now available. It’s a short and snappy read with some surprises and gems. For example, footnote 2 notes that the Journal of Consumer Research declined to adopt their disclosure recommendations because they might “dull … some of the joy scholars may find in their craft.” No, really. For the youngsters out there, they do a good job of capturing in a sentence a common view of what we now call p-hacking: “Everyone knew it was wrong, but they thought it was wrong the way it’s wrong to jaywalk. We decided to write ‘False-Positive Psychology’ when simulations revealed it was wrong the way it’s wrong to rob a bank.”[2] The retrospective also contains a review of how the paper has been cited in three top psychology journals. About half of the citations are from researchers following the original paper’s recommendations, but typically only a subset of them. The most common citation practice is to justify having barely more than 20 subjects per cell, a cutoff they now describe as a “comically low threshold” and treat with more nuance. Continue reading
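To see the bank-robbery point for yourself, here is a minimal sketch of the kind of simulation that motivated the original paper: researcher degrees of freedom applied to data with no true effect. The specific flexibilities used here (two correlated DVs, their composite, and one round of optional stopping) are illustrative assumptions, not the authors’ exact design.

```python
# Minimal simulation of how undisclosed flexibility inflates false positives,
# in the spirit of Simmons, Nelson, & Simonsohn (2011). Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_SIMS, ALPHA = 5_000, 0.05
COV = [[1, 0.5], [0.5, 1]]  # two DVs, correlated r = .5, NO true group effect

def any_significant(a, b):
    """p-hack across outcomes: test DV1, DV2, and their average; keep the best p."""
    ps = [stats.ttest_ind(a[:, 0], b[:, 0]).pvalue,
          stats.ttest_ind(a[:, 1], b[:, 1]).pvalue,
          stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue]
    return min(ps) < ALPHA

hits = 0
for _ in range(N_SIMS):
    n = 20  # the "barely more than 20 per cell" heuristic
    g1 = rng.multivariate_normal([0, 0], COV, size=n)
    g2 = rng.multivariate_normal([0, 0], COV, size=n)
    sig = any_significant(g1, g2)
    if not sig:
        # Optional stopping: not significant yet? Add 10 per cell and retest.
        g1 = np.vstack([g1, rng.multivariate_normal([0, 0], COV, size=10)])
        g2 = np.vstack([g2, rng.multivariate_normal([0, 0], COV, size=10)])
        sig = any_significant(g1, g2)
    hits += sig

print(f"false-positive rate: {hits / N_SIMS:.1%} (nominal: {ALPHA:.0%})")
```

Even these two modest flexibilities push the false-positive rate well above the nominal 5%; the paper’s simulations, which combined more of them, got much higher.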

Is Using Profanity a Sign of Honesty? – Scott McGreal (Unique—Like Everybody Else)

A recent paper suggests that profanity may be a reflection of emotional honesty and candor. However, closer examination of the studies' results casts doubt on this idea.

looking under the hood – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

[image: lipstick on a hippo]

before modern regulations, used car dealers didn't have to be transparent.  they could take any lemon, pretend it was a solid car, and fleece their customers.  this is how used car dealers became the butt of many jokes.
scientists are in danger of meeting the same fate.*  the scientific market is unregulated, which means that scientists can wrap their shaky findings in nice packaging and fool many, including themselves.  in a paper that just came out in Collabra: Psychology,** i describe how lessons from the used car market can save us.  this blog post is the story of how i came up with this idea.
last summer, i read Smaldino and McElreath's great paper on "The natural selection of bad science."  i agreed with almost everything in there, but there was one thing about it that rattled me.  their argument rests on the assumption that journals do a bad job of selecting for rigorous science. they write "An incentive structure that rewards publication quantity will, in the absence of countervailing forces, select for methods that produce the greatest number of publishable results." Continue reading
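the quoted mechanism is simple enough to sketch. here is a toy version (in Python) of the selection dynamic Smaldino and McElreath describe, not their actual model; every parameter below is invented for illustration:

```python
# Toy selection dynamic: labs vary in rigor; rigorous methods yield fewer
# "publishable" results, and new labs copy the methods of labs that publish
# the most. Parameters are invented, not taken from Smaldino & McElreath.
import numpy as np

rng = np.random.default_rng(0)
N_LABS, GENERATIONS = 100, 50
rigor = rng.uniform(0.1, 1.0, N_LABS)  # high rigor -> fewer, sounder papers

for _ in range(GENERATIONS):
    # publication counts fall as rigor rises
    papers = rng.poisson(5 * (1.1 - rigor))
    # "selection": incoming labs imitate highly publishing labs, with noise
    weights = (papers + 1e-9) / (papers + 1e-9).sum()
    parents = rng.choice(N_LABS, size=N_LABS, p=weights)
    rigor = np.clip(rigor[parents] + rng.normal(0, 0.02, N_LABS), 0.1, 1.0)

print(f"mean rigor after selection: {rigor.mean():.2f} (started near 0.55)")
```

absent a countervailing force, mean rigor drifts steadily downward, which is exactly the "natural selection of bad science" the quote warns about.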