Monthly Archives: March 2013

Internet Ranting and the Myth of Catharsis – Scott McGreal (Unique—Like Everybody Else)

Recent research has found that internet rants make people angrier than before, not less angry. This builds on previous findings that "venting" actually makes anger worse and can lead to aggressive behavior. Expressing anger in a constructive, non-aggressive way leads to more beneficial outcomes than mindless ranting or venting.

read more

Pre-publication peer review can fall short anywhere – Sanjay Srivastava (The Hardest Science)

The other day I wrote about a recent experience participating in post-publication peer review. Short version: I picked up on some errors in a paper published in PLOS ONE, which led to a correction. In my post I made the following observation:

Is this a mark against pre-publication peer review? Obviously it’s hard to say from one case, but I don’t think it speaks well of PLOS ONE that these errors got through. Especially because PLOS ONE is supposed to emphasize “a high technical standard” and reporting of “sufficient detail” (the reason I noticed the issue with the SDs was because the article did not report effect sizes).

But this doesn’t necessarily make PLOS ONE worse than traditional journals like Psychological Science or JPSP, where similar errors get through all the time and then become almost impossible to correct.

My intention was to discuss pre- and post-publication peer review generally, and I went out of my way to cite evidence that mistakes can happen anywhere. But some comments I’ve seen online have characterized this as a mark against PLOS ONE (and my “I don’t think it speaks well of PLOS ONE” phrasing probably didn’t help). So I would like to note the following:

1. After my blog post went up yesterday, somebody alerted me that the first author of the PLOS ONE paper has posted corrections to 3 other papers on her personal website. The errors are similar to what happened at PLOS ONE. Continue reading

Reflections on a foray into post-publication peer review – Sanjay Srivastava (The Hardest Science)

Recently I posted a comment on a PLOS ONE article for the first time. As someone who had a decent chunk of his career before post-publication peer review came along — and has an even larger chunk of his career left with it around — it was an interesting experience.

It started when a colleague posted an article to his Facebook wall. I followed the link out of curiosity about the subject matter, but what immediately jumped out at me was that it was a 4-study sequence with pretty small samples. (See Uli Schimmack’s excellent article The ironic effect of significant results on the credibility of multiple-study articles [pdf] for why that’s noteworthy.) That got me curious about effect sizes and power, so I looked a little bit more closely and noticed some odd things. Like that different N’s were reported in the abstract and the method section. And when I calculated effect sizes from the reported means and SDs, some of them were enormous. Like Cohen’s d > 3.0 level of enormous. (If all this sounds a little hazy, it’s because my goal in this post is to talk about my experience of engaging in post-publication review — not to rehash the details. You can follow the links to the article and comments for those.)
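As a rough illustration of the back-of-the-envelope check described above, here is a minimal sketch of computing Cohen's d from reported group means and SDs. The numbers are made up for illustration, not taken from the paper in question.

```python
# Minimal sketch: Cohen's d from reported summary statistics.
# The inputs below are placeholders, not values from the article discussed above.

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean1 - mean2) / pooled_sd

# A between-group difference this large relative to the SDs gives d > 3,
# which is implausibly big for most psychological effects.
print(cohens_d(mean1=7.5, sd1=0.8, n1=20, mean2=4.9, sd2=0.9, n2=20))  # ~3.05
```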

In the old days of publishing, it wouldn’t have been clear what to do next. Continue reading

PYM Enters the Terrible Twos! – Michael Kraus (Psych Your Mind)

Two years ago today, this blog was born. Thanks to you, PYM readers, this once-tiny blog venture has been an overwhelming success--both in terms of outreach and, I think, in terms of fun (at least for the bloggers)! Let's check out some of the PYM blog stats after the jump.

Read More->

the truth is not optional: five bad reasons (and one mediocre one) for defending the status quo – Tal Yarkoni ([citation needed])

You could be forgiven for thinking that academic psychologists have all suddenly turned into professional whistleblowers. Everywhere you look, interesting new papers are cropping up purporting to describe this or that common-yet-shady methodological practice, and telling us what we can collectively do to solve the problem and improve the quality of the published literature. In just the last year or so, Uri Simonsohn introduced new techniques for detecting fraud, and used those tools to identify at least 3 cases of high-profile, unabashed data forgery. Simmons and colleagues reported simulations demonstrating that standard exploitation of research degrees of freedom in analysis can produce extremely high rates of false positive findings. Pashler and colleagues developed a “Psych file drawer” repository for tracking replication attempts. Several researchers raised trenchant questions about the veracity and/or magnitude of many high-profile psychological findings such as John Bargh’s famous social priming effects. Wicherts and colleagues showed that authors of psychology articles who are less willing to share their data upon request are more likely to make basic statistical errors in their papers. And so on and so forth. The flood shows no signs of abating; just last week, the APS journal Perspectives on Psychological Science announced that it’s introducing a new “Registered Replication Report” section that will commit to publishing pre-registered high-quality replication attempts, irrespective of their outcome.
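To make the Simmons and colleagues point concrete, here is a minimal simulation sketch (not the authors' actual code) of one researcher degree of freedom: measuring two correlated outcomes and reporting whichever one reaches significance. Even with no true effect, the false-positive rate climbs well above the nominal 5%.

```python
# Sketch of false-positive inflation from a single flexible-analysis choice:
# two correlated dependent variables, report whichever t-test comes out significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, alpha = 20, 10_000, 0.05
cov = [[1.0, 0.5], [0.5, 1.0]]  # two DVs correlated at r = .5
false_positives = 0

for _ in range(n_sims):
    # Both groups drawn from the same distribution, so any "effect" is a false positive.
    group_a = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
    group_b = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
    p1 = stats.ttest_ind(group_a[:, 0], group_b[:, 0]).pvalue
    p2 = stats.ttest_ind(group_a[:, 1], group_b[:, 1]).pvalue
    if min(p1, p2) < alpha:  # report whichever DV "worked"
        false_positives += 1

print(false_positives / n_sims)  # roughly 0.08-0.09 instead of 0.05
```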

Personally, I think these are all very welcome developments for psychological science. They’re solid indications that we psychologists are going to be able to police ourselves successfully in the face of some pretty serious problems, and they bode well for the long-term health of our discipline. My sense is that the majority of other researchers–perhaps the vast majority–share this sentiment. Still, as with any zeitgeist shift, there are always naysayers. In discussing these various developments and initiatives with other people, I’ve found myself arguing, with somewhat surprising frequency, with people who for various reasons think it’s not such a good thing that Uri Simonsohn is trying to catch fraudsters, or that social priming findings are being questioned, or that the consequences of flexible analyses are being exposed. Continue reading

The Misunderstood Personality Profile of Wikipedia Members – Scott McGreal (Unique—Like Everybody Else)

A widely reported study claimed that Wikipedia members have disagreeable and closed-minded personality traits. However, the study report contains serious errors, and these claims are therefore very misleading.

read more

Just Do It! – Brent Donnellan (The Trait-State Continuum)

I want to chime in about the exciting new section in Perspectives on Psychological Science dedicated to replication. (Note: Sanjay and David have more insightful takes!) This is an important development and I hope other journals follow with similar policies and guidelines. I have had many conversations about methodological issues with colleagues over the last several years, and I am constantly reminded of how academic types can talk themselves into inaction at the drop of a hat. The fact that something this big is actually happening in a high-profile outlet is breathtaking (but in a good way!).

Beyond the shout-out to Perspectives, I want to make a modest proposal: Donate 5 to 10% of your time to replication efforts. This might sound like a heavy burden, but I think it is a worthy goal. It is also easier to achieve with some creative multitasking. Steer a few of those undergraduate honors projects toward a meaningful replication study, or have first-year graduate students pick a study and try to replicate it during their first semester on campus. Then make sure to take an active role in the process to make these efforts worthwhile for the scientific community. Beyond that, let yourself be curious! If you read about an interesting study, try to replicate it. Continue reading

The PoPS replication reports format is a good start – Sanjay Srivastava (The Hardest Science)

Big news today is that Perspectives on Psychological Science is going to start publishing pre-registered replication reports. The inaugural editors will be Daniel Simons and Alex Holcombe, who have done the serious legwork to make this happen. See the official announcement and blog posts by Ed Yong and Melanie Tannenbaum. (Note: this isn’t the same as the earlier plan I wrote about for Psychological Science to publish replications, but it appears to be related.)

The gist of the plan is that after getting pre-approval from the editors (mainly to filter for important but as-yet unreplicated studies), proposers will create a detailed protocol. The original authors (and maybe other reviewers?) will have a chance to review the protocol. Once it has been approved, the proposer and other interested labs will run the study. Publication will be contingent on carrying out the protocol but not on the results. Collections of replications from multiple labs will be published together as final reports.

I think this is great news. In my ideal world published replications would be more routine, and wouldn’t require all the hoopla of prior review by original authors, multiple independent replications packaged together, etc. etc. Continue reading

A Replication Initiative from APS – David Funder (funderstorms)

Several of the major research organizations in psychology, including APA, EAPP (European Association of Personality Psychology) and SPSP, have been talking about the issue of replicability of published research, but APS has made the most dramatic move so far to actually do something about it.  The APS journal Perspectives on Psychological Science today announced a new policy to enable the publication of pre-registered, robust studies seeking to replicate important published findings.  The journal will add a new section for this purpose, edited by Dan Simons and Alex Holcombe.  For details, click here.

This idea has been kicked around in other places, including proposals for new journals exclusively dedicated to replication studies.  One of the most interesting aspects of the new initiative is that instead of isolating replications in an independent journal that few people might see, they will appear in an already widely read and prestigious journal with a high impact factor.

When a similar proposal — in the form of a suggested new journal — was floated in a meeting I attended a few weeks ago, it quickly stimulated controversy. Some saw the proposal as a self-defeating attack on our own discipline that would only undermine the credibility of psychological research.  Others saw it as a much-needed self-administered corrective action; better to come from within the field than be imposed from outside. And still others — probably the largest group — raised, and got a bit bogged down in, questions about the specifics of implementation.  For example, what will stop a researcher from running a failed replication study, and only then “pre-registering” it?  How many failed replications does it take to overturn the conclusions of a published study, and what does “failed replication” mean exactly, anyway?  What degree of statistical power should replication studies be required to have, and what effect size should be used to make this calculation? Continue reading
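To give a feel for why that last question matters, here is a hedged sketch of a standard two-sample power calculation. It shows how strongly the required sample size depends on the effect size you assume (the original published estimate versus a more conservative guess); the numbers are illustrative only.

```python
# Sketch: sample size per group for a two-sample t-test at 90% power,
# across a few hypothetical assumed effect sizes (Cohen's d).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.8, 0.5, 0.3):
    n_per_group = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.9)
    print(f"d = {d}: about {round(n_per_group)} participants per group")

# Roughly: d = 0.8 needs ~34 per group, d = 0.5 needs ~86, d = 0.3 needs ~235.
```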