What happens when the authors of studies linking candidate gene polymorphisms to response to drug consumption tried to replicate their own research?
As many of you know, the saga of replication problems in social and personality psychology continues unabated. The most recent dust-up is over the ability of some researchers to replicate Dijksterhuis’ professor-prime studies and the ensuing arguments over those attempts.
While social and personality psychologists “discuss” the adequacy of the replication attempts in our field, a truly remarkable paper was published in Neuropsychopharmacology (Hart, de Wit, & Palmer, 2013). The second and third authors have a long collaborative history working on the genetics of drug addiction. In fact, they have published 12 studies linking variations in candidate genes, such as BDNF, DRD2, and COMT, to intermediary phenotypes related to drug addiction. As they note in the introduction to their paper, these studies have been cited hundreds of times and would lead one to believe that single SNPs or variations in specific genes are strongly linked to the way people react to amphetamines.
The 12 original studies all relied on a really nice experimental paradigm. The participants received placebos and varying doses of amphetamines across several sessions, and the experimenters and participants were blind to what dose they received. The order of drug administration was counterbalanced. Then, participants rated their drug-related experience over the few hours that they stayed in the lab. Across the 12 studies the authors, their post docs, and graduate students published studies linking the genetic polymorphisms to outcomes like feelings of anxiety, elation, vigor, positive mood, and even concrete outcomes such as heart rate and blood pressure.
This week in PIG-IE we discussed the just-published paper by an all-star team of “skeptical” researchers that examined the reliability of neuroscience research. It was a chance to take a break from our self-flagellation to see whether some of our colleagues suffer from similar problematic research practices.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
A large survey found that self-identified dog people and cat people differ on all of the Big Five personality traits. Cat people have a personality profile that stands out from other people more than that of dog people does. Whether these personality differences affect other life outcomes, such as happiness, has not been explored.
In a previous post, I wrote about the contentious atmosphere that so often surrounds replication studies, and fantasized about a world in which one might occasionally see replication researchers and the original authors come together in “a joint effort to share methods, look at data together, and come to a collaborative understanding of an important scientific issue.” Happily, one example that comes close to this ideal has recently been accepted for publication in Psychological Science, the same journal that published the original paper. The authors of both the original and replication studies appear to have worked together to share information about procedures and analyses, which, while perhaps not a full collaboration, is at least cooperation of a sort that’s seen too rarely. The result was that the original, intriguing finding did not replicate; two large new studies obtained non-significant findings in the wrong direction. The hypothesis that anxiously attached people might prefer warm foods when their attachment concerns are activated was provocative, to say the least. But it seems to have been wrong.
With this example now out there, I hope others follow the same path towards helping the scientific literature perform the self-correcting process that, in principle, is its principal distinctive advantage. I also hope that, one of these days, an attempt to independently replicate a provocative finding will actually succeed! Now that would be an important step forward.
Sanjay Srivastava and Job van Wolferen have also commented on this replication study.
Etienne LeBel writes:
My colleague [Lorne Campbell] and I just got a paper accepted at Psych Science that reports on the outcome of two strict direct replications in which we worked very closely with the original author to keep all methodological design specifications as similar as possible to those in the original study (and, unfortunately, did not reproduce the original finding).
We believe this is an important achievement for the “replication movement” because it shows (a) that attitudes are changing at the journal level with regard to rewarding direct replication efforts (to our knowledge these are the first strictly direct replications to be published at a top journal like Psych Science [JPSP eventually published large-scale failed direct replications of Bem's ESP findings, but this was of course a special case]) and (b) that direct replication endeavors can contribute new knowledge concerning a theoretical idea while maintaining a cordial, non-adversarial atmosphere with the original author. We really want to emphasize this point the most to encourage other researchers to engage in similar direct replication efforts. Science should first and foremost be about the ideas rather than the people behind the ideas; we’re hoping that examples like ours will sensitize people to a more functional research culture where it is OK and completely normal for ideas to be revised given new evidence.
An important achievement indeed. The original paper was published in Psychological Science too, so it is especially good to see the journal owning the replication attempt. And hats off to LeBel and Campbell for taking this on. Someday direct replications will hopefully be more normal, but in the world we currently live in it takes some gumption to go out and try one.
I also appreciated the very fact-focused and evenhanded tone of the writeup. If I can quibble, I would ideally have liked to see a statistical test contrasting their effect against the original one, testing the hypothesis that the replication result differs from the original result. I am sure it would have been significant, and it would have been preferable to comparing the original paper’s significant rejection of the null against the replications’ non-significant tests of the null.
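For readers who want to run that kind of contrast themselves, here is a minimal sketch of one common approach: a z-test on the difference between two independent correlations after Fisher's r-to-z transformation. The numbers plugged in at the bottom are entirely hypothetical, not taken from either paper.

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Z-test for the difference between two independent correlations,
    using Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)  # Fisher transform of each correlation
    z2 = math.atanh(r2)
    # Standard error of the difference between the transformed values
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    # Two-tailed p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical numbers: original r = .35 with n = 50,
# replication r = -.05 with n = 300.
z, p = compare_correlations(0.35, 50, -0.05, 300)
```

With numbers like these, the test would flag the two results as significantly different even though only one of them significantly rejects the null on its own, which is exactly the distinction the quibble is about.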
I’m working on a TOP SEKKRIT* project involving large-scale data mining of the psychology literature. I don’t have anything to say about the TOP SEKKRIT* project just yet, but I will say that in the process of extracting certain information I needed in order to do certain things I won’t talk about, I ended up with certain kinds of data that are useful for certain other tangential analyses. Just for fun, I threw some co-authorship data from 2,000+ Psychological Science articles into the d3.js blender, and out popped an interactive network graph of all researchers who have published at least 2 papers in Psych Science in the last 10 years**. It looks like this:
You can click on the image to take a closer (and interactive) look.
I don’t think this is very useful for anything right now, but if nothing else, it’s fun to drag Adam Galinsky around the screen and watch half of the field come along for the ride. There are plenty of other more interesting things one could do with this, though, and it’s also quite easy to generate the same graph for other journals, so I expect to have more to say about this later on.
* It’s not really TOP SEKKRIT at all–it just sounds more exciting that way.
** Or, more accurately, researchers who have co-authored at least 2 Psych Science papers with other researchers who meet the same criterion.
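For what it's worth, the kind of filter described in the footnote can be approximated in a few lines. This sketch uses made-up author names and a simpler "appears on at least 2 papers" cut rather than the exact recursive criterion, then counts weighted co-authorship edges and emits the node/link structure a d3.js force layout expects.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: one author list per article.
articles = [
    ["A. Smith", "B. Jones"],
    ["A. Smith", "B. Jones", "C. Lee"],
    ["C. Lee", "D. Kim"],
]

# Count how many articles each author appears on, and keep
# only authors with at least 2 papers (simplified criterion).
paper_counts = Counter(a for authors in articles for a in authors)
keep = {a for a, n in paper_counts.items() if n >= 2}

# Build weighted co-authorship edges among the retained authors.
edges = Counter()
for authors in articles:
    for pair in combinations(sorted(set(authors) & keep), 2):
        edges[pair] += 1

# Emit a d3.js-style node/link structure.
graph = {
    "nodes": [{"id": a} for a in sorted(keep)],
    "links": [{"source": a, "target": b, "weight": w}
              for (a, b), w in edges.items()],
}
```

Feeding `graph` to a standard d3 force-directed layout would reproduce the drag-Galinsky-around effect described above, since heavily connected authors pull their whole component along with them.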
Recent research has found that internet rants make people angrier, not less angry. This builds on previous findings that "venting" actually makes anger worse and can lead to aggressive behavior. Expressing anger in a constructive, non-aggressive way leads to more beneficial outcomes than mindless ranting or venting.