Psychedelic Drugs and the Nature of Personality Change – Scott McGreal (Unique—Like Everybody Else)

A recent study found increases in openness to experience following a dose of LSD. More detailed studies on psychedelics may lead to a deeper understanding of personality change.

SPPS Special Issue on Research Methods – Simine Vazire (sometimes i'm wrong)

Social Psychological and Personality Science is now accepting submissions for a forthcoming special issue on “New developments in research methods for social/personality psychology.”

Recent advances in research design (e.g., crossed designs; Westfall, Kenny, & Judd, 2014), analysis (e.g., Bayesian approaches; Wagenmakers et al., under review), and meta-science (e.g., p-curve; Simonsohn, Simmons, & Nelson, in press) have opened up new possibilities for improving research methods in social and personality psychology.

Continue reading

the good, the bad, and the ugly – Simine Vazire (sometimes i'm wrong)


one of the themes of the replicability movement has been the Campaign for Real Data (Kaiser, 2012).  the idea is that real data, data that haven't been touched up by QRPs, are going to be imperfect, sometimes inconsistent.  part of what got us into this mess is the expectation that each paper needs to tell a perfect story, and any inconsistent results need to be swept under the rug.
whenever this comes up, i worry that we are sending researchers a mixed message.  on one hand we're saying that we should expect results to be messy.  on the other hand we're saying that we're going to expect even more perfection than before.  p = .04 used to be just fine, now it makes editors and reviewers raise an eyebrow and consider whether there are other signs that the result may not be reliable.  so which is it, are we going to tolerate more messiness or are we going to expect stronger results?
on the face of it, these two values (more tolerance for messiness vs. more precise/significant estimates) seem contradictory.  but when we dig a little deeper, i don't think they are.  and i think it's important for people to be clear about what kind of messy is good-messy and what kind of messy is bad-messy. Continue reading

Using R to collect Twitter data – Carol Tweten (Person X Situation)

We (the Personality & Well-Being Lab at MSU) are currently conducting a study in which we are collecting the tweets of our Twitter-using participants. The learning curve for me to actually do this in R was a bit steep, so I thought I’d share what I’ve learned here. I’ll do my best to make this a sort of ‘guide’ for other researchers, because I really feel that more researchers should incorporate online behavior data into their work. Here are the steps I went through, along with comments on methodological decisions we had to make when designing our study.

Preliminary Steps:
  1. In order to collect data from Twitter, you first need to register an app with the Twitter API (Application Programming Interface). (Note that you need to be signed in to an existing Twitter account to do this. Your Twitter account also needs an associated mobile phone number, which can be added under the “Mobile” section of the account settings.)
    1. Click on ‘Create New App’.
    2. Fill in the Name, Description, and Website. Continue reading
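
Once the registration steps above have yielded credentials, the collection itself can be scripted. The original post works in R; purely as an illustration of the same pipeline, here is a minimal Python sketch using the third-party tweepy library. The function names, placeholder credentials, and the `tweets_to_rows` helper are my own illustrative assumptions, not taken from the post.

```python
def tweets_to_rows(tweets):
    """Flatten tweet objects (anything with id/created_at/text attributes)
    into plain dicts, ready to write out as CSV or load into a data frame."""
    return [{"id": t.id, "created_at": t.created_at, "text": t.text}
            for t in tweets]

def fetch_user_tweets(screen_name, consumer_key, consumer_secret,
                      access_token, access_secret, n=200):
    """Pull up to n recent tweets from one participant's timeline, using
    the four credentials issued for the app registered in step 1."""
    import tweepy  # third-party: pip install tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth)
    return api.user_timeline(screen_name=screen_name, count=n)

# Usage (requires real credentials from your registered app):
# tweets = fetch_user_tweets("participant_handle", CK, CS, AT, AS)
# rows = tweets_to_rows(tweets)
```

Keeping the flattening step separate from the network call makes it easy to decide up front which tweet fields your study will retain, which is one of the methodological decisions the post alludes to.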

What if Gilbert is Right? – David Funder (funderstorms)

I. The Story Until Now (For late arrivals to the party)

Over the decades, since about 1970, social psychologists conducted lots of studies, some of which found cute, counter-intuitive effects that gained great attention. After years of private rumblings that many of these studies – especially some of the cutest ones – couldn’t be replicated, a crisis suddenly broke out into the open (1). Failures to replicate famous and even beloved findings began to publicly appear, become well known, and be thoroughly argued over, not always in the most civil of terms. The “replicability crisis” became a thing.

But how bad was the crisis really? The accumulation of anecdotal stories and one-off failures to replicate was perhaps clarified to some extent by a major project organized by the Center for Open Science (COS), published last November, in which labs around the world tried to replicate 100 studies and, depending on your definition, “replicated” only 36% of them (2). In the face of all this, some optimists argued that social psychology shouldn’t really feel so bad, because failed replicators might simply be incompetent, if not actually motivated to fail, and the typical cute, counter-intuitive effect is a delicate flower that can only bloom under the most ideal climate and careful cultivation. Optimists of a different variety (including myself) also pointed out that psychology shouldn’t feel so bad, but for a different reason: problems of replicability are far from unique to our field. Failures to reproduce key findings have come to be seen as serious problems within biology, biochemistry, cardiac medicine, and even – disturbingly – cancer research. It was widely reported that the massive biotech company Amgen was unable to replicate 47 out of 53 seemingly promising cancer biology studies. If we have a problem, we are far from alone.

II. And Then Came Last Friday’s News (3)

Prominent psychology professors Daniel Gilbert and Tim Wilson published an article that “overturned” (4) the epic COS study. Continue reading

is this what it sounds like when the doves cry? – Simine Vazire (sometimes i'm wrong)


there are so many (very good) stories already about the RP:P, it's easy to feel like we're overreacting.  but there is a lot at stake.  some people feel that the reputation of our field is at stake.  i don't share that view.  i trust the public to recognize that this conversation about our methods is healthy and normal for science.  the public accepts that science progresses slowly - we waited over 40 years for the higgs boson, and, according to wikipedia, we're still not sure we found it.  i don't think we're going to look that bad if psychologists, as a field, ask the public for some patience while we improve our methods.  if anything, i think what makes us look bad is when psychology studies are reported in a way that is clearly miscalibrated, that makes us sound much more confident than scientists have any right to be when just starting out investigating a new topic.

what i think is at stake is not the reputation of our field, but our commitment to trying out these new practices and seeing how our results look.

in the press release, Gilbert is quoted as saying that the RP:P paper led to changes in policy at many scientific journals.  that's not my impression.  my impression is that the changes that happened came before the RP:P was published.  i also haven't seen a lot of big changes. Continue reading

Evaluating a new critique of the Reproducibility Project – Sanjay Srivastava (The Hardest Science)

Over the last five years psychologists have been paying more and more attention to issues that could be diminishing the quality of our published research — things like low power, p-hacking, and publication bias. We know these things can affect reproducibility, but it can be hard to gauge their practical impact. The Reproducibility Project: Psychology (RPP), published last year in Science, was a massive, coordinated effort to produce an estimate of where several of the field’s top journals stood in 2008, before all the attention and concerted improvement began. The RPP is not perfect, and the paper is refreshingly frank about its limitations and nuanced about its conclusions. But all science proceeds on fallible evidence (there isn’t any other kind), and it has been welcomed by many psychologists as an informative examination of the reliability of our published findings.

Welcomed by many, but not welcomed by all. In a technical commentary released today in Science, Dan Gilbert, Gary King, Stephen Pettigrew, and Tim Wilson take exception to the conclusions that the RPP authors and many scientists who read it have reached. They offer re-analyses of the RPP, some incorporating outside data. They maintain that the RPP authors’ conclusions are wrong, and on re-examination the data tell us that “the reproducibility of psychological science is quite high.” (The RPP authors published a reply.) What should we make of it? Continue reading