Why doesn’t personality psychology have a replication crisis? – David Funder (funderstorms)

Because It’s Boring

“[Personality psychology] has reduced the chances of being wrong but palpably increased the fact of being boring. In making that transition, personality psychology became more accurate but less broadly interesting.”  — Roy Baumeister (2016, p. 6)

Many fields of research – not just social psychology but also biomedicine, cancer biology, economics, political science, and even physics – are experiencing crises of replicability.  Recent and classic results are challenged by reports that, when new investigators try to repeat them, they often simply can’t.  This fact has led to gnashing of teeth and rending of garments, not to mention back-and-forth controversies pitting creativity against rigor (see the article quoted in the epigraph), and it has spawned memorable phrases such as “replication police” and “shameless little bullies.”

But, as the quote above attests, personality psychology seems to be immune.  In particular, I am not aware of any major finding (1) in personality psychology that has experienced the kind of assault on its reliability that has been inflicted upon many findings in social psychology (2).  Why not?  Is it because personality psychology is boring?  Maybe so, and I’ll come back to that point at the end, but first let’s consider some other

Possible Reasons Personality Psychology Does Not Have a Replication Crisis

  1. Personality Psychology Takes Measurement Seriously
The typical study in personality psychology measures some attribute of persons (usually a personality trait) and also measures an outcome such as a behavior, a level of attainment, or an indicator of mental or physical health. Continue reading

Psychedelic Drugs and the Nature of Personality Change – Scott McGreal (Unique—Like Everybody Else)

A recent study found increases in openness to experience following a dose of LSD. More detailed studies on psychedelics may lead to a deeper understanding of personality change.

SPPS Special Issue on Research Methods – Simine Vazire (sometimes i'm wrong)

Social Psychological and Personality Science is now accepting submissions for a forthcoming special issue on “New developments in research methods for social/personality psychology.”

Recent advances in research design (e.g., crossed designs; Westfall, Kenny, & Judd, 2014), analysis (e.g., Bayesian approaches; Wagenmakers et al., under review), and meta-science (e.g., p-curve; Simonsohn, Simmons, & Nelson, in press) have opened up new possibilities for improving research methods in social and personality psychology.

Continue reading

the good, the bad, and the ugly – Simine Vazire (sometimes i'm wrong)


one of the themes of the replicability movement has been the Campaign for Real Data (Kaiser, 2012).  the idea is that real data, data that haven't been touched up by QRPs, are going to be imperfect, sometimes inconsistent.  part of what got us into this mess is the expectation that each paper needs to tell a perfect story, and any inconsistent results need to be swept under the rug.
whenever this comes up, i worry that we are sending researchers a mixed message.  on one hand we're saying that we should expect results to be messy.  on the other hand we're saying that we're going to expect even more perfection than before.  p = .04 used to be just fine; now it makes editors and reviewers raise an eyebrow and consider whether there are other signs that the result may not be reliable.  so which is it: are we going to tolerate more messiness, or are we going to expect stronger results?
yes.
on the face of it, these two values (more tolerance for messiness vs. more precise/significant estimates) seem contradictory.  but when we dig a little deeper, i don't think they are.  and i think it's important for people to be clear about what kind of messy is good-messy and what kind of messy is bad-messy. Continue reading

Using R to collect Twitter data – Carol Tweten (Person X Situation)

We (the Personality & Well-Being Lab at MSU) are currently conducting a study in which we are collecting the tweets of our Twitter-using participants. The learning curve for me to actually do this in R was a bit steep, so I thought I’d share what I’ve learned here. I’ll do my best to make this a sort of ‘guide’ for other researchers, because I really feel as though more researchers should incorporate online behavior data into their work. Here are the steps I went through (a short R sketch follows them below). I’ll also include comments on methodological decisions we had to make when designing our study.

Preliminary Steps:
  1. In order to collect data from Twitter, you first need to register an app with the Twitter API (Application Programming Interface). (Note that you need to be signed in to an existing Twitter account to do this. Additionally, your Twitter account needs to have an associated mobile phone number; this can be added in the “Mobile” section of the account settings.)
    1. Click on ‘Create New App’.
    2. Fill in the Name, Description, and Website. Continue reading
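
To give a rough sense of what the R side of this workflow can look like once the app is registered, here is a minimal sketch using the twitteR package (one common option for this task; the original post may use a different package, and the handle and all four keys below are placeholders rather than real credentials):

    # install.packages("twitteR")  # one common R client for the Twitter API
    library(twitteR)

    # Placeholders: paste in the keys generated for the app registered above
    consumer_key    <- "YOUR_CONSUMER_KEY"
    consumer_secret <- "YOUR_CONSUMER_SECRET"
    access_token    <- "YOUR_ACCESS_TOKEN"
    access_secret   <- "YOUR_ACCESS_SECRET"

    # Authenticate this R session with Twitter
    setup_twitter_oauth(consumer_key, consumer_secret, access_token, access_secret)

    # Pull up to 200 recent tweets from a (hypothetical) participant handle,
    # excluding retweets, then flatten the result into a data frame
    tweets    <- userTimeline("example_participant", n = 200, includeRts = FALSE)
    tweets_df <- twListToDF(tweets)

    head(tweets_df$text)  # inspect the raw tweet text

Depending on the study design, the same credentials could instead be used with searchTwitter() to pull tweets matching a search term rather than a specific participant’s timeline.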

What if Gilbert is Right? – David Funder (funderstorms)

I. The Story Until Now (For late arrivals to the party)

Over the decades, since about 1970, social psychologists conducted lots of studies, some of which found cute, counter-intuitive effects that gained great attention. After years of private rumblings that many of these studies – especially some of the cutest ones – couldn’t be replicated, a crisis suddenly broke out into the open (1). Failures to replicate famous and even beloved findings began to publicly appear, become well known, and be thoroughly argued over, not always in the most civil of terms. The “replicability crisis” became a thing.

But how bad was the crisis really? The accumulation of anecdotal stories and one-off failures to replicate was perhaps clarified to some extent by a major project organized by the Center for Open Science (COS), published last November, in which labs around the world tried to replicate 100 studies and, depending on your definition, “replicated” only 36% of them (2). In the face of all this, some optimists argued that social psychology shouldn’t really feel so bad, because failed replicators might simply be incompetent, if not actually motivated to fail, and the typical cute, counter-intuitive effect is a delicate flower that can only bloom under the most ideal climate and careful cultivation. Optimists of a different variety (including myself) also pointed out that psychology shouldn’t feel so bad, but for a different reason: problems of replicability are far from unique to our field. Failures to reproduce key findings have come to be seen as serious problems within biology, biochemistry, cardiac medicine, and even – disturbingly – cancer research. It was widely reported that the massive biotech company Amgen was unable to replicate 47 out of 53 seemingly promising cancer biology studies. If we have a problem, we are far from alone.

II. And Then Came Last Friday’s News (3)

Prominent psychology professors Daniel Gilbert and Tim Wilson published an article that “overturned” (4) the epic COS study. Continue reading

is this what it sounds like when the doves cry? – Simine Vazire (sometimes i'm wrong)


there are so many (very good) stories already about the RP:P that it's easy to feel like we're overreacting.  but there is a lot at stake.

some people feel that the reputation of our field is at stake.  i don't share that view.  i trust the public to recognize that this conversation about our methods is healthy and normal for science.  the public accepts that science progresses slowly - we waited over 40 years for the higgs boson, and, according to wikipedia, we're still not sure we found it.  i don't think we're going to look that bad if psychologists, as a field, ask the public for some patience while we improve our methods.  if anything, i think what makes us look bad is when psychology studies are reported in a way that is clearly miscalibrated - a way that makes us sound much more confident than scientists have any right to be when just starting out investigating a new topic.

what i think is at stake is not the reputation of our field, but our commitment to trying out these new practices and seeing how our results look.

in the press release, Gilbert is quoted as saying that the RP:P paper led to changes in policy at many scientific journals.  that's not my impression.  my impression is that the changes that happened came before the RP:P was published.  i also haven't seen a lot of big changes. Continue reading