Category Archives: sometimes i’m wrong

now is the time to double down on self-examination – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

it can be tempting, when contemplating the onslaught that science is likely to face from the next administration and congress, to scrub away any sign of self-criticism or weakness that could be used against us.  as a "softer" science, psychology has reason to be especially nervous.*

but hiding our flaws is exactly the wrong response.  if we do that, we will be contributing to our own demise. the best weapon anti-science people can use against us is to point to evidence that we are no different from other ways of knowing.  that we have no authority when it comes to empirical/scientific questions.  our authority comes from the fact that we are open to scrutiny, to criticism, to being wrong.  the failed replications, and the fact that we are publishing and discussing them openly, are the best evidence we have that we are a real science.  that we are different from propaganda, appeals to authority, or intuition.  we are falsifiable.  the proof is that we have, on occasion, falsified ourselves.
we should wear our battle with replicability as a badge of honor. Continue reading

who will watch the watchers?

 
he makes it look easy.
i wasn't going to write a blog post about susan fiske's column.  many others have already raised excellent points about the earlier draft of her column, and about the tone discussion more generally.  but i have two points to make, and poor self-control.
point #1.
i have a complicated relationship with the tone issue.  on one hand, i hate the "if you can't stand the heat, get out of the kitchen" attitude.  being able to stand the heat is not a personal accomplishment.  it's often a consequence of privilege.  those who have been burnt many times before (by disadvantage, silencing, etc.) are less likely to want to hang out in a place with a ton of heat.  and we need to make our field welcoming to those people who refuse to tolerate bullshit.  we need more people like that, not fewer. Continue reading

i have found the solution and it is us


bear, having recently joined SIPS

i have found scientific utopia.*

sometimes, when i lie awake at night, it's hard for me to believe that science will ever look the way i want it to look,** with everyone being skeptical of preliminary evidence, conclusions being circumscribed, studies being pre-registered, data and materials being open, and civil post-publication criticism being a normal part of life.
then i realized that utopia already exists.  it's how we treat replication studies.
i've never tried to do a replication study,*** but some of my best friends (and two of my grad students) are replicators.  so i know a little bit about the process of trying to get a replication study published.  short version: it's super hard.
we (almost always) hold replication studies to an extremely high standard.  that's why i'm surprised whenever i hear people say that researchers do replications in order to get an 'easy' publication.  replications are not for the faint of heart.  if you want to have a chance of getting a failed replication**** published in a good journal, here's what you often have to do: Continue reading

don’t you know who i am?


elephant seal, throwing his weight around

when i started my first job as associate editor, i was worried that i would get a lot of complaints from disgruntled authors.  i wasn't afraid of the polite appeals based on substantive issues, i was worried about the complaints that appeal to the authors' status, the "don't you know who i am?" appeal.

i never did get that kind of response, at least not from authors. but i saw something worse - a pretty common attitude that we should be judging papers based, in part, on who wrote them.  socially sanctioned status bias. not so much at the journals i worked with, but in the world of journals more broadly. like the Nature editorial, on whether there should be author anonymity in peer review, that argued that "identifying authors stimulates referees to ask appropriate questions (for example, differentiating between a muddy technical explanation and poor experimental technique)." the argument seems to be that some people should be given a chance to clear up their muddy explanations and others should not. or the editor who wrote in The Chronicle of Higher Education just a few days ago that "Editors rarely send work out to trusted reviewers if it comes from unproven authors using jazz-hands titles."  leaving aside the contentious issue of jazz-hands titles, when did we accept that it was ok to treat papers from 'unproven authors' differently? Continue reading

your inner third grader



it felt like a confessional. 'sometimes, we say we predicted things that we didn't actually predict.'  i paused, embarrassed.  'i know.' 'i'm sorry,' she said, 'but that sounds like something even a third grader would know is wrong.' 'i know.' i tried not to make excuses, but to explain how this happened.  how an entire field convinced itself that HARKing (Hypothesizing After the Results are Known, Kerr, 1998) is ok. Continue reading

SPPS Special Issue on Research Methods

Social Psychological and Personality Science is now accepting submissions for a forthcoming special issue on “New developments in research methods for social/personality psychology.”

Recent advances in research design (e.g., crossed designs; Westfall, Kenny, & Judd, 2014), analysis (e.g., Bayesian approaches; Wagenmakers et al., under review), and meta-science (e.g., p-curve; Simonsohn, Simmons, & Nelson, in press) have opened up new possibilities for improving research methods in social and personality psychology.

Continue reading

the good, the bad, and the ugly


one of the themes of the replicability movement has been the Campaign for Real Data (Kaiser, 2012).  the idea is that real data, data that haven't been touched up by QRPs, are going to be imperfect, sometimes inconsistent.  part of what got us into this mess is the expectation that each paper needs to tell a perfect story, and any inconsistent results need to be swept under the rug.
whenever this comes up, i worry that we are sending researchers a mixed message.  on one hand we're saying that we should expect results to be messy.  on the other hand we're saying that we're going to expect even more perfection than before.  p = .04 used to be just fine; now it makes editors and reviewers raise an eyebrow and consider whether there are other signs that the result may not be reliable.  so which is it: are we going to tolerate more messiness, or are we going to expect stronger results?
yes.
on the face of it, these two values (more tolerance for messiness vs. more precise/significant estimates) seem contradictory.  but when we dig a little deeper, i don't think they are.  and i think it's important for people to be clear about what kind of messy is good-messy and what kind of messy is bad-messy. Continue reading
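the eyebrow-raising at p = .04 has a statistical rationale worth spelling out. here is a minimal stdlib-only sketch (not from the post; the z-test setup, sample sizes, and effect size are illustrative assumptions): under the null hypothesis, p-values are uniform on [0, 1], while a real effect piles p-values up near zero. so a literature whose p-values cluster just under .05 looks more like selection than like strong evidence - the intuition behind p-curve-style diagnostics.

```python
import math
import random

def p_value(mu, n, rng):
    """Two-sided p from a z-test of the mean of n draws from N(mu, 1)."""
    xbar = sum(rng.gauss(mu, 1) for _ in range(n)) / n
    z = abs(xbar) * math.sqrt(n)
    # Standard normal survival: 1 - Phi(z), via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def share(ps, lo, hi):
    """Fraction of p-values falling in the interval (lo, hi]."""
    return sum(lo < p <= hi for p in ps) / len(ps)

rng = random.Random(0)
# 2000 simulated studies with no effect, and 2000 with a true effect (d = .5).
null_ps = [p_value(0.0, 20, rng) for _ in range(2000)]
real_ps = [p_value(0.5, 20, rng) for _ in range(2000)]

# Null: p-values are ~uniform, so the (.04, .05] bin holds about 1% of studies.
print(round(share(null_ps, 0.04, 0.05), 2))
# Real effect: p < .01 is far more common than a p just under .05.
print(share(real_ps, 0.00, 0.01) > share(real_ps, 0.04, 0.05))  # True
```

the point of the sketch: a single p = .04 is perfectly legitimate, but when results near the threshold are over-represented across a literature, that pattern itself is informative.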

is this what it sounds like when the doves cry?


there are so many (very good) stories already about the RP:P that it's easy to feel like we're overreacting.  but there is a lot at stake.  some people feel that the reputation of our field is at stake.  i don't share that view.  i trust the public to recognize that this conversation about our methods is healthy and normal for science.  the public accepts that science progresses slowly - we waited over 40 years for the higgs boson, and, according to wikipedia, we're still not sure we found it.  i don't think we're going to look that bad if psychologists, as a field, ask the public for some patience while we improve our methods.  if anything, i think what makes us look bad is when psychology studies are reported in a way that is clearly miscalibrated, that makes us sound much more confident than scientists have any right to be when just starting out investigating a new topic.

what i think is at stake is not the reputation of our field, but our commitment to trying out these new practices and seeing how our results look.

in the press release, Gilbert is quoted as saying that the RP:P paper led to changes in policy at many scientific journals.  that's not my impression.  my impression is that the changes that happened came before the RP:P was published.  i also haven't seen a lot of big changes. Continue reading

it’s the end of the world as we know it… and i feel fine

what, me worry?

several people have told me recently that they are incredibly depressed about the news from psychology (and medicine, and political science, and economics, and biology...).  which has made me wonder, why am i completely fine? let's be real. one reason i'm not upset is because i have tenure (and a job that i love).  my heart goes out to everyone trying to navigate this brave new world without job security. but even some of my tenured friends are depressed.  so what's my deal? maybe it's just my sunny disposition.* more likely, i think i am particularly good at focusing on the right counterfactuals. Continue reading

and now for something a little more uplifting


if you read my blog, you might think everything is shit and we might as well go drink whiskey and play euchre.  that's definitely plan b.  but for now, i'm sticking with plan a: SIPS.* you should come, too.

Society for the Improvement of Psychological Science (SIPS)**
Inaugural Meeting
June 6-8th, 2016
Center for Open Science, Charlottesville, VA

SIPS is a new group created to bring together scholars working to improve methods and practices in psychological science. The aim of the inaugural meeting is to generate ideas, goals, and actionable plans to improve psychological science, including:

- Improving the training and research practices in psychological science
- Improving institutional practices to incentivize better scientific practices (e.g., journals, societies, departments, and universities)
- Conducting meta-science, empirical tests of reforms, and critical self-evaluation
- Outreach within and outside psychology (including attention to diversity)

The meeting will be a dynamic agenda of very brief presentations, open discussion, break-out work, and action planning.  We have a draft agenda here. If you're interested in coming, sign up here!

Because of practical constraints, registration is limited to approximately 60 participants for the inaugural meeting. Continue reading