Category Archives: sometimes i’m wrong

looking under the hood – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

[image: lipstick on a hippo]

before modern regulations, used car dealers didn't have to be transparent.  they could take any lemon, pretend it was a solid car, and fleece their customers.  this is how used car dealers became the butt of many jokes.
scientists are in danger of meeting the same fate.*  the scientific market is unregulated, which means that scientists can wrap their shaky findings in nice packaging and fool many, including themselves.  in a paper that just came out in Collabra: Psychology,** i describe how lessons from the used car market can save us.  this blog post is the story of how i came up with this idea.
last summer, i read Smaldino and McElreath's great paper on "The natural selection of bad science."  i agreed with almost everything in there, but there was one thing about it that rattled me.  their argument rests on the assumption that journals do a bad job of selecting for rigorous science. they write "An incentive structure that rewards publication quantity will, in the absence of countervailing forces, select for methods that produce the greatest number of publishable results." (p. Continue reading

power grab: why i’m not that worried about false negatives – Simine Vazire (sometimes i'm wrong)



i've been bitching and moaning for a long time about the low statistical power of psych studies.  i've been wrong.  our studies would be underpowered if we actually followed the rules of Null Hypothesis Significance Testing (but kept our sample sizes as small as they are).  but given the way we actually do research, our effective statistical power is very high, much higher than our small sample sizes should allow.  let's start at the beginning.

background (skip this if you know NHST)

[table: Null Hypothesis Significance Testing (over)simplified]

in this table, power is the probability of ending up in the bottom right cell if we are in the right column (i.e. Continue reading
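(a quick illustration, not from the original post: a minimal sketch of how that bottom-right cell, power, can be estimated by simulation. the effect size, per-group n, alpha, and the choice of a two-sample t-test are all illustrative assumptions of mine, not values from the post.)

```python
# minimal sketch (not from the post): estimating power by simulation.
# power = P(p < alpha | there really is an effect), i.e. the bottom-right cell.
# the effect size d, group size n, and alpha below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(d=0.4, n=20, alpha=0.05, n_sims=10_000):
    """Estimate power of a two-sample t-test when the true effect size is d."""
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)    # null group
        treatment = rng.normal(d, 1.0, n)    # group with a real effect of size d
        p = stats.ttest_ind(control, treatment).pvalue
        rejections += p < alpha              # count significant results
    return rejections / n_sims

print(simulated_power())  # roughly .23 with these values
```

with 20 per group and a medium-ish effect, nominal power comes out well below the conventional 80% target, which is the usual starting point for the 'psych studies are underpowered' complaint.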

now is the time to double down on self-examination – Simine Vazire (sometimes i'm wrong)


it can be tempting, when contemplating the onslaught that science is likely to face from the next administration and congress, to scrub away any sign of self-criticism or weakness that could be used against us.  as a "softer" science, psychology has reason to be especially nervous.*

but hiding our flaws is exactly the wrong response.  if we do that, we will be contributing to our own demise. the best weapon anti-science people can use against us is to point to evidence that we are no different from other ways of knowing.  that we have no authority when it comes to empirical/scientific questions.  our authority comes from the fact that we are open to scrutiny, to criticism, to being wrong.  the failed replications, and the fact that we are publishing and discussing them openly, are the best evidence we have that we are a real science.  that we are different from propaganda, appeals to authority, or intuition.  we are falsifiable.  the proof is that we have, on occasion, falsified ourselves.
we should wear our battle with replicability as a badge of honor. Continue reading

who will watch the watchers? – Simine Vazire (sometimes i'm wrong)

 
[image: he makes it look easy]
i wasn't going to write a blog post about susan fiske's column.  many others have already raised excellent points about the earlier draft of her column, and about the tone discussion more generally.  but i have two points to make, and poor self-control.
point #1.
i have a complicated relationship with the tone issue.  on one hand, i hate the "if you can't stand the heat, get out of the kitchen" attitude.  being able to stand the heat is not a personal accomplishment.  it's often a consequence of privilege.  those who have been burnt many times before (by disadvantage, silencing, etc.) are less likely to want to hang out in a place with a ton of heat.  and we need to make our field welcoming to those people who refuse to tolerate bullshit.  we need more people like that, not fewer. Continue reading

i have found the solution and it is us – Simine Vazire (sometimes i'm wrong)


[image: bear, having recently joined SIPS]

i have found scientific utopia.*

sometimes, when i lie awake at night, it's hard for me to believe that science will ever look the way i want it to look,** with everyone being skeptical of preliminary evidence, conclusions being circumscribed, studies being pre-registered, data and materials being open, and civil post-publication criticism being a normal part of life.
then i realized that utopia already exists.  it's how we treat replication studies.
i've never tried to do a replication study,*** but some of my best friends (and two of my grad students) are replicators.  so i know a little bit about the process of trying to get a replication study published.  short version: it's super hard.
we (almost always) hold replication studies to an extremely high standard.  that's why i'm surprised whenever i hear people say that researchers do replications in order to get an 'easy' publication.  replications are not for the faint of heart.  if you want to have a chance of getting a failed replication**** published in a good journal, here's what you often have to do: Continue reading

don’t you know who i am? – Simine Vazire (sometimes i'm wrong)


[image: elephant seal, throwing his weight around]

when i started my first job as associate editor, i was worried that i would get a lot of complaints from disgruntled authors.  i wasn't afraid of the polite appeals based on substantive issues; i was worried about the complaints that appeal to the authors' status, the "don't you know who i am?" appeal.

i never did get that kind of response, at least not from authors. but i saw something worse: a pretty common attitude that we should be judging papers based, in part, on who wrote them.  socially sanctioned status bias. not so much at the journals i worked with, but in the world of journals more broadly. like the Nature editorial, on whether there should be author anonymity in peer review, which argued that "identifying authors stimulates referees to ask appropriate questions (for example, differentiating between a muddy technical explanation and poor experimental technique)." the argument seems to be that some people should be given a chance to clear up their muddy explanations and others should not. or the editor who wrote in The Chronicle of Higher Education just a few days ago that "Editors rarely send work out to trusted reviewers if it comes from unproven authors using jazz-hands titles."  leaving aside the contentious issue of jazz-hands titles, when did we accept that it was ok to treat papers from 'unproven authors' differently? Continue reading

your inner third grader – Simine Vazire (sometimes i'm wrong)


[image: embarrassed bear]

it felt like a confessional.

'sometimes, we say we predicted things that we didn't actually predict.'  i paused, embarrassed.

'i know.'

'i'm sorry,' she said, 'but that sounds like something even a third grader would know is wrong.'

'i know.'

i tried not to make excuses, but to explain how this happened.  how an entire field convinced itself that HARKing (Hypothesizing After the Results are Known; Kerr, 1998) is ok. Continue reading