Author Archives: Simine Vazire

Guest Post by Shira Gabriel: Don’t Go Chasing Waterfalls

 [DISCLAIMER: The opinions expressed in my posts, and guest posts, are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

Guest post by Shira Gabriel

Don’t go chasing waterfalls; please stick to the rivers and the lakes that you're used to.

I haven’t always responded enthusiastically to all the changes in the field around scientific methods.  Even changes that I know are for the better, like attending to power and thus running fewer studies with more participants, I have gone along with grudgingly.  It is the same attitude I have towards eating more vegetables and less sugar.  I miss cake and I am tired of carrots, but I know that it is best in the long run. I miss running dozens of studies in a semester, but I know that it is best in the long run. It is not that I never knew about power; I just never focused on it, like many other people in the field.  I had vague ideas of how big my cell sizes should be (ideas that were totally wrong, I have learned since) and I would run studies using those vague ideas.  If I got support for my hypotheses -- great! Continue reading
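To see why those vague ideas about cell sizes can be so far off, here is a minimal power calculation, a sketch assuming an illustrative effect size (Cohen's d = 0.4) and the conventional 80% power target; the numbers are not from the original post:

```python
# Sketch: required per-cell n for a two-group, between-subjects design.
# The effect size and power target below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_cell = TTestIndPower().solve_power(
    effect_size=0.4,  # assumed Cohen's d (a "typical" social psych effect)
    power=0.80,       # conventional power target
    alpha=0.05,       # two-sided significance threshold
)
print(f"participants needed per cell: {n_per_cell:.0f}")  # ~99
```

Roughly 99 participants per cell -- several times the cell sizes many of us once took for granted.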

results blind vs. results bling*

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

in many areas of science, our results sections are kind of like instagram posts.  beautiful, clear, but not necessarily accurate. researchers can cherry-pick the best angle, filter out the splotches, and make an ordinary hot dog look scrumptious (or make a lemon look like a spiffy car).**  but what's even more fascinating to me is that our reactions to other people's results are often like our reactions to other people's instagram posts: "wow! that's aMAZing! how did she get that!" i've fallen prey to this myself.  i used to teach bem's chapter on "writing the empirical journal article," which tells researchers to think of their dataset as a jewel, and "to cut and polish it, to select the facets to highlight, and to craft the best setting for it."  i taught this to graduate students, and then i would literally turn around, read a published paper, and think "what a beautiful jewel!"*** Continue reading

alpha wars

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

i was going to do a blog post on having thick skin but still being open to criticism, and how to balance those two things.  then a paper came out, which i’m one of 72 authors on, and which drew a lot of criticism, much of it from people i respect a ton and often agree with (one of them is currently in my facebook profile picture, and is one of the smartest people i know).  so this is going to be a two-fer blog post: one on my thoughts about arguments against the central claim in the paper, and one on the appropriate thickness for scientists’ skin.

PART I: the substantive argument*

in our paper we argue that we should introduce a new threshold for statistical significance (and for claiming new discoveries), and make it .005. i want to take a moment to emphasize some other things we said in the paper.  we said that .005 should not be a threshold for publication. Continue reading
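for intuition about what's at stake in the threshold debate, here is a rough sketch of the arithmetic (my numbers are illustrative assumptions, not the paper's): the share of "significant" results that are false discoveries depends on the prior odds that a tested effect is real.

```python
# sketch: false discovery rate among significant results, assuming a given
# prior probability that tested effects are real and a given power.
def false_discovery_rate(alpha, power, prior_real):
    """P(effect is null | p < alpha) under the stated assumptions."""
    true_positives = prior_real * power
    false_positives = (1 - prior_real) * alpha
    return false_positives / (true_positives + false_positives)

# illustrative assumptions: 1 in 10 tested effects is real, 80% power
for alpha in (0.05, 0.005):
    fdr = false_discovery_rate(alpha, power=0.80, prior_real=0.10)
    print(f"alpha = {alpha}: ~{fdr:.0%} of significant results are false")
# alpha = 0.05: ~36% false discoveries; alpha = 0.005: ~5%
```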

what is rigor?

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

recently i wrote a thing where i was the eight millionth person to say that we should evaluate research based on its scientific qualities (in this case, i was arguing that we should not evaluate research based on the status or prestige of the person or institution behind it).  i had to keep the column short, so i ended with the snappy line "Let's focus less on eminence and more on its less glamorous cousin, rigor." simple, right? the question of how to evaluate quality came up again on twitter,* and Tal Yarkoni expressed skepticism about whether scientists can agree on what makes for a high quality paper.

[tweet from Tal Yarkoni]

there's good reason to be skeptical that scientists - even scientists working on the same topic - would agree on the quality of a specific paper.  indeed, the empirical evidence regarding consensus among reviewers during peer review suggests there is ample disagreement (see this systematic review, cited in this editorial that is absolutely worth a read).

so, my goal here is to try to outline what i mean by "rigor" - to operationally define this construct at least in the little corner of the world that is my mind. Continue reading

be your own a**hole

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

do you feel frustrated by all the different opinions about what good science looks like?  do you wish there were some concrete guidelines to help you know when to trust your results?  well don't despair! it's true that many of the most hotly debated topics in replicability don't have neat answers.  we could go around and around forever.  so in these tumultuous times, i like to look for things i can hold on to - things that have mathematical answers. here's one: what should we expect p-values for real effects to look like?  everyone's heard a lot about this,* thanks to p-curve, but every time i run the numbers i find that my intuitions are off. way off.  so i decided to write a blog post to try to make these probabilities really sink in. Continue reading
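to make the math concrete, here is a quick simulation sketch (the effect size and per-group n are assumptions chosen to give roughly 80% power): when an effect is real, significant p-values should pile up at the small end, which is the intuition behind p-curve.

```python
# sketch: distribution of p-values for a real effect (two-group t-tests).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 0.5, 64  # assumed true effect size and per-group n (~80% power)
p = np.array([
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(d, 1, n)).pvalue
    for _ in range(10_000)
])
sig = p[p < 0.05]
print(f"simulated power: {len(sig) / len(p):.2f}")                      # ~0.80
print(f"significant ps that are below .01: {(sig < 0.01).mean():.2f}")  # ~0.74
```

with a real effect and decent power, most significant p-values land well below .05; a p-curve that hugs the .05 boundary is a red flag.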

Perspectives You Won’t Read in Perspectives: Thoughts on Gender, Power, & Eminence

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

This is a guest post by Katie Corker on behalf of a group of us

Rejection hurts. No amount of Netflix binge watching, nor ice cream eating, nor crying to one's dog* really takes the sting out of feeling rejected. Yet, as scientific researchers, we have to deal with an almost constant stream of rejection - there's never enough grant money or journal space to go around.

Which brings us to today's topic. All six of us were recently rejected** from the Perspectives on Psychological Science special issue featuring commentaries on scientific eminence. The new call for submissions was a follow-up to an earlier symposium entitled "Am I Famous Yet?", which featured commentaries on fame and merit in psychological research from seven eminent white men and Alice Eagly.*** The new call was issued in response to a chorus of nasty women and other dissidents who insisted that their viewpoints hadn't been represented by the scholars in the original special issue. The new call explicitly invited these "diverse perspectives" to speak up (in 1,500 words or less****).

Each of the six of us independently rose to the challenge and submitted comments. None of us were particularly surprised to receive rejections - after all, getting rejected is just about the most ordinary thing that can happen to a practicing researcher. Continue reading

looking under the hood

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

[image: lipstick on a hippo]

before modern regulations, used car dealers didn't have to be transparent.  they could take any lemon, pretend it was a solid car, and fleece their customers.  this is how used car dealers became the butt of many jokes.

scientists are in danger of meeting the same fate.*  the scientific market is unregulated, which means that scientists can wrap their shaky findings in nice packaging and fool many, including themselves.  in a paper that just came out in Collabra: Psychology,** i describe how lessons from the used car market can save us.  this blog post is the story of how i came up with this idea.

last summer, i read Smaldino and McElreath's great paper on "The natural selection of bad science."  i agreed with almost everything in there, but there was one thing about it that rattled me.  their argument rests on the assumption that journals do a bad job of selecting for rigorous science. they write "An incentive structure that rewards publication quantity will, in the absence of countervailing forces, select for methods that produce the greatest number of publishable results." (p. Continue reading

power grab: why i’m not that worried about false negatives

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

i've been bitching and moaning for a long time about the low statistical power of psych studies.  i've been wrong. our studies would be underpowered if we actually followed the rules of Null Hypothesis Significance Testing (but kept our sample sizes as small as they are).  but given the way we actually do research, our effective statistical power is very high, much higher than our small sample sizes should allow. let's start at the beginning.

background (skip this if you know NHST)

Null Hypothesis Significance Testing (over)simplified:

                     H0 true (no effect)    H0 false (real effect)
  fail to reject H0  correct (1 - α)        Type II error (β)
  reject H0          Type I error (α)       correct, i.e. power (1 - β)

in this table, power is the probability of ending up in the bottom right cell if we are in the right column (i.e. Continue reading
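as a sanity check on that definition, here is a minimal simulation sketch (the effect size, n, and alpha are illustrative assumptions, not numbers from the post): how often do we land in the bottom right cell when the effect is real?

```python
# sketch: power as the long-run rate of (correctly) rejecting H0
# when a real effect exists.  parameters are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, alpha, trials = 0.4, 50, 0.05, 10_000
rejections = sum(
    stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
)
print(f"simulated power: {rejections / trials:.2f}")  # ~0.51: underpowered
```

with 50 participants per cell and a true effect of d = 0.4, we land in the bottom right cell only about half the time -- which is roughly the boat many of our small-sample studies were in.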