It’s deja vu all over again – Brent Roberts (pigee)

I seem to replicate the same conversation on Twitter every time a different sliver of the psychological guild confronts open science and reproducibility issues. Each conversation starts and ends the same way as conversations I’ve had or seen 8 years ago, 4 years ago, 2 years ago, last year, or last month. In some ways that’s a good sign. Awareness of the issue of reproducibility and efforts to improve our science are reaching beyond the subfields that have been at the center of the discussion. Greater engagement with these issues is ideal. The problem is that each time a new group realizes that their own area is subject to criticism, they raise the same objections based on the same misconceptions, leading to the same mistaken attack on the messengers: They claim that scholars pursuing reproducibility or meta-science issues are a highly organized phalanx of intransigent, inflexible authoritarians who are insensitive to important differences among subfields and who want to impose a monolithic and arbitrary set of requirements on all research. In these “conversations,” scholars recommending changes to the way science is conducted have been unflatteringly described as sanctimonious, despotic, authoritarian, doctrinaire, and militant, and creatively labeled with names such as shameless little bullies, assholes, McCarthyites, second stringers, methodological terrorists, fascists, Nazis, Stasi, witch hunters, reproducibility bros, data parasites, destructo-critics, replication police, self-appointed data police, destructive iconoclasts, vigilantes, accuracy fetishists, and human scum. Yes, every one of those terms has been used in public discourse, typically by eminent (i.e., senior) psychologists. Villainizing those calling for methodological reform is ingenious, particularly if you have no compelling argument against the proposed changes*. Continue reading

Measuring Happiness Is Harder (But Maybe Also Easier) Than You Think – Rich Lucas (The Desk Reject)

tl;dr: Experiential measures of subjective well-being solve, in a clear and obvious way, one possible threat to validity that affects global self-reports. The obviousness of this advantage, however, sometimes leads people to overlook the fact that experiential measures may have their own unique threats to validity. Specifically, the fact that experiential measures require respondents to answer the same questions over and over again may pull for responses that deviate from what the researcher expects.

Are Conservatives Healthier than Liberals? – Scott McGreal (Unique—Like Everybody Else)

New research suggests that conservatives may have a health advantage because they value personal responsibility more. This may reflect greater conscientiousness.

How to Measure Happiness – Rich Lucas (The Desk Reject)

Anyone who has ever taught Intro Psych knows that one of the most popular lectures is the one that covers visual illusions. It’s easy to understand why. Illusions like the famous Müller-Lyer illusion shown below provide a clear example of times when our perception of the world is clearly, and demonstrably wrong. Although it’s almost impossible not to see the three lines as differing in length, simply removing the arrows at the ends shows that they are all, in fact, the same.

Using R to Create Multiple Choice Exams – Rich Lucas (The Desk Reject)

About ten years ago, I made one of the best work-related decisions I’ve ever made: Switching from Windows (and SPSS) to Linux (and R). Not having to deal with the endless load times of SPSS or the inevitable Windows slowdowns that make you think, after six months of use, that you need a new computer—plus all the unfixable bugs, unexplainable crashes, and generally strange behavior—has done more for my blood pressure than any medication or change in diet ever could.

Are Murderers Unfairly Labelled as Psychopaths? – Scott McGreal (Unique—Like Everybody Else)

Despite claims that psychopaths are unfairly stigmatised and labelled as killers, the link between murder and psychopathy is a strong one.

This is your Brain on Psychology – This is your Psychology on Brain (a guest post by Rob Chavez) – Sanjay Srivastava (The Hardest Science)

The following is a guest post by Rob Chavez. If I’m ever asked ‘what was a defining moment of your career?’, I can think of a very specific instance that has stuck with me since my early days as a student in social neuroscience. I was at a journal club meeting where we were discussing an early paper using fMRI to investigate facial processing when looking at members of different racial groups. In this paper, the researchers found greater activity in the amygdala for viewing black faces than for white faces. Although the authors were careful not to say it explicitly, the implication for most readers was clear: The ‘threat center’ turned on for black faces more than white faces, therefore the participants may have implicit fear of black faces. Several students in the group brought up criticisms of that interpretation revolving around how the amygdala is involved in other processes, and we started throwing around ideas for study designs to possibly tease apart alternative explanations (e.g. lower-level visual properties, ambiguity, familiarity) that might also account for the amygdala activity. Then it happened: The professor in the room finally chimed in. “Look, these are interesting ideas, but they don’t really tell us anything about racial bias. I don’t really care about what the amygdala does; I just care what it can tell us about social psychology.” Even in the nascent stages of my career, I was somewhat flabbergasted. Continue reading

Does Regional Temperature Affect Personality? – Scott McGreal (Unique—Like Everybody Else)

A study finds that regional temperatures are associated with personality traits in a sort of "Goldilocks effect": mild environments may have beneficial effects on development.

Data analysis is thinking, data analysis is theorizing – Sanjay Srivastava (The Hardest Science)

There is a popular adage about academic writing: “Writing is thinking.” The idea is this: There is a simple view of writing as just an expressive process – the ideas already exist in your mind and you are just putting them on a page. That may in fact be true some of the time, like for day-to-day emails or texting friends or whatever. But “writing is thinking” reminds us that for most scholarly work, the process of writing is a process of continued deep engagement with the world of ideas. Sometimes before you start writing you think you had it all worked out in your head, or maybe in a proposal or outline you wrote. But when you sit down to write the actual thing, you just cannot make it make sense without doing more intellectual heavy lifting. It’s not because you forgot, it’s not because you “just” can’t find the right words. It’s because you’ve got more thinking to do, and nothing other than sitting down and trying to write is going to show that to you.* Something that is every bit as true, but less discussed and appreciated, is that in the quantitative sciences, the same applies to working with data. Data analysis is thinking. The ideas you had in your head, or the general strategy you wrote into your grant application or IRB protocol, are not enough. If that is all you have done so far, you almost always still have more thinking to do. Continue reading

the proof of the pudding is not, it turns out, in the eating – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]
[image: bears eating pumpkins]
happy halloween
here's an argument i've heard against registered reports and results-blind reviewing: "judging studies based on their methods is like judging a baking contest based on the recipes."  the implication being that this would be ridiculous.
i've been thinking a lot about this analogy and i love it.  not because i agree with it, but because i think it gets at the crux of the disagreement about the value of negative (null) results.  it's about whether we think the value of a study comes from its results or its methods.
the baking contest analogy rests on the assumption that the goal of science is to produce the best-tasting results.  according to this logic, the more we can produce delicious, mouth-watering results, the better we're doing.  accumulating knowledge is like putting together a display case of exquisite desserts.  and being able to produce a delicious result is itself evidence that your methods are good.  we know a good study when we see a juicy result.  after all, you wouldn't be able to produce a delicious cake if your recipe was crap.
this analogy probably sounds reasonable in part because of how we talk about negative results - as failures.  in baking that's probably apt - i don't want to eat your failed donut.*  but in science, the negative result might be the accurate one - you can't judge the truthiness of the result from the taste it leaves in your mouth. Continue reading