Author Archives: David Funder

The Real Source of the Replication Crisis – David Funder (funderstorms)

“Replication police.” “P-squashers.” “Hand-wringers.” “Hostile replicators.”  And of course, who can ever forget, “shameless little bullies.”  These are just some of the labels applied to what has become known as the replication movement, an attempt to improve science (psychological and otherwise) by assessing whether key findings can be reproduced in independent laboratories.

Replication researchers have sometimes targeted findings they found doubtful.  The grounds for finding them doubtful have included (a) the effect is “counter-intuitive” or in some way seems odd (1), (b) the original study had a small N and an implausibly large effect size, (c) anecdotes (typically heard at hotel bars during conferences) abound concerning naïve researchers who can’t reproduce the finding, (d) the researcher who found the effect refuses to make data public, has “lost” the data or refuses to answer procedural questions, or (e) sometimes, all of the above.

Fair enough. If a finding seems doubtful, and it’s important, then it behooves the science (if not any particular researcher) to get to the bottom of things.  And we’ve seen a lot of attempts to do that lately. Famous findings by prominent researchers have been put through the replication wringer, sometimes with discouraging results.  But several of these findings also have been stoutly defended, and indeed the failure to replicate certain prominent effects seems to have stimulated much of the invective thrown at replicators more generally.

When Did We Get so Delicate? – David Funder (funderstorms)

Replication issues are rampant these days. The recent round of widespread concern over whether supposedly established findings can be reproduced began in biology and the related life sciences, especially medicine. Psychologists entered the fray a bit later, largely in a constructive way. Individuals and professional societies published commentaries on methodology, journals revised their policies to promote data transparency and encourage replication, and the Center for Open Science took concrete steps to make doing research “the right way” easier. As a result, psychology came to be viewed not as the poster child of replication problems but, quite the opposite, as the best place to look for solutions to them.

So what just happened? In the words of a headline in the Chronicle of Higher Education, the situation in psychology has suddenly turned “ugly and odd.”  Some psychologists whose findings were not replicated are complaining plaintively about feeling bullied. Others are chiming in about how terrible it is that people’s reputations are ruined when others can’t replicate their work. People doing replication studies have been labeled the “replication police,” “replication Nazis” and even, in one prominent psychologist’s already famous phrase, “shameless little bullies.” This last-mentioned writer also passed along an anonymous correspondent’s description of replication as a “McCarthyite nightmare.”  More sober commentators have expressed worries about “negative psychology” and “p-squashing.” Concern has shifted away from the difficulties faced by those who can’t make famous effects “work,” and the dilemma about whether they dare to go public when this happens.

The “Fundamental Attribution Error” and Suicide Terrorism – David Funder (funderstorms)

Review of: Lankford, A. (2013). The myth of martyrdom: What really drives suicide bombers, rampage shooters, and other self-destructive killers. Palgrave Macmillan.
In press, Behavioral and Brain Sciences (published version may differ slightly).

In 1977, the social psychologist Lee Ross coined the term “fundamental attribution error” to describe the putative tendency of people to overestimate the importance of dispositional causes of behavior, such as personality traits and political attitudes, and underestimate the importance of situational causes, such as social pressure or objective circumstances.  Over the decades since, the term has firmly rooted itself in the conventional wisdom of social psychology, to the point where it is sometimes identified as the field’s basic insight (Ross & Nisbett 2011). However, the actual research evidence purporting to demonstrate this error is surprisingly weak (see, e.g., Funder 1982; Funder & Fast 2010; Krueger & Funder 2004), and at least one well-documented error (the “false consensus bias”; Ross 1977a) implies that people overestimate the degree to which their behavior is determined by the situation.

Moreover, everyday counter-examples are not difficult to formulate. Consider the last time you tried, in an argument, to change someone’s attitude. Was it easier or harder than you expected?  Therapeutic interventions and major social programs intended to correct dispositional problems, such as tendencies towards violence or alcoholism, are also generally less successful than anticipated. Work supervisors and even parents, who have a great deal of control over the situations experienced by their employees or children, similarly find it surprisingly difficult to control behaviors as simple as showing up on time or making one’s bed.

Why I Decline to do Peer Reviews (part two): Eternally Masked Reviews – David Funder (funderstorms)

In addition to the situation described in a previous post, there is another situation where I decline to do a peer review. First, I need to define a couple of terms. “Blind review” refers to the practice of concealing the identity of reviewers from authors. The reason seems pretty obvious. Scientific academia is a small world, egos are easily bruised, and vehicles for subtle or not-so-subtle vengeance (e.g., journal reviews and tenure letters) are readily at hand. If an editor wants an unvarnished critique, the reviewer’s identity needs to be protected. That’s why every journal I know of follows the practice of blind review.

“Masked review” is different. In this practice, the identity of the author(s) is concealed from reviewers. The well-intentioned reason is to protect authors from bias, such as bias against women, junior researchers, or researchers from non-famous institutions. Some journals use masked review for all articles; some offer the option to authors; some do not use it at all.


Can Personality Change? – David Funder (funderstorms)

Can personality change? In one respect, the answer is clearly “yes,” because ample evidence shows that, overall, personality does change. On average, as people get older (after about age 20), they become less neurotic and more conscientious, agreeable, and open, until about age 60 or so (Soto, John, Gosling & Potter, 2011). And then, after about age 65, they become on average less conscientious, agreeable, and extraverted – a phenomenon sometimes called the La Dolce Vita effect (Lucas & Donnellan, 2011). You no longer have to go to work every day, or socialize with people you don’t really like. Old age might really have some compensating advantages, after all.

But I think when people ask “can personality change” the inevitable consequences of age are not really what they have in mind. What they are asking is: Can personality change on purpose? Can I change my own personality? Or, can I change the personality of my child, or my spouse? One of the disconcerting things about being a psychologist is that the people I meet sometimes think I can answer questions like these. (This belief persists even if I try to beg off, saying “I’m not that kind of psychologist.”)

Here is the answer I have been giving for years: No.

NSF Gets an Earful about Replication – David Funder (funderstorms)

I spent last Thursday and Friday (February 20 and 21) at an NSF workshop concerning the replicability of research results. It was chaired by John Cacioppo and included about 30 participants, among them such well-known contributors to the discussion as Brian Nosek, Hal Pashler, Eric Eich, and Tony Greenwald, to name a few.  Participants also included officials from NIH, NSF, the White House Office of Science and Technology Policy, and at least one private foundation. I was invited, I presume, in my capacity as Past-President of SPSP and chair of an SPSP task force on research practices, which recently published a report on non-retracted PSPB articles by investigators who retracted articles elsewhere, and a set of recommendations for research and educational practice, which was just published in PSPR.

Committees, task forces and workshops – whatever you call them – about replicability issues have become almost commonplace.  The SPSP Task Force was preceded by a meeting and report sponsored by the European Association of Personality Psychology, and other efforts have been led by APS, the Psychonomic Society and other organizations.  Two symposia on the subject were held at the SPSP meeting in Austin just the week before.  But this discussion was perhaps special, because it was the first (to my knowledge) to be sponsored by the US government, with the explicit purpose of seeking advice about what NSF and other research agencies should do.  I think it is fair to say: When replication is discussed in a meeting with representatives from NIH, NSF and the White House, the issue is on the front burner!

The discussion covered several themes, some of which are more familiar than others.

Why I Decline to do Peer Reviews (part one): Re-reviews – David Funder (funderstorms)

Like pretty much everyone fortunate enough to occupy a faculty position in psychology at a research university, I am frequently asked to review articles submitted for publication to scientific journals. Editors rely heavily on these reviews in making their accept/reject decisions. I know: I’ve been an editor myself, and I experienced first-hand the frustration of trying to persuade qualified reviewers to help me assess the articles that flowed over my desk in seemingly ever-increasing numbers. So don’t get me wrong: I often do agree to do reviews – around 25 times a year, which is probably neither much above nor below the average for psychologists at my career stage. But sometimes I simply refuse; let me explain one reason why.

The routine process of peer review is that the editor reads a submitted article, selects 2 or 3 individuals thought to have reasonable expertise in the topic, and asks them for reviews. After some delays due to reviewers’ competing obligations, trips out of town, personal emergencies or – the editor’s true bane – lengthy failures to respond at all, the requisite number of reviews eventually arrive. In a very few cases, the editor reads the reviews, reads the article, and accepts it for publication. In rather more cases, the editor rejects the article. The authors of the remaining articles get a letter inviting them to “revise and resubmit.” In such cases, in theory at least, the reviewers and/or the editor see a promising contribution. Perhaps a different, more informative statistic could be calculated, an omitted relevant article cited, or a theoretical derivation explained more clearly. But the preliminary decision clearly is – or should be – that the research is worth publishing; it could just be reported a bit better.


Don’t blame Milgram – David Funder (funderstorms)

I’m motivated to write this post because of a new book that, according to an NPR interview with its author, attacks the late Stanley Milgram for having misled us about the human propensity to obey.  He overstated his case, she claims, and also conducted unethical research.

The Milgram obedience studies of the 1960s are probably the most famous research in the history of social psychology.  As the reader almost certainly knows, subjects were ordered to give apparently harmful – perhaps even fatal – electric shocks to an innocent victim (who was, fortunately, an unharmed research assistant).  The studies found that a surprising number of ordinary people followed orders to the hilt.

Accounts of these studies in textbooks and in popular writings usually make one of two points, and often both.  (1)  Milgram showed that anybody, or almost anybody, would obey orders to harm an innocent victim if the orders came from someone in an apparent position of authority.  (2) Milgram showed that the “power of the situation” overwhelms the “power of the person”; the experimenter’s orders were so strong that they overwhelmed personal dispositions and individual differences.  Both of these points are, indeed, dead wrong.  But their promulgation is not Milgram’s fault.

Consider each point, and what Milgram said (or didn’t say) about them.

One example of how to do it – David Funder (funderstorms)

In a previous post, I wrote about the contentious atmosphere that so often surrounds replication studies, and fantasized a world in which one might occasionally see replication researchers and the original authors come together in “a joint effort to share methods, look at data together, and come to a collaborative understanding of an important scientific issue.”  Happily, one example that comes close to this ideal has recently been accepted for publication in Psychological Science — the same journal that published the original paper.  The authors of both the original and replication studies appear to have worked together to share information about procedures and analyses, which, while perhaps not a full collaboration, is at least cooperation of a sort that’s seen too rarely.  The result was that the original, intriguing finding did not replicate; two large new studies obtained non-significant findings in the wrong direction.  The hypothesis that anxiously attached people might prefer warm foods when their attachment concerns are activated was provocative, to say the least.  But it seems to have been wrong.

With this example now out there, I hope others follow the same path towards helping the scientific literature perform the self-correcting process that, in principle, is its principal distinctive advantage.  I also hope that, one of these days, an attempt to independently replicate a provocative finding will actually succeed!  Now that would be an important step forward.

Sanjay Srivastava and Job van Wolferen have also commented on this replication study.


A Replication Initiative from APS – David Funder (funderstorms)

Several of the major research organizations in psychology, including APA, EAPP (European Association of Personality Psychology) and SPSP, have been talking about the issue of replicability of published research, but APS has made the most dramatic move so far to actually do something about it.  The APS journal Perspectives on Psychological Science today announced a new policy to enable the publication of pre-registered, robust studies seeking to replicate important published findings.  The journal will add a new section for this purpose, edited by Dan Simons and Alex Holcombe.

This idea has been kicked around in other places, including proposals for new journals exclusively dedicated to replication studies.  One of the most interesting aspects of the new initiative is that instead of isolating replications in an independent journal that few people might see, they will appear in an already widely read and prestigious journal with a high impact factor.

When a similar proposal — in the form of a suggested new journal — was floated in a meeting I attended a few weeks ago, it quickly stimulated controversy. Some saw the proposal as a self-defeating attack on our own discipline that would only undermine the credibility of psychological research.  Others saw it as a much-needed self-administered corrective action; better to come from within the field than be imposed from outside. And still others — probably the largest group — raised questions about the specifics of implementation, and got a bit bogged down in them.  For example, what will stop a researcher from running a failed replication study, and only then “pre-registering” it?  How many failed replications does it take to overturn the conclusions of a published study, and what does “failed replication” mean exactly, anyway?  What degree of statistical power should replication studies be required to have, and what effect size should be used to make this calculation?
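
To make that last question concrete, here is a minimal sketch of the sample-size arithmetic it implies. It is not from the post: the effect sizes d = 0.5 and d = 0.3 below are purely hypothetical, chosen only to show how steeply the required number of participants grows as the assumed effect shrinks.

```python
# Illustrative sketch (not from the post): how many participants per group a
# two-group replication needs to detect an assumed effect size d with 80% power
# at a two-sided alpha of .05. The effect sizes 0.5 and 0.3 are hypothetical.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for d in (0.5, 0.3):
    n_per_group = power_analysis.solve_power(
        effect_size=d,        # assumed standardized mean difference (Cohen's d)
        alpha=0.05,           # two-sided false-positive rate
        power=0.80,           # desired probability of detecting the effect
        alternative="two-sided",
    )
    print(f"assumed d = {d}: about {n_per_group:.0f} participants per group")
```

Under these illustrative assumptions the required sample nearly triples, from roughly 64 to roughly 176 participants per group, as the assumed effect drops from 0.5 to 0.3, which is exactly why the choice of effect size to use in the calculation is so contentious.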