Author Archives: Brent Roberts

Andrew Gelman’s blog about the Fiske fiasco – Brent Roberts (pigee)

Some of you might have missed the kerfuffle that erupted in the last few days over a pre-print of an editorial written by Susan Fiske for the APS Observer referring to us as “methodological terrorists”. Andrew Gelman’s blog reposts Fiske’s piece, puts it in historical context, and does a fairly good job of articulating why it is problematic beyond the terminological hyperbole that Fiske employs. We are reposting it for your edification.
What has happened down here is the winds have changed

The Power Dialogues – Brent Roberts (pigee)

The following is a hypothetical exchange between a graduate student and Professor Belfry-Roaster. The names have been changed to protect the innocent….

Budlie Bond: Professor Belfry-Roaster, I was confused today in journal club when everyone started discussing power. I’ve taken my grad stats courses, but they didn’t teach us anything about power. It seemed really important, but it also seemed controversial. Can you tell me a bit more about power and why people care so much about it?

Prof. Belfry-Roaster: Sure, power is a very important factor in planning and evaluating research. Technically, power is defined as the long-run probability of rejecting the null hypothesis when it is, in fact, false. Power is typically considered to be a Good Thing because, if the null is false, then you want your research to be capable of rejecting it. The higher the power of your study, the better the chances are that this will happen. The concept of power comes out of a very specific approach to significance testing pioneered by Neyman and Pearson. Continue reading
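
To make that definition concrete, here is a minimal R sketch of a power calculation; the effect size (d = 0.4), group size (n = 50 per group), and alpha level are illustrative assumptions, not values from the dialogue:

```r
# Power of a two-sample t-test for an assumed true effect of d = 0.4,
# n = 50 per group, alpha = .05 (illustrative values only).

# Analytic answer from base R
power.t.test(n = 50, delta = 0.4, sd = 1, sig.level = .05)$power

# Simulation: the long-run proportion of studies that reject the null
# when it is, in fact, false.
set.seed(123)
rejections <- replicate(10000, {
  g1 <- rnorm(50, mean = 0,   sd = 1)
  g2 <- rnorm(50, mean = 0.4, sd = 1)
  t.test(g1, g2)$p.value < .05
})
mean(rejections)  # roughly .50 for these values
```

Both approaches put power near .51: even when the effect is real, this design would reject the null only about half the time.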

Please Stop the Bleating – Brent Roberts (pigee)

It has been unsettling to witness the seemingly endless stream of null effects emerging from numerous pre-registered direct replications over the past few months. Some of the outcomes were unsurprising given the low power of the original studies. But the truly painful part has come from watching and reading the responses from all sides. Countless words have been written discussing every nuanced aspect of definitions, motivations, and aspersions. Only one thing is missing: direct, pre-registered replications by the authors of studies that have been the target of replications. While I am sympathetic to the fact that those who are targeted might be upset, defensive, and highly motivated to defend their ideas, the absence of any data from the originating authors is a more profound indictment of the original finding than any commentary. To my knowledge, and please correct me if I’m wrong, none of the researchers who’ve been the target of a pre-registered replication have produced a pre-registered study from their own lab showing that they are capable of getting the effect, even if others are not. Those of us standing on the sidelines watching things play out are constantly surprised that the one piece of information that might help—evidence that the original authors are capable of reproducing their own effects (in a pre-registered study)—is never offered up. So, get on with it. Seriously. Everyone. Continue reading

We Need Federally Funded Daisy Chains – Brent Roberts (pigee)

One of the most provocative requests in the reproducibility crisis was Daniel Kahneman’s call for psychological scientists to collaborate on a “daisy chain” of research replication. He admonished proponents of priming research to step up and work together to replicate the classic priming studies that had, up to that point, been called into question. What happened? Nothing. Total crickets. There were no grand collaborations among the strongest and most capable labs to reproduce each other’s work. Why not? With 20/20 hindsight, it is clear that the incentive structure in psychological science militated against the daisy chain idea. The scientific system in 2012 (and the one currently still in place) rewarded people who were the first to discover a new, counterintuitive feature of human nature, preferably using an experimental method. Since we did not practice direct replication, the veracity of our findings wasn’t really the point. The point was to be the discoverer, the radical innovator, the colorful, clever genius who apparently had a lot of flair. If that was, and remains, the reward structure, what incentive was there (or is there) to conduct direct replications of your own or others’ work? Continue reading

The New Rules of Research – Brent Roberts (pigee)

A paper on one of the most important research projects of our generation came out a few weeks ago. I’m speaking, of course, of the Reproducibility Project conducted by several hundred psychologists. It is a tour de force of good science. Most importantly, it provided definitive evidence for the state of the field. Despite the fact that 97% of the original studies reported statistically significant effects, only 36% hit the magical p < .05 mark when closely replicated. Two defenses have been raised against the effort. The first, described by some as the “move along folks, there’s nothing to see here” defense, proposes that a 36% replication rate is no big deal. It is to be expected given how tough it is to do psychological science. At one level I’m sympathetic to the argument that science is hard to do, especially psychological science. But very few psychologists see even 36% of their ideas work. And, by work, I mean in the traditional sense of the word, which is to net a p value less than .05. Continue reading
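
The arithmetic behind the “to be expected given low power” defense is easy to check by simulation. Here is a minimal R sketch; the true effect size (d = 0.3) and per-group sample size (n = 20) are assumptions chosen for illustration, not the Reproducibility Project’s actual parameters:

```r
# If original studies are underpowered and only significant results get
# published, how often should exact, same-n replications "work"?
set.seed(42)
n_studies   <- 5000
n_per_group <- 20     # assumed original sample size per group
true_d      <- 0.3    # assumed true effect size

run_study <- function(n, d) {
  t.test(rnorm(n, 0, 1), rnorm(n, d, 1))$p.value < .05
}

original_sig <- replicate(n_studies, run_study(n_per_group, true_d))

# Replicate only the originally significant studies, with the same n.
replication_sig <- replicate(sum(original_sig), run_study(n_per_group, true_d))

mean(original_sig)     # power of the original design (about .15)
mean(replication_sig)  # replication rate among the "published" effects (also about .15)
```

Under these assumptions every effect is real, yet the replication rate is simply the power of the replication design; that is the quantitative core of the defense the post is responding to.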

Be your own replicator – Brent Roberts (pigee)

by Brent W. Roberts

One of the conspicuous features of the ongoing reproducibility crisis stewing in psychology is that we have a lot of fear, loathing, defensiveness, and theorizing being expressed about direct replications. But, if the pages of our journals are any indication, we have very few direct replications being conducted. Reacting with fear is not surprising. It is not fun to have your hard-earned scientific contribution challenged by some random researcher. Even if the replicator is trustworthy, it is scary to have your work be the target of a replication attempt. For example, one colleague was especially concerned that graduate students were now afraid to publish papers given the seeming inevitability of someone trying to replicate and tear down their work. Seeing the replication police in your rearview mirror would make anyone nervous, but especially new drivers. Another prototypical reaction appears to be various forms of loathing. We don’t need to repeat the monikers used to describe researchers who conduct and attempt to publish direct replications. It is clear that they are not held in high esteem. Other scholars may not demean the replicators but hold equally negative attitudes towards the direct replication enterprise and deem the entire effort a waste of time. Continue reading

Sample Sizes in Personality and Social Psychology – Brent Roberts (pigee)

R. Chris Fraley

Imagine that you’re a young graduate student who has just completed a research project. You think the results are exciting and that they have the potential to advance the field in a number of ways. You would like to submit your research to a journal that has a reputation for publishing the highest caliber research in your field. How would you know which journals are regarded as publishing high-quality research? Traditionally, scholars and promotion committees have answered this question by referencing the citation Impact Factor (IF) of journals. But as critics of the IF have noted, citation rates per se may not reflect anything informative about the quality of empirical research. A paper can receive a large number of citations in the short run because it reports surprising, debatable, or counter-intuitive findings, regardless of whether the research was conducted in a rigorous manner. In other words, the citation rate of a journal may not be particularly informative concerning the quality of the research it reports. What would be useful is a way of indexing journal quality that is based upon the strength of the research designs used in published articles rather than the citation rate of those articles alone. Continue reading
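
One plausible way to build such an index is to ask how much statistical power a journal’s typical study has to detect a typical effect. The sketch below is a hypothetical illustration in R, assuming a typical effect of r = .20; the journal names and median sample sizes are invented for the example:

```r
# Hypothetical design-quality index: rank journals by the power of their
# typical published study to detect an assumed effect of r = .20,
# rather than by citation impact. Median sample sizes are invented.
median_n <- c(JournalA = 54, JournalB = 120, JournalC = 310)

# Power of a two-sided correlation test (Fisher z approximation)
power_for_r <- function(n, r = .20, alpha = .05) {
  ncp  <- atanh(r) * sqrt(n - 3)     # expected value of the z test statistic
  crit <- qnorm(1 - alpha / 2)
  pnorm(ncp - crit) + pnorm(-ncp - crit)
}

sort(sapply(median_n, power_for_r), decreasing = TRUE)
```

A journal whose median study has, say, 90% power to detect a typical effect is arguably publishing stronger designs than one whose median study has 30% power, regardless of how often its papers are cited.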

Is It Offensive To Declare A Social Psychological Claim Or Conclusion Wrong? – Brent Roberts (pigee)

By Lee Jussim

Science is about “getting it right” – this is so obvious that it should go without saying. However, there are many obstacles to doing so, some relatively benign (an honestly conducted study produces a quirky result), others less so (p-hacking). Over the last few years, the discussion of practices that lead us astray has focused primarily on issues of statistics, methods, and replication.

These are all justifiably important, but here I raise the possibility that other, more subjective factors distort social and personality psychology in ways that are at least as problematic. Elsewhere, I have reviewed what I now call questionable interpretive practices – how cherry-picking, double standards, blind spots, and embedding political values in research all lead to distorted conclusions (Duarte et al., 2014; Jussim et al., in press a, b).

But there are other interpretation problems. Ever notice how very few social psychological theories are refuted or overturned? Disconfirming theories and hypotheses (including that subset of disconfirmation, failures to replicate) should be a normal part of the advance of scientific knowledge. It is OK for you (or me, or Dr. I. V. Famous) to have reached or promoted a wrong conclusion. Continue reading

An apology and proposal – Brent Roberts (pigee)

Brent W. Roberts

My tweet, “Failure to replicate hurting your career? What about PhDs with no career because they were honest,” was taken by some as a personal attack on Dr. Schnall. It was not, and I apologize to Dr. Schnall if it was taken that way. The tweet was in reference to the field as a whole, because our current publication and promotion system does not reward the honest design and reporting of research. And this places many young investigators at a disadvantage. Let me explain.

Our publication practices reward the reporting of optimized data—the data that looks the best or that could be dressed up to look nice through whatever means necessary. We have no choice given the way we incentivize our publication system. That system, which punishes null findings and rewards only statistically significant effects, means that our published science is not currently an honest portrait of how our science works. The current rash of failures to replicate famous and not-so-famous studies is simply a symptom of a system that is in dire need of reform. Continue reading

Additional Reflections on Ceiling Effects in Recent Replication Research – Brent Roberts (pigee)

By R. Chris Fraley

In her commentary on the Johnson, Cheung, and Donnellan (2014) replication attempt, Schnall (2014) writes that the analyses reported in the Johnson et al. (2014) paper “are invalid and allow no conclusions about the reproducibility of the original findings” because of “the observed ceiling effect.”

I agree with Schnall that researchers should be concerned with ceiling effects. When there is relatively little room for scores to move around, it is more difficult to demonstrate that experimental manipulations are effective. But are the ratings so high in Johnson et al.’s (2014) Study 1 that the study is incapable of detecting an effect if one is present?


To address this question, I programmed some simulations in R. The details of the simulations are available at http://osf.io/svbtw, but here is a summary of some of the key results:

  • Although there are a large number of scores on the high end of the scale in the Johnson et al. Continue reading
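
For readers who want a feel for what such a simulation involves, here is a minimal R sketch in the spirit of the analyses posted at http://osf.io/svbtw; the scale range, means, standard deviations, and sample sizes are illustrative assumptions, not the values used in the actual simulations:

```r
# Can a pile-up of ratings at the top of a 0-10 scale still reveal a true
# group difference? Simulate censored ratings and check.
set.seed(1)
n_per_cell <- 100    # assumed cell size
n_sims     <- 5000

simulate_once <- function(shift) {
  # Latent ratings near the ceiling; censor at the scale maximum of 10.
  control   <- pmax(pmin(round(rnorm(n_per_cell, 8.5,         2)), 10), 0)
  treatment <- pmax(pmin(round(rnorm(n_per_cell, 8.5 + shift, 2)), 10), 0)
  t.test(control, treatment)$p.value < .05
}

# Power to detect a latent one-point drop despite the ceiling
mean(replicate(n_sims, simulate_once(shift = -1)))
# False-positive rate when there is no true effect (should be near .05)
mean(replicate(n_sims, simulate_once(shift = 0)))
```

Comparing the two numbers shows whether the censoring at the top of the scale actually removes the study’s ability to detect an effect of the assumed size, which is the question the simulations were designed to answer.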