Category Archives: pigee

My Scary Vision of Good Science

By Brent W. Roberts

In a recent blog post, I argued that the Deathly Hallows of Psychological Science—p values < .05, experiments, and counter-intuitive findings—represent the combination of factors that are most highly valued by our field and are the explicit criteria for high impact publications. Some commenters mistook my identification of the Deathly Hallows of Psychological Science as a criticism of experimental methods and an endorsement of correlational methods. They even went so far as to say my vision for science was “scary.”

Boo.

Of course, these critics reacted negatively to the post because I was being less than charitable to some hallowed institutions in psychological science. Regardless, I stand by the original argument. Counter-intuitive findings from experiments “verified” with p values less than .05 are the most valued commodities of our scientific efforts. And, the slavish worshiping of these criteria is at the root of many of our replicability and believability problems.


The Deathly Hallows of Psychological Science

By Brent W. Roberts

As of late, psychological science has arguably done more to address the ongoing believability crisis than most other areas of science.  Many notable efforts have been put forward to improve our methods.  From the Open Science Framework (OSF), to changes in journal reporting practices, to new statistics, psychologists are doing more than any other science to rectify practices that allow far too many unbelievable findings to populate our journal pages.

The efforts in psychology to improve the believability of our science can be boiled down to some relatively simple changes.  We need to replace/supplement the typical reporting practices and statistical approaches by:

  1. Providing more information with each paper so others can double-check our work, such as the study materials, hypotheses, data, and syntax (through the OSF or journal reporting practices).
  2. Designing our studies so they have adequate power or precision to evaluate the theories we are purporting to test (i.e., use larger sample sizes).
  3. Providing more information about effect sizes in each report, such as what the effect sizes are for each analysis and their respective confidence intervals (see the sketch below).
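To make points 2 and 3 concrete, here is a minimal sketch in Python; it is my illustration, not part of the original post. The target effect size (d = 0.30), the 80% power figure, and the use of statsmodels and NumPy are all assumptions chosen for the example.

```python
# Minimal sketch of points 2 and 3 above; numbers are illustrative, not from the post.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Point 2: sample size needed to detect a two-group difference of d = 0.30
# with 80% power at alpha = .05 (two-sided test).
n_per_group = TTestIndPower().solve_power(effect_size=0.30, power=0.80, alpha=0.05)
print(f"participants needed per group: {np.ceil(n_per_group):.0f}")  # roughly 176

# Point 3: report the effect size and its confidence interval, not just the p value.
rng = np.random.default_rng(1)
a = rng.normal(0.3, 1.0, 200)   # simulated treatment group
b = rng.normal(0.0, 1.0, 200)   # simulated control group
n1, n2 = len(a), len(b)
pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
d = (a.mean() - b.mean()) / pooled_sd                           # Cohen's d
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approximate SE of d
print(f"d = {d:.2f}, 95% CI [{d - 1.96*se_d:.2f}, {d + 1.96*se_d:.2f}]")
```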

For the love of p-values

We recently read Karg et al. (2011) for a local reading group.  It is one of many attempts to meta-analytically examine the idea that the 5-HTTLPR serotonin transporter polymorphism moderates the effect of stress on depression.

It drove me batty.  No, it drove me to apoplectia–a small country in my mind I occupy far too often.

Let’s focus on the worst part.  Here’s the write up in the first paragraph of the results:

“We found strong evidence that 5-HTTLPR moderates the relationship between stress and depression, with the s allele associated with an increased risk of developing depression under stress (P = .00002).  The significance of the result was robust to sensitivity analysis, with the overall P values remaining significant when each study was individually removed from the analysis (1.0×10⁻⁶ < P < .00016).”

Wow.
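For readers unfamiliar with the sensitivity analysis the quoted passage describes, the sketch below shows the general leave-one-out procedure: re-run a fixed-effect meta-analysis with each study removed in turn and collect the pooled p values. The effect sizes and standard errors are invented for illustration; this is not Karg et al.'s code or data.

```python
# Leave-one-out sensitivity analysis for a fixed-effect meta-analysis.
# The per-study estimates and standard errors below are hypothetical.
import numpy as np
from scipy import stats

effects = np.array([0.25, 0.10, 0.40, 0.05, 0.30, 0.20])  # hypothetical effect estimates
ses     = np.array([0.10, 0.12, 0.15, 0.08, 0.11, 0.09])  # hypothetical standard errors

def fixed_effect_p(y, se):
    """Two-sided p value for the inverse-variance-weighted pooled effect."""
    w = 1.0 / se**2
    pooled = np.sum(w * y) / np.sum(w)
    z = pooled / np.sqrt(1.0 / np.sum(w))
    return 2 * stats.norm.sf(abs(z))

print(f"all studies included: p = {fixed_effect_p(effects, ses):.5f}")
for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    print(f"study {i + 1} removed:  p = {fixed_effect_p(effects[keep], ses[keep]):.5f}")
```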

Are conceptual replications part of the solution to the crisis currently facing psychological science?

by R. Chris Fraley

Stroebe and Strack (2014) recently argued that the current crisis regarding replication in psychological science has been greatly exaggerated. They observed that there are multiple replications of classic social/behavioral priming findings in social psychology. Moreover, they suggested that the current call for replications of classic findings is not especially useful. If a researcher conducts an exact replication study and finds what was originally reported, no new knowledge has been generated. If the replication study does not find what was originally reported, this mismatch could be due to a number of factors and may speak more to the replication study than the original study per se.

As an alternative, Stroebe and Strack (2014) argue that, if researchers choose to pursue replication, the most constructive way to do so is through conceptual replications. Conceptual replications are potentially more valuable because they serve to probe the validity of the theoretical hypotheses rather than a specific protocol.

Are conceptual replications part of the solution to the crisis currently facing psychological science?

The purpose of this post is to argue that we can only learn anything of value—whether it is from an original study, an exact replication, or a conceptual replication—if we can trust the data. And, ultimately, a lack of trust is what lies at the heart of current debates.

The Pre-Publication Transparency Checklist

The Pre-Publication Transparency Checklist: A Small Step Toward Increasing the Believability of Psychological Science

We now know that some of the well-accepted practices of psychological science do not produce reliable knowledge. For example, widely accepted but questionable research practices contribute to the fact that many of our research findings are unbelievable (that is, that one is ill-advised to revise one’s beliefs based on the reported findings). Post-hoc analyses of seemingly convincing studies have shown that some findings are too good to be true.  And, a string of seminal studies has failed to replicate.  These factors have come together to create a believability crisis in psychological science.

Many solutions have been proffered to address the believability crisis.  These solutions have come in four general forms.  First, many individuals and organizations have listed recommendations about how to make things better.  Second, other organizations have set up infrastructures so that individual researchers can pre-register their studies, to document hypotheses, methods, analyses, research materials, and data so that others can reproduce published research results (Open Science Framework).  Third, specific journals, such as Psychological Science, have set up pre-review confessionals of sorts to indicate the conditions under which the data were collected and analyzed.  Fourth, others have created vehicles so that researchers can confess to their methodological sins after their work has been published (psychdisclosure.org).  In fact, psychology should be lauded for the reform efforts it has put forward to address the believability crisis, as it is only one of many scientific fields in which the crisis is currently raging, and it is arguably doing more than many other fields.

Science or law: Choose your career

I recently saw an article by an astute reporter that described one of our colleagues as a researcher who “…has made a career out of finding data….”

Finding data.

What a lush expression.  In this case, as it seems always to be the case, the researcher had a knack for finding data that supported his or her theory. On the positive side of the ledger, “finding data” denotes the intrepid explorer who discovers a hidden oasis or the wonder that comes with a NASA probe that unlocks long lost secrets on Mars.

On the negative side of the ledger, “finding data” alludes to researchers who will hunt down findings that confirm their theories and ignore data that do not. I remember coming across this phenomenon for the first time as a graduate student, when a faculty member asked whether any of us could “find some data to support X”.  I thought it was an odd request.  I thought in science one tested ideas rather than hunted down confirming data and ignored disconfirming data.

Of course, “finding data” is an all too common practice in psychology.  Given the fact that 92% of our published findings are statistically significant and that it is common practice to suppress null findings, it strikes me that the enterprise of psychological science has defaulted to the task of finding data.
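A back-of-the-envelope calculation shows why that 92% figure is hard to square with honest reporting. The sketch below is my own illustration, not the post's analysis; the assumed average power (50%) and the assumed share of true hypotheses (80%) are deliberately generous guesses.

```python
# Rough illustration of the excess-significance logic behind the 92% figure.
# Power and prior values are assumptions chosen to be generous to researchers.
from scipy.stats import binom

power, alpha = 0.50, 0.05   # assumed average power; nominal false-positive rate
p_true = 0.80               # assume 80% of tested hypotheses are actually true

expected_sig = p_true * power + (1 - p_true) * alpha
print(f"expected share of significant results: {expected_sig:.0%}")  # about 41%

# If every result were reported regardless of outcome, seeing 92 or more
# significant findings in 100 published tests would be wildly improbable:
print(f"P(>= 92 of 100 significant): {binom.sf(91, 100, expected_sig):.1e}")
```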

Owning it

What happened when the authors of studies linking candidate gene polymorphisms to responses to drug consumption tried to replicate their own research?

As many of you know, the saga of replication problems continues unabated in social and personality psychology. The most recent dust-up has been over the ability of some researchers to replicate Dijksterhuis’ professor prime studies and the ensuing arguments over those attempts.

While social and personality psychologists “discuss” the adequacies of the replication attempts in our field, a truly remarkable paper was published in Neuropsychopharmacology (Hart, de Wit, & Palmer, 2013).  The second and third authors have a long collaborative history working on the genetics of drug addiction.  In fact, they have published 12 studies linking variations in candidate genes, such as BDNF, DRD2, and COMT, to intermediary phenotypes related to drug addiction.  As they note in the introduction to their paper, these studies have been cited hundreds of times and would lead one to believe that single SNPs or variations in specific genes are strongly linked to the way people react to amphetamines.

The 12 original studies all relied on a really nice experimental paradigm.  The participants received placebos and varying doses of amphetamines across several sessions, and the experimenters and participants were blind to what dose they received.  The order of drug administration was counterbalanced.  Then, participants rated their drug-related experience over the few hours that they stayed in the lab.  Across the 12 studies, the authors, their postdocs, and graduate students published studies linking the genetic polymorphisms to outcomes like feelings of anxiety, elation, vigor, positive mood, and even concrete outcomes such as heart rate and blood pressure.

Schadenfreude

This week in PIG-IE we discussed the just-published paper by an all-star team of “skeptical” researchers that examined the reliability of neuroscience research.  It was a chance to take a break from our self-flagellation to see whether some of our colleagues suffer from similar problematic research practices.

Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.

When effect sizes matter: The internal (in?)coherence of much of social psychology

This is a guest post by Lee Jussim.  It was originally posted as a comment to the Beginning of History Effect, but it seemed too important to leave as a comment. It has been slightly edited to help it stand alone.

Effect sizes may matter in some but not all situations, and reasonable people may disagree.

This post is about one class of situations where: 1) they clearly do matter; and 2) they are largely ignored. That situation: when scientific articles, theories, or writing make explicit or implicit claims about the relative power of various phenomena (see also David F’s comments on ordinal effect sizes).

If you DO NOT care about effect sizes, that is fine. But, then, please do not make claims about the “unbearable automaticity of being.” I suppose automaticity could be an itsy bitsy teenie weenie effect size that is unbearable (like a splinter of glass in your foot), but that is not my reading of those claims. And it is not just about absolute effect sizes. It would be about the relative effects of conscious versus unconscious processes, something almost never compared empirically.


A Case in Point

In the post about the Beginning of History Effect, I used candidate gene research as a case in point to illustrate how unreplicable research can get lodged in the scientific literature and how difficult it then is to dislodge.  A case in point emerged with perfect timing this weekend in the New York Times Magazine.  In a horribly sourced story on “Why some kids handle pressure, while others fall apart”, the authors claim that “One particular gene, referred to as the COMT gene, could to a large degree explain why one child is more prone to be a worrier, while another may be unflappable, or in the memorable phrasing of David Goldman, a geneticist at the National Institutes of Health, more of a warrior.”

Just shoot me.

Keep in mind that there is not only a meta-analysis of existing studies showing that the relation between the COMT polymorphism and cognitive functioning is indistinguishable from zero (Barnett, Scoriels, & Munafò, 2009), but also a comprehensive review showing that all the existing associations between gene polymorphisms and cognitive functioning are zero (Chabris et al., 2012).  No, the authors can’t let science stand in the way of selling their new book, nor can the NY Times be bothered to check the veracity of the claims made in the article–we need to sell ad space, after all.  No, it is far more important to misinform thousands, if not millions, of students and their parents that their test anxiety can be attributed to one genetic polymorphism, even if it is not true.

This is one illustration of the way unreplicable research gets instantiated in our field and in the minds of the public and, inevitably, granting agencies.  The original article and/or idea is compelling.  It fits a broad worldview that some groups of researchers/people want to believe.  All subsequent disconfirmations of the effect are rationalized or ignored.  Then, we spend 20 years or so wasting our time because the majority of follow-up research fails to replicate the original effect and the original idea just fades away.