Category Archives: pigee

Descriptive ulceritive counterintuitiveness – Brent Roberts (pigee)

An interesting little discussion popped up in the wild and woolly new media world in science (e.g., podcasts and Twitter) concerning the relative merits of “descriptive” vs. “hypothesis-driven” designs. All, mind you, indirectly caused by the paper that keeps on giving: Tal Yarkoni’s generalizability crisis paper. Inspired by Tal’s paper, a small group of folks endorsed the merits of descriptive work and the fact that psychology would do well to conduct more of this type of research (Two Psychologists Four Beers; Very Bad Wizards). In response, Paul Bloom argued/opined for hypothesis testing: more specifically, theoretically informed hypothesis testing of a counterintuitive hypothesis. I was implicated in the discussion as someone whose work exemplifies descriptive research. In fact, Tal Yarkoni himself has disparaged my work in just such a way.* And, I must confess, I’ve stated similar things in public, especially when I give my standard credibility crisis talk. So, it might come as a surprise to hear that I completely agree with Bloom that a surgical hypothesis test using experimental methods that arrives at what is described as a “counterintuitive” finding can be the bee’s knees. It is, and probably should be, the ultimate scientific achievement. If it is true, of course. Continue reading

Robust Findings in Personality Psychology – Brent Roberts (pigee)

Scientific personality psychology has had a bit of a renaissance in the last few decades, emerging from a period of deep skepticism and subsequent self-reflection to a period where we believe there are robust findings in our field. The problem is that many people, including many scientists, don’t follow scientific personality psychology and remain blithely unaware of the field’s accomplishments. In fact, it is quite common to do silly things like equate the field of scientific personality psychology with the commodity that is the MBTI. With this situation in mind, I recently asked a subset of personality psychologists to help identify what they believed to be robust findings in personality psychology. You will find the product of that effort below. We are not assuming that we’ve identified all of the robust findings. In fact, we’d like you to vote on each one to see whether these are consensually defined “robust findings.” Moreover, we’d love you to comment and suggest other candidates for consideration. All we ask is that you characterize the finding and suggest some research that backs up your suggestion. We’ve kept things pretty loose to this point, but the items below can be characterized as findings that replicate across labs and have a critical mass of research that is typically summarized in one or more meta-analyses. We are open to suggestions about making the inclusion criteria more stringent. Continue reading

Lessons we’ve learned about writing the empirical journal article – Brent Roberts (pigee)

How about a little blast from the past? In rooting around in an old hard drive searching for Pat Hill’s original CV [1], I came across a document that we wrote way back in 2006 on how to write more effectively. It was a compilation of the collective wisdom at that time of Roberts, Fraley, and Diener. It was interesting to read after 13 years. Fraley and I have updated our opinions a bit. We both thought it would be good to share, if only as documentation of our pre-blogging, pre-Twitter thought processes.

Manuscript Acronyms from Hell: Lessons We’ve Learned on Writing the Empirical Research Article
By Brent Roberts (with substantial help from Ed Diener and Chris Fraley)
Originally written sometime in 2006; updated 2019 thoughts in blue

Here is a set of subtle lessons that we’ve culled from our experience writing journal articles. They are intended as a short list of questions that you can ask yourself each time you complete an article. For example, before you submit your paper to a journal, ask yourself whether you have created a clear need for the study in the introduction, or whether everything is parallel, etc. This list is by no means complete, but we do hope that it is useful. Continue reading

It’s deja vu all over again – Brent Roberts (pigee)

I seem to replicate the same conversation on Twitter every time a different sliver of the psychological guild confronts open science and reproducibility issues. Each conversation starts and ends the same way as conversations I’ve had or seen 8 years ago, 4 years ago, 2 years ago, last year, or last month. In some ways that’s a good sign. Awareness of the issue of reproducibility and efforts to improve our science are reaching beyond the subfields that have been at the center of the discussion. Greater engagement with these issues is ideal. The problem is that each time a new group realizes that their own area is subject to criticism, they raise the same objections based on the same misconceptions, leading to the same mistaken attack on the messengers: They claim that scholars pursuing reproducibility or meta-science issues are a highly organized phalanx of intransigent, inflexible authoritarians who are insensitive to important differences among subfields and who want to impose a monolithic and arbitrary set of requirements on all research. In these “conversations,” scholars recommending changes to the way science is conducted have been unflatteringly described as sanctimonious, despotic, authoritarian, doctrinaire, and militant, and creatively labeled with names such as shameless little bullies, assholes, McCarthyites, second stringers, methodological terrorists, fascists, Nazis, Stasi, witch hunters, reproducibility bros, data parasites, destructo-critics, replication police, self-appointed data police, destructive iconoclasts, vigilantes, accuracy fetishists, and human scum. Yes, every one of those terms has been used in public discourse, typically by eminent (i.e., senior) psychologists. Villainizing those calling for methodological reform is ingenious, particularly if you have no compelling argument against the proposed changes*. Continue reading

Yes or No 2.0: Are Likert scales always preferable to dichotomous rating scales? – Brent Roberts (pigee)

A while back, Michael Kraus (MK), Michael Frank (MF), and I (Brent W. Roberts, or BWR; M. Brent Donnellan, or MBD, is on board for this discussion, so we’ll have to keep our Michaels and Brents straight) got into a Twitter-inspired conversation about the niceties of using polytomous rating scales vs. yes/no rating scales for items. You can read that exchange here. The exchange was loads of fun and edifying for all parties. An over-simplistic summary would be that, despite passionate statements made by psychometricians, there is no Yes or No answer to the apparent superiority of Likert-type scales for survey items. We recently were reminded of our prior effort when a similar exchange on Twitter pretty much replicated our earlier conversation; I’m not sure whether it was a conceptual or direct replication…. In part of the exchange, Michael Frank (MF) mentioned that he had tried the 2-point option with items they commonly use and found the scale statistics to be so bad that they gave up on the effort and went back to a 5-point option. To which I replied, pithily, that he was using the Likert scale and the systematic errors contained therein to bolster the scale reliability. Joking aside, it reminded us that we had collected similar data that could be used to add more information to the discussion. But, before we do the big reveal, let’s see what others think. We polled the Twitterati about their perspective on the debate and here are the consensus opinions, which correspond nicely to the Michaels’ position: Continue reading
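
For readers who want to poke at this themselves, here is a minimal sketch in Python of the kind of comparison at stake: Cronbach’s alpha for a 5-point versus a yes/no scoring of the same items. The data are simulated and every parameter is hypothetical; this is not the items, data, or analysis from the exchange above.

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents x n_items) array
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(42)
n, k = 500, 10
trait = rng.normal(size=n)                              # latent trait for each respondent
continuous = trait[:, None] + rng.normal(size=(n, k))   # noisy item-level responses

# 5-point Likert-style version: bin each response into categories 1-5
likert = np.digitize(continuous, [-1.5, -0.5, 0.5, 1.5]) + 1
# Dichotomous yes/no version: split each response at zero (the latent mean)
yes_no = (continuous > 0).astype(int)

print("alpha, 5-point:", round(cronbach_alpha(likert), 2))
print("alpha, yes/no: ", round(cronbach_alpha(yes_no), 2))

With simulated data like this, the dichotomized version will typically show a lower alpha simply because coarser response options carry less information per item, which is part of why the raw scale statistics alone can’t settle the Yes or No question.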

Eyes wide shut or eyes wide open? – Brent Roberts (pigee)

There have been a slew of systematic replication efforts and meta-analyses with rather provocative findings of late. The ego depletion saga is one of those stories. It is an important story because it demonstrates the clarity that comes with focusing on effect sizes rather than statistical significance. I should confess that I’ve always liked the idea of ego depletion and even tried my hand at running a few ego depletion experiments.* And, I study conscientiousness, which is pretty much the same thing as self-control, at least as it is assessed using the Tangney et al. (2004) self-control scale, which was meant, in part, to be an individual difference complement to the ego depletion experimental paradigms. So, I was more than a disinterested observer as the “effect size drama” surrounding ego depletion played out over the last few years. First, you had the seemingly straightforward meta-analysis by Hagger et al. (2010), showing that the average effect size of the sequential task paradigm of ego-depletion studies was a d of .62. Impressively large by most metrics that we use to judge effect sizes. That’s the same as a correlation of .3 according to the magical effect size converters. Despite prior mischaracterizations of correlations of that magnitude being small**, that’s nothing to cough at. Continue reading
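
In case the “magical effect size converters” seem mysterious, the standard formula for translating a d into a correlation, assuming roughly equal group sizes, is r = d / sqrt(d^2 + 4). A quick sketch in Python:

import math

def d_to_r(d):
    # Convert a standardized mean difference (Cohen's d) to a correlation r,
    # assuming equal group sizes: r = d / sqrt(d^2 + 4)
    return d / math.sqrt(d**2 + 4)

print(round(d_to_r(0.62), 2))  # ~0.30, the conversion of the d = .62 from Hagger et al. (2010)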

Making good on a promise – Brent Roberts (pigee)

At the end of my previous blog, “Because, change is hard,” I said, and I quote: “So, send me your huddled, tired essays repeating the same messages about improving our approach to science that we’ve been making for years and I’ll post, repost, and blog about them every time.” Well, someone asked me to repost theirs. So here it is: http://www.nature.com/news/no-researcher-is-too-junior-to-fix-science-1.21928. It is a nice piece by John Tregoning. Speaking of which, there were two related blogs posted right after the “change is hard” piece that are both worth reading. The first, by Dorothy Bishop, is brilliant and counters my pessimism so effectively I’m almost tempted to call her Simine Vazire: http://deevybee.blogspot.co.uk/2017/05/reproducible-practices-are-future-for.html And if you missed it, James Heathers has a spot-on post about the New Bad People: https://medium.com/@jamesheathers/meet-the-new-bad-people-4922137949a1 Continue reading

Because, change is hard – Brent Roberts (pigee)

I reposted a quote on Twitter this morning from a paper entitled “The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research.” The quote, which is worth repeating, was “reliable conclusions on replicability…of a finding can only be drawn using cumulative evidence from multiple independent studies.” An esteemed colleague (Daniël Lakens, @lakens) responded, “I just reviewed this paper for PeerJ. I didn’t think it was publishable. Lacks structure, nothing new.” Setting aside the typical bromide that I mostly curate information on Twitter so that I can file and read things later, the last clause, “nothing new,” struck a nerve. It reminded me of some unappealing conclusions that I’ve arrived at about the reproducibility movement, which lead to a different conclusion: that it is very, very important that we post and repost papers like this if we hope to move psychological science towards a more robust future. From my current vantage, producing new and innovative insights about reproducibility is not the point. There has been almost nothing new in the entire reproducibility discussion. And, that is okay. I mean, the methodologists (whether terroristic or not) have been telling us for decades that our typical approach to evaluating our research findings is problematic. Continue reading

A Most Courageous Act – Brent Roberts (pigee)

The most courageous act a modern academic can perform is to say they were wrong. After all, we deal in ideas, not things. When we say we were wrong, we are saying our ideas, our products so to speak, were faulty. It is a supremely unsettling thing to do. Of course, in the Platonic ideal, and in reality, being a scientist necessitates being wrong a lot. Unfortunately, our incentive system militates against being honest about our work. Thus, countless researchers choose not to admit or even acknowledge the possibility that they might have been mistaken. In a bracingly honest post in response to a blog by Uli Schimmack, the Nobel Prize-winning psychologist Daniel Kahneman has done the unthinkable. He has admitted that he was mistaken. Here’s a quote: Continue reading

A Commitment to Better Research Practices (BRPs) in Psychological Science – Brent Roberts (pigee)

Scientific research is an attempt to identify a working truth about the world that is as independent of ideology as possible.  As we appear to be entering a time of heightened skepticism about the value of scientific information, we feel it is important to emphasize and foster research practices that enhance the integrity of scientific data and thus scientific information. We have therefore created a list of better research practices that we believe, if followed, would enhance the reproducibility and reliability of psychological science. The proposed methodological practices are applicable for exploratory or confirmatory research, and for observational or experimental methods.
  1. If testing a specific hypothesis, pre-register your research[1], so others can know that the forthcoming tests are informative. Report the planned analyses as confirmatory, and report any other analyses or any deviations from the planned analyses as exploratory.
  2. If conducting exploratory research, present it as exploratory. Then, document the research by posting materials, such as measures, procedures, and analytical code, so future researchers can benefit from them. Also, make your research expectations and plans explicit in advance of analyses; little, if any, research is truly exploratory. State the goals and parameters of your study as clearly as possible before beginning data analysis.
  3. Consider data sharing options prior to data collection (e.g. Continue reading