Want us to add your blog or article?

This site aggregates blogs and popular press articles about personality psychology. If you are an ARP member who writes a blog, or whose research has been featured in a recent popular press article, email us at firstname.lastname@example.org to have your work added to the meta-blog.
- Happiness Research During the Replication Crisis – Rich Lucas (The Desk Reject)
- Religiosity, Atheism, and Health: The Atheist Advantage – Scott McGreal (Unique—Like Everybody Else)
- “The Fool Says in His Heart that Atheists are Mutants” – Scott McGreal (Unique—Like Everybody Else)
- An Oath for Scientists – Simine Vazire (sometimes i'm wrong)
- What Makes a Hero? And What makes a Psychopath? – Scott McGreal (Unique—Like Everybody Else)
- Are Heroes and Psychopaths Cut from the Same Cloth? – Scott McGreal (Unique—Like Everybody Else)
Filter Posts by Blog
- citation needed (22)
- funderstorms (22)
- Person X Situation (4)
- pigee (29)
- Press coverage (1)
- Psych Your Mind (46)
- Secrets of Longevity (8)
- Sherman's Head (7)
- sometimes i'm wrong (55)
- The Desk Reject (5)
- The Hardest Science (49)
- The personality sentences (5)
- The SAPA Project (1)
- The Trait-State Continuum (34)
- Uncategorized (5)
- Unique—Like Everybody Else (78)
Subscribe to the Meta-Blog
Disclaimer

The views expressed in blog posts and other articles are those of the authors and do not necessarily reflect the views of the Association for Research in Personality.
Author Archives: Brent Roberts
At the end of my previous blog, “Because, change is hard“, I said, and I quote: “So, send me your huddled, tired essays repeating the same messages about improving our approach to science that we’ve been making for years and I’ll post, repost, and blog about them every time.” Well, someone asked me to repost theirs. So here it is: http://www.nature.com/news/no-researcher-is-too-junior-to-fix-science-1.21928. It is a nice piece by John Tregoning. Speaking of which, two related blogs were posted right after the change-is-hard piece that are both worth reading. The first, by Dorothy Bishop, is brilliant and counters my pessimism so effectively I’m almost tempted to call her Simine Vazire: http://deevybee.blogspot.co.uk/2017/05/reproducible-practices-are-future-for.html And if you missed it, James Heathers has a spot-on post about the New Bad People: https://medium.com/@jamesheathers/meet-the-new-bad-people-4922137949a1 Continue reading
I reposted a quote from a paper on twitter this morning entitled “The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research.” The quote, which is worth repeating, was “reliable conclusions on replicability…of a finding can only be drawn using cumulative evidence from multiple independent studies.” An esteemed colleague (Daniël Lakens @lakens) responded “I just reviewed this paper for PeerJ. I didn’t think it was publishable. Lacks structure, nothing new.” Setting aside the typical bromide that I mostly curate information on twitter so that I can file and read things later, the last clause, “nothing new,” struck a nerve. It reminded me of some unappealing conclusions that I’ve arrived at about the reproducibility movement, conclusions that led me somewhere different—that it is very, very important that we post and repost papers like this if we hope to move psychological science towards a more robust future. From my current vantage, producing new and innovative insights about reproducibility is not the point. There has been almost nothing new in the entire reproducibility discussion. And that is okay. I mean, the methodologists (whether terroristic or not) have been telling us for decades that our typical approach to evaluating our research findings is problematic. Continue reading
The most courageous act a modern academic can perform is to say they were wrong. After all, we deal in ideas, not things. When we say we were wrong, we are saying our ideas, our products so to speak, were faulty. It is a supremely unsettling thing to do. Of course, in the Platonic ideal, and in reality, being a scientist necessitates being wrong a lot. Unfortunately, our incentive system militates against being honest about our work. Thus, countless researchers choose not to admit, or even acknowledge the possibility, that they might have been mistaken. In a bracingly honest post in response to a blog by Uli Schimmack, the Nobel Prize-winning psychologist Daniel Kahneman has done the unthinkable. He has admitted that he was mistaken. Here’s a quote: Continue reading
Scientific research is an attempt to identify a working truth about the world that is as independent of ideology as possible. As we appear to be entering a time of heightened skepticism about the value of scientific information, we feel it is important to emphasize and foster research practices that enhance the integrity of scientific data and thus scientific information. We have therefore created a list of better research practices that we believe, if followed, would enhance the reproducibility and reliability of psychological science. The proposed methodological practices are applicable for exploratory or confirmatory research, and for observational or experimental methods.
- If testing a specific hypothesis, pre-register your research, so others can know that the forthcoming tests are informative. Report the planned analyses as confirmatory, and report any other analyses or any deviations from the planned analyses as exploratory.
- If conducting exploratory research, present it as exploratory. Then, document the research by posting materials, such as measures, procedures, and analytical code so future researchers can benefit from them. Also, make research expectations and plans in advance of analyses—little, if any, research is truly exploratory. State the goals and parameters of your study as clearly as possible before beginning data analysis.
- Consider data sharing options prior to data collection (e.g. Continue reading
Some of you might have missed the kerfuffle that erupted in the last few days over a pre-print of an editorial written by Susan Fiske for the APS Monitor about us “methodological terrorists”. Andrew Gelman’s blog reposts Fiske’s piece, puts it in historical context, and does a fairly good job of articulating why it is problematic beyond the terminological hyperbole that Fiske employs. We are reposting it for your edification.
The following is a hypothetical exchange between a graduate student and Professor Belfry-Roaster. The names have been changed to protect the innocent…. Budlie Bond: Professor Belfry-Roaster, I was confused today in journal club when everyone started discussing power. I’ve taken my grad stats courses, but they didn’t teach us anything about power. It seemed really important. But it also seemed controversial. Can you tell me a bit more about power and why people care so much about it? Prof. Belfry-Roaster: Sure, power is a very important factor in planning and evaluating research. Technically, power is defined as the long-run probability of rejecting the null hypothesis when it is, in fact, false. Power is typically considered to be a Good Thing because, if the null is false, then you want your research to be capable of rejecting it. The higher the power of your study, the better the chances are that this will happen. The concept of power comes out of a very specific approach to significance testing pioneered by Neyman and Pearson. Continue reading
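Professor Belfry-Roaster’s definition—power as the long-run probability of rejecting the null when it is false—can be made concrete with a quick simulation. The sketch below (not from the original post; the function name and parameter values are illustrative) estimates power for a two-group comparison by repeatedly drawing samples with a true effect and counting how often a t-like test rejects, using a normal approximation to the critical value.

```python
import math
import random
from statistics import NormalDist, mean, variance

def estimated_power(effect_size, n, alpha=0.05, trials=4000, seed=1):
    """Monte Carlo estimate of power: the proportion of simulated
    two-group studies (n per group, true standardized effect =
    effect_size) in which H0 is rejected at level alpha."""
    rng = random.Random(seed)
    # Normal approximation to the two-sided t critical value.
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        # Welch-style t statistic for the mean difference.
        se = math.sqrt(variance(a) / n + variance(b) / n)
        t = (mean(b) - mean(a)) / se
        if abs(t) > crit:
            rejections += 1
    return rejections / trials

if __name__ == "__main__":
    # Same medium effect, different sample sizes: larger n, higher power.
    print(estimated_power(0.5, 20))   # small study: well under 50% power
    print(estimated_power(0.5, 100))  # larger study: high power
```

Running this with a medium effect (d = 0.5) shows why the dialogue treats power as a Good Thing: with 20 participants per group the effect is missed more often than it is detected, while 100 per group detects it the vast majority of the time.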
It has been unsettling to witness the seemingly endless stream of null effects emerging from numerous pre-registered direct replications over the past few months. Some of the outcomes were unsurprising given the low power of the original studies. But the truly painful part has come from watching and reading the responses from all sides. Countless words have been written discussing every nuanced aspect of definitions, motivations, and aspersions. Only one thing is missing: direct, pre-registered replications by the authors of the studies that have been the target of replications. While I am sympathetic to the fact that those who are targeted might be upset, defensive, and highly motivated to defend their ideas, the absence of any data from the originating authors is a more profound indictment of the original finding than any commentary. To my knowledge, and please correct me if I’m wrong, none of the researchers who’ve been the target of a pre-registered replication have produced a pre-registered study from their own lab showing that they are capable of getting the effect, even if others are not. Those of us standing on the sidelines watching things play out are constantly surprised that the one piece of information that might help—evidence that the original authors are capable of reproducing their own effects (in a pre-registered study)—is never offered up. So, get on with it. Seriously. Everyone. Continue reading