Do Personality Traits and Values Form a Coherent Whole? – Scott McGreal (Unique—Like Everybody Else)

Personality psychologists are currently attempting to create more comprehensive theories that integrate many different components of personality. According to Life History Theory, there is a general factor of personality that combines all personality traits in a specific way. However, attempting to integrate personal values with traits poses problems for this model. read more

Statistics as math, statistics as tools – Sanjay Srivastava (The Hardest Science)

How do you think about statistical methods in science? Are statistics a matter of math and logic? Or are they a useful tool? Over time, I have noticed that these seem to be two implicit frames for thinking about statistics. Both are useful, but they tend to be more common in different research communities. And I think sometimes conversations get off track when people are using different ones.

Frame 1 is statistics as math and logic. I think many statisticians and quantitative psychologists work under this frame. Their goal is to understand statistical methods, and statistics are based on math and logic. In math and logic, things are absolute and provable. (Even in statistics, which deals with uncertainty, the uncertainty is almost always quantifiable, and thus subject to analysis.) In math and logic, exceptions and boundary cases are important. If I say “All A are B” and you disagree with me, all you need to do is show me one instance of an A that is not B and you’re done. Continue reading
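As a side note on the counterexample point (my gloss, not part of the original post), the logic can be written out explicitly: denying a universal claim requires only one exception,

\[
\neg\,\forall x\,\bigl(A(x) \rightarrow B(x)\bigr) \;\Longleftrightarrow\; \exists x\,\bigl(A(x) \wedge \neg B(x)\bigr),
\]

so to rebut “All A are B” it suffices to produce a single A that is not B.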

why i am optimistic – Simine Vazire (sometimes i'm wrong)

i was recently engaged in a totally civil disagreement on facebook* about whether things are changing in our field in response to all of the concerns about the robustness of our scientific findings, and the integrity of our methods. i said i was optimistic. someone asked me if i could elaborate on why i am optimistic. this is for you, brent roberts.
1. the obvious reasons
the fact that the open science framework exists, that the reproducibility project exists, that social psychology did a special issue on replications, that there have been workshops on reproducibility at NSF, that SPSP had a taskforce on best practices, that APS had a taskforce that led to changes in submission guidelines and procedures at psych science, that psych science commissioned and published an article and tutorials on effect estimation approaches by geoff cumming, that perspectives coordinates and publishes registered replication reports, and that every conference i've been to for at least the last two years has included some presentations on replicability.  everyone already knows all of this, but it's easy to take it for granted.  i'm pretty sure that if i had told someone in 2007 that all of this would happen in the next seven years, they would not have believed me.
2. i am not a pariah
i am shocked and amazed that people still tolerate me after i spew my not-always-entirely-well-thought-out opinions on my blog. when i started blogging i never thought that 1,500 people would see my posts. i'd like to think this has something to do with my biting sense of humor, but i know it's because there is a huge amount of interest in this topic. other blogs, like sanjay srivastava's and daniel lakens's, get tons of readers, i'm sure. betsy levy paluck has 1,500 followers on twitter. dorothy bishop has 16,000. Continue reading

When Being Nice Gets in the Way of Being Smart – Scott McGreal (Unique—Like Everybody Else)

The relation between intelligence and the personality trait agreeableness presents a puzzle. Agreeableness is unrelated to IQ, yet lay people tend to associate agreeableness with lower intelligence, even though it is a desirable quality. A new study found that agreeable people choke under pressure, suggesting that being too nice can be a liability at times. read more

(Hopefully) The Last Thing We Write About Warm Water and Loneliness – Brent Donnellan (The Trait-State Continuum)

Our rejoinder to the Bargh and Shalev response to our replication studies has been accepted for publication after peer review. The Bargh and Shalev response is available here. A PDF of our rejoinder is available here. Here are the highlights of our piece:
  1. An inspection of the size of the correlations from their three new studies suggests their new effect size estimates are closer to our estimates than to those reported in their 2012 paper. The new studies all used larger sample sizes than the original studies.
  2. We have some concerns about the validity of the Physical Warmth Extraction Index, and we believe the temperature item is the most direct test of their hypotheses. If you combine all available data and apply a random-effects meta-analytic model, the overall correlation is .017 (95% CI = -.02 to .06, based on 18 studies involving 5,285 participants); a minimal sketch of this kind of pooling follows this excerpt.
  3. We still have no idea why 90% of the participants in their Study 1a responded that they took less than 1 shower/bath per week. No other study using a sample from the United States even comes close to this distribution. Continue reading
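The random-effects pooling described in point 2 can be sketched as follows. This is a minimal illustration, assuming the correlations are combined on Fisher's z scale with a DerSimonian-Laird estimate of the between-study variance; the study-level correlations and sample sizes below are hypothetical placeholders, not the actual 18 studies.

```python
import numpy as np

def random_effects_meta(r, n):
    """Pool correlations with a DerSimonian-Laird random-effects model.

    r : study-level correlations
    n : study-level sample sizes
    Returns the pooled correlation and its 95% CI, back-transformed from Fisher's z.
    """
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)            # Fisher's z transform of each correlation
    v = 1.0 / (n - 3.0)          # sampling variance of each z
    w = 1.0 / v                  # fixed-effect weights

    # DerSimonian-Laird estimate of between-study variance (tau^2)
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(r) - 1)) / c)

    # Random-effects weights, pooled estimate, and 95% CI on the z scale
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
    return np.tanh(z_re), (np.tanh(lo), np.tanh(hi))

# Hypothetical inputs; the actual analysis pooled 18 studies and 5,285 participants.
rs = [0.02, -0.01, 0.05, 0.00]
ns = [250, 400, 180, 300]
pooled, ci = random_effects_meta(rs, ns)
print(f"pooled r = {pooled:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```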

Popper on direct replication, tacit knowledge, and theory construction – Sanjay Srivastava (The Hardest Science)

I’ve quoted some of this before, but it was buried in a long post and it’s worth quoting at greater length and on its own. It succinctly lays out Popper’s views on several issues relevant to present-day discussions of replication in science. Specifically, he makes clear that (1) scientists should replicate their own experiments; (2) scientists should be able to instruct other experts how to reproduce their experiments and get the same results; and (3) establishing the reproducibility of experiments (“direct replication” in the parlance of our times) is a necessary precursor for all the other things you do to construct and test theories.

Kant was perhaps the first to realize that the objectivity of scientific statements is closely connected with the construction of theories — with the use of hypotheses and universal statements. Only when certain events recur in accordance with rules or regularities, as is the case with repeatable experiments, can our observations be tested — in principle — by anyone. We do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them. Only by such repetitions can we convince ourselves that we are not dealing with a mere isolated ‘coincidence’, but with events which, on account of their regularity and reproducibility, are in principle inter-subjectively testable.

Every experimental physicist knows those surprising and inexplicable apparent ‘effects’ which in his laboratory can perhaps even be reproduced for some time, but which finally disappear without trace. Of course, no physicist would say in such a case that he had made a scientific discovery (though he might try to rearrange his experiments so as to make the effect reproducible). Indeed the scientifically significant physical effect may be defined as that which can be regularly reproduced by anyone who carries out the appropriate experiment in the way prescribed. No serious physicist would offer for publication, as a scientific discovery, any such ‘occult effect,’ as I propose to call it — one for whose reproduction he could give no instructions. The ‘discovery’ would be only too soon rejected as chimerical, simply because attempts to test it would lead to negative results. (It follows that any controversy over the question whether events which are in principle unrepeatable and unique ever do occur cannot be decided by science: it would be a metaphysical controversy.) Continue reading

(Mis)Interpreting Confidence Intervals – Ryne Sherman (Sherman's Head)

In a recent paper Hoekstra, Morey, Rouder, & Wagenmakers argued that confidence intervals are just as prone to misinterpretation as traditional p-values (for a nice summary, see this blog post). They draw this conclusion based on responses to six questions from 442 bachelor students, 34 master students, and 120 researchers (PhD students and faculty). The six questions were of True / False format and are shown here (this is taken directly from their Appendix, please don’t sue me; if I am breaking the law I will remove this without hesitation). Hoekstra et al. note that all six statements are false and therefore the correct response is to mark each as False. [1, 2] The results were quite disturbing. The average number of statements marked True, across all three groups, was 3.51 (58.5%). Particularly disturbing is the fact that statement #3 was endorsed by 73%, 68%, and 86% of bachelor students, master students, and researchers, respectively. Such a finding demonstrates that people often use confidence intervals simply to revert back to NHST (i.e., if the CI does not contain zero, reject the null). Continue reading
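As a rough illustration of what a 95% confidence interval does and does not mean, here is a small simulation sketch (mine, not from the Hoekstra et al. paper). The "95%" describes the long-run proportion of intervals, constructed this way over repeated samples, that contain the true mean; it is not a 95% probability that any single computed interval contains it, and it says nothing by itself about whether the null hypothesis is likely false.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sd, n, reps = 0.0, 1.0, 30, 10_000
tcrit = stats.t.ppf(0.975, df=n - 1)   # critical value for a two-sided 95% t-interval

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sd, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = m - tcrit * se, m + tcrit * se
    covered += (lo <= true_mean <= hi)

# Roughly 95% of the 10,000 intervals contain the true mean. That long-run
# coverage is the property the procedure guarantees; any one computed interval
# either contains the true mean or it does not.
print(f"coverage over {reps} repeated samples: {covered / reps:.3f}")
```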