Regional Differences in Personality: Surprising Findings – Scott A. McGreal MSc. (Unique—Like Everybody Else)

Both individual personality traits and the geographic region where one lives are correlated with important social outcomes. Research has found that personality traits are also geographically clustered in ways that correlate with these same outcomes. Some of the results are surprising, as the individual-level and societal-level correlates of personality can differ strikingly.

ARP Highlights (from a graduate student’s perspective) – Carol Tweten (Person X Situation)

I just got back from St. Louis, where I attended the biennial conference of the Association for Research in Personality for the first time. What a great conference! I highly recommend this conference for any graduate student interested in the study of personality. Here’s what some of the most prominent personality researchers had to say*:

Rich Lucas (Michigan State): Sample sizes in the Journal of Research in Personality (JRP) have increased since 2010, with a particularly large jump in 2014. Fortunately, this increase does not appear to coincide with less rigorous methods or less diverse samples. JRP welcomes submissions of replications and even has a special issue of replications coming up, in addition to a special issue on intraindividual personality change (YES!).

Simine Vazire (U.C. Davis): Data cleaning involves a lot of seemingly minute decisions (e.g., drop this item, control for that variable), but together they can result in over-fitting the analyses to the data. Continue reading
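Vazire’s point about minute cleaning decisions is easy to make concrete. Here is a minimal sketch: three binary cleaning choices define eight analysis paths over the same null data, and cherry-picking the best-looking path inflates the false-positive rate. The decision rules and thresholds below are chosen purely for illustration.

```python
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 2000, 100
any_sig = 0

for _ in range(n_sims):
    x = rng.normal(size=n)  # predictor
    y = rng.normal(size=n)  # outcome, truly unrelated to x
    z = rng.normal(size=n)  # a covariate one might "control for"
    p_values = []
    # Three binary cleaning decisions -> 2**3 = 8 analysis paths.
    for drop_outliers, winsorize, control_z in itertools.product([False, True], repeat=3):
        xi, yi, zi = x, y, z
        if drop_outliers:
            keep = np.abs(yi) < 2
            xi, yi, zi = xi[keep], yi[keep], zi[keep]
        if winsorize:
            yi = np.clip(yi, np.quantile(yi, 0.05), np.quantile(yi, 0.95))
        if control_z:
            # Crude "control": residualize the outcome on the covariate.
            yi = yi - zi * (np.cov(yi, zi)[0, 1] / np.var(zi, ddof=1))
        p_values.append(stats.pearsonr(xi, yi)[1])
    any_sig += min(p_values) < 0.05

# Nominal rate is .05; picking the best of the 8 paths lands noticeably higher.
print(f"At least one path 'significant': {any_sig / n_sims:.3f}")
```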

Replicability in personality psychology, and the symbiosis between cumulative science and reproducible science – Sanjay Srivastava (The Hardest Science)

There is apparently an idea going around that personality psychologists are sitting on the sidelines having a moment of schadenfreude during the whole social psychology Replicability Crisis thing. Not true. The Association for Research in Personality conference just wrapped up in St. Louis. It was a great conference, with lots of terrific research. (Highlight: watching three of my students give kickass presentations.) And the ongoing scientific discussion about openness and reproducibility had a definite, noticeable effect on the program. The most obvious influence was the (packed) opening session on reproducibility. First, Rich Lucas talked about the effects of JRP’s recent policy of requiring authors to explicitly talk about power and sample size decisions. The policy has had a noticeable impact on sample sizes of published papers, without major side effects like tilting toward college samples or cheap self-report measures. Second, Simine Vazire talked about the particular challenges of addressing openness and replicability in personality psychology. Continue reading
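For context, the kind of power and sample-size justification such a policy asks for can be as simple as the sketch below: a generic two-group power calculation, where the target effect size, alpha, and power are illustrative assumptions, not JRP’s actual requirements.

```python
# A minimal sketch of a power-based sample-size justification for a
# two-group comparison; all target values here are illustrative choices.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,          # smallest effect of interest (Cohen's d)
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"n per group for d = 0.3 at 80% power: {n_per_group:.1f}")  # ~175.4; round up
```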

why p = .048 should be rare (and why this feels counterintuitive) – Simine Vazire (sometimes i'm wrong)

sometimes i read a paper with three studies, and the key results have p-values of .04, .03 and .045.  and i feel like a jerk for not believing the results.  sometimes i am skeptical even when i see a single p-value of .04.*  is that fair? mickey inzlicht asked a similar question on twitter a few weeks ago, and daniel lakens wrote an excellent blog post in response.  i just sat on my ass.  this is the highly-non-technical result of all that ass-sitting. we are used to thinking about the null distribution.  and for good reason. Continue reading
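a quick simulation makes the intuition concrete (the effect size and per-group n below are just illustrative assumptions): under the null, p-values are uniform, so p between .04 and .05 shows up about 1% of the time; given a real effect, very small p-values dominate and p just under .05 stays rare.

```python
# Simulating the distribution of p-values with and without a true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n = 20_000, 50

def sim_pvals(effect):
    ps = np.empty(n_sims)
    for i in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        ps[i] = stats.ttest_ind(a, b).pvalue
    return ps

for label, effect in [("null (d = 0)", 0.0), ("real effect (d = 0.5)", 0.5)]:
    ps = sim_pvals(effect)
    print(f"{label}: P(.04 < p < .05) = {np.mean((ps > .04) & (ps < .05)):.3f}, "
          f"P(p < .01) = {np.mean(ps < .01):.3f}")
# Under the null both are ~.01; with a real effect, p < .01 is common
# while p just under .05 stays rare -- hence the skepticism about .048.
```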

Alpha and Correlated Item Residuals – Brent Donnellan (The Trait-State Continuum)

Subtitle: Is alpha lame? My last post was kind of stupid, as Sanjay tweeted. I selected the non-careless responders in a way that guarantees a more unidimensional result. A potentially better approach is to use a different set of scales to identify the non-careless responders and then repeat the analyses. My broader point still stands: I think it is useful to look for ways to screen existing datasets, given the literature suggesting that a) careless responders are present in many datasets, and b) careless responders often distort substantive results (see the references and additional recommendations in the original post).

Another interesting criticism arose from my offhand reporting of alpha coefficients. Matthew Hankins (via Twitter) rightly pointed out that it is a mistake to compute alpha in light of the structural analyses I conducted. I favored a particular model for the structure of the RSE that specifies a large number of correlated item residuals between the negatively keyed and positively keyed items. In the presence of correlated residuals, alpha is either an underestimate or an overestimate of reliability/internal consistency (see Raykov, 2001, building on Zimmerman, 1972). [Note: I knew reporting alpha was a technical mistake, but I thought it was one of those minor methodological sins, akin to dropping an f-bomb every now and then in real life. Moreover, I am aware of the alpha criticism literature (and the alternatives, like omega). When blogging I fell back on the heuristic that alpha is a lower bound, but this does not hold in the presence of correlated residuals (see again Raykov, 2001).] Continue reading
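Raykov’s point can be seen in a short simulation: generate items from a one-factor model plus a method factor on the negatively keyed items (one simple way to induce correlated residuals), then compare coefficient alpha with the share of total-score variance actually due to the trait. The loadings and variances below are illustrative assumptions, not estimates from the RSE.

```python
# Alpha under correlated residuals: items load on a trait, and the
# negatively keyed items also share a method factor.
import numpy as np

rng = np.random.default_rng(42)
n, k = 50_000, 10
lam = np.full(k, 0.7)    # trait loadings
gam = np.zeros(k)
gam[5:] = 0.5            # method factor on the 5 negatively keyed items
theta = np.full(k, 0.5)  # unique (residual) variances

f = rng.normal(size=n)                       # substantive trait
m = rng.normal(size=n)                       # method factor, orthogonal to f
e = rng.normal(size=(n, k)) * np.sqrt(theta)
x = np.outer(f, lam) + np.outer(m, gam) + e  # n x k item responses

total = x.sum(axis=1)
alpha = k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# Proportion of total-score variance actually due to the trait:
trait_rel = lam.sum() ** 2 / (lam.sum() ** 2 + gam.sum() ** 2 + theta.sum())

print(f"alpha = {alpha:.3f}, trait-based reliability = {trait_rel:.3f}")
# Here alpha (~.91) overshoots the trait-based value (~.81): the method
# variance inflates inter-item covariances, so the "alpha is a lower
# bound" heuristic fails once residuals are correlated.
```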

Guest Post: Excuses for Data Peeking – Simine Vazire (sometimes i'm wrong)

Guest Post by Don Moore and Liz Tenney

Good research practice says you pre-specify your sample size and wait until the data are in before you analyze them. Is it ever okay to peek early? This question stimulated one of the more interesting discussions that we (Liz Tenney and Don Moore) had in our lab group at Berkeley. Here’s the way the debate unfolded. Don staked out what he thought was the methodological high ground and argued that we shouldn’t peek. He pointed out the perils of peeking: if the early results look encouraging, you might be tempted to stop early and declare victory. We all know why this is a mortal sin: selecting your sample size conditional on obtaining the hypothesized result increases the chance of a false positive. But if you don’t stop and the effect weakens in the full sample, you will be haunted by the counterfactual that you might have stopped. The part of you that was tempted to stop early won’t be able to help wondering about some hidden moderator. Maybe the students were more stressed out and distracted as we approached the end of the semester? Maybe the waning sunlight affected participants’ circadian rhythms? Continue reading
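The mortal sin is easy to quantify with a small simulation: test after every batch of participants and stop at the first p < .05, and the false-positive rate climbs well past the nominal 5%. The batch size and number of looks below are illustrative assumptions.

```python
# Optional stopping under the null: peek after every batch of 20
# participants per group and stop at the first p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, batch, looks = 10_000, 20, 5  # peeks at n = 20, 40, 60, 80, 100 per group
false_pos = 0

for _ in range(n_sims):
    a = rng.normal(size=batch * looks)  # group 1: no true difference
    b = rng.normal(size=batch * looks)  # group 2
    for look in range(1, looks + 1):
        n = batch * look
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_pos += 1
            break

print(f"False-positive rate with peeking: {false_pos / n_sims:.3f}")  # ~.14, not .05
```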

Careless Responders and Factor Structures – Brent Donnellan (The Trait-State Continuum)

Warning: This post will bore most people. Read at your own risk. I also linked to some articles behind paywalls. Sorry! I have a couple of research obsessions that interest me more than they should. This post is about two in particular: 1) the factor structure of the Rosenberg Self-Esteem Scale (RSE); and 2) the impact that careless responding can have on the psychometric properties of measures. Like I said, this is a boring post. I worked at the same institution as Neal Schmitt for about a decade, and in 1985 he wrote a paper (with Daniel Stults) illustrating how careless respondents can contribute to “artifact” factors defined by negatively keyed items (see also Woods, 2006). One implication of Neal’s paper is that careless responders (e.g., people who mark a “1” for all items regardless of the content) confound the evaluation of the dimensionality of scales that include both positively and negatively keyed items. This matters for empirical research concerning the factor structure of the RSE, because the RSE is perfectly balanced: it has 5 positively keyed items and 5 negatively keyed items. Continue reading
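The Schmitt and Stults artifact is simple to reproduce in a toy simulation: a perfectly unidimensional, balanced scale starts to look two-dimensional once straight-line responders are mixed in. The scale layout and the 10% careless rate below are illustrative assumptions.

```python
# A toy version of the Schmitt & Stults (1985) demonstration: a balanced,
# unidimensional 10-item scale plus straight-line "1" responders.
import numpy as np

rng = np.random.default_rng(3)
n, k = 1000, 10
f = rng.normal(size=n)                         # the single true trait
e = rng.normal(scale=0.6, size=(n, k))
raw = np.empty((n, k))
raw[:, :5] = 3 + 0.8 * f[:, None] + e[:, :5]   # 5 positively keyed items
raw[:, 5:] = 3 - 0.8 * f[:, None] + e[:, 5:]   # 5 negatively keyed items
raw = np.clip(raw, 1, 5)                       # 1-5 Likert-style bounds

careless = rng.random(n) < 0.10                # 10% mark "1" for every item
raw[careless] = 1.0

scored = raw.copy()
scored[:, 5:] = 6 - scored[:, 5:]              # reverse-score negatively keyed items

for label, data in [("careful only", scored[~careless]), ("full sample", scored)]:
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    print(f"{label}: top eigenvalues = {np.round(eigvals[:3], 2)}")
# The careful subsample shows one dominant factor; adding the careless
# responders produces a second, keying-based "artifact" factor.
```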