Author Archives: Brent Donnellan

Reviewing Papers in 2016 – Brent Donnellan (The Trait-State Continuum)

[Preface: I am a bit worried that this post might be taken the wrong way concerning my ratio of reject to total recommendations. I simply think it is useful information to know about myself. I also think that keeping more detailed records of my reviewing habits was educational and made the reviewing process even more interesting. I suspect others might have the same reaction.] Happy 2017! I collected more detailed data on my reviewing habits in 2016. Previously, I had just kept track of the outlets and the total number of reviews to report on annual evaluation documents. In 2016, I started tracking my recommendations and the outcomes of the papers I reviewed. This was an interesting exercise and I plan to repeat it for 2017. I also have some ideas for extensions that I will outline in this post. Preliminary Data: I provided 51 reviews from 1 Jan 2016 to 29 Dec 2016. Of these 51 reviews, 38 were first-time submissions (74. Continue reading

Updating a Graduate Level Personality Psychology Course – Brent Donnellan (The Trait-State Continuum)

Help! I am teaching graduate Personality Psychology in a few weeks and I want to update my syllabus. I last taught the course in Fall of 2013 so there are new readings and updates to be included. I have some ideas (e.g., the fourth law of behavior genetics piece) but I am suspicious of my ability to identify all of the relevant papers/chapters in the field.  In case you are interested, Brent Roberts maintains a repository of graduate syllabuses (or sittybes?). You can see my reading list from previous years at that location. Here is a little contest… 1. Identify references to recent papers/chapters (publication date 2012 to current) that you think should be included in a graduate personality psychology course. I try to keep the course broad (it is not just traits 101) and I am interested in both substantive and methodological pieces. Preprints are fine if you provide me the complete reference. Continue reading

Alpha and Correlated Item Residuals – Brent Donnellan (The Trait-State Continuum)

Subtitle: Is alpha lame? My last post was kind of stupid, as Sanjay tweeted. I selected the non-careless responders in a way that guarantees a more unidimensional result. A potentially better approach is to use a different set of scales to identify the non-careless responders and repeat the analyses. My broader point still stands in that I think it is useful to look for ways to screen existing datasets given the literature suggesting that: a) careless responders are present in many datasets; and b) careless responders often distort substantive results (see the references and additional recommendations in the original post). Another interesting criticism arose from my off-handed reporting of alpha coefficients. Matthew Hankins (via Twitter) rightly pointed out that it is a mistake to compute alpha in light of the structural analyses I conducted. I favored a particular model for the structure of the RSE that specifies a large number of correlated item residuals between the negatively-keyed and positively-keyed items. In the presence of correlated residuals, alpha can be either an underestimate or an overestimate of reliability/internal consistency (see Raykov, 2001, building on Zimmerman, 1972). [Note: I knew reporting alpha was a technical mistake, but I thought it was one of those minor methodological sins akin to dropping an f-bomb every now and then in real life. Moreover, I am aware of the alpha criticism literature (and the alternatives like omega). When blogging, I assumed that alpha is a lower bound, but this is not true in the presence of correlated residuals (see again Raykov, 2001).] Continue reading
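To make the alpha-versus-reliability point concrete, here is a minimal Python sketch. The numbers are purely illustrative (a hypothetical 6-item, one-factor scale, not the actual RSE model): when some items share positive correlated residuals, Cronbach's alpha computed from the model-implied covariance matrix overshoots the model-based reliability (omega), illustrating the Raykov (2001) point that alpha need not be a lower bound.

```python
# Toy 6-item scale: one common factor (all loadings 0.7), unit item variances,
# plus positive correlated residuals (0.2) among items 4-6. All values are
# illustrative assumptions, not estimates from any real dataset.
k = 6
lam = [0.7] * k

# Residual (co)variance matrix theta.
theta = [[0.0] * k for _ in range(k)]
for i in range(k):
    theta[i][i] = 1 - lam[i] ** 2            # residual variances
for i, j in [(3, 4), (3, 5), (4, 5)]:
    theta[i][j] = theta[j][i] = 0.2          # correlated residuals

# Model-implied covariance matrix: sigma[i][j] = lam_i * lam_j + theta[i][j].
sigma = [[lam[i] * lam[j] + theta[i][j] for j in range(k)] for i in range(k)]

total_var = sum(sum(row) for row in sigma)
trace = sum(sigma[i][i] for i in range(k))

# Model-based reliability (omega): common-factor variance over total variance.
omega = sum(lam) ** 2 / total_var
# Cronbach's alpha computed from the same covariance matrix.
alpha = k / (k - 1) * (1 - trace / total_var)

print(round(omega, 3), round(alpha, 3))  # alpha overshoots omega here
```

Dropping the correlated-residual terms makes alpha a slight underestimate of omega again, which is the usual lower-bound intuition.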

Careless Responders and Factor Structures – Brent Donnellan (The Trait-State Continuum)

Warning: This post will bore most people. Read at your own risk. I also linked to some articles behind paywalls. Sorry! I have a couple of research obsessions that interest me more than they should. This post is about two in particular: 1) the factor structure of the Rosenberg Self-Esteem Scale (RSE); and 2) the impact that careless responding can have on the psychometric properties of measures. Like I said, this is a boring post. I worked at the same institution as Neal Schmitt for about a decade, and he once wrote a paper in 1985 (with Daniel Stults) illustrating how careless respondents can contribute to “artifact” factors defined by negatively keyed items (see also Woods, 2006). One implication of Neal’s paper is that careless responders (e.g., people who mark a “1” for all items regardless of the content) confound the evaluation of the dimensionality of scales that include both positively and negatively keyed items. This matters for empirical research concerning the factor structure of the RSE. The RSE is perfectly balanced (it has 5 positively-keyed items and 5 negatively-keyed items). Continue reading
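The Schmitt and Stults point can be sketched in a few lines of Python. This is a toy simulation under assumed parameters (not their actual study): for attentive responders a single latent factor makes oppositely keyed raw items correlate negatively, while a block of all-“1” careless responders answers both item types identically, pulling that correlation toward zero and mimicking a shared method signal across keying directions.

```python
import random

random.seed(1)

def corr(x, y):
    """Pearson correlation computed by hand (stdlib only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def clip(v):
    """Force a response onto a 1-5 Likert scale."""
    return min(5, max(1, round(v)))

# Attentive responders: one latent factor drives both items, so a positively
# keyed and a negatively keyed item correlate negatively on the raw scale.
pos, neg = [], []
for _ in range(900):
    f = random.gauss(0, 1)
    pos.append(clip(3 + 0.8 * f + random.gauss(0, 0.6)))
    neg.append(clip(3 - 0.8 * f + random.gauss(0, 0.6)))

r_attentive = corr(pos, neg)

# Careless responders: mark "1" for every item regardless of content.
pos_mixed = pos + [1] * 100
neg_mixed = neg + [1] * 100
r_mixed = corr(pos_mixed, neg_mixed)

# r_mixed is pulled toward zero: the (1, 1) responses act like a shared
# "method" signal across oppositely keyed items, distorting the structure.
print(round(r_attentive, 2), round(r_mixed, 2))
```

In a factor analysis of a balanced scale like the RSE, this same distortion is what lets careless responders prop up an apparent second factor defined by the negatively keyed items.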

A Partial Defense of the Pete Rose Rule – Brent Donnellan (The Trait-State Continuum)

I tweeted this yesterday: Let’s adopt a Pete Rose Rule for fakers = banned for life. Nothing questionable about fraud. Jobs and funds are too scarce for 2nd chances. My initial thought was that people who have been shown by a preponderance of the evidence to have passed off faked datasets as legitimate should be banned from receiving grants and publishing papers for life. [Pete Rose was a baseball player and manager in professional baseball who bet on games while he was a manager. This made him permanently ineligible to participate in the activities of professional baseball.] Nick Brown didn’t like this suggestion and provided a thoughtful response on his blog. My post is an attempt to defend my initial proposal. I don’t want to hijack his comments with a lengthy rejoinder. You can get banned for life from the Olympics for doping, so I don’t think it is beyond the pale to make the same suggestion for science. As always, I reserve the right to change my mind in the future! At the outset, I agree with his suggestion that it is not 100% feasible given that there is no overall international governing body for scientific research like there is for professional sports or the Olympics. Continue reading

Replication Project in Personality Psychology – Call for Submissions – Brent Donnellan (The Trait-State Continuum)

Richard Lucas and I are editing a special issue of the Journal of Research in Personality dedicated to replication (Click here for complete details). This blog post describes the general process and a few of my random thoughts on the special issue. These are my thoughts and Rich may or may not share my views.  I also want to acknowledge that there are multiple ways of doing replication special issues and we have no illusions that our approach is ideal or uncontroversial.  These kinds of efforts are part of an evolving “conversation” in the field about replication efforts and experimentation should be tolerated.  I also want to make it clear that JRP has been open to replication studies for several years.  The point of the special issue is to actively encourage replication studies and try something new with a variant of pre-registration. What is the General Process? We modeled the call for papers on procedures others have used with replication special issues and registered reports (e.g., the special issue of Social Psychology, the Registered Replication Reports at PoPS).  Here is the gist:

My View on the Connection between Theory and Direct Replication – Brent Donnellan (The Trait-State Continuum)

I loved Simine’s blog post on flukiness and I don’t want to hijack the comments section of her blog with my own diatribe. So here it goes… I want to comment on the suggestion that researchers should propose an alternative theory to conduct a useful or meaningful close/exact/direct replication. In practice, I think most replicators draw on the same theory that original authors used for the original study.  Moreover, I worry that people making this argument (or even more extreme variants) sometimes get pretty darn close to equating a theory with a sort of religion.  As in, you have to truly believe (deep in your heart) the theory or else the attempt is not valid.  The point of a direct replication is to make sure the results of a particular method are robust and obtainable by independent researchers. My take: Original authors used Theory P to derive Prediction Q (If P then Q). This is the deep structure of the Introduction of their paper.  They then report evidence consistent with Q using a particular Method (M) in the Results section. A replicator might find the theoretical reasoning more or less plausible but mostly just think it is a good idea to evaluate whether repeating M yields the same result (especially if the original study was underpowered).* The point of the replication is to redo M (and ideally improve on it using a larger N to generate more precise parameter estimates) to test Prediction Q. Continue reading