President's Column

Daniel Ozer

UC Riverside

Several months have passed since our St. Louis conference, and my impressions and recollections of that meeting leave me optimistic about the future of personality psychology. I was especially encouraged to see many younger (and, to me, unfamiliar) researchers participating alongside colleagues I have known for decades. I'm hoping to see many of you again at ECP18 in Timisoara next July.

We are now a year further into the great replication crisis that I addressed briefly in my first P column, and I have recently taken stock of what I have learned from it. My takeaway, to the extent anyone might care about the observations of a late-career personality psychologist, is as follows:

1. I have grossly underestimated what counts as a sufficient sample. I concluded early in my career that correlations (and comparisons between two independent means) needed to be based on a sample of at least 100. I recall many colleagues who thought, at the time, that this was too high. I now regard it as far too low. My new lower bound for taking a bivariate effect seriously is N = 200 (see the first sketch after this list).

2. Taking an effect seriously doesn't mean "believing" it. Multiple successful replications, only some of which may be "conceptual," are required to alter my beliefs if I am forced into a binary logic of "yes" or "no" with respect to the effect, though that binary logic is to be resisted whenever feasible.

3. What should distinguish a successful replication from a failure to replicate is far from clear. Again, the binary logic creates problems. Significance (or lack thereof) in a second study seems a pretty flimsy basis for drawing a conclusion about whether the first study "replicates." One might just as well ask whether the second study's effect is significantly different from the effect of the first (see the second sketch after this list). Significance testing is part of what led us into this epistemic quandary, and I see little hope that it will provide a way out.

4. However interpreted, replications are like subjects: more are better (exceeding the point of diminishing returns here is not among our current problems). Given limited journal pages, perhaps we can create space for publishing replications by limiting the theoretical elaboration of results to just those cases that are replications. So here's a novel and probably bad idea: only replication studies get to have Discussion sections.

5. A failure to replicate, however defined, is not a judgment about the skills or ethics of any of the authors. It is testimony to the multiple probabilistic causal pathways that create the phenomena we care about. We have much in common with naturalists wandering through unknown continents, issuing apparently conflicting reports of all they have encountered.
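A rough numerical aside on point 1 (my own sketch, not part of the original argument; the values r = .20, N = 100, and N = 200 are purely illustrative): the 95% confidence interval for an observed correlation, computed with the standard Fisher z-transform, shows how poorly pinned down an effect is at N = 100.

    # Width of the 95% CI for an observed correlation r at sample size n,
    # using the Fisher z-transform (a textbook approximation).
    import math

    def r_confidence_interval(r, n, z_crit=1.96):
        z = math.atanh(r)                    # transform r to z (approx. normal)
        se = 1.0 / math.sqrt(n - 3)          # standard error of z
        lo, hi = z - z_crit * se, z + z_crit * se
        return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

    for n in (100, 200):
        lo, hi = r_confidence_interval(0.20, n)
        print(f"N = {n}: r = .20, 95% CI [{lo:.2f}, {hi:.2f}]")

    # N = 100: r = .20, 95% CI [0.00, 0.38]
    # N = 200: r = .20, 95% CI [0.06, 0.33]

At N = 100, an observed r of .20 is consistent with anything from a null effect to a substantial one; N = 200 narrows that interval considerably.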
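And a companion sketch for point 3 (again my own illustration, with hypothetical numbers): the question "is the second study's effect significantly different from the first's?" has a standard answer for two independent correlations, namely comparing their Fisher z-transforms.

    # Two-sided test that two independent Pearson correlations are equal,
    # via the difference of their Fisher z-transforms.
    import math
    from statistics import NormalDist

    def compare_correlations(r1, n1, r2, n2):
        z1, z2 = math.atanh(r1), math.atanh(r2)
        se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
        z = (z1 - z2) / se
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p

    # Hypothetical: original study r = .30, N = 200; replication r = .10, N = 200.
    z, p = compare_correlations(0.30, 200, 0.10, 200)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.08, p = 0.038

By this criterion the two studies genuinely disagree, even though the replication's r = .10 would not itself be "significant" at N = 200; asking whether the second study is significant and asking whether it differs from the first are not the same question.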

One final and unrelated comment: I want to thank the entire membership of ARP for the privilege of serving as your President. I trust that our next President, Dan McAdams, will enjoy the same wonderful support from the membership that I experienced.