JRP Editor’s Report: Changes in Editorial Policy at the Journal of Research in Personality

Richard Lucas

As many P readers will know, the Journal of Research in Personality (JRP) recently announced some new policies that affect how papers are evaluated and selected for publication. These policies were, in part, a reaction to the current “crisis in confidence” surrounding psychological research. That was not the only motivation for the changes, however, and we hope that the new policies lead to further improvement in the already strong papers that JRP has been publishing. Of course, not everyone will agree that change is needed, and even among those who believe that some form of change is appropriate, disagreements about the precise nature of those changes likely exist. Because of this, I thought I’d take this opportunity to explain some of the choices we made.

In our August 2013 editorial, Brent Donnellan and I laid out three new policy changes that will affect submissions, at least for the near future. In that editorial, we acknowledged that there are many possible responses to the controversies the field faces. Indeed, we expect that as the field as a whole considers issues of transparency, replicability, and reproducibility more fully, further changes will be needed. It also seemed, however, that after much discussion of the problems that exist, few journals were taking concrete steps to address them. We therefore thought that JRP could take the lead in developing policies that address some of the clearer issues. At the same time, we know that change is difficult, and that policies that were too disruptive or too inconvenient could negatively affect authors’ perceptions of the journal and ultimately backfire. Thus, we focused on policies that were relatively uncontroversial and that could be implemented without too much additional burden for authors, reviewers, or editors.

The first policy will likely have the largest impact on authors, as it requires some additions to submitted papers and may exclude some papers from consideration. In short, we now take power and precision much more seriously in our initial evaluation of submitted papers, and we require authors to discuss power and precision explicitly in their papers. Specifically, we ask that authors consider and describe what size effect seems plausible for their study, justify this expectation by referring to existing literature, and discuss the power and precision of their study in relation to this expected effect. We realize that for many areas of research, the size of the expected effect may be difficult to predict with any precision. In those cases, authors can still use evidence about typical effect sizes within personality psychology to guide their decisions about sample size (this usually means that they should expect relatively small effects and, as a result, recruit relatively large samples of participants). We also acknowledge that some research is very difficult to conduct, and large samples may therefore be difficult to obtain in these areas; such difficulties can be factored into publication decisions. There are thus no hard and fast rules about the precision a study must achieve; we simply ask that authors provide a realistic discussion of these issues so readers will be aware of any related limitations.
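To make the sample-size arithmetic concrete, the sketch below (a minimal Python example using scipy; the Fisher z approximation, the expected effect of r = .20, and the 80% power target are purely illustrative assumptions, not journal requirements) approximates the number of participants needed to detect a correlation of a given size:

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size needed to detect a correlation of size r
    with a two-tailed test at the given alpha and power (Fisher z method)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-tailed critical value (1.96 for alpha = .05)
    z_beta = norm.ppf(power)           # normal quantile for the desired power
    z_r = atanh(r)                     # Fisher z transform of the expected correlation
    return ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

# Purely illustrative values: small effects require large samples.
print(n_for_correlation(0.20))  # -> 194
print(n_for_correlation(0.10))  # -> 783
```

The point of the illustration is simply that the small effects typical of personality research tend to demand samples in the hundreds, which is the kind of reasoning we hope authors will make explicit.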

Although we will consider the context of a paper when evaluating its power and precision, a study that is seriously underpowered (without a compelling reason why a larger sample could not be recruited) will often be rejected without review at JRP (though such papers can be resubmitted if additional data are collected). We understand that this may exclude from consideration some studies that might otherwise have been acceptable for publication, but we think this is a critical issue with which the field needs to contend. We hope that this policy will be adopted by other journals and that considerations of power and precision will factor more heavily both in the early stages of the research process and when papers are evaluated for publication. To further the latter goal, we also encourage authors to include confidence intervals in their results whenever possible.
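For authors unsure how to construct such intervals, here is a minimal sketch (plain Python, standard library only; the observed r = .25 and N = 200 are made-up numbers for illustration) of a 95% confidence interval for a correlation using the Fisher z method:

```python
from math import atanh, tanh, sqrt

def correlation_ci(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a Pearson correlation (Fisher z method)."""
    z = atanh(r)              # map r onto the approximately normal z scale
    se = 1 / sqrt(n - 3)      # standard error of z
    return tanh(z - z_crit * se), tanh(z + z_crit * se)  # back to the r scale

# Made-up numbers for illustration: r = .25 observed with N = 200.
print(correlation_ci(0.25, 200))  # -> roughly (0.12, 0.38)
```

An interval of this kind conveys the precision of an estimate far more directly than a p value alone, which is why we encourage reporting it.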

The second policy that we instituted will likely have the least impact on authors. It concerns authors’ willingness to share their data with interested researchers. Specifically, at the time of submission, authors are asked to indicate whether they will be willing to share their data with researchers who wish to verify the published findings. Authors who are not able or willing to do so will be asked to provide an explanation, which the editors will evaluate before sending the paper out for review. Note that this policy does not deviate from existing guidelines for sharing data (such as those from the APA); it simply asks authors to affirm that they will follow these guidelines. Our goal is to remind authors that such data sharing is an essential part of the research process, and perhaps to encourage more of it by prompting authors to consider, at the time of submission, whether their data are in a format that allows for easy sharing. In addition, because the willingness to share data is so important, authors who simply refuse to do so (without a compelling explanation) will not be able to publish in JRP.

Finally, our third policy should have a very positive impact on authors, as we now provide a new mechanism for submitting papers that replicate studies previously published in JRP. Many have argued that more replication is needed within the field, and the editors of JRP agree. We also know, however, that replication studies are often not valued as highly as original reports, and authors may therefore have little motivation to conduct them. We have thus created a simplified review process meant to encourage replication of research previously published in JRP. Specifically, papers presenting the results of a study or studies that replicate a paper published in JRP within the past five years will receive an abbreviated review that focuses solely on the technical merits. In other words, reviewers will be asked not to consider the importance of the question or the plausibility of the hypothesis, on the rationale that these issues were already considered in the review of the initial paper. The policy is restricted to papers published within the past five years because standards for importance and interest value change over time; replications of older papers will still be considered, but the importance threshold will play more of a role in their evaluation. Authors who wish to conduct replication studies should attempt exact replications when possible, and all authors should aim for especially high levels of power and precision in their replication attempts. Authors who have questions about whether a planned replication is likely to be acceptable are encouraged to contact the editor-in-chief.

Again, the editors of JRP realize that change does not come without some negative consequences. However, we have tried to keep author burden and the fairness of the process in mind when developing these new policies, and we will be closely monitoring submissions to see whether the changes are beneficial overall. Of course, these policies are subject to revision if we find that they have unintended consequences or if further modifications are needed. Our goal is to handle any concerns about these policies, as they apply to specific papers, in a fair and flexible manner, so authors who have concerns are certainly encouraged to contact the editors. Finally, if you agree with the importance of these initial steps, please consider sending more of your papers to the journal as a way of supporting our attempts at improving the transparency, reproducibility, and replicability of personality research.