Our rejoinder to the Bargh and Shalev response to our replication studies has been accepted for publication after peer review. The Bargh and Shalev response is available here. A PDF of our rejoinder is available here. Here are the highlights of our piece:
- The correlations from their three new studies suggest effect size estimates closer to our estimates than to those reported in their 2012 paper. The new studies all used larger sample sizes than the originals.
- We have some concerns about the validity of the Physical Warmth Extraction Index and we believe the temperature item is the most direct test of their hypotheses. If you combine all available data and apply a random-effects meta-analytic model, the overall correlation is .017 (95% CI = -.02 to .06 based on 18 studies involving 5,285 participants).
- We still have no idea why 90% of the participants in their Study 1a responded that they took less than 1 shower/bath per week. No other study using a sample from the United States even comes close to this distribution.
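The random-effects pooling mentioned in the bullets above can be sketched in a few lines. This is a generic DerSimonian–Laird estimator on Fisher-z-transformed correlations, not the authors' actual analysis script, and the inputs below are made-up illustrations rather than the 18 study results.

```python
import math

def pool_correlations(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations.

    Each r is Fisher z-transformed (sampling variance ~ 1/(n - 3)),
    pooled with inverse-variance weights that include the estimated
    between-study variance tau^2, then back-transformed to r.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]      # Fisher z
    vs = [1.0 / (n - 3) for n in ns]                          # within-study variances
    ws = [1.0 / v for v in vs]                                # fixed-effect weights
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))   # heterogeneity Q
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(zs) - 1)) / c)                  # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]                    # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    se = (1.0 / sum(ws_re)) ** 0.5
    return (math.tanh(z_re),                                  # pooled r
            (math.tanh(z_re - 1.96 * se), math.tanh(z_re + 1.96 * se)))

# Illustrative (made-up) inputs: three small studies
r_pooled, (lo, hi) = pool_correlations([0.05, 0.00, 0.10], [200, 150, 300])
```

With the real 18 correlations and sample sizes, this kind of pooling is what yields an overall estimate and its 95% CI; dedicated packages (e.g., metafor in R) do the same computation with more options.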
I’ve quoted some of this before, but it was buried in a long post, and it’s worth quoting at greater length and on its own. It succinctly lays out his views on several issues relevant to present-day discussions of replication in science. Specifically, Popper makes clear that (1) scientists should replicate their own experiments; (2) scientists should be able to instruct other experts how to reproduce their experiments and get the same results; and (3) establishing the reproducibility of experiments (“direct replication” in the parlance of our times) is a necessary precursor for all the other things you do to construct and test theories.
Kant was perhaps the first to realize that the objectivity of scientific statements is closely connected with the construction of theories — with the use of hypotheses and universal statements. Only when certain events recur in accordance with rules or regularities, as is the case with repeatable experiments, can our observations be tested — in principle — by anyone. We do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them. Only by such repetitions can we convince ourselves that we are not dealing with a mere isolated ‘coincidence’, but with events which, on account of their regularity and reproducibility, are in principle inter-subjectively testable.
Every experimental physicist knows those surprising and inexplicable apparent ‘effects’ which in his laboratory can perhaps even be reproduced for some time, but which finally disappear without trace. Of course, no physicist would say in such a case that he had made a scientific discovery (though he might try to rearrange his experiments so as to make the effect reproducible). Indeed the scientifically significant physical effect may be defined as that which can be regularly reproduced by anyone who carries out the appropriate experiment in the way prescribed. No serious physicist would offer for publication, as a scientific discovery, any such ‘occult effect,’ as I propose to call it — one for whose reproduction he could give no instructions. The ‘discovery’ would be only too soon rejected as chimerical, simply because attempts to test it would lead to negative results. (It follows that any controversy over the question whether events which are in principle unrepeatable and unique ever do occur cannot be decided by science: it would be a metaphysical controversy.)
In a recent paper, Hoekstra, Morey, Rouder, & Wagenmakers argued that confidence intervals are just as prone to misinterpretation as traditional p-values (for a nice summary, see this blog post). They draw this conclusion based on responses to six questions from 442 bachelor students, 34 master students, and 120 researchers (PhD students and faculty). The six questions were in True/False format and are shown here (this is taken directly from their Appendix, please don’t sue me; if I am breaking the law I will remove this without hesitation):
Hoekstra et al. note that all six statements are false, and therefore the correct response is to mark each as False. [1, 2] The results were quite disturbing. The average number of statements marked True, across all three groups, was 3.51 (58.5%). Particularly disturbing is the fact that statement #3 was endorsed by 73%, 68%, and 86% of bachelor students, master students, and researchers, respectively. Such a finding suggests that people often use confidence intervals simply to revert back to NHST (i.e., if the CI does not contain zero, reject the null).
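The procedure-level reading that Hoekstra et al. defend can be illustrated with a quick simulation: across repeated samples, roughly 95% of the intervals capture the true mean, but any single realized interval either contains it or it does not. This is a generic sketch using known-sigma z-intervals, not the authors' materials.

```python
import random
import statistics

def ci_coverage(mu=0.0, sigma=1.0, n=30, reps=2000, seed=1):
    """Fraction of nominal 95% z-intervals that contain the true mean.

    The '95%' refers to this long-run capture rate of the procedure,
    not to a probability statement about any one computed interval.
    """
    rng = random.Random(seed)
    half_width = 1.96 * sigma / n ** 0.5          # known-sigma interval half-width
    hits = 0
    for _ in range(reps):
        xbar = statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        if xbar - half_width <= mu <= xbar + half_width:
            hits += 1
    return hits / reps
```

With the defaults this hovers near .95. A specific realized interval, say (0.1, 0.4), does not have a 95% probability of containing the true mean — it simply does or does not, which is exactly why statements like #3 are false.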
Some theorists argue that intelligence and socially desirable personality traits naturally go together. However, lay people associate intelligence with a mix of desirable and undesirable personality traits, such as disagreeableness. The relationship between personality and intelligence may be more complicated than is suggested by grand unitary theories.
A terrifying graph for any PhD student! (source)
It's late October and that means we are squarely in the middle of job season for psychology PhDs (and PhD candidates). I was hired during the 2011-2012 job cycle, and so I recently switched to the evaluation side of the job process. Sitting on this side of the fence I feel incredibly fortunate to have a job: There are a ton of accomplished graduate students and postdocs with strong records, interesting research ideas, and stellar (!!!) letters of recommendation. If the system were running optimally, most of these applicants would land jobs. If the system were running optimally...
Warning: For educational purposes only. I am a personality researcher not a political scientist!
Short Answer: Probably Not.
Longer Answer: There has been a fair bit of discussion about narcissism and the current president (see here, for example). Some of this stemmed from recent claims about his use of first-person pronouns (i.e., a purported greater use of “I-talk”). A big problem with that line of reasoning is that the empirical evidence linking narcissism with I-talk is surprisingly shaky. Thus, Obama’s use of pronouns is probably not very useful when it comes to making inferences about his level of narcissism.
Perhaps a better way to gauge Obama’s level of narcissism is to see how well his personality profile matches a profile typical of someone with Narcissistic Personality Disorder (NPD). The good news is that we have such a personality profile for NPD thanks to Lynam and Widiger (2001). Those researchers asked 12 experts to describe the prototypical case of NPD in terms of the facets of the Five-Factor Model (FFM).
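The profile-matching logic is just a correlation between two vectors of facet scores. Here is a minimal sketch, with made-up numbers standing in for the expert NPD prototype and a target's FFM facet ratings (not Lynam and Widiger's actual values):

```python
def profile_similarity(target, prototype):
    """Pearson correlation between two trait profiles.

    Both lists must order the FFM facets identically. The correlation
    captures similarity in the *pattern* of scores across facets,
    ignoring differences in overall elevation.
    """
    n = len(target)
    mt = sum(target) / n
    mp = sum(prototype) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(target, prototype))
    den = (sum((t - mt) ** 2 for t in target)
           * sum((p - mp) ** 2 for p in prototype)) ** 0.5
    return num / den

# Made-up 1-5 ratings for a handful of facets (illustration only)
npd_prototype = [4.8, 1.5, 4.5, 1.2, 3.9, 2.0]
target_profile = [4.0, 2.0, 4.2, 1.8, 3.5, 2.5]
similarity = profile_similarity(target_profile, npd_prototype)
```

A higher correlation means the target's facet pattern more closely resembles the expert-rated NPD prototype; the full analysis would use all 30 FFM facets.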
We are told that replication is the heart of all sciences. As such, psychology has recently seen numerous calls for direct replication. Sanjay Srivastava says that replication provides an opportunity to falsify an idea (an important concept in science, but rarely done in psychology). Brian Nosek and Jeffrey Spies suggest that replication would help identify “manufactured effects” rapidly. And Brent Roberts proposed a three-step process, the last of which is a direct replication of any unique study reported in the package of studies.
Not everyone thinks that direct replications are useful, though. Andrew Wilson has argued that replication will not save psychology and that better theories are needed. Jason Mitchell has gone so far as to say that failed replications offer nothing to science, as they are largely the result of practical mistakes on the part of the experimenters. So are direct replications necessary? My answer is a definitive: sometimes.
Let’s start by considering what I gather to be some of the main arguments for direct replications.
R. Chris Fraley
Imagine that you’re a young graduate student who has just completed a research project. You think the results are exciting and that they have the potential to advance the field in a number of ways. You would like to submit your research to a journal that has a reputation for publishing the highest caliber research in your field.
How would you know which journals are well regarded for publishing high-quality research?
Traditionally, scholars and promotion committees have answered this question by referencing the citation Impact Factor (IF) of journals. But as critics of the IF have noted, citation rates per se may not reflect anything informative about the quality of empirical research. A paper can receive a large number of citations in the short run because it reports surprising, debatable, or counter-intuitive findings, regardless of whether the research was conducted in a rigorous manner. In other words, the citation rate of a journal may not be particularly informative concerning the quality of the research it reports.
What would be useful is a way of indexing journal quality that is based upon the strength of the research designs used in published articles rather than the citation rate of those articles alone.