Monthly Archives: July 2014

Peak Experiences in Psilocybin Users – Scott McGreal (Unique—Like Everybody Else)

A recent study of intensely positive experiences in people who have used psilocybin found that some users had experienced profoundly altered states of consciousness, including visual hallucinations, even when not under the direct influence of the drug. Perhaps psilocybin has lasting effects on a person’s ability to enter altered states of consciousness without drugs.

read more

the simpleminded & the muddleheaded – Simine Vazire (sometimes i'm wrong)

i have been sitting on this paul meehl gem for a few months now, ruminating on how it relates to our current situation:

"The two opposite errors to which psychologists, especially clinical psychologists, are tempted are the simpleminded and the muddleheaded (as Whitehead and Russell labeled each other in a famous dinner exchange). The simpleminded, due to their hypercriticality and superscientism and their acceptance of a variant of operationalist philosophy of science (that hardly any historian or logician of science has defended unqualifiedly for at least 30 years), tend to have a difficult time discovering anything interesting or exciting about the mind. The muddleheads, per contra, have a tendency to discover a lot of interesting things that are not so. I have never been able, despite my Minnesota “simpleminded” training, to decide between these two evils. At times it has seemed to me that the best solution is sort of like the political one, namely, we wait for clever muddleheads to cook up interesting possibilities and the task of the simpleminded contingent is then to sift the wheat from the chaff. But I do not really believe this, partly because I have become increasingly convinced that you cannot do the right kind of research on an interesting theoretical position if you are too simpleminded to enter into its frame of reference fully (see, e.g., Meehl, 1970b). One hardly knows how to choose between these two methodological sins." *

here is what i have come up with (i am trying to fit what probably belongs in several separate blog posts into one because i think the points are interconnected. bear with me.)

1. another way to describe these groups is that the simpleminded are terrified of type I error while the muddleheaded are terrified of type II error. Continue reading
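
a quick numerical sketch of that framing (my toy numbers, not from the post): at a fixed sample size, tightening alpha to guard against type I error mechanically inflates type II error, so the two fears pull in opposite directions.

```python
# a minimal sketch, assuming a two-group design with a true effect of
# d = 0.3 and n = 50 per group (both numbers invented for illustration)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01, 0.001):
    power = analysis.power(effect_size=0.3, nobs1=50, alpha=alpha)
    print(f"alpha = {alpha}: power = {power:.2f}, type II error = {1 - power:.2f}")
```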

The Real Source of the Replication Crisis – David Funder (funderstorms)

“Replication police.” “P-squashers.” “Hand-wringers.” “Hostile replicators.”  And of course, who can ever forget, “shameless little bullies.”  These are just some of the labels applied to what has become known as the replication movement, an attempt to improve science (psychological and otherwise) by assessing whether key findings can be reproduced in independent laboratories.

Replication researchers have sometimes targeted findings they found doubtful.  The grounds for finding them doubtful have included (a) the effect is “counter-intuitive” or in some way seems odd (1), (b) the original study had a small N and an implausibly large effect size, (c) anecdotes (typically heard at hotel bars during conferences) abound concerning naïve researchers who can’t reproduce the finding, (d) the researcher who found the effect refuses to make data public, has “lost” the data or refuses to answer procedural questions, or (e) sometimes, all of the above.

Fair enough. If a finding seems doubtful, and it’s important, then it behooves the science (if not any particular researcher) to get to the bottom of things.  And we’ve seen a lot of attempts to do that lately. Famous findings by prominent researchers have been put through the replication wringer, sometimes with discouraging results.  But several of these findings also have been stoutly defended, and indeed the failure to replicate certain prominent effects seems to have stimulated much of the invective thrown at replicators more generally. Continue reading

Is It Offensive To Declare A Social Psychological Claim Or Conclusion Wrong? – Brent Roberts (pigee)

By Lee Jussim

Science is about “getting it right” – this is so obvious that it should go without saying. However, there are many obstacles to doing so, some relatively benign (an honestly conducted study produces a quirky result), others less so (p-hacking). Over the last few years, the discussion of practices that lead us astray has focused primarily on issues of statistics, methods, and replication.

These are all justifiably important, but here I raise the possibility that other, more subjective factors distort social and personality psychology in ways at least as problematic. Elsewhere, I have reviewed what I now call questionable interpretive practices – how cherry-picking, double standards, blind spots, and embedding political values in research all lead to distorted conclusions (Duarte et al., 2014; Jussim et al., in press a, b).

But there are other interpretation problems. Ever notice how very few social psychological theories are refuted or overturned? Disconfirming theories and hypotheses (including the subset of disconfirmation, failures to replicate) should be a normal part of the advance of scientific knowledge. It is OK for you (or me, or Dr. I. V. Famous) to have reached or promoted a wrong conclusion. Continue reading

Failed experiments do not always fail toward the null – Sanjay Srivastava (The Hardest Science)

There is a common argument among psychologists that null results are uninformative. Part of this is the logic of NHST – failure to reject the null is not the same as confirmation of the null. That is an internally valid statement, but it ignores the fact that studies with good power also have good precision to estimate effects.
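
As a rough illustration of that point (a sketch with assumed numbers, not taken from the post): a two-group study powered at 80% to detect d = 0.5 needs about 64 participants per group, and at that sample size a null result comes with a confidence interval tight enough to rule out the effect the study was designed to detect.

```python
# A minimal sketch, assuming a two-group design, alpha = .05 (two-sided),
# and a target effect of d = 0.5 (all illustrative choices).
import numpy as np
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n:.0f}")  # about 64

# Approximate standard error of Cohen's d with two equal groups of size n
se_d = np.sqrt(2 / n)
print(f"95% CI half-width around an observed d: +/-{1.96 * se_d:.2f}")
# An observed d near zero yields a CI of roughly [-0.35, +0.35], which
# excludes the d = 0.5 the study was powered to detect, so the null
# result constrains the plausible effect size.
```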

However, there is a second line of argument which is more procedural. The argument is that a null result can happen when an experimenter makes a mistake in either the design or execution of a study. I have heard this many times; this argument is central to an essay that Jason Mitchell recently posted arguing that null replications have no evidentiary value. (The essay said other things too, and has generated some discussion online; see, e.g., Chris Said’s response.)

The problem with this argument is that experimental errors (in both design and execution) can produce all kinds of results, not just the null. Confounds, artifacts, failures of blinding procedures, demand characteristics, outliers and other violations of statistical assumptions, etc. can all produce non-null effects in data. When it comes to experimenter error, there is nothing special about the null. Continue reading
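
A toy simulation makes this concrete (the 0.5 SD confound and sample size below are invented for illustration): with a true treatment effect of zero, a design error that shifts one group produces a significant non-null result most of the time.

```python
# A minimal sketch, assuming a confound that adds 0.5 SD to the treated
# group; the true treatment effect is exactly zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, runs, hits = 50, 1000, 0
for _ in range(runs):
    treated = rng.normal(0.5, 1, n)  # the 0.5 shift is the confound, not treatment
    control = rng.normal(0.0, 1, n)
    _, p = stats.ttest_ind(treated, control)
    hits += p < 0.05
print(f"significant in {hits / runs:.0%} of error-ridden experiments")  # roughly 70%
```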

Some thoughts on replication and falsifiability: Is this a chance to do better? – Sanjay Srivastava (The Hardest Science)

Most psychologists would probably endorse falsification as an important part of science. But in practice we rarely do it right. As others have observed before me, we do it backwards. Instead of designing experiments to falsify the hypothesis we are testing, we look for statistical evidence against a “nil null” — the point prediction that the true effect is zero. Sometimes the nil null is interesting, sometimes it isn’t, but it’s almost never a prediction from the theory that we are actually hoping to draw conclusions about.

The more rigorous approach is to derive a quantitative prediction from a theory. Then you design an experiment where the prediction could fail if the theory is wrong. Statistically speaking, the null hypothesis should be the prediction from your theory (“when dropped, this object will accelerate toward the earth at 9.8 m/s^2”). Then if a “significant” result tells you that the data are inconsistent with the theory (“average measured acceleration was 8.6 m/s^2, which differs from 9.8 at p < .05”), you have to either set aside the theory itself or one of the supporting assumptions you made when you designed the experiment. You get some leeway to look to the supporting assumptions (“oops, 9. Continue reading
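
In code, that logic looks something like the sketch below (the measurements are invented, chosen to match the excerpt’s running example): the theoretical prediction of 9.8 m/s^2 serves as the null, so a significant result counts as evidence against the theory or its supporting assumptions.

```python
# A minimal sketch of theory-as-null testing; the data are hypothetical.
import numpy as np
from scipy import stats

measured = np.array([8.7, 8.5, 8.6, 8.8, 8.4, 8.6])  # accelerations in m/s^2
t, p = stats.ttest_1samp(measured, popmean=9.8)      # null = the theory's prediction
print(f"mean = {measured.mean():.1f} m/s^2, t = {t:.2f}, p = {p:.4f}")
# A small p here means the data are inconsistent with the 9.8 m/s^2
# prediction, not merely different from zero.
```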

In defense of In Defense of Facebook – Tal Yarkoni ([citation needed])

A long, long time ago (in social media terms), I wrote a post defending Facebook against accusations of ethical misconduct related to a newly-published study in PNAS. I won’t rehash the study, or the accusations, or my comments in any detail here; for that, you can read the original post (I also recommend reading this or this for added context). While I stand by most of what I wrote, as is the nature of things, sometimes new information comes to light, and sometimes people say things that make me change my mind. So I thought I’d post my updated thoughts and reactions. I also left some additional thoughts in a comment on my last post, which I won’t rehash here.

Anyway, in no particular order…

I’m not arguing for a lawless world where companies can do as they like with your data

Some people apparently interpreted my last post as a defense of Facebook’s data use policy in general. It wasn’t. I probably brought this on myself in part by titling the post “In Defense of Facebook”. Maybe I should have called it something like “In Defense of this one particular study done by one Facebook employee”. In any case, I’ll reiterate: I’m categorically not saying that Facebook–or any other company, for that matter–should be allowed to do whatever it likes with its users’ data. There are plenty of valid concerns one could raise about the way companies like Facebook store, manage, and use their users’ data. And for what it’s worth, I’m generally in favor of passing new rules regulating the use of personal data in the private sector. So, contrary to what some posts suggested, I was categorically not advocating for a laissez-faire world in which large corporations get to do as they please with your information, and there’s nothing us little people can do about it. Continue reading