Monthly Archives: April 2014

when are two things really one thing? – Simine Vazire (sometimes i'm wrong)


the question came up a few weeks ago: if two measures are correlated .60, should they be aggregated into a single measure or kept separate?  this might seem like a narrow question, but it raises some deep and complicated issues.  at the heart of the matter is: when are two measures measuring the same construct?
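as a quick aside (my own illustration, not from the post): one standard lens on the aggregation question is the spearman-brown formula, which gives the reliability of a composite if the two measures are treated as parallel indicators of one construct:

```python
def spearman_brown(r: float, k: int = 2) -> float:
    """reliability of a composite of k parallel measures,
    where r is the correlation between any pair of them."""
    return (k * r) / (1 + (k - 1) * r)

# two measures correlated .60, averaged into one composite
print(spearman_brown(0.60))  # 0.75
```

a composite reliability of .75 is respectable, which is partly why a .60 correlation makes aggregation tempting; but the formula assumes the two measures really are parallel indicators of the same construct, which is exactly the question at issue.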
 
thinking about this reminded me of the jingle and jangle fallacies.  i have not read the original papers (the jingle fallacy apparently dates back to edward thorndike, 1904, and the jangle fallacy to truman kelley, 1927; i have not tracked down either book/paper.  sorry.)  the jingle fallacy is calling two different constructs by the same name.  the jangle fallacy is using two different names for what is actually the same construct.
 
let's get specific.  let's take the constructs 'neuroticism' and 'negative affect'.  are these two the same thing? (maybe, maybe not).
one obvious difference is that neuroticism is usually thought of as a stable personality trait, whereas negative affect is typically thought of as a momentary state.

Continue reading

My Scary Vision of Good Science – Brent Roberts (pigee)

By Brent W. Roberts

In a recent blog post, I argued that the Deathly Hallows of Psychological Science—p values < .05, experiments, and counter-intuitive findings—represent the combination of factors that are most highly valued by our field and are the explicit criteria for high impact publications. Some commenters mistook my identification of the Deathly Hallows of Psychological Science as a criticism of experimental methods and an endorsement of correlational methods. They even went so far as to say my vision for science was “scary.”

Boo.

Of course, these critics reacted negatively to the post because I was being less than charitable to some hallowed institutions in psychological science. Regardless, I stand by the original argument. Counter-intuitive findings from experiments “verified” with p values less than .05 are the most valued commodities of our scientific efforts. And, the slavish worshiping of these criteria is at the root of many of our replicability and believability problems.

Continue reading

Challenging the “Banality” of Evil and of Heroism Part 1 – Scott McGreal (Unique—Like Everybody Else)

What makes a hero, what makes a villain? Phil Zimbardo has claimed that evil and heroism are equally banal and mainly arise as a matter of circumstance rather than any special qualities of the person. However, his own analysis blames evil on external forces, but views heroism as coming from within the person. A more balanced view is needed to understand these extremes.

read more

(Sample) Size Matters – Michael Kraus (Psych Your Mind)

On this blog and others, on twitter (@mwkraus), at conferences, and in the halls of the psychology building at the University of Illinois, I have engaged in a wealth of important discussions about improving research methods in social-personality psychology. Many prominent psychologists have offered several helpful suggestions in this regard (here, here, here, and here).

Among the many suggestions for building a better psychological science, perhaps the simplest and most parsimonious is to increase sample sizes for all study designs: with larger samples, researchers can detect smaller real effects and can more accurately measure large ones. There are many trade-offs in choosing appropriate research methods, but for a researcher like me, who deals in relatively inexpensive data collection tools, sample size is in many ways the most cost-effective way to improve one's science. In essence, I can continue to design the studies I have been designing and ask the same research questions I have been asking (i.e., business-as-usual), with the one exception that each study I run has a larger N than it would have if I were not thinking (more) intelligently about statistical power.
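To make the sample-size/effect-size trade-off concrete (this sketch is mine, not from the post), a standard normal-approximation power calculation shows how quickly the required N per group grows as the effect you want to detect shrinks:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group N for a two-sample comparison (normal
    approximation), where d is the standardized mean difference (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: ~{n_per_group(d)} per group")
```

Halving the detectable effect roughly quadruples the required N (the exact t-test numbers run a touch higher than this z-approximation), which is why "just collect more data" is cheap advice for large effects and expensive advice for small ones.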

How has my lab been faring with respect to this goal of collecting large samples? See for yourself:

Read More->

why i study self-knowledge – Simine Vazire (sometimes i'm wrong)

someone recently asked me why i do what i do.  it's easy to come up with just-so stories. for all i know, i could just as easily have ended up inventing new ice cream flavors for tara's ice cream (i still haven't ruled it out).  and if i had, i could probably make up a story about why it was always meant to be.  narratives are deceptively easy to construct, and amazingly convincing after the fact. it would be especially ironic for a self-knowledge researcher to deceive herself about why she studies self-knowledge.  so i mostly try to resist giving an explanation.  but then there's this:


this is a note from my best friend, written when we were 15. (i have honored her request to remain anonymous -- she undoubtedly has much more embarrassing material on me.)

a few excerpts:

'at miriam's birthday party, i got the idea that you wanted an honest evaluation from me about you.'

Continue reading

the self-deception alarm system – Simine Vazire (sometimes i'm wrong)

 


'those who venture to criticize us perform a remarkable act of friendship,
for to undertake to wound and offend a man for his own good
is to have a healthy love for him.'

-michel de montaigne*

(all this time, reviewers were just expressing their healthy love for me!)

today i am going to talk about self-deception. don't worry, i will connect it back to scientific integrity.

being a researcher who studies self-knowledge and self-deception is a little nerve-wracking.  watching other people delude themselves and be entirely convinced by their self-deception, you start to wonder whether all of your own self-beliefs might not also be delusional. it can make a person paranoid. so i started wondering, could there be an internal marker of self-deception? a red flag that, with training, one could learn to detect, catching oneself in the act of self-deception?

Continue reading

buckets of tears – Simine Vazire (sometimes i'm wrong)


i learned a new word the other day. bucketing. it almost made me cry.

one of the most common mistakes i see when reviewing papers is authors who take a continuous variable and, for no good reason, mutilate it by turning it into a categorical variable. our old friend the median split is one example.  (whose idea was it to befriend the median split? and why won’t he stop harassing us?)

bucketing, from what i can tell, is another such technique.  i had a hard time finding a definition, but i think it’s basically creating categories out of multiple response options and grouping the data that way. for example, you can turn the continuous variable ‘age’ into a categorical variable by sorting people into age ‘buckets’ (e.g., 20-29, 30-39, etc.).

Continue reading
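a minimal sketch of what bucketing does to a continuous variable (the decade-wide cutoffs here are my own illustration, not from the post):

```python
def age_bucket(age: int) -> str:
    """collapse a continuous age into a decade-wide category, e.g. 27 -> '20-29'."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

ages = [21, 24, 29, 31, 38, 45]
print([age_bucket(a) for a in ages])
# 21 and 29 land in the same bucket: an 8-year difference is thrown away,
# while 29 and 31 (2 years apart) end up in different categories
```

this is exactly the information loss at issue: variation within a bucket vanishes, and observations near a cutoff get treated as more different than observations far apart inside one bucket.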

Why Do We Take Personality Tests? – Kate Reilly Thorson (Psych Your Mind)

I often get questions from friends and family that they would like to see answered in a post. This month, my post is inspired by a question from my grandmother. Kudos to my grandma for asking a question about a popular trend on the internet!


Personality tests are not new, but they have recently skyrocketed in popularity on the internet. This week, Buzzfeed published 15 such tests in one 24-hour period. It seems every day on my Facebook news feed, someone has posted new results from one of these quizzes. Online personality tests have expanded beyond the traditional format of telling us certain traits we possess, although those do still exist (try here and here). Now, there are also tests that give us information about ourselves by comparing us to people or characters we know (“Which pop star should you party with?” or “Which children’s book character are you?”) and by comparing specific behaviors or knowledge to others’ (“How many classic horror films have you seen?

Continue reading