Teaching Personality

Teaching Personality is a new feature of P that encourages ARP members to share their ideas for teaching students—undergraduate or graduate—about personality theories and research. If you have an activity or assignment that you’d like to share, please let us know! Email a description of your assignment or activity to arpnewsletter@gmail.com. Include a title, the type of course in which you use it (e.g., personality lecture course, advanced seminar), a description, and any supporting materials (e.g., handouts or lecture slides). We’ll share a few ideas in each future issue.

Assignment: Applying Personality Theories to Individuals
Contributed by Laura P. Naumann

The purpose of this assignment is for students to analyze the behavior of particular individuals, using the perspectives of major personality theories. Specifically, each student chooses three different major theories, as well as at least one specific construct from each theory (e.g., the anxious-ambivalent style from attachment theory), and applies them to the personality and behavior of (a) someone they know, using Dan McAdams’ Life Story Interview as a basis for learning specific details about that person, or (b) a character from a popular movie (e.g., The Breakfast Club or The Royal Tenenbaums). For each theory, students describe the broader theory, select the specific construct that best captures the target’s personality, and illustrate it with examples from the movie or life story.

The assignment is typically due about two-thirds of the way through the semester, by which point we’ve discussed a variety of major theories and constructs. For more details, see these handouts for the life story and film character options.

Activity: Comparing Self, Peer, and Stranger Judgments of Personality
Contributed by Christopher J. Soto

This 15-minute activity is designed to generate sets of personality self-ratings, peer-ratings, and stranger-ratings for the same set of target students. I then use these ratings to discuss the general process of person perception (using Funder’s Realistic Accuracy Model), as well as how the degree of acquaintance, the trait being judged, and the relevance of available information influence accuracy.

First, students partner up with a classmate they know well, or at least have talked with outside of class. Each pair then decides which of them will be “partner #1” and which will be “partner #2.” Next, each partner #1 privately rates their own personality (self-ratings) on the Big Five, using a 7-point scale (1 = much less E/A/C/N/O than the average person; 4 = about as E/A/C/N/O as the average person; 7 = much more E/A/C/N/O than the average person). Each partner #2 privately rates their partner (peer-ratings), using the same scale.

Next, all of the #1’s are asked to leave the room. The #2’s are told that they are about to observe their partner having a conversation with a stranger, and afterwards they will rate the stranger’s personality based on the conversation. After the #1’s return to the classroom, each pair of students is asked to find another pair of students whom they do not know outside of class. In these “pairs of pairs,” the #1’s are told to have a getting-acquainted conversation with each other, in which they introduce themselves and discuss their lives. Afterward, the #2’s privately rate the person that their partner just met on the Big Five (stranger-ratings).

Finally, everyone passes the judgments that they’ve made to the other pair’s partner #2. Thus, each #2 should end up with a self-rating, peer-rating, and stranger-rating for the other pair’s #1. Using a handout, they then compute the total absolute difference between the self-ratings and the peer-ratings, and between the self-ratings and the stranger-ratings. Usually, the self-ratings will be closer to the peer-ratings than to the stranger-ratings, which illustrates how closer acquaintance increases the accuracy of personality judgments by allowing the peer to draw on more information. Also, the stranger-ratings are usually more accurate for Extraversion and Agreeableness than for Neuroticism, which illustrates that traits are easier to judge when they produce overt behaviors rather than private thoughts and feelings. Finally, we discuss how particular pieces of information from the conversation helped the strangers rate particular traits, which illustrates how even a small amount of highly relevant information about a trait can greatly improve accuracy.
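
If you’d like to show the scoring on a slide, the following sketch (in Python, with invented example ratings rather than data from the activity) illustrates how the handout’s difference scores are computed:

# Hypothetical Big Five ratings on the 1-7 scale used in the activity
# (E, A, C, N, O); these numbers are invented for illustration only.
self_ratings     = {"E": 5, "A": 6, "C": 4, "N": 3, "O": 6}
peer_ratings     = {"E": 5, "A": 5, "C": 4, "N": 4, "O": 6}
stranger_ratings = {"E": 6, "A": 5, "C": 2, "N": 5, "O": 4}

def total_abs_difference(ratings_a, ratings_b):
    """Sum of the absolute differences across the five traits."""
    return sum(abs(ratings_a[trait] - ratings_b[trait]) for trait in ratings_a)

self_vs_peer = total_abs_difference(self_ratings, peer_ratings)          # 2
self_vs_stranger = total_abs_difference(self_ratings, stranger_ratings)  # 8

print(f"Self vs. peer: {self_vs_peer}, self vs. stranger: {self_vs_stranger}")
# A smaller total difference indicates closer agreement with the self-ratings.

With these made-up numbers, the self-versus-peer total is the smaller of the two, which is the pattern the activity usually produces.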

Activity: Coding Behavior—Not as Easy as it Sounds!
Contributed by Simine Vazire

When talking about personality assessment, the topic of behavioral or criterion measures always comes up.  Students typically agree that “objective” behavioral measures of personality should be the gold standard, but they don’t always grasp how messy these measures can be in practice.  The purpose of this activity is to demonstrate that measuring behavior is not as easy, or as objective, as it sounds.

I start by asking if everyone in the class knows what an interruption is.  Does anyone need a definition?  Usually nobody raises their hand; they agree that they know what an interruption is.  Then I tell them they’re going to do a simple behavioral observation task: watch a few minutes of a debate and count the total number of interruptions that happen.  They don’t have to keep track of who is interrupting whom; they just make a tally whenever they see an interruption.

I then show about 4-5 minutes of a debate (I use the last 4-5 minutes of a discussion between Paul Krugman and Bill O’Reilly on Meet the Press back in 2004, but any lively discussion will do).  Students keep a tally of the number of interruptions.  When the video clip is over, I ask them to say how many interruptions they saw.  The counts usually range from 5 or 6 to about 30 (in a class of 15 students!).  This leads to a discussion of why they could not even come close to agreeing on the number of interruptions, even though everyone supposedly knows what an interruption is.  We talk about whether providing a very specific definition would have helped, and about other ideas for improving agreement.

Optional: This activity can also be used to demonstrate how aggregation can increase reliability (but not necessarily validity, though it’s hard to say that there’s a “right” answer in this activity).  Have the students pair up and average their estimates, and then note the new minimum and maximum estimates of the two-person aggregates, which will be closer together (or at least no further apart) than the minimum and maximum of the individual counts.  Then have the two-person teams join with other two-person teams to form four-person aggregates, and show that those have a narrower range still.  Repeat until you have only two estimates (e.g., in a class of 32, these will each be based on an aggregate of 16 estimates); they will be quite close together, showing that the average of 16 coders’ judgments is much more reliable than a single person’s judgment.  (This activity was adapted from a similar activity developed by Sam Gosling. Thanks, Sam!)
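
To preview what the numbers will look like before class, here is a minimal sketch (in Python, using simulated interruption counts rather than real class data) of the repeated pairing-and-averaging procedure:

import random

# Simulated interruption counts for a class of 32 students; the spread
# (roughly 5 to 30) mirrors what the activity typically produces.
random.seed(1)
estimates = [random.randint(5, 30) for _ in range(32)]

def aggregate_in_pairs(values):
    """Average adjacent pairs of estimates, halving the number of judges."""
    return [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]

level = estimates
while len(level) >= 2:
    print(f"{len(level):2d} estimates: min = {min(level):5.2f}, max = {max(level):5.2f}")
    level = aggregate_in_pairs(level)
# Each round of aggregation narrows (or at least never widens) the min-max range,
# illustrating how averaging more coders yields a more reliable estimate.

Each round of averaging halves the number of estimates and typically shrinks the min-max spread, which is the same pattern the students should see emerge on the board.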