An Interview with 2016 Murray Award Recipient Oliver Schultheiss

by Smrithi Prasad


How did you get interested in personality research—and specifically in implicit motives and projective measures of personality?

In the German psychology curriculum, we have a discipline called General Psychology that deals with topics like emotions, learning, and motivation—topics that get divvied up between social/personality psychology, cognitive psychology, and biopsychology in US psychology departments. So in the German system there's a specific discipline dedicated to theory and research on fundamental psychological processes. When I started my studies here at Friedrich-Alexander University, I became a research assistant for Joachim Brunstein (who later on became my dissertation advisor). I got interested in research on goals, which was already in the realm of motivation, but is a very specific form of human motivation. It's one of the few kinds of human motivation where I think self-report is actually appropriate, because goals are uniquely human: we can articulate them, actually pursue them, and they impact our behavior.

But then serendipity struck, and Joachim had to give a talk as part of his habilitation—an additional qualification hurdle in the German academic system for individuals who want to get a faculty position. According to the sometimes patently insane rules of the German academic system, the topic of this talk could not be about his habilitation work. He remembered that when he was at the Max-Planck Institute in Munich in the '80s, David McClelland—who was on the board of supervisors of the institute—had done some really interesting research on power motivation at the time. And so Joachim decided to make that the topic of his habilitation talk. I—being his research assistant—was charged with the task of pulling together the literature that he needed. While I was busy making copies, I read the stuff and I thought: Wow! You can measure motives by having people tell stories? And then you can use that to predict norepinephrine output in urine, and alcohol abuse, and all kinds of behaviors—now that is really, really cool!

After that, something else that characterized this research struck me. Back in the 1940s and 50s, McClelland and John Atkinson had the ingenious idea to validate their measures using experimental arousal techniques. When they set out working on motivation, they wondered: How do we know whether people think a lot about food in the case of hunger motivation—or about achievement in the case of achievement motivation? That led them to ask the question: If we changed people's motivations, would that systematically change their thoughts, relative to a control condition? And that's exactly what McClelland and Atkinson did. They didn't set out to generate implicit motive measures, they set out to generate empirically validated motive measures that picked up on something that a person who is motivationally aroused will think about more frequently or more intensively. But if you apply that measure to an individual whose motive has not been experimentally aroused, and you find that that person also chronically thinks about those things, then you can assume that that particular motivation must be chronically aroused in that person.

This approach towards validating measures recently got a boost when Denny Borsboom and colleagues, in a 2004 Psychological Review article, talked about causal validation of measures and confirmed that experimental manipulation of an attribute, and determining its effect on a measure, is at the core of validating any kind of instrument. It's basic natural science actually! That's how the thermometer was validated as a measurement device. That struck me from the get-go as a selling point for motive measures derived in the McClelland/Atkinson tradition. When I read up on power as a motive and realized how the motive measure was derived, I thought: Wow, this is it! This is what I'm going to spend my career doing research on. It was the feeling of a compass needle hitting north very strongly. And then you realize that's the way to go—for better or worse! I couldn't know at the time that this would actually be a successful path, but I took the gamble.

And I was rewarded starting with the very first studies that I did. I found really interesting associations between motive measures and behavior. This included novel behavioral details of how the power motive was associated with influence tactics. For instance, I found that power-motivated people were effective in influencing others through their nonverbal behavior—by raising their eyebrows to emphasize points, by gesturing a lot, by speaking fluently—but not through what they were actually saying! That was the interesting point. The more I did this research, the more I learned, and the more I was convinced that there's something to this approach to measuring human motivation!

At the time when I had just finished my dissertation, I happened to read a book by Irenäus Eibl-Eibesfeldt, The Biological Foundations of Human Behavior. In that book, Eibl-Eibesfeldt, who is a former student of Konrad Lorenz, also reviewed Allan Mazur's research on hormones and dominance. And I thought: I know these findings! We have very similar findings in the field of research on power motivation. There's a strong parallel, so why don't I try to bring the two together? And that was the starting point of my endocrinological endeavors. I never had any basic training in behavioral endocrinology in college. We didn't even have proper courses in biopsychology, for that matter, and so I was a complete greenhorn when I started this. But when David McClelland invited me over to Boston for a post-doc, I was lucky to be trained by biochemist Kenneth Campbell at the University of Massachusetts, who taught me how to do immunoassays. I learned everything from scratch—creating your own horseradish peroxidase conjugate, coating assay plates, etc.

During my first year as a post-doc, I examined the link between power motivation and testosterone. This was my first study, and my first attempt at analyzing hormone data. So at 2 am, after running all the saliva samples through a one-channel gamma counter, I entered all the data in a spreadsheet that already contained my picture-story motive scores, ran the hypothesis-testing regressions and found—nothing! I didn't see any main effect of winning and losing on testosterone changes, nor any effect involving power motivation. I was depressed for the next couple of weeks, but I didn't relent. Then I decided to look at how I had coded power motivation, and realized that there was something to that. People who had an increase in testosterone after winning were different in the way they wrote about exerting influence on someone else, at someone else's expense. Never once did they write about how this could be beneficial to the other person, too. But the people who didn't show a testosterone change were the ones who wrote about trying to influence the other person in a prosocial way. That made me realize that I had found a key to this! By today's standards, you could probably say, well maybe this was a false positive, or that I massaged the data until they gave up and cried uncle. My response to that is: I'm guilty as charged! But I had the opportunity to replicate and replicate these findings, also with larger samples in later years, and the basic finding held: power motivation determines people's testosterone responses to victory and defeat in a competition. This experience led to greater intertwining between my endocrine research and my motive research using the picture-story exercise, a.k.a. the Thematic Apperception Test (TAT).

You've talked about integrating endocrine research with your motive research, but I want to know how do you balance breadth and depth? You publish independently in the realms of personality and social endocrinology, but also publish work that marries the two. How do you stay true to each field, while integrating both?

Well, it's nice that you're saying there's depth and breadth. But I'd rather say that I'm blissfully ignorant in most parts of personality research. I don't follow much of what's happening in personality psychology, or what's happening in endocrinology (especially when you are teaching ten courses per year here!). The dirty truth is that I have just tried to follow and deepen the hunches that I've had for the past ten years or longer. Because once you're on a roll, your brain's generating ideas faster than you can ever test them. Basically, I'm still benefiting from an initial onslaught of ideas that I had during my post-doc years. And I'm still trying to follow up on and test some of those ideas.

I concentrate on the things that I understand really well. I may be wrong about a lot of things, but at least I stick to what I have a strong intuition about. I do stay abreast, but again that requires a lengthy feeding process. You really have to read during your graduate years and post-doc years. Read a lot, and read broadly! Having no background in biopsychology, one of the first things that I did when I started my post-doc at Harvard University was to hit the library and spend hours reading everything. I read about dopamine receptors, about serotonin reuptake mechanisms. It felt like random exploration of biopsychology, but over time I started to pick up some patterns. So reading is really essential! I'm also a strong believer that if you read things you are about ready to understand, they will be interesting and stimulating to read—a clear sign of it being the next step for your brain to cognitively penetrate. But if you read things that don't make any sense: either your brain isn't ready for it yet, or you may have to read and learn other things first. Or maybe somebody else's brain wasn't really ready for writing it!

So that's the answer. I followed my interests, I followed my gut feelings.

What was the best piece of advice you have been given?

When I started at the University of Michigan as a young assistant professor, Dick Nisbett attended one of my seminars there. After the seminar, I asked him what kind of advice he would give to a young assistant professor. He said: "Always keep the data boiler boiling away!" Not in the sense of fabricating your own data, but of generating research, and generating more research. Some of it will fall flat, some of it won't be usable, some of it will only be pilot work, but don't stop! Because you have to feed a pipeline, and the more you feed it, the more diversely you feed it, the richer the dividends. Even if you have studies that initially don't make sense, years later you might run another study and then suddenly retrospectively understand how it all fits. So the benefits are sometimes very delayed, but they'll come eventually.

I'm also serious about two other pieces of advice: One is to look at your data. I think that the software packages we use sometimes hide your data more than show it to you. As a doctoral student, after having worked with the standard statistics program of our field for many years, I happened upon the statistics software SYSTAT. In contrast to the other software, SYSTAT made it really easy to plot histograms, see data distributions, and make scattergrams. So I learned early on to look at the data more than at coefficients. I realized: Oh, there's an outlier there that could really make or break my entire correlation. So before I even looked at any of the coefficients, I looked at the actual data! I found out later on that the statistician John Tukey also recommended conducting exploratory visual data analysis before actually analyzing data with statistical tools. The other piece of advice is to be intimately familiar with your measurement devices and the process of measurement, and to have a good understanding of what exactly generates the measures' scores.
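The point about a single outlier making or breaking an entire correlation is easy to demonstrate. A minimal Python sketch with synthetic data (an editorial illustration, not from the interview or Schultheiss's own analyses):

```python
# One extreme point can manufacture a correlation that isn't in the bulk
# of the data — which is why plotting before testing (Tukey's advice) matters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 30)
y = rng.normal(0, 1, 30)  # x and y are independent, so the true r is ~0

r_clean = np.corrcoef(x, y)[0, 1]

# Append a single extreme point far from the cloud.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r_outlier = np.corrcoef(x_out, y_out)[0, 1]

print(f"r without outlier: {r_clean:+.2f}")
print(f"r with one outlier: {r_outlier:+.2f}")
# A scatterplot (e.g., matplotlib's plt.scatter(x_out, y_out)) exposes the
# outlier at a glance; the correlation coefficient alone does not.
```

Running this shows the coefficient jumping from near zero to a strong positive value on the strength of one data point.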

Actually, that leads me to my next question: Throughout your career you've pursued both validation/methods questions and theoretical questions. We are all so drawn to asking theoretical/conceptual questions that we often forget to take a step back and ask if methods that we're using are valid to begin with. Can you say something about that?

I think you're alluding to a big problem in modern psychology. It has been a problem for a long time—not just in modern psychology. Our discipline places a premium on being sexy, publishing something completely new with each new project, and making a name for yourself by branding some completely new concept. At least that's the name of the game in large parts of social and personality psychology: To create the next big measure of XYZ, ideally in one minute, and then demonstrate that it predicts everything! There is little premium placed on doing the tedious but necessary incremental work that leads to a more thorough understanding of a process or outcome and how it fits with other known facts. This is problematic because in most other disciplines—especially in the natural sciences—people sometimes spend their entire careers on developing a good measure of just one thing and making sure that it works the way it should.

Take Rosalyn S. Yalow, who invented the radioimmunoassay, as an example. She spent almost her entire career just perfecting that measurement device with her collaborator Solomon Berson. What psychologists do is usually a far cry from that. We become so invested in coming up with the next big concept that sometimes the measurements we create or use are of rather doubtful or unknown validity, at least by Borsboom's criterion of validity. I'd rather turn this upside down and say: Let's take the measurement device first and understand the measurement process. Maybe we can learn a thing or two about the concepts that we're dealing with by looking at the measures we use. Henry Murray and Christiana Morgan—who were the inventors of the TAT—spent a long, long time trying to understand the measure, trying to perfect it, trying to understand what kind of information can be drawn from it—even though they never saw the end of that. It was and is an ongoing endeavor. So that is another example of scientists who really tried to create a good measure of something that eventually turned out to be much bigger than they ever expected it to be.

What makes you different from other traditional personality psychologists? Why do you use implicit measures but not traditional personality measures?

Well, the reason for my obvious reluctance to use self-report measures of personality is that even as a student I was much more convinced by a natural-science approach to studying personality and human behavior. This approach maintains a healthy distance from the object that we want to depict, describe, understand, and predict. But if you're using self-report measures, you're basically eliminating that distance. Essentially you're saying: Hey! We can all talk to each other, so why not use that bridge across the gap of 'intersubjectivity' to make the process more efficient?

If you're doing that, then you are really ignoring a very basic lesson from Freud more than 100 years ago. It was reiterated by the behaviorists, who became behaviorists not because they thought it was cool to simply disregard what's going on in your head, but because researchers before them had run into a wall with introspection, realizing that one can't get at all the relevant aspects of the brain and its processes, and that introspection may sometimes also interfere with and distort what one wants to measure. So they came to a dead end. The behaviorists said: Let's scrap that and just try to create a whole different science of human behavior, one that is based on what we can actually and reliably observe from the outside. I think they went too far because they ignored whatever might be going on up in the brain. They said this is a black box, and we can't know anything about that. Luckily those days are over, so we can start to speculate again about what's going on in the brain. But I wouldn't even call this modern behaviorism. Kent Berridge's work is a good example of how you can carefully and rigorously reintroduce mental concepts into the equation. He's looking at affect, something that is very fundamental for motivation. He cannot ask rats questions. But he looks at how much rats like the food that they're getting. And he does that by observing how much they lick their lips, because that's a good indicator of hedonic pleasure even in humans and other mammalian species. The more lip licking there is, the more taste pleasure there is. And conversely, the more of a gaping response an animal displays to indicate disgust, the more displeasure there is. So you can measure affect objectively and independently of what it does to behaviors—like bar pressing in operant learning paradigms. Berridge's work is an excellent example of how you can carefully construct a science of behavior without having to resort to self-report measures.

Having said that, I think that there are some domains of human experience where you must use self-reported introspection. One I've already mentioned is goals because we are able to use them to coordinate our behavior, like this meeting for example. Perhaps another example is our sense of identity—of who we are or our sense of self, which is partly verbally constructed and verbally communicable. So there are some pockets where the verbal output that people provide is veridical about the person, and carries valid and important information that you wouldn't be able to parsimoniously capture any other way.

The idea that if you just ask people and they give an answer, then ipso facto that answer must have some validity—now that assumption is plain wrong! I think that a lot of personality psychology is built on that very problematic assumption. If you look at the way measures are validated, we use criteria of whether scales hang together in a certain pattern, or whether the measure correlates in a certain pattern with other self-report measures. But there's no actual, substantial validation in terms of finding out if the instrument measures the thing it's supposed to measure by any strong, causal criterion. You don't have it for the Big Five, you don't have it for most other personality measures, and certainly not for many other self-report measures. Unless personality psychology starts getting serious and really showing strong, causal evidence that the things we measure tap into certain things that make sense, I don't buy into it.

The theoretician whose work has fascinated me the most in personality psychology is Hans Eysenck, because he tried to lift the hood of the extraversion vehicle, look at the machinery below it, and come up with theories about what actually causes people to be extraverted. And similarly Richard Depue—who is a proponent of the dopamine theory of extraversion. It doesn't always have to be biological, it can be experimental, but there needs to be an effort to generate causal evidence for the processes underlying personality constructs and their measures. As long as that's not there, I don't know what I'm measuring with those measures.

In academia we are seldom asked this question, but how do you maintain work-life balance?

Well, it's easy to always do more. Our ought selves tell us, "If I work day and night, I can put out one more paper, one more paper, and one more paper." But does it make us any happier? Does it make the quality of the work any better? I don't think so! I don't think it makes anyone a better human being, because there's much more to being a human being than just working your ass off all the time. I think it's really important to make room for deliberate breaks in a work schedule that continually threatens to gobble you up. Likewise, it is equally important to make room for other parts of life. Family is a strong anchor, and so is having kids.

You really need to have downtime to absorb things. Creativity research is very clear about this, actually. If you work on a problem and hit a wall, one option is to try and get behind it with a crowbar, and just write a paper about it. But is that necessarily a good solution? Probably not! You just forced your way through, without any inspiration. But if you have downtime, your brain can digest what you've been working on while you're doing other things—completely different things. It will also give you a chance to be more creative and to come up with better ideas and subtle intuitions.

I think we all are a bit like Freud in that our motivational energy can take many different manifestations. First it was art for me, and then it was music. But with both endeavors I realized that although it was great fun, I couldn't make a living with them. Then I hit psychology and realized: Okay, I can take this much, much farther. But I still retained some of my enthusiasm for music. I don't do music as much as I used to. I'm certainly not recording anymore, but we just bought an electronic drum set, and it's just fun. And I also bought myself a fretless bass, and I'm trying that for the first time in my life. I also read a lot and get ideas from other parts of life, from other authors, and from things outside of psychology.

Life's more than just your narrow world, be it psychology or flipping burgers at McDonald's. Our general tendency is to do more and more of the same thing, because if you do more, you will get much better at it, but at a cost! The cost is that you stagnate in all the other parts of your life. You have to accept the fact that if you write two fewer papers or say no to a new review assignment, you might forsake a great learning opportunity, but you can then make room for learning opportunities in other parts of your life.

In the end you need to ask yourself: Do you want to be a generalist in life, who knows a little bit about many different things and can draw happiness from many different domains of life? Or do you want to know everything about one domain, be perfect at it, and basically suck at everything else?

So my last question for you is, what research questions are you currently excited about? What is keeping you awake at night?

Well, there are actually a couple of things. I never focus on only one thing at a time but pursue several things at the same time. One line of research that landed on my radar by serendipity is body morphology serving as a proxy variable for early hormone effects on motivational structures in the brain. We found really interesting evidence that the 2D:4D digit ratio is linked to power motivation using Morgan and Murray's thematic apperception measure. Then my students and I started looking into other aspects of body morphology that are sexually dimorphic—cheekbone width, facial width-to-height ratio, fibula length. We're looking at things that happened before birth, and things that change during puberty, and their hormonal implications and effects on the brain, and finally linking those to motivational needs. It's an interesting endeavor because I always joked about the 2D:4D measure not being valid, but then I looked more into it and realized that the evidence behind it has really grown in recent years, and it made sense to me.

I also really want to develop new measures based on the TAT for the assessment of sexual motivation. Sexual motivation is a fundamental motive that is under-researched, typically because it is fraught with all kinds of problems. Again, self-report is seemingly a good way to measure it, but there are issues associated with over-claiming and under-claiming. Kent Berridge made a compelling argument that even for something as basic and uncontroversial as food motivation, we don't have good insight into what drives that motive. Maybe it's purely cognitive variables—that we think we're hungry. Take an amnesiac—like the famous patient H.M. If you gave him a full meal and he ate it all, half an hour later he would have forgotten about it. And then if you put another meal in front of him and told him it's lunchtime and here's your meal, he would eat it again. This is because he believed it was time to eat, and not because he paid attention to any signals from his blood sugar. This illustrates again the limits of self-reports of motivation. So getting measures of hunger motivation, sexual motivation, and maybe curiosity motivation using the TAT is high on my agenda.

And finally I want to go back to the TAT itself—to really understand the process of how stories are imbued with motivational impulses. I'm trying to find ways of actually putting that process in the brain scanner, and getting a first glimpse at which parts of the brain contribute to writing about power, achievement, or affiliation imagery. Recently, one study looked at brain regions that are involved when people make up complex stories (research that was not approaching it from a motivational angle). Basically, they found that all of the brain is involved. It's not just Wernicke's and Broca's areas; there is also activation in motivational parts of the brain, like the striatum. The authors of that paper say it's probably the motor activity of writing stories, but I'm not sure about that. I think there's more to it than just motor activity. Maybe there is a process that imbues narrative language with motivational content, and the striatum is involved in that process. And by addressing such questions and issues I'd like to finally come to better grips with Morgan and Murray's wonderful device.