Tag Archives: research methods

Everything is fucked: The syllabus – Sanjay Srivastava (The Hardest Science)

PSY 607: Everything is Fucked
Prof. Sanjay Srivastava
Class meetings: Mondays 9:00 – 10:50 in 257 Straub
Office hours: Held on Twitter at your convenience (@hardsci)

In a much-discussed article at Slate, social psychologist Michael Inzlicht told a reporter, “Meta-analyses are fucked” (Engber, 2016). What does it mean, in science, for something to be fucked? Fucked needs to mean more than that something is complicated or must be undertaken with thought and care, as that would be trivially true of everything in science. In this class we will go a step further and say that something is fucked if it presents hard conceptual challenges to which implementable, real-world solutions for working scientists are either not available or routinely ignored in practice.

The format of this seminar is as follows: Each week we will read and discuss 1-2 papers that raise the question of whether something is fucked. Our focus will be on things that may be fucked in research methods, scientific practice, and philosophy of science. The potential fuckedness of specific theories, research topics, etc. will not be the focus of this class per se, but rather will be used to illustrate these important topics. To that end, each week a different student will be assigned to find a paper that illustrates the fuckedness (or lack thereof) of that week’s topic, and give a 15-minute presentation about whether it is indeed fucked.

Grading:

20% Attendance and participation
30% In-class presentation
50% Final exam

Continue reading

An eye-popping ethnography of three infant cognition labs – Sanjay Srivastava (The Hardest Science)

I don’t know how else to put it. David Peterson, a sociologist, recently published an ethnographic study of three infant cognition labs. Titled “The Baby Factory: Difficult Research Objects, Disciplinary Standards, and the Production of Statistical Significance,” it recounts his time spent as a participant observer in those labs, attending lab meetings and running subjects. In his own words, Peterson “shows how psychologists produce statistically significant results under challenging circumstances by using strategies that enable them to bridge the distance between an uncontrollable research object and a professional culture that prizes methodological rigor.” The account of how the labs try to “bridge the distance” reveals one problematic practice after another, in a way that sometimes makes them seem like normal practice and no big deal to the people in the labs. Here are a few examples. Protocol violations that break blinding and independence:

…As a routine part of the experiments, parents are asked to close their eyes to prevent any unconscious influence on their children. Although this was explicitly stated in the instructions given to parents, during the actual experiment, it was often overlooked; the parents’ eyes would remain open. Moreover, on several occasions, experimenters downplayed the importance of having one’s eyes closed. One psychologist told a mother, “During the trial, we ask you to close your eyes. That’s just for the journals so we can say you weren’t directing her attention. But you can peek if you want to. Continue reading

Thought Fragments Concerning Ideology in Social Science – Michael Kraus (Psych Your Mind)

I took a course in sociology my first year as an undergraduate at UC Berkeley. The course was an introduction to sociology taught by professor and social activist Harry Edwards. The course blew me away because it felt so viscerally real. Professor Edwards would talk about social class, race, and gender in America and students would chime in about their own experiences that brought these big social constructs to life. What I learned in Professor Edwards’ class resembled nothing we had discussed in my high school history classes—I grew up in a politically conservative suburb in San Diego, and we didn’t have much ideological diversity in our discussions of law and society. Sociology, and social sciences more broadly, really spoke to me.
Read More->

SPSP 2015: Actually Predicting the Future – Michael Kraus (Psych Your Mind)

In regression (a statistical technique commonly used in social science research) we often attempt to predict the outcome of a given dependent measure (the DV) based on what we know about other measured variables theoretically related to the DV (the IVs). This common regression method has one problem, though: we are predicting values for data that we have already collected. What if we were to engage in actual prediction? That is, what if we attempted to predict the values of a DV that is unknown? How might we do this, and what would be the benefit?
This was a fascinating talk presented by Liz Page-Gould of the University of Toronto at the Future of Social Psychology Symposium!
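
To make the distinction concrete, here is a minimal sketch (my own, not from Page-Gould's talk) of in-sample fitting versus genuine out-of-sample prediction. The simulated data, variable names, and the use of scikit-learn are all my assumptions for illustration.

```python
# Toy illustration: "predicting" data you already collected vs. actually
# predicting data the model has never seen. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
ivs = rng.normal(size=(n, 3))                 # three measured predictors (IVs)
dv = 0.3 * ivs[:, 0] + rng.normal(size=n)     # DV with one real, modest effect

# Business-as-usual: fit and evaluate on the same data we already collected.
model = LinearRegression().fit(ivs, dv)
print("In-sample R^2:     ", r2_score(dv, model.predict(ivs)))

# Actual prediction: hold out half the data, then predict the unseen DV values.
X_train, X_test, y_train, y_test = train_test_split(ivs, dv, test_size=0.5,
                                                    random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("Out-of-sample R^2: ", r2_score(y_test, model.predict(X_test)))
```

The out-of-sample number is typically smaller than the in-sample one, which is roughly the benefit the talk title points to: it tells you how well the model does at actual prediction rather than at describing data it has already seen.
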
Read More->

Failed experiments do not always fail toward the null – Sanjay Srivastava (The Hardest Science)

There is a common argument among psychologists that null results are uninformative. Part of this is the logic of NHST – failure to reject the null is not the same as confirmation of the null. That is an internally valid statement, but it ignores the fact that studies with good power also have good precision for estimating effects.
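
To illustrate the precision point, here is a small simulation of my own (assuming a simple two-group design, not anything from the post): when the true effect is zero, a large, well-powered study returns a null result with a narrow confidence interval around zero, which is informative in itself.

```python
# Hypothetical simulation: a "null result" from a large sample comes with a
# tight interval around zero; the same null from a small sample does not.
import numpy as np

rng = np.random.default_rng(1)
for n_per_group in (20, 2000):                       # small vs. well-powered study
    a = rng.normal(0, 1, n_per_group)                # true group difference is zero
    b = rng.normal(0, 1, n_per_group)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
    print(f"n per group = {n_per_group:4d}: difference = {diff:+.3f}, "
          f"approx. 95% CI = [{diff - 1.96 * se:+.3f}, {diff + 1.96 * se:+.3f}]")
```
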

However there is a second line of argument which is more procedural. The argument is that a null result can happen when an experimenter makes a mistake in either the design or execution of a study. I have heard this many times; this argument is central to an essay that Jason Mitchell recently posted arguing that null replications have no evidentiary value. (The essay said other things too, and has generated some discussion online; see e.g., Chris Said’s response.)

The problem with this argument is that experimental errors (in both design and execution) can produce all kinds of results, not just the null. Confounds, artifacts, failures of blinding procedures, demand characteristics, outliers and other violations of statistical assumptions, etc. can all produce non-null effects in data. When it comes to experimenter error, there is nothing special about the null. Continue reading
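
As a toy illustration of that point (mine, not from the post or from Mitchell's essay), consider a simulated failure of blinding that adds a small systematic bias to one condition: it produces a "significant" non-null effect even though the true effect is exactly zero.

```python
# Hypothetical simulation: a procedural error (broken blinding, demand
# characteristics) that nudges one condition's scores yields a non-null
# result despite a true effect of zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100
control = rng.normal(0, 1, n)
treatment = rng.normal(0, 1, n) + 0.5    # +0.5 is bias from the procedural error,
                                         # not a real treatment effect
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")       # typically "significant" under a true null
```
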

Notes on Replication from an Un-Tenured Social Psychologist – Michael Kraus (Psych Your Mind)

Last week the special issue on replication at the journal Social Psychology arrived amid an explosion of debate (read the entire issue here and read original author Simone Schnall's commentary on her experience with the project and Chris Fraley's subsequent examination of ceiling effects). The debate has been happening everywhere--on blogs, on twitter, on Facebook, and in the halls of your psychology department (hopefully).
Read More->

(Sample) Size Matters – Michael Kraus (Psych Your Mind)

On this blog and others, on twitter (@mwkraus), at conferences, and in the halls of the psychology building at the University of Illinois, I have engaged in a wealth of important discussions about improving research methods in social-personality psychology. Many prominent psychologists have offered several helpful suggestions in this regard (here, here, here, and here).

Among the many suggestions for building a better psychological science, perhaps the simplest and most parsimonious is to increase sample sizes across all study designs: by increasing sample size, researchers can detect smaller real effects and can more accurately measure large effects. There are many trade-offs in choosing appropriate research methods, but sample size, at least for a researcher like me who deals in relatively inexpensive data collection tools, is in many ways the most cost-effective way to improve one's science. In essence, I can continue to design the studies I have been designing and ask the same research questions I have been asking (i.e., business as usual), with the one exception that each study I run has a larger N than it would have if I were not thinking (more) intelligently about statistical power.
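
For concreteness, here is a minimal a priori power-analysis sketch for a simple two-group design, using statsmodels (my choice of tool, not necessarily Kraus's). It shows how quickly the required N per cell grows as the effect you want to be able to detect shrinks.

```python
# Sketch of a priori power analysis for a two-group comparison:
# required participants per cell for 80% power at alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):   # conventional large, medium, and small effect sizes
    n_per_cell = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"Cohen's d = {d}: ~{n_per_cell:.0f} participants per cell")
```
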

How has my lab been faring with respect to this goal of collecting large samples? See for yourself:

Read More->

I’m Using the New Statistics – Michael Kraus (Psych Your Mind)

Do you remember your elementary school science project? Mine was about ant poison. I mixed borax with sugar and put that mixture outside our house during the summer in a carefully crafted/aesthetically pleasing "ant motel." My prediction, I think, was that we would kill ants just like the conventional ant-killing brands, but we'd do so in an aesthetically pleasing way. In retrospect, I'm not sure I was cut out for science back then.

Anyway, from what I remember about that process, there was a clear study design and articulation of a hypothesis--a prediction about what I expected to happen in the experiment. Years later, I would learn more about hypothesis testing in undergraduate and graduate statistical courses on my way to a social psychology PhD. For that degree, Null Hypothesis Significance Testing (NHST) would be my go-to method of inferential statistics.

In NHST, I have come to an unhealthy worship of p-values--the statistic expressing the probability of observing a relationship between variables X and Y at least as strong as the one in the data, if the null hypothesis (of no relationship) were true. If p < .05, rejoice! If p < .10, claim an emerging trend/marginal significance and be cautiously optimistic. If p > .10, find another profession. Continue reading
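
For contrast, here is a rough sketch (with made-up data, and my own code rather than anything from the post) of the estimation-oriented "new statistics" alternative the title points to: report the size of the effect and its confidence interval rather than only whether p crossed .05.

```python
# Made-up example: the NHST habit (is p < .05?) vs. the estimation approach
# (how big is the effect, and how precisely is it estimated?).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.4, 1.0, 40)

# Old habit: stare at the p-value.
t, p = stats.ttest_ind(y, x)
print(f"p = {p:.3f}")

# New statistics: point estimate (Cohen's d) plus an interval for the raw difference.
pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
d = (y.mean() - x.mean()) / pooled_sd
diff = y.mean() - x.mean()
se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
print(f"d = {d:.2f}, difference = {diff:.2f}, "
      f"approx. 95% CI = [{diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f}]")
```
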

SWAG: My favorite reason to "Just Post It!" – Michael Kraus (Psych Your Mind)

Every Wednesday Thursday afternoon, I gather with a bunch of faculty and graduate students at the University of Illinois to discuss a journal article about social psychology, and to eat a snack. This blog post reflects the discussion we had during this week's seminar affectionately called Social Wednesdays Thursdays and Grub (SWTAG)--we're going STAG now!

In last week's journal club we read a recent paper in Psychological Science with a very clear message: It should be the norm for researchers to post their data upon publication. In the article, the author (Uri Simonsohn) lays out the major reason why he thinks posting data is a good idea: It helps our field catch scientific fraud in action (e.g., fabricated data). Simonsohn details some of the methods he has used to catch fraud, both in the paper and on his new blog over at datacolada.org (I'll have mine blended!).

I agree that posting data will make it harder for people to fabricate data. However, my favorite reason to increase norms for posting data has nothing to do with data fabrication.

Read More->

External Validity and Believability in Experiments – Michael Kraus (Psych Your Mind)

Imagine for a moment that you are an experiment participant in a dystopian future university thirty years from now. At birth, you were taken from your natural parents and assigned to two robotic parental unit alternatives. The first unit is cold and metal, with a big frowny face, and all it's good for is dispensing the occasional hot meal through its midriff. The second unit provides no food, but this unit is fashioned with a luxurious coat of fine fur that feels warm to the touch.

Months pass as you are raised by these two robotic parental units. As you descend further and further into madness, every move you make is video recorded by a pair of enterprising future psychologists who are seeking an answer to one question: Will you spend more time with the cold, metal, food-dispensing robot or the furry one? Surprisingly, though the metal robot fulfills your metabolic needs, the researchers are fascinated to find that you spend most of your time with the furry mother surrogate.

What do results from an experiment such as this (famously conducted by Harry Harlow on monkeys in the 1950s) tell us about the nature of social relationships, love, and survival? Do they tell us anything about the human/monkey experience? Or are the conditions of the experiment so artificial that they obscure our ability to draw insights about basic psychology? I consider these questions in today's post.

Read More->