Tag Archives: research ethics

Three Guys Talking About Scales – Michael Kraus (Psych Your Mind)

What follows below is the result of an online discussion I had with psychologists Brent Roberts (BR) and Michael Frank (MF). We discussed scale construction, and particularly, whether items with two response options (i.e., Yes v. No) are good or bad for the reliability and validity of the scale. The answers we came to surprised me--and they might surprise you too!
 
MK: Twitter recently rolled out a polling feature that allows its users to ask and answer questions of each other. The poll feature allows polling with two possible response options (e.g., Is it Fall? Yes/No). Armed with snark and some basic training in psychometrics and scale construction, I thought it would be fun to pose the following as my first poll:
[Screenshot of the Twitter poll]
 
Said training suggests that, all things being equal, some people are more “Yes” or more “No” than others, so response options with more variety will capture more of the real variance in participant responses. To put that into an example: if I ask you whether you agree with the statement “I have high self-esteem,” a yes/no response format won’t capture all the true variance in people’s responses that might otherwise be captured by six response options ranging from strongly disagree to strongly agree. MF/BR, is that how you would characterize your own understanding of psychometrics? Continue reading
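The intuition behind that claim can be checked with a small simulation. This is a minimal sketch, not anything from the original discussion: the latent trait, the noise level, and the cutpoints below are all arbitrary assumptions. The point is the pattern, that a multi-option item tends to correlate more strongly with the underlying trait than the same judgment forced into yes/no.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent "true" self-esteem for each simulated respondent (assumed standard normal)
theta = rng.normal(size=n)
# Observed judgment = latent trait plus measurement noise (noise level is an assumption)
judgment = theta + rng.normal(scale=0.5, size=n)

# Two-option item: a yes/no split at the scale midpoint
binary = (judgment > 0).astype(int)

# Six-option item: the same judgment discretized into six ordered categories
cutpoints = [-1.5, -0.75, 0.0, 0.75, 1.5]  # hypothetical thresholds
likert = np.digitize(judgment, cutpoints)  # values 0..5

# How well does each observed score track the latent trait?
r_binary = np.corrcoef(theta, binary)[0, 1]
r_likert = np.corrcoef(theta, likert)[0, 1]
print(f"binary: r = {r_binary:.2f}, six-point: r = {r_likert:.2f}")
```

Under these assumptions the six-point score recovers noticeably more of the latent variance than the dichotomized one, which is the classic argument against coarse response scales.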

Notes on Replication from an Un-Tenured Social Psychologist – Michael Kraus (Psych Your Mind)

Last week the special issue on replication at the Journal of Social Psychology arrived to an explosion of debate (read the entire issue here and read original author Simone Schnall's commentary on her experience with the project and Chris Fraley's subsequent examination of ceiling effects). The debate has been happening everywhere--on blogs, on twitter, on Facebook, and in the halls of your psychology department (hopefully).
Read More->

(Sample) Size Matters – Michael Kraus (Psych Your Mind)

Sample Size Matters
On this blog and others, on twitter (@mwkraus), at conferences, and in the halls of the psychology building at the University of Illinois, I have engaged in a wealth of important discussions about improving research methods in social-personality psychology. Many prominent psychologists have offered several helpful suggestions in this regard (here, here, here, and here).

Among the many suggestions for building a better psychological science, perhaps the simplest and most parsimonious is to increase sample sizes across all study designs: by increasing sample size, researchers can detect smaller real effects and can more accurately measure large effects. There are many trade-offs in choosing appropriate research methods, but increasing sample size, at least for a researcher like me who deals in relatively inexpensive data collection tools, is in many ways the most cost-effective way to improve one's science. In essence, I can continue to design the studies I have been designing and ask the same research questions I have been asking (i.e., business as usual), with the one exception that each study I run has a larger N than it would have if I were not thinking (more) intelligently about statistical power.
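The link between N and detectable effects can be made concrete with a quick power calculation. This is a hedged sketch using the normal approximation to the two-sample t-test (it slightly overstates power at small n relative to the exact noncentral-t calculation), and d = 0.4 is just an assumed illustrative effect size, not a figure from the post.

```python
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for a true
    standardized effect d (Cohen's d), via the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality parameter
    # Probability of landing beyond either critical value under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Power to detect a modest effect (d = 0.4) at various per-group sample sizes
for n in (20, 50, 100, 200):
    print(f"n = {n:>3} per group: power = {power_two_sample(0.4, n):.2f}")
```

With 20 participants per group, power for d = 0.4 sits near .24; it takes roughly 100 per group to reach the conventional .80 benchmark, which is exactly the "bigger N, business as usual" argument in numbers.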

How has my lab been faring with respect to this goal of collecting large samples? See for yourself:

Read More->

I’m Using the New Statistics – Michael Kraus (Psych Your Mind)

Do you remember your elementary school science project? Mine was about ant poison. I mixed borax with sugar and put that mixture outside our house during the summer in a carefully crafted/aesthetically pleasing "ant motel." My prediction, I think, was that we would kill ants just like the conventional ant-killing brands, but we'd do so in an aesthetically pleasing way. In retrospect, I'm not sure I was cut out for science back then.

Anyway, from what I remember about that process, there was a clear study design and articulation of a hypothesis--a prediction about what I expected to happen in the experiment. Years later, I would learn more about hypothesis testing in undergraduate and graduate statistical courses on my way to a social psychology PhD. For that degree, Null Hypothesis Significance Testing (NHST) would be my go-to method of inferential statistics.

In NHST, I have come to an unhealthy worship of p-values--the statistic expressing the probability of observing a relationship between variables X and Y at least as extreme as the one in the data, if the null hypothesis (of no relationship) were true. If p < .05 rejoice! If p < .10 claim emerging trends/marginal significance and be cautiously optimistic. If p > .10 find another profession. Continue reading
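For readers who haven't run one of these tests recently, here is a minimal illustration of the NHST ritual with made-up data. The sample size, effect, and seed are all arbitrary assumptions for the sketch; the comment restates what the p-value does (and doesn't) tell you.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical sample: a modest true relationship between X and Y
n = 40
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

r, p = stats.pearsonr(x, y)
# p is the probability of a correlation at least this far from zero
# if the true correlation were exactly zero. It says nothing about
# the size or importance of the effect, which is the "New Statistics"
# argument for reporting effect sizes and confidence intervals instead.
print(f"r = {r:.2f}, p = {p:.3f}")
```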

SWAG: My favorite reason to "Just Post It!" – Michael Kraus (Psych Your Mind)

Every Wednesday Thursday afternoon, I gather with a bunch of faculty and graduate students at the University of Illinois to discuss a journal article about social psychology, and to eat a snack. This blog post reflects the discussion we had during this week's seminar affectionately called Social Wednesdays Thursdays and Grub (SWTAG)--we're going STAG now!

In last week's journal club we read a recent paper in Psychological Science with a very clear message: it should be the norm for researchers to post their data upon publication. In the article, the author (Uri Simonsohn) lays out the major reason why he thinks posting data is a good idea: it helps our field catch scientific fraud in action (e.g., fabricated data). Simonsohn details some of the methods he has used to catch fraud, both in the paper and on his new blog over at datacolada.org (I'll have mine blended!).

I agree that posting data will make it harder for people to fabricate data. However, my favorite reason to increase norms for posting data has nothing to do with data fabrication.

Read More->

External Validity and Believability in Experiments – Michael Kraus (Psych Your Mind)

Imagine for a moment that you are an experiment participant in a dystopian future university thirty years from now. At birth, you were taken from your natural parents and assigned to two robotic parental unit alternatives. The first unit is cold and metal; it has a big frowny face, and all it's good for is dispensing the occasional hot meal through its midriff. The second unit provides no food, but this unit is fashioned with a luxurious coat of fine fur that feels warm to the touch.

Months pass as you are raised by these two robotic parental units. As you descend further and further into madness, every move you make is video recorded by a pair of enterprising future psychologists who are seeking an answer to one question: Will you spend more time with the cold, metal, food-dispensing robot or the furry one? Surprisingly, though the metal robot fulfills your metabolic needs, the researchers are fascinated to find that you spend most of your time with the furry mother surrogate.

What do results from an experiment such as this (famously conducted by Harry Harlow on monkeys in the 1950s) tell us about the nature of social relationships, love, and survival? Do they tell us anything about the human/monkey experience? Or are the conditions of the experiment so artificial in nature that they obscure our ability to draw insights about basic psychology? I consider these questions in today's post.

Read More->

Quality v. Quantity in Publication – Michael Kraus (Psych Your Mind)

Einstein says Quality not Quantity (source)
I was on twitter the other day (@mwkraus, why aren't you following me?) and my twitter feed displayed a great quote from Albert Einstein with some important career advice for aspiring scientists. He said something like: "a career in which one is forced to produce scientific writings in great amounts creates a danger of intellectual superficiality." This quote got me wondering about the career trajectories of aspiring social psychologists, and the tension between wanting to publish as much as possible and wanting to publish only the very best research. I consider this tension in today's blog.

Read More->

Have Your Cake and Eat It Too! Practical Reform in Social Psychology – Michael Kraus (Psych Your Mind)

The cake we can (1) have, and (2) eat!
If you have been following recent headlines in the social sciences then you are aware that the field of social psychology has been in some rough water over the past three years. In this time period, we've had our flagship journal publish a series of studies providing evidence that ESP exists (and then refuse to publish non-replications of those studies). We've suffered through at least three instances of egregious scientific misconduct perpetrated by high-profile researchers. We've had an entire popular area of research come under attack because researchers have failed to replicate its effects. And several respected members of the science community have had some harsh words to say about the discipline and its methods.

Listing all of these events in succession makes me feel a bit ashamed to call myself a social psychologist. Clearly our field has been lacking both oversight and leadership if all of this could happen in such a brief period. Now, I'm not one to tuck my tail between my legs. Instead, I've decided to look ahead. I think there are relatively simple changes that social psychologists (even ones without tenure) can make in their research that can shore up our science going forward.
Read More->

Science Utopia (Continued): Methods Integrity Workshop – Michael Kraus (Psych Your Mind)

"Winter is coming." --Ned Stark/Greg Francis
On Friday afternoon I attended a seminar in methods integrity in research (here). The speakers were Hal Pashler of UC San Diego and Greg Francis of Purdue University. In the seminar, the speakers raised a number of interesting points that I think add to last week's post on PYM about questionable research practices (here). I'll summarize the main points that I took from the seminar:

Read More->

Science Utopia: Some Thoughts About Ethics and Publication Bias – Michael Kraus (Psych Your Mind)

Science Utopia, next exit
Psychology's integrity in the public eye has been rocked by recent high-profile discoveries of data fabrication (here, here, and here) and by several independent realizations that psychologists (and this is not unique to our field) tend to engage in data-analytic practices that allow researchers to find positive results (here, here, and here). While it can be argued that these are not really new realizations (here), the net effect has turned psychologists' attention to an important question: How do we reform our science?

It's a hard question to answer in one empirical article, or one blog post, and so that's not the focus here. Instead, what I'd like to do is simply point out what I think are the most promising changes that we, as a science, can adopt right now to move toward a solution that will help prevent future data fabrication or the use of biased hypothesis tests. These are not my ideas, mind you; rather, they are ideas brought up in the many discussions of research reform (online and in person) that I have had, formally and informally, with my colleagues. Where possible, I link to the relevant sources for additional information.

Read More->