Gender Imbalance in Discussions of Best Research Practices – Michael Kraus (Psych Your Mind)

Over the last couple of weeks there have been some really excellent blog posts about gender representation in discussions of best research practices. The first was a shared email correspondence between Simine Vazire and Lee Jussim. The second was a report on gender imbalance in discussions of best research practices by Alison Ledgerwood, Elizabeth Haines, and Kate Ratliff. Earlier still, in May 2014, Sanjay Srivastava wrote about a probable diversity problem in the best practices debate. Go read these posts! I'll be here when you return. Read More->

“Open Source, Open Science” Meeting Report – March 2015 – Tal Yarkoni ([citation needed])

[The report below was collectively authored by participants at the Open Source, Open Science meeting, and has been cross-posted in other places.] On March 19th and 20th, the Center for Open Science hosted a small meeting in Charlottesville, VA, convened by COS and co-organized by Kaitlin Thaney (Mozilla Science Lab) and Titus Brown (UC Davis). People working across the open science ecosystem attended, including publishers, infrastructure non-profits, public policy experts, community builders, and academics. Open science has emerged into the mainstream, primarily due to concerted efforts from various individuals, institutions, and initiatives. This small, focused gathering brought together several of those community leaders. The purpose of the meeting was to define common goals, discuss common challenges, and coordinate on common efforts. We had good discussions about several issues at the intersection of technology and social hacking, including badging, improving standards for scientific APIs, and developing shared infrastructure. We also talked about coordination challenges due to the rapid growth of the open science community. At least three collaborative projects emerged from the meeting as concrete outcomes to combat the coordination challenges. A repeated theme was how to make the value proposition of open science more explicit. Why should scientists become more open, and why should institutions and funders support open science? We agreed that incentives in science are misaligned with practices, and we identified particular pain points and opportunities to nudge incentives. Continue reading

Guest Post: Not Nutting Up or Shutting Up – Simine Vazire (sometimes i'm wrong)

Not nutting up or shutting up: Notes on the demographic disconnect in our field’s best practices conversation

Alison Ledgerwood, Elizabeth Haines, and Kate Ratliff

A few weeks ago, two of us chaired a symposium on best practices at SPSP focusing on concrete steps that researchers can take right now to maximize the information they get from the work that they do. Before starting, we paused briefly to ask a couple of simple questions about the field’s ongoing conversation on these issues. Our goal was to take a step back for a moment and consider both who is doing the talking and how we are talking about these issues. Apparently our brief pause sounded strident to some ears, and precipitated an email debate that was ultimately publicized on two blogs. The thing is, the issues we originally wanted to raise seemed to be getting a little lost in translation. And somehow, despite the absolute best of intentions of the two people having the (cordial, reasonable, interesting) debate, we had become literally invisible in the conversation that was taking place. So we thought maybe we would chime in, and Simine graciously allowed us to guest blog.* As we said in our symposium, a conversation about where the field as a whole is going should involve the field as a whole. And yet, when we look at the demographics of the voices involved in the conversation on best practices and the demographics of the field, it’s clear that there’s a disconnect.** For instance, the SPSP membership is about 56% female. Continue reading
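As a concrete illustration of the kind of disconnect the authors describe, here is a minimal sketch of a binomial test comparing the share of female voices in the conversation against the ~56% membership base rate. The tallies below are entirely hypothetical; only the 56% figure comes from the post.

```python
# Hypothetical illustration: test an observed share of female voices in the
# best-practices conversation against the ~56% female SPSP membership rate.
# The counts below are made up for illustration, not taken from any tally.
from scipy.stats import binomtest

female_voices, total_voices = 5, 25  # hypothetical contributor tally
result = binomtest(female_voices, total_voices, p=0.56, alternative="less")
print(f"observed share: {female_voices / total_voices:.0%}, "
      f"p = {result.pvalue:.4f} against a 56% base rate")
```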

Is there p-hacking in a new breastfeeding study? And is disclosure enough? – Sanjay Srivastava (The Hardest Science)

There is a new study out about the benefits of breastfeeding on eventual adult IQ, published in The Lancet Global Health. It’s getting lots of news coverage, for example from NPR, the BBC, the New York Times, and more. A friend shared a link and asked what I thought of it. So I took a look at the article and came across this (emphasis added):

We based statistical comparisons between categories on tests of heterogeneity and linear trend, and we present the one with the lower p value. We used Stata 13·0 for the analyses. We did four sets of analyses to compare breastfeeding categories in terms of arithmetic means, geometric means, median income, and to exclude participants who were unemployed and therefore had no income.

Yikes. The description of the analyses is frankly a little telegraphic. But unless I’m misreading it, or they did some kind of statistical correction that they forgot to mention, it sounds like they had flexibility in the data analyses (I saw no mention of pre-registration in the analysis plan), they used that flexibility to test multiple comparisons, and they’re openly disclosing that they used p-values for model selection – which is a more technical way of saying they engaged in p-hacking. (They don’t say how they selected among the 4 sets of analyses with different kinds of means etc.; was that based on p-values too?)* Continue reading
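To make concrete why “present the one with the lower p value” inflates false positives, here is a minimal simulation sketch. It is my own illustration, not the study’s actual analysis; the group sizes, normal outcomes, and choice of tests are all assumptions. Under a true null, picking the smaller of a heterogeneity p value and a linear-trend p value rejects more often than the nominal 5%.

```python
# Minimal sketch (hypothetical data): why reporting the lower of two
# p values inflates the false-positive rate under a true null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_groups, n_per_group = 10_000, 4, 50
hits = 0

for _ in range(n_sims):
    # Null world: the outcome is unrelated to category membership
    groups = [rng.normal(size=n_per_group) for _ in range(n_groups)]

    # Test 1: heterogeneity across categories (one-way ANOVA)
    _, p_het = stats.f_oneway(*groups)

    # Test 2: linear trend across the ordered categories
    x = np.repeat(np.arange(n_groups), n_per_group)
    y = np.concatenate(groups)
    p_trend = stats.linregress(x, y).pvalue

    # "Present the one with the lower p value"
    if min(p_het, p_trend) < 0.05:
        hits += 1

print(f"false-positive rate: {hits / n_sims:.3f}")  # noticeably above 0.05
```

Pre-registering one of the two tests, or correcting for the selection (e.g., Bonferroni across the pair), would restore the nominal error rate; silently reporting the winner does not.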

How Do You Feel When Something Fails To Replicate? – Brent Donnellan (The Trait-State Continuum)

Short Answer: I don’t know, I don’t care. There is an ongoing discussion about the health of psychological science and the relative merits of different research practices that could improve research. This productive discussion occasionally spawns a parallel conversation about the “psychology of the replicators” or an extended meditation on their motives, emotions, and intentions. Unfortunately, I think that parallel conversation is largely counter-productive. Why? We have limited insight into what goes on inside the minds of others. More importantly, feelings have no bearing on the validity of any result. I am a big fan of this line from Kimble (1994, p. 257): “How you feel about a finding has no bearing on its truth.” A few people seem to think that replicators are predisposed to feeling ebullient (gleeful?) when they encounter failures to replicate. This is not my reaction. My initial response is fairly geeky. Continue reading

An open review of Many Labs 3: Much to learn – Sanjay Srivastava (The Hardest Science)

A pre-publication manuscript for the Many Labs 3 project has been released. The project, with 64 authors and supported by the Center for Open Science, ran replications of 10 previously published effects on diverse topics. The research was conducted in 20 different university subject pools plus an MTurk comparison sample, with very high statistical power (over 3,000 subjects total). The project was pre-registered, and wherever possible the replicators worked with original authors or other experts to make the replications faithful to the originals. Big kudos to the project coordinating team and all the researchers who participated in this important work, as well as all the original authors who worked with the replicators. A major goal was to examine whether time of semester moderates effect sizes, testing the common intuition among researchers that subjects are “worse” (less attentive) at the end of the term. But really, there is much more to it than that: Not much replicated. The abstract says that 7 of 10 effects did not replicate. But dig a little deeper and the picture is more complicated. For starters, only 9 of those 10 effects were direct (or, if you prefer, “close”) replications. The other was labeled a conceptual replication and deserves separate handling. More on that below; for now, let’s focus on the 9 direct replications. Continue reading
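For a rough sense of what that sample size buys, here is a back-of-envelope power sketch. The effect sizes and the even two-group split below are my assumptions for illustration, not numbers from the Many Labs 3 report; the point is simply that a pooled N around 3,000 detects even small effects with high probability.

```python
# Back-of-envelope power check (hypothetical effect sizes, even split):
# what does N ~ 3,000 buy for a simple two-group comparison?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.1, 0.2, 0.3):  # small-to-moderate standardized effects
    power = analysis.power(effect_size=d, nobs1=1500, ratio=1.0, alpha=0.05)
    print(f"d = {d:.1f}: power = {power:.2f}")
# Even d = 0.2 is detected nearly every time at this pooled N, which is
# why consistent null results across the project are informative.
```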

lady problems – Simine Vazire (sometimes i'm wrong)

[photo: california. not relevant. just nice.]

below is a joint blog entry with lee jussim. the title, pictures, and post-script are mine, the rest is posted on both of our blogs.

joint introduction

recently, two friends and scholars who are working together on scientific integrity stuff had a very sane and civil email discussion about gender representation in the scientific integrity debate and in STEM more generally. at the end of the discussion, neither had convinced the other, but they decided it was still an interesting and informative discussion, and they decided to post it on their blogs for the world to see. you're welcome, world. you can find simine's blog here: Continue reading