What is “Spiritual Intelligence” Anyway? – Scott McGreal (Unique—Like Everybody Else)

If "spiritual intelligence" is a real thing, what might it consist of? Probably, elements of personality, intelligence, and altered states of consciousness.

Yes, Your Field Does Need to Worry About Replicability – Rich Lucas (The Desk Reject)

One of the most exciting things to happen during the years-long debate about the replicability of psychological research is the shift in focus from providing evidence that there is a problem to developing concrete plans for solving it. Whether it is journal badges that reward good practices, statistical software that can check for problems before papers are published, collaborative efforts to deal with limited resources and underpowered studies, proposals for new standards of evidence, or even entire societies dedicated to figuring out what we can do to make things better, many people have devoted an incredible amount of thought, time, and energy to figuring out how we can fix the problems that exist and move the field forward.
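As one concrete example of the kind of automated checking Lucas mentions, tools such as statcheck recompute reported p-values from reported test statistics and degrees of freedom. The sketch below illustrates that core idea for a reported t-test; `check_reported_t` is a hypothetical helper of my own, not any tool's actual API:

```python
# A minimal sketch of the consistency check that tools like statcheck
# perform: recompute a two-sided p-value from a reported t statistic and
# degrees of freedom, then flag mismatches with the reported p.
# (check_reported_t is a hypothetical helper, not any tool's actual API.)
from scipy import stats

def check_reported_t(t_value, df, reported_p, tol=0.01):
    recomputed = 2 * stats.t.sf(abs(t_value), df)  # two-sided p from |t| and df
    return recomputed, abs(recomputed - reported_p) < tol

# e.g. a paper reporting "t(28) = 2.20, p = .04": the recomputed p is
# about .036, which is within tolerance of the reported value.
p, consistent = check_reported_t(2.20, 28, 0.04)
print(f"recomputed p = {p:.3f}, consistent with report: {consistent}")
```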

Is “Spiritual Intelligence” a Valid Concept? – Scott McGreal (Unique—Like Everybody Else)

"Spiritual intelligence" has been popularized in recent years as an "alternative" intelligence based on little evidence, However, could the concept have some scientific merit?

Reflections on SIPS (guest post by Neil Lewis, Jr.) – Sanjay Srivastava (The Hardest Science)

The following is a guest post by Neil Lewis, Jr. Neil is an assistant professor at Cornell University.

Last week I visited the Center for Open Science in Charlottesville, Virginia to participate in the second annual meeting of the Society for the Improvement of Psychological Science (SIPS). It was my first time going to SIPS, and I didn’t really know what to expect. It was unlike any other conference I’ve been to, with very little formal structure. There were a few talks and workshops here and there, but the vast majority of the time was devoted to “hackathons” and “unconference” sessions where people got together and worked on pressing issues in the field: making journals more transparent, designing syllabi for research methods courses, forming a new journal, changing departmental/university culture to reward open science practices, making open science more diverse and inclusive, and much more. Participants were free to work on whatever issues they wanted and to set their own goals, timelines, and strategies for achieving those goals.

I spent most of the first two days at the diversity and inclusion hackathon that Sanjay and I co-organized. These sessions blew me away. Maybe we’re a little cynical, but going into the conference we thought maybe two or three people would stop by, and it would essentially be the two of us trying to figure out how to make open science more diverse and inclusive. Instead, we had almost 40 people come and spend the first day identifying barriers to diversity and inclusion and developing tools to address those barriers. We had sub-teams working on (1) improving the measurement of diversity statistics (it is hard to know how much of a diversity problem one has if measurement is poor), (2) figuring out methods to assist those who study hard-to-reach populations, (3) articulating the benefits of open science, and gathering resources for getting started, for those who are new to it, (4) leveraging social media for mentorship on open science practices, and (5) developing materials to help PIs and institutions more broadly recruit and retain traditionally underrepresented students/scholars. Although we’re not finished, each team made substantial headway in its area. Continue reading

Improving Psychological Science at SIPS – Sanjay Srivastava (The Hardest Science)

Last week was the second meeting of the Society for the Improvement of Psychological Science, a.k.a. SIPS[1]. SIPS is a service organization with the mission of advancing and supporting all of psychological science. About 200 people met in Charlottesville, VA to participate in hackathons and lightning talks and unconference sessions, go to workshops, and meet other people interested in working to improve psychology.

What Is This Thing Called SIPS?

If you missed SIPS and are wondering what happened – or even if you were there but want to know more about the things you missed – here are a few resources I have found helpful. The conference program gives you an overview, and the conference OSF page has links to most of what went on, though it’s admittedly a lot to dig through. For an easier starting point, Richie Lennie posted an email he wrote to his department with highlights and links, written specifically with non-attendees in mind. Drilling down one level from the conference OSF page, all of the workshop presenters put their materials online. I didn’t make it to any workshops, so I appreciate having access to those resources. Continue reading

Thresholds – David Funder (funderstorms)

Part One

I’ve been suffering an acute bout of cognitive dissonance lately, finding myself disagreeing with people I admire, specifically several of the authors of this article. (The article has 72 authors and I don’t know all of them!) The gist of the article can be stated very simply and in the authors’ own words: “We propose to change the default P-value threshold for statistical significance for claims of new discoveries from .05 to .005.” This proposal is soberly and clearly argued, and the article makes some good points, the best of which is that, imperfect as this change would be, at least it’s a step in the right direction. But I respectfully disagree. Here’s why.

I’m starting to think that p-levels should all be labeled “for entertainment purposes only.” They give a very rough idea of the non-randomness of your data, and are kind of interesting to look at. So they’re not completely useless, but they are imprecise at best and almost impossible to interpret at worst*, and so should be treated as only one among many considerations when we decide what we as scientists actually believe. Other considerations (a partial list): prior probabilities (also very rough!), conceptual coherence, consistency with related findings, and (hats off please) replicability. Continue reading
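Funder’s “imprecise at best” point is easy to see in simulation. Below is a minimal sketch (mine, not from the post; the per-group n of 30 and true effect size of d = 0.4 are illustrative assumptions) showing how much p-values vary across exact replications of the same experiment:

```python
# A minimal sketch showing how unstable p-values are across exact
# replications of the same experiment. The per-group n of 30 and true
# effect size (d = 0.4) are illustrative assumptions, not from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, d = 30, 0.4  # per-group sample size and true standardized effect

pvals = []
for _ in range(20):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    pvals.append(p)

# The same underlying effect produces p-values spanning orders of magnitude.
print(sorted(round(p, 3) for p in pvals))
```

With a design like this, power is modest, and the twenty replications typically yield p-values ranging from well below .005 to well above .05, even though the underlying effect never changes.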

alpha wars – Simine Vazire (sometimes i'm wrong)

[DISCLAIMER: The opinions expressed in my posts are personal opinions, and they do not reflect the editorial policy of Social Psychological and Personality Science or its sponsoring associations, which are responsible for setting editorial policy for the journal.]

i was going to do a blog post on having thick skin but still being open to criticism, and how to balance those two things. then a paper came out, which i’m one of 72 authors on, and which drew a lot of criticism, much of it from people i respect a ton and often agree with (one of them is currently in my facebook profile picture, and one is among the smartest people i know). so this is going to be a two-fer blog post: one on my thoughts about arguments against the central claim in the paper, and one on the appropriate thickness for scientists’ skin.

PART I: the substantive argument*

in our paper we argue that we should introduce a new threshold for statistical significance (and for claiming new discoveries), and make it .005. i want to take a moment to emphasize some other things we said in the paper. we said that .005 should not be a threshold for publication. Continue reading
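To make the stakes of the proposal concrete, here is a minimal sketch (mine, not from the paper; the effect size of d = 0.4 and the 80% power target are illustrative assumptions) of how the required sample size for a two-sample t-test grows when the significance threshold moves from .05 to .005:

```python
# A minimal sketch of the sample-size cost of lowering the significance
# threshold from .05 to .005 for a two-sample t-test. The effect size
# (d = 0.4) and 80% power target are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.005):
    n = analysis.solve_power(effect_size=0.4, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: n per group = {n:.0f}")
```

Running this shows the per-group n increasing substantially (on the order of 70% under these assumptions), which is the resource cost that critics of the proposal tend to emphasize.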