Quality of Evidence Revealing Subtle Gender Biases in Science Is in the Eye of the Beholder

When presented with evidence of how gender bias disadvantages women in STEM fields, men evaluate this research more negatively than women.

Introduction

Although the fields of science, technology, engineering, and mathematics (STEM) hold objectivity as a fundamental principle, a growing body of evidence shows that unconscious gender bias disadvantages women and their work in STEM fields. Awareness of the research demonstrating this bias is a key step toward addressing the issue. Awareness, however, depends not only on the evidence itself but also on the audience's receptivity to it. Men make up the vast majority of STEM faculty, a key audience for this research, yet previous work on bias suggests that they may be especially reluctant to accept evidence of gender bias, perceiving it as a threat to their group identity and status. These randomized, double-blind studies compare how men and women perceive the quality of research evidence of gender bias in STEM fields. Specifically, the authors examined how men and women rated the quality of research abstracts reporting gender bias in STEM, and compared results across STEM faculty, non-STEM faculty, and the general public.

Findings

When shown an abstract reporting evidence of gender bias disadvantaging women in STEM fields, men evaluated the research less favorably than women, especially among STEM faculty.

  • Among the general public, men rated the quality of an abstract reporting gender bias less favorably than women (4.25 vs. 4.66 on a 6-point scale), with a moderate effect size (a worked effect-size example follows this list).
  • Among faculty at a research university, men rated the quality of an abstract reporting gender bias less favorably than women (4.21 vs. 4.65), with a moderate effect size.
    • Among 89 non-STEM faculty, men and women rated the quality of the abstract similarly (4.55 vs. 4.54); the difference was not statistically significant.
    • Among 116 STEM faculty, men rated the quality of the abstract less favorably than women (4.02 vs. 4.80), with a large effect size.
    • The difference in ratings between STEM and non-STEM faculty was statistically significant among men, but not among women.
  • In a replication experiment with adults in the general public, men rated the quality of a different abstract less favorably than women (3.65 vs. 3.86) when it reported gender bias, and more favorably than women (3.83 vs. 3.59) when it was altered to report no gender bias, with small effect sizes.
  • In the first two experiments, ratings did not differ depending on whether the abstract was attributed to a male or a female first author.
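
The effect sizes above are reported only qualitatively. As a minimal illustration of how such an effect size can be computed from group means, the Python snippet below calculates Cohen's d using a pooled standard deviation; the standard deviations and group sizes are hypothetical placeholders, since the summary does not report them.

    import math

    def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
        """Cohen's d for two independent groups, using the pooled standard deviation."""
        pooled_sd = math.sqrt(
            ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
        )
        return (mean_a - mean_b) / pooled_sd

    # Means are taken from the general-public experiment above (4.66 for women,
    # 4.25 for men); the standard deviations (1.0) and group sizes (roughly half
    # of 205 each) are hypothetical, not values reported in the study.
    d = cohens_d(4.66, 1.0, 103, 4.25, 1.0, 102)
    print(f"Cohen's d = {d:.2f}")  # 0.41 with these assumed inputs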

In short, gender bias in STEM creates barriers not only to women’s advancement, but also to accepting evidence of these barriers.

Methodology

All three experiments were conducted on an online survey platform. Experiments 1 and 3 recruited 205 and 303 U.S.-based adults, respectively, through a task posted to the Amazon Mechanical Turk online labor market. Experiment 2 recruited 205 faculty (STEM and non-STEM) from a U.S. research university by inviting all 506 tenure-track faculty to participate in a faculty climate survey, followed by an optional survey for the experiment.

In all experiments, participants read a real abstract from a peer-reviewed journal, then rated the abstract and the research it described on four items, each on a 6-point scale: agreement with the authors' interpretation of the results, the importance of the research, and how well written and how favorable they found the abstract. The final quality rating was the average of the four items.

Experiments 1 and 2 used an abstract from Moss-Racusin et al. (previously summarized on GAP), which found evidence of gender bias among science professors evaluating students for a laboratory-manager position. Participants were randomized to view the abstract with either a female or a male first-author name and an affiliation with either Yale University or Iowa State University. Experiment 3 used an abstract by Knobloch-Westerwick et al., which found evidence of gender bias in the perceived quality of scientific conference abstracts and in resulting collaboration interest. Participants were randomized to read either the original abstract or an altered version reporting no gender bias in the same experiment, with no author information shown.
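
As a minimal sketch of the rating procedure described above (not the authors' actual materials or analysis code), the snippet below shows how a participant's overall quality rating can be formed as the average of the four 6-point items, and how the author-name and affiliation conditions could be randomly assigned; the condition labels and example responses are illustrative assumptions.

    import random
    from statistics import mean

    # Random assignment mirroring the design described above: each participant sees
    # the abstract with one of two first-author genders and one of two affiliations.
    # These labels are illustrative, not the exact stimuli used in the study.
    author_condition = random.choice(["female first author", "male first author"])
    affiliation_condition = random.choice(["Yale University", "Iowa State University"])

    # Four items rated on a 6-point scale (1 = low, 6 = high): agreement with the
    # authors' interpretation, importance of the research, how well written the
    # abstract is, and how favorably the participant views it.
    # The responses below are made-up example values.
    item_ratings = [5, 4, 6, 5]
    assert all(1 <= r <= 6 for r in item_ratings)

    # Overall quality rating = average of the four items.
    overall_rating = mean(item_ratings)
    print(author_condition, affiliation_condition, overall_rating)  # e.g. ... 5.0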

Related GAP Studies