Original article source: Artificial Intelligence on Medium
March is Women’s History Month in the US, the UK and Australia, a time to honour women’s often underrated contributions to society. According to the US National Women’s History Museum, Women’s History Month began in 1978 as a local “Women’s History Week” celebration in California, with organizers choosing a week that included International Women’s Day on March 8. In 1987 the US Congress passed Public Law 100–9, designating March as Women’s History Month.
The past few decades have seen a steady increase in the number of women studying and excelling in the STEM fields. Computer science, however, has moved in the opposite direction: the number of women studying or pursuing a career in the field has been declining since around 1990. As of 2015, women made up only 18% of computer science majors in the US, down from a high of 37% in 1984, according to a 2018 report from the National Academies of Sciences, Engineering, and Medicine. The AI Now Institute last year estimated that women currently make up 24.4% of the computer science workforce and earn median salaries that are only 66% of those of their male counterparts.
A report by the National Center for Women & Information Technology found that nearly half of the women who go into technology eventually leave the field — more than double the percentage of men who do.
The gender imbalance among machine learning researchers is therefore hardly surprising. The AI Now Institute warned in a 2019 report that the AI industry needs to “acknowledge the gravity of its diversity problem and admit that existing methods have failed to contend with the uneven distribution of power.” The report also argued that the lack of gender diversity creates the risk, as a growing body of research in recent years has suggested, that AI systems may perpetuate existing forms of structural inequality and cause harm to underrepresented groups.
As part of this month’s Women in AI special project, Synced takes a look at some key numbers (and trends) on gender gaps in the AI industry and discusses possible ways to address the issue.
In 2018, WIRED worked with Montreal research firm Element AI to estimate the diversity of leading machine learning researchers, and found that only 12 percent were women. The estimate was based on an analysis of the numbers of men and women who contributed work to three top machine learning conferences in 2017.
Nesta, a UK-based innovation foundation, estimated last year that only 13.83 percent of AI paper authors are women, and that in relative terms the proportion of AI papers co-authored by at least one woman has not improved since the 1990s. Nesta conducted a large-scale analysis of gender diversity in AI research using publications from the widely used preprint repository arXiv (1,372,350 papers), identifying 74,407 AI papers through an expanded keyword analysis and predicting author gender with a name-to-gender inference service, Gender API.
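At a high level, Nesta’s pipeline has two steps: flag AI papers by matching keywords in their text, then infer author gender from first names and compute the share of AI papers with at least one female co-author. A minimal sketch of that logic is below; note that the tiny keyword set and the `NAME_GENDER` lookup table are toy stand-ins for illustration — the actual study used a far larger expanded keyword list and the commercial Gender API service, not a local table.

```python
# Toy sketch of a Nesta-style analysis: classify papers as AI via
# keyword matching, then estimate the share of AI papers with at
# least one (predicted) female co-author.

AI_KEYWORDS = {"neural network", "machine learning",
               "deep learning", "reinforcement learning"}

# Stand-in for a name-to-gender inference service such as Gender API.
# None models a low-confidence prediction that gets discarded.
NAME_GENDER = {"alice": "female", "maria": "female",
               "bob": "male", "wei": None}

def is_ai_paper(text: str) -> bool:
    """Flag a paper as AI if any keyword appears in its title/abstract."""
    lowered = text.lower()
    return any(kw in lowered for kw in AI_KEYWORDS)

def share_with_female_author(papers) -> float:
    """Fraction of AI papers with at least one predicted-female author."""
    ai_papers = [p for p in papers if is_ai_paper(p["abstract"])]
    if not ai_papers:
        return 0.0
    hits = sum(
        1 for p in ai_papers
        if any(NAME_GENDER.get(name.split()[0].lower()) == "female"
               for name in p["authors"])
    )
    return hits / len(ai_papers)

papers = [
    {"abstract": "A deep learning approach to parsing",
     "authors": ["Alice Smith", "Bob Lee"]},
    {"abstract": "Quantum chromodynamics on the lattice",
     "authors": ["Maria Garcia"]},
    {"abstract": "Reinforcement learning for robotics",
     "authors": ["Bob Lee"]},
]
print(share_with_female_author(papers))  # 0.5: 2 AI papers, 1 with a female author
```

The real analysis additionally has to handle name ambiguity across languages — the reason, as noted below, that Chinese-affiliated papers were excluded from Nesta’s sample.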
Element AI’s 2019 Global AI Talent Report found that 18 percent of authors at the 21 leading conferences in the field are women.
The Nesta report also found important international differences in the gender diversity gap in AI research. China, one of the world leaders in AI research, was excluded from the sample because author gender could not be inferred from names with sufficient confidence.
Over 30 percent of AI papers with Dutch affiliations had at least one female co-author, while only 10 percent and 16 percent of those with Japanese and Singaporean affiliations, respectively, did. Notably, in most countries covered by the report, the share of papers with at least one female author is higher among AI papers than among non-AI papers.
The 2019 Artificial Intelligence Index reports that, across all the educational institutions examined, males constituted a clear majority of AI department faculty, making up 80 percent of AI professors on average.
According to the 2019 Artificial Intelligence Index, diversifying AI faculty along gender lines has not shown significant progress — with women comprising less than 20 percent of the new faculty hires in 2018. Similarly, the share of female AI PhD recipients has remained virtually constant at 20 percent since 2010 in the US.
The percent of new female tenure-track faculty has remained largely constant at slightly over 21 percent, according to the AI Index 2019.
21 percent of Google’s technical roles are filled by women, according to company figures released in 2018.
In its 2018 Global Gender Gap Report the World Economic Forum found that 22 percent of AI professionals on LinkedIn were women, with no evidence of improvement in recent years.
Facebook said in 2018 that 22 percent of its technical workers were women.
Apart from the University of Washington, every other academic institution and industry organization in Nesta’s dataset has less than 25 percent female AI researchers.
For example, 11.3 percent of Google researchers who have published AI research on arXiv are women, with comparable proportions at Microsoft (11.95 percent) and IBM (15.66 percent). Among universities, ETH Zurich has the lowest share of women authors in AI research on arXiv, at 10.15 percent.
The AI Index 2018 surveyed online job advertisement data and found that 71 percent of applicants for AI roles in the US in 2017 were men.
Nesta’s 2019 Gender Diversity in AI Research report presented opinions from a number of leading female researchers:
Mihaela van der Schaar, Professor of Machine Learning, Artificial Intelligence, and Medicine at the University of Cambridge and a Turing Fellow at The Alan Turing Institute in London, is the female AI researcher with the most publications in Nesta’s data.
- Even though she began working in AI 16 years ago, her presence in the field and much of her early work have only been recognised recently.
- “Disparity of recognition between men and women is slowly changing but there is a lot more that needs to be done.”
- Van der Schaar also highlighted the importance of an open discussion about the challenges that women face in the AI sector, and that workplace changes such as flexible hours are needed to enable researchers to participate in a fast-paced sector without sacrificing their family life.
- She noted that having a good mentor is important — people to champion gender diversity in the AI sector as well as individuals at the top of institutions to push forward policies and drive change.
- She pointed out that “publicising the interdisciplinary scope of possibilities and career paths that studying AI can lead to will help to inspire a more diverse group of people to pursue it. In parallel, the industry will benefit from a pipeline of people who are motivated by combining a variety of ideas and applying them across domains.”
Petia Radeva, a professor in the Department of Mathematics and Computer Science at the University of Barcelona, an ICREA Academia fellow, and head of the university’s Computer Vision and Machine Learning research group, is another female AI researcher who ranks among the highest in number of publications on arXiv.
- Radeva noted that the lack of diversity in AI research requires policy interventions that tackle the issue at its root, in higher education and universities.
- She underlined that prestigious AI conferences have initiatives, from parallel sessions and workshops to policies supporting diversity and inclusion, that are trying to address the lack of women in the field and create an inclusive environment.
- She emphasised that it is important for young researchers to work on topics that motivate them and that she was positive that the broad domains of application and the potential impact of this technology will attract more women into the sector.
Eve Riskin is the University of Washington’s Associate Dean of Diversity and Access in the College of Engineering, Professor of Electrical & Computer Engineering, and Faculty Director of the ADVANCE Center for Institutional Change.
- Riskin said environments in male-dominated subjects can become ‘toxic’, and that the University of Washington is working with department chairs on schemes for cultural change while providing professional development assistance to female faculty in STEM.
- “Research has shown that female undergrads achieve more under female faculty, and so you need a two-pronged approach.”
- At the same time, commitment, time and funding are needed to drive change.
- She pointed out that “it is important to make under-represented groups more visible but this has to be done thoughtfully. If communications teams are highlighting the stories of women and minorities this must be done as part of a broader programme of activity. These groups have to be genuinely welcomed and nurtured, not just used for photo ops.”
The AI Now Institute’s 2019 report Discriminating Systems: Gender, Race, and Power in AI recommended:
- To improve workplace diversity:
- Alter hiring practices to maximize diversity — ensure more equitable focus on under-represented groups and create more pathways for contractors, temps, and vendors to become full-time employees. Commit to transparency around hiring practices — especially regarding how candidates are levelled, compensated, and promoted.
- Publish compensation levels, including bonuses and equity, across all roles and job categories broken down by race and gender, and set pay and benefit equity goals that include contract workers, temps, and vendors.
- Publish harassment and discrimination transparency reports — including the number of claims over time, the types of claims submitted, and actions taken.
- To address bias and discrimination in AI systems:
- Transparency is essential — begin with tracking and publicizing where AI systems are used, and for what purpose. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
- Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
- The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all — based on a thorough risk assessment.
A few groups aiming to address the gender imbalance in the field: