The field of Artificial Intelligence (AI) has grown exponentially as the world is increasingly being built around automated systems and ‘smart’ machines.[i] Yet the people whose work underpins this are far from representative of the society these systems are meant to serve. There is an extensive gender gap in AI,[ii] with a significant absence of women working in the data and AI fields globally. According to the World Economic Forum, women make up an estimated 26% of workers in AI roles worldwide. Other studies have found that only 10-15% of machine learning researchers at the leading technology companies are women; less than 14% of authors of AI research papers are women; and women are under-represented, at 17-18%, across the largest online global data science platforms. Evidence has also been found of persistent structural inequality within the data and AI fields, with the career paths and trajectories of AI professionals differentiated by gender. At the same time, and closely related to these issues, there are significant gender gaps and other biases in the data used to train AI and machine learning systems.
The under-representation of women (and marginalized groups) in data science and AI, alongside algorithmic and data biases such as gender data gaps, is not only a fundamental ethical issue of social and economic justice (as well as of value-in-diversity). Crucially, these factors have also been found to create feedback loops through which gender bias is built into AI systems and other technical products. Thus, as AI becomes ubiquitous in everyday life, inclusion in technology is of increasing concern to many. This Explainer sheds light on the importance of understanding and closing the gender gap in AI.
Historical, technical and political parameters
While digitalization holds the promise of greater equality, it also risks encoding and amplifying existing patterns of (gender) inequality, particularly in the AI fields. The issue is complex, but can be broken down into its main historical, technical and political parameters.
Historically, the under-representation of women in the technology sector has been framed as a ‘pipeline problem’, suggesting that the low number of women in tech is due to a small talent pool in STEM fields.[iii] Indeed, women’s participation is constrained by unequal access to technical education and training, and girls drop off at every rung of the digital skills continuum. However, this perspective tends to neglect the failure of institutions, including technology companies, to attract and retain female talent. It is also important to consider the ‘unwelcoming’ cultures of tech workplaces as a reason for women’s reluctance to join the industry, and for their high rates of attrition from AI professions. Such cultures encompass gendered stereotypes around technical expertise, pay gaps and harassment.
The history of women in technology is also a useful reference point for understanding the issue. At the dawn of electronic computing during the Second World War, software programming was largely considered ‘women’s work’, and the first ‘computers’ were young women. As programming became professionalized, however, the gender composition of the industry shifted, pushing female technical experts into less prestigious sub-specialisms. In other words, as the field gained money and influence, more men entered it. This pattern appears to be repeating itself with the advent of the data science and AI fields, which also risks widening the gender pay gap further.
The technical and political aspects of data are also important, since data are not neutral in their collection, analysis, interpretation or use. Not only can unconscious human biases affect such processing decisions, but data created within unequal (gender) power structures can reproduce the same discriminations present in society. Data used to train algorithms may under-represent (or harmfully over-represent) certain groups. A key instance is the gender data gap: the failure to collect sufficient gender-disaggregated data, which further shapes gendered AI.[iv]
What is happening at the moment?
A growing strand of research documents how AI systems can exhibit gender – and other – biases, and AI products increasingly make headlines for their discriminatory outcomes. For example, a study from the MIT Media Lab found that facial recognition software successfully identifies the faces of white men but fails to recognize those of dark-skinned women. Similarly, when translating gender-neutral language related to STEM fields, Google Translate was found to default to male pronouns. In the media, reports have highlighted how marketing algorithms disproportionately present scientific job advertisements to men. The introduction of automated hiring is particularly worrying: the fewer women employed within the AI sector, the higher the potential for future AI hiring systems to exhibit gender bias, creating a self-reinforcing cycle. Such gender-biased AI not only has an immediate impact on individuals, but can also contribute to setbacks in gender equality globally.
Nonetheless, there is some important work being done by a variety of actors, not only towards increasing the number of women in AI, but also towards tackling related AI biases. Feminist and AI ethics scholars, as well as civil society, grassroots and non-profit initiatives, conduct research on – and draw the attention of Big Tech companies and policy-makers to – these issues. Data & Society, HAI, and AI Now are notable organizations working on the social implications of AI. Projects on gender and AI include Gender Shades at MIT, Women in Data Science and AI at The Alan Turing Institute, and Gendered Innovations at Stanford, while initiatives encouraging women within AI include Women in AI and Women in Machine Learning.
Governments and international institutions are also addressing the issue to varying degrees, with some national and inter-governmental AI strategies, standards and initiatives developed to guide the (ethical) development of AI.[v] A noteworthy intervention by the UK government’s Office for AI saw £18.5 million pledged to boost diversity in AI roles, funding conversion degrees, including 1,000 scholarships for people from under-represented groups. Additionally, the United Nations has recently addressed the gender gap in AI; this work includes UNESCO’s influential I’d Blush if I Could paper, which explores the widespread female gendering of AI voice assistants.
Large technology companies and educational institutions are also key stakeholders, particularly with regard to their internal diversity and inclusion in AI. Various initiatives have been trialled in the past few decades, particularly in developed economies, to encourage more women into technological fields, including AI. The US university Carnegie Mellon provides a successful model: the proportion of women in its Computer Science department increased dramatically from 7% in 1995 to 49% in 2017. The leading technology companies, however, have been less successful, despite diversity programs. For example, according to its 2020 diversity report, only 1.6% of Google’s US workforce are black women.
What happens next?
To a certain extent, the future state and implications of the gender gap in AI lie with the world’s largest technology companies. First, while estimates are available for the number of women in the global AI workforce, detailed, intersectional data about diversity in this field is still severely limited.[vi] Data made available by Big Tech companies are fragmented, and most national labor force statistics lack disaggregated information about the tech workforce. Improved disclosure of quality data, particularly on the gender composition of technical and leadership teams, is needed to thoroughly examine, and thus close, the gender gap in AI. Second, the ways in which technology firms approach AI and data ethics will be crucial. Attention to various aspects of the gender gap in AI through internal ethics boards, technical bias mitigation and investment in ethical data infrastructures, for example, may help to tackle the issue, although such a large-scale effort will be difficult.
Future relationships between the tech sector and other key stakeholders in the field will be critical. Rapid advances in the development and rollout of AI over the past few years have triggered various initiatives in governments, for example, the European Commission’s recent legal framework proposal for AI regulation. New data-driven technologies developed within a regulatory and legislative framework that promotes AI fairness, accountability, responsibility, transparency and explainability hold promise for closing the gender gap in this sector. However, as it stands, explicit, substantial references to gender in existing AI ethics principles are scarce. Going forward, an interdisciplinary, multifaceted approach, including attention to AI skills acquisition and data literacy as well as the gender-sensitive design of machine learning systems, will be crucial. At a time when women in tech are nearly twice as likely as men to have lost their jobs or been furloughed due to the pandemic, closing the gender gap in AI is essential.
Criado Perez, C. (2019). Invisible Women: Exposing Data Bias in a World Designed for Men. London: Chatto & Windus.
D’Ignazio, C. and Klein, L. F. (2020). Data Feminism. Cambridge, MA: MIT Press.
Hicks, M. (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. Cambridge, MA: MIT Press.
Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
West, M., Kraut, R. and Chew, H. E. (2019). I’d blush if I could: closing gender divides in digital skills through education. UNESCO Equals, 306. https://en.unesco.org/Id-blush-if-I-could
World Economic Forum (2020). Global Gender Gap Report 2020. https://www.weforum.org/reports/gender-gap-2020-report-100-years-pay-equality
Young, E., Wajcman, J. and Sprejer, L. (2021). Where are the women? Mapping the gender job gap in AI. Policy Briefing. The Alan Turing Institute. https://www.turing.ac.uk/research/publications/report-where-are-women-mapping-gender-job-gap-ai
[i] Artificial Intelligence (AI) is defined as “When a machine or system performs tasks that would ordinarily require human (or other biological) brainpower to accomplish” (The Alan Turing Institute, 2021).
[ii] Note that ‘gender’ refers to socio-cultural attitudes, behaviors and identities, while ‘sex’ refers to biological characteristics.
[iii] (a) STEM stands for ‘Science, Technology, Engineering and Mathematics’; (b) Not all countries have the same level of gender (in)equality in their technology workforces. For example, in Malaysia some universities have up to 60% women on computer science programmes, with near parity also reported in some Taiwanese and Thai institutions.
[iv] Note that the gender data gap tends to be larger for low and middle-income countries. A 2019 study of national databases in fifteen African countries, conducted by Data 2X and Open Data Watch, found that sex-disaggregated data were available for only 52% of the gender-relevant indicators.
[v] It is important to note that some of these strategies discuss the gender gap in AI, although it is not a central focus.
[vi] ‘Intersectionality’, coined in 1989 by Professor Kimberlé Williams Crenshaw, recognizes that women are a multifaceted and heterogeneous group, with a plurality of experiences, and that gender intersects with multiple aspects of difference and disadvantage.
The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.