AI Inequalities at Work
Dencik, Lina; Brand, Jessica; Metcalfe, Philippa and Hopkins, Cate. 2025. AI Inequalities at Work. Project Report. Data Justice Lab, Cardiff. [Report]
Text: AI-Inequalities-At-Work.pdf (Published Version, 962kB)
Abstract or Description
The report provides a review of research to date on AI and (in)equality in the workplace, focusing on the following areas: Age; Women; Disability; Ethnicity; and Minority Languages. Drawing on studies from across the world, the report highlights that although AI is often advanced on grounds of efficiency, enhanced productivity and more objective decision-making, the impacts of data-driven technologies, including AI, on work and workers have so far tended to extend or introduce significant inequalities. By showing how such inequalities arise and become manifest within the workplace across different groups of workers, the report highlights both the intersectional nature of AI inequalities and the particularities of different lived experiences.
Looking at uses of AI during key stages of workplace relations and the labour process, from hiring and recruitment through to management, including the direction, evaluation and disciplining of workers, as well as the more recent adoption of generative AI in workplace settings, the report provides a comprehensive overview of the complex and multifaceted nature of AI inequalities. It shows that significant inequalities arise within the workplace both from the nature of the technology itself, particularly what data is or can be generated and collected, how that data is processed, and the outputs and decision-making that result from such processes, and from the broader context in which such technology is developed and used.
By exploring research on questions of age, women, disability, ethnicity and minority languages in relation to AI and inequality, the report makes clear that the disparate impact of AI on different workers is intricately linked to historical patterns of social and economic inequality that see the already advantaged reap most of the benefits of AI, while those already disadvantaged tend to be most at risk of harm. Such harm can occur through being more exposed and subjected to the use of AI technologies within the labour process, through being more likely to experience discriminatory outcomes based on their use, or through being less equipped with the resources needed to exploit the opportunities of AI. For example, both women and young workers tend to occupy more precarious positions in the labour market, such as care work and platform labour, where experimentation with AI technologies in the management of workers has become more widespread, often involving increased surveillance and work intensification. They also predominate in jobs that are more likely to be replaced by AI-driven automation. Furthermore, reliance on such technologies has been shown to particularly harm older workers and ethnic minorities, whose identities and experiences are not properly accounted for in the design and use of data-driven systems. Similarly, disabled workers and minority language workers are often found to be stigmatised or excluded when new technologies are introduced.

At the same time, the report also showcases some of the ways AI has been used to support or advance equality, whether through more inclusive technologies or by highlighting existing discriminatory practices within organisations. For example, active efforts have been made to create new data-driven models catered explicitly to minority languages in order to further advance their use in society more broadly. Similarly, research shows that AI technologies can be used to actively include disabled workers in processes from which they were previously excluded. The use of AI in recruitment and hiring can also seek to deliberately target historical practices that have resulted in biased or exclusionary outcomes, perhaps particularly with regard to women and ethnic minorities, and to allow for more inclusive forms of recruitment.
While such advances are welcome responses to the use of AI in the workplace, the report also makes clear that efforts from within and beyond the labour movement have overwhelmingly been oriented towards minimising the harms of AI, often ex post, by securing more transparency and better safeguarding measures, or by seeking to end or limit the use of AI technologies for certain purposes or in particular settings. In this sense, the report shows the continued need to mobilise efforts that can also address AI inequalities preventatively, both through stricter regulation, including avenues for AI to be refused, and through enhancing workers' voices and decision-making power within workplaces in ways that actively foreground the experiences of those workers most likely to be harmed or disadvantaged by its use.
Item Type: Report (Project Report)
Related URLs:
Departments, Centres and Research Units:
Date: 26 March 2025
Item ID: 38775
Date Deposited: 09 May 2025 11:31
Last Modified: 09 May 2025 11:31
URI: