Fairness and bias in algorithmic recruitment tools: An interdisciplinary approach
Hilliard, Airlie. 2025. Fairness and bias in algorithmic recruitment tools: An interdisciplinary approach. Doctoral thesis, Goldsmiths, University of London.
IMS_thesis_HilliardA_2025.pdf (Accepted Version). Available under License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract or Description
Industrial-organisational psychology and computer science are increasingly coming together to create innovative pre-employment selection tools that use non-traditional data and scoring methods to improve the test-taking experience and maximise test validity. However, the two fields take different approaches to scoring and to measuring bias and fairness: psychology scores tests using simple algebra, whereas computer science uses predictive modelling and machine learning, and bias and fairness are distinct concepts in psychology but are treated as synonymous in machine learning. Accordingly, using two commercially developed algorithmic selection tools, this thesis describes six empirical studies investigating the impact of an image-based format and machine learning-based scoring on test validity/accuracy, fairness, and bias.
In terms of scoring, this research found acceptable subgroup differences in personality and showed that a machine learning-based approach can increase test validity compared with a manual scoring approach. However, it also found that computer science approaches to mitigating bias can be incompatible with psychological best practices and equal opportunity laws.
In terms of format, this thesis provides the first data on the fairness of pre-employment tests for neurodivergent test-takers, showing that neurotypical test-takers generally have a more positive experience than neurodivergent test-takers. It found that image-based assessment formats present an opportunity to close this disparity between the two groups, although further research into the specific features that support this is needed. Finally, it found that well-trained algorithms generalise to neurodivergent populations without causing biased outcomes.
Overall, this thesis provides the foundations for psychologists and computer scientists to work more collaboratively to maximise test validity, fairness, and accessibility of pre-employment tests while minimising biased outcomes.
Item Type: Thesis (Doctoral)
Keywords: algorithmic bias, algorithmic fairness, algorithmic recruitment, image-based assessment, neurodiversity
Departments, Centres and Research Units:
Date: 31 January 2025
Item ID: 38521
Date Deposited: 28 Feb 2025 18:23
Last Modified: 28 Feb 2025 18:23
URI: