What personalisation can do for you! Or: how to do racial discrimination without ‘race’
Phan, Thao and Wark, Scott. 2021. What personalisation can do for you! Or: how to do racial discrimination without ‘race’. Culture Machine, 20, ISSN 1465-4121 [Article]
Full text: Phan-Wark.pdf (Published Version), available under a Creative Commons Attribution Non-Commercial licence.
Abstract
Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad ‘ethnic affinity’ categories: African American, U.S.-Hispanic, and Asian American. Ostensibly, these categories allowed advertisers to target demographic groups without using data about users’ race, which Facebook explicitly does not collect. This article uses the life and death of Facebook’s ‘ethnic affinity’ categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques.
Adopting Wendy H. K. Chun’s conceptualisation of race ‘and/as’ technology as an analytical frame, this article focuses on what ‘ethnic affinity’ categories do with race. ‘Ethnic affinity’ categories worked by analysing users’ preferences and behaviour: they were supposed to capture an ‘affinity’ for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to ‘personalise’ content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook’s ethnic affinity categories were an attempt to operationalise a ‘post-racial’ mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses to other individuals.
Even in the absence of data on a user’s race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate proxies, like racialised preferences or language use, as racialising substitutes. We conclude that Facebook’s ethnic affinity categories in fact typify novel modes of racialisation that are often elided by the claim that using complex machine learning techniques to attend to our preferences will inaugurate a post-racial present. Discrimination is not personalisation’s accidental product; it is its very condition of possibility. Like that of Facebook’s ethnic affinity categories, its death has been greatly exaggerated.
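The proxy mechanism the abstract describes can be made concrete with a minimal sketch (not from the article itself): synthetic users carry a hidden group label that the platform never stores, yet unsupervised clustering on behavioural features that merely correlate with that label reconstructs the grouping. All feature names, parameters, and data here are hypothetical illustrations, not Facebook’s actual method.

```python
# Illustrative sketch only: clustering on behavioural proxies can
# reconstruct group membership without any 'race' attribute.
# All features and parameters below are hypothetical/synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 1000

# Hidden group label that the platform never collects or stores.
hidden_group = rng.integers(0, 2, size=n)

# Behavioural features (e.g. page likes, language use) that merely
# *correlate* with the hidden group -- the 'proxies' the abstract names.
likes_feature = rng.normal(loc=hidden_group * 1.5, scale=1.0)
language_feature = rng.normal(loc=hidden_group * 1.2, scale=1.0)
X = np.column_stack([likes_feature, language_feature])

# Cluster users on behaviour alone: no demographic data is used.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# How well do the behavioural clusters recover the hidden group?
# (Cluster numbering is arbitrary, so take the better of the two mappings.)
agreement = max((labels == hidden_group).mean(),
                (labels != hidden_group).mean())
print(f"cluster/group agreement: {agreement:.0%}")  # well above chance
```

The resulting cluster label plays the role of an ‘affinity’ category: a handle for inclusion or exclusion that never references race, which is precisely why such categories can persist after the named categories are retired.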
Item Type: Article
Item ID: 34642
Date Deposited: 15 Jan 2024 15:20
Last Modified: 15 Jan 2024 15:20
Peer Reviewed: Yes, this version has been peer-reviewed.