Risking lives: Smart borders, private interests and AI policy in Europe
Metcalfe, Philippa; Dencik, Lina; Chelioudakis, Eleftherios and van Eerd, Boudewijn. 2023. Risking lives: Smart borders, private interests and AI policy in Europe. Project Report. Data Justice Lab, Cardiff. [Report]
Abstract or Description
Recent years have seen huge investment in, and advancement of, technologically aided border controls, from biometric databases for identification to unmanned drones for external border surveillance. Data infrastructures and Artificial Intelligence (AI), often from private providers, are playing an increasingly pivotal role in attempts to predict, prevent and control often illegalised mobility into and across Europe. At the same time, the European Union is in the final stages of negotiating and adopting the text of the proposed AI act, the first EU legislation designed to establish comprehensive protections and safeguards with regard to the development, application and use of AI technology. This report explores and interrogates the interplay between smart borders, private interests, and policy surrounding AI within Europe. It does so to make apparent how the concept of 'risk' is integral to the advancement of smart border controls, while concurrently providing the framework for the governance of data infrastructures and AI. This highlights how AI is both embedded within and entrenching particular approaches to migration controls. To understand the relationship between smart borders, private interests and AI policy, we explore four components of smart borders in Europe: the development of 'Fortress Europe' in terms of securitisation, militarisation and externalisation; the technology used in smart borders; funding and profits; and AI policy. The report demonstrates that the concept of 'risk' in the context of migration and AI is used as both a legitimising and a regulatory tool. On the one hand, risk is used to legitimise the ongoing investment in and development of high-tech surveillance and AI at the border to prevent illegalised migrants from reaching European territory. Here, illegalised migrants are portrayed as a security issue and a threat to Europe. On the other hand, the language of risk is adopted as a regulatory tool to categorise AI applications within the AI act. Within these policy developments, we maintain that it is essential to examine the role of private defence and security companies and, as we investigate, their lobbying activities throughout the development of the AI act. These companies stand to make huge profits from the development of smart, securitised borders, seen as the answer to the problem of 'risky' migrants. On this basis, we end by considering the extent to which the AI act fails to benefit and protect those most affected by the harmful effects of smart borders.
Item Type: Report (Project Report)
Keywords: Surveillance, AI, Europe
Departments, Centres and Research Units:
Date: August 2023
Item ID: 37253
Date Deposited: 17 Jul 2024 14:57
Last Modified: 18 Jul 2024 09:25
URI: