Submission of Evidence to the House of Lords Communications and Digital Committee Inquiry into Large language models

McQuillan, Daniel. 2023. Submission of Evidence to the House of Lords Communications and Digital Committee Inquiry into Large language models. Report. UK Parliament, London.

Abstract or Description

1. Large language models contain foundational flaws that mean they cannot live up to the hype, making it likely that the current bubble will burst. They will continue to require vast amounts of invisibilised labour to produce, but will not result in any form of artificial general intelligence (AGI).

2. The greatest risk is that large language models act as a form of ‘shock doctrine’, where the sense of world-changing urgency that accompanies them is used to transform social systems without democratic debate.

3. The AI White Paper promotes populist narratives about AI adoption that align with the hype around large language models while offering only a thin evidence base. Ongoing developments in UK policy, such as the upcoming summit, invoke notions of existential threat while ignoring the more mundane risks of social and environmental harms.

4. The narrative around open source AI is a complete red herring. The way ‘open’ can be applied to large language models doesn’t level the playing field, make the models more secure or challenge the centralisation of control.

5. UK regulators are not well placed to address the issues raised by large language models, because these systems operate across sectors and across technical, economic and social registers while establishing unpredictable feedback loops between them. Meanwhile, the AI industry is already engaged in significant lobbying at the EU, which has proven sufficient to dissolve regulatory red lines.

6. Additional options for regulation draw on frameworks like post-normal science to mandate an extended peer community and the inclusion of previously marginalised perspectives. This more grounded approach, in which regulators are supported by distributed and adaptive ‘councils on AI’, has a better chance of resulting in AI that is more socially productive.

Item Type: Report (Other)

Keywords: LLMs, House of Lords, ChatGPT, Large Language Models, Shock Doctrine, Open Source, Regulation

Departments, Centres and Research Units: Computing

Date: 1 September 2023

Item ID: 35153

Date Deposited: 05 Mar 2024 15:26

Last Modified: 06 Mar 2024 01:08

URI: https://research.gold.ac.uk/id/eprint/35153
