The Crisis of Human Knowledge Formation in AI Society: Algorithmic Probability and Human Abductive Hypotheses
Tamari, Tomoko. 2024. 'The Crisis of Human Knowledge Formation in AI Society: Algorithmic Probability and Human Abductive Hypotheses'. In: British Sociological Association Annual Conference 2024: Crisis, Continuity and Change. Online, United Kingdom, 3–5 April 2024. [Conference or Workshop Item]
No full text available.

Abstract or Description
The emergence of ChatGPT, an artificial intelligence built on a large language model (LLM), has become a central topic for those concerned with the potential risks to human creativity and imagination. Comparing human language acquisition processes with algorithmic machine language systems, the paper analyses their differences and similarities to explore potential risks of human–machine symbiotic knowledge formation. Whereas ChatGPT requires an LLM whose algorithm has been trained on a massive amount of text-based data, human babies learn words one by one, expanding their language capacity through bodily and sensory experience. This is a vital process through which humans link an object's name to its meaning within the complex language system of the real world. In this process, abductive inferences, which generate and verify explanatory hypotheses, help to inductively generalize language concepts. Although the LLM's algorithm can also be seen as using inductive reasoning, it is based on a probabilistic statistical data model, which differs from abduction in human intelligence. Human abductive inferences are not based on mathematically 'rational' calculation; rather, they rely on flexible, inspirational, even irrational or novel conceptualization (and generalization) through embodied experience. This is a key process in the expansion of human language systems and knowledge formation. ChatGPT generates huge volumes of 'text-based' knowledge without involving the intrinsic traits of human language ontogenetic processes. Machine-generated knowledge is recursively integrated and becomes part of the metadata for LLMs. This can distort human abductive inference, conceptual creativity, and knowledge formation mechanisms.
Item Type: Conference or Workshop Item (Paper)

Keywords: algorithms, artificial intelligence, body, machine, knowledge, abductive hypotheses, AI

Event Location: Online, United Kingdom

Date range: 3–5 April 2024

Item ID: 38546

Date Deposited: 05 Mar 2025 12:06

Last Modified: 05 Mar 2025 12:06