Sexualized deepfake abuse: Perpetrator and victim perspectives on the motivations and forms of non-consensually created and shared sexualized deepfake imagery
Flynn, Asher; Powell, Anastasia; Eaton, Asia A.; and Scott, Adrian J. 2025. Sexualized deepfake abuse: Perpetrator and victim perspectives on the motivations and forms of non-consensually created and shared sexualized deepfake imagery. Journal of Interpersonal Violence, ISSN 0886-2605 [Article] (In Press)
Text: 2025 Asher, Powell, Eaton & Scott (JIV).pdf (Accepted Version, 379kB). Available under a Creative Commons Attribution-NonCommercial-NoDerivatives license.
Abstract or Description
Advances in digital technologies provide new opportunities for harm, including sexualized deepfake abuse: the non-consensual creation, distribution, or threat to create/distribute, an image or video of another person that has been altered in a nude or sexual way. Since 2017, there has been a proliferation of shared open-source technologies that facilitate deepfake creation and dissemination, and a corresponding increase in cases of sexualized deepfake abuse. There is a substantive risk that the increased accessibility of easy-to-use tools, the normalization of non-consensually sexualizing others, and the minimization of harms experienced by those who have their images created and/or shared may impede prevention and response efforts. This paper reports on findings from 25 qualitative interviews conducted with perpetrators (n=10) and victims (n=15) of sexualized deepfake abuse in Australia. It provides insights into sexualized deepfake abuse and patterns in perpetration and motivations, and explores theoretical explanations that may shed light on how perpetrators justify and minimize their behavior. Ultimately, the study finds some similarities with other forms of technology-facilitated sexual violence, but identifies a need for responses that recognize the accessibility and ease with which deepfakes can be created, and which capture the diversity of experiences, motivations, and consequences. The paper argues that responses should expand beyond criminalization to include cross-national collaborations to regulate deepfake tool availability, searches, and advertisements.
Item Type: Article
Identification Number (DOI):
Departments, Centres and Research Units:
Dates:
Item ID: 39499
Date Deposited: 03 Sep 2025 15:54
Last Modified: 03 Sep 2025 15:54
Peer Reviewed: Yes, this version has been peer-reviewed.
URI: