Perceptual and automated estimates of infringement in 40 music copyright cases

Yuan, Yuchen; Cronin, Charles; Müllensiefen, Daniel; Fujii, Shinya and Savage, Patrick E. 2023. Perceptual and automated estimates of infringement in 40 music copyright cases. Transactions of the International Society for Music Information Retrieval, 6(1), pp. 117–134. ISSN 2514-3298 [Article]

651eb52a5f2e8.pdf - Published Version
Available under License Creative Commons Attribution.


Abstract or Description

Music copyright infringement lawsuits implicate millions of dollars in damages and costs of litigation. There are, however, few objective measures by which to evaluate these claims. Recent music information retrieval research has proposed objective algorithms to automatically detect musical similarity, which might reduce subjectivity in music copyright infringement decisions, but there remains minimal relevant perceptual data despite its crucial role in copyright law. We collected perceptual data from 51 participants for 40 adjudicated copyright cases from 1915–2018 in 7 legal jurisdictions (USA, UK, Australia, New Zealand, Japan, People's Republic of China, and Taiwan). Each case was represented in three versions: full audio, melody only (MIDI), or lyrics only (text). Due to the historical emphasis in legal opinions on melody as the key criterion for deciding infringement, we originally predicted that listening to melody-only versions would result in perceptual judgments that more closely matched actual past legal decisions. However, as in our preliminary study of 17 court decisions (Yuan et al., 2020), our results did not match these predictions. Participants listening to full audio outperformed not only the melody-only condition but also automated algorithms designed to calculate musical similarity (with maximal accuracy of 83% vs. 75%, respectively). Meanwhile, the lyrics-only condition performed at chance levels. Analysis of outlier cases suggests that music, lyrics, and contextual factors can interact in complex ways that are difficult to capture using quantitative metrics. We propose directions for further investigation, including using larger and more diverse samples of cases, enhanced methods, and adapting our perceptual experiment method to avoid relying on ground truth data only from court decisions (which may be subject to errors and selection bias).
Our results contribute data and methods to inform practical debates relevant to music copyright law throughout the world, such as the question of whether, and the extent to which, judges and jurors should be allowed to hear published sound recordings of the disputed works in determining musical similarity. Our results ultimately suggest that while automated algorithms are unlikely to replace human judgments, they may help to supplement them.

Item Type:

Article
Additional Information:

This work was supported by a Grant-In-Aid from the Japan Society for the Promotion of Science (#19KK0064) and by funding from the New Zealand Government, administered by the Royal Society Te Apārangi (Rutherford Discovery Fellowship 22-UOA-040 and Marsden FastStart Grant 22-UOA-052) to PES.

Data Access Statement:

Musical stimuli, data, and analysis code are available at . The full experiments can be accessed at for full audio, at for melody-only and lyrics-only, and at https://s2survey.net/music_copyright_voao for vocals-only and accompaniment-only. Detailed summaries and primary legal documents for all 40 cases are available at the Music Copyright Infringement Resource (MCIR; Cronin, 2018) and are linked in Table 1 and wherever else they appear in this article.


Keywords:

copyright; similarity; perception; melody; audio; lyrics


Accepted:

17 May 2023

Published:

5 October 2023

Date Deposited:

06 Oct 2023 07:52

Last Modified:

06 Oct 2023 07:52

Peer Reviewed:

Yes, this version has been peer-reviewed.

