Interactive Sound Texture Synthesis through Semi-Automatic User Annotations

Schwarz, Diemo and Caramiaux, Baptiste. 2014. Interactive Sound Texture Synthesis through Semi-Automatic User Annotations. In: M. Aramaki, O. Derrien, R. Kronland-Martinet and S. Ystad, eds., Sound, Music, and Motion: 10th International Symposium, CMMR 2013. Lecture Notes in Computer Science, vol. 8905. Springer, pp. 372-392. [Book Section]


Abstract

We present a way to make environmental recordings controllable again through continuous annotations of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound using canonical time warping (CTW). The annotations then serve as a descriptor for lookup in corpus-based concatenative synthesis, inverting the sound/annotation relationship. The workflow was evaluated in a preliminary subject test: results from canonical correlation analysis (CCA) show high consistency between subjects' annotations and identify a small set of audio descriptors that correlates well with them. An experiment on the propagation of annotations shows that CTW outperforms CCA with as little as 20 s of annotated material.
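To make the two analysis steps in the abstract concrete, the following is a minimal Python sketch (not the authors' implementation) of (a) CCA between a continuous annotation stream and per-frame audio descriptors, and (b) annotation-driven nearest-neighbour unit selection in the spirit of corpus-based concatenative synthesis. The descriptor set, array shapes, random data, and the `select_unit` helper are illustrative assumptions; scikit-learn is used for the CCA.

```python
# Sketch, assuming per-frame audio descriptors and one annotation stream,
# both sampled on the same analysis-frame grid. Data here is random.
import numpy as np
from sklearn.cross_decomposition import CCA

n_frames = 1000
annotation = np.random.rand(n_frames, 1)   # e.g. "wind strength" in [0, 1]
descriptors = np.random.rand(n_frames, 3)  # e.g. loudness, centroid, noisiness

# CCA: find the linear combination of descriptors that correlates most
# strongly with the annotation (one canonical component suffices here).
cca = CCA(n_components=1)
cca.fit(descriptors, annotation)
desc_proj, anno_proj = cca.transform(descriptors, annotation)
r = np.corrcoef(desc_proj[:, 0], anno_proj[:, 0])[0, 1]
print(f"canonical correlation: {r:.3f}")

# Inverting the sound/annotation relationship: treat the annotation as a
# lookup descriptor and pick the corpus frame whose annotation value is
# nearest to the requested control value (hypothetical helper).
def select_unit(target: float, annotation: np.ndarray) -> int:
    """Index of the frame whose annotation value is closest to target."""
    return int(np.argmin(np.abs(annotation[:, 0] - target)))

frame = select_unit(0.7, annotation)  # request "strong wind"
print(f"selected frame {frame} for playback")
```

In the paper's workflow the lookup runs continuously, so a real implementation would select and concatenate units frame by frame as the control value changes; the CTW propagation step (aligning a partial annotation to the full recording) is not shown here.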

Item Type: Book Section

Identification Number (DOI): https://doi.org/10.1007/978-3-319-12976-1_23

Keywords: sound textures, audio descriptors, corpus-based synthesis, canonical correlation analysis, canonical time warping

Departments, Centres and Research Units: Computing > Embodied AudioVisual Interaction Group (EAVI)

Dates: 2014 (Published)

Item ID: 11201

Date Deposited: 23 Jan 2015 12:17

Last Modified: 29 Apr 2020 16:05

URI: https://research.gold.ac.uk/id/eprint/11201
