Musical Audio Synthesis Using Autoencoding Neural Nets
Sarroff, Andy and Casey, Michael A. 2014. Musical Audio Synthesis Using Autoencoding Neural Nets. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR 2014). [Article]
Text: AndySarroffMichaelCaseyICMC2014.pdf (303kB)
Abstract or Description
With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained to learn a mapping from low-level audio features to one or more higher-level representations. Such artificial neural networks are commonly used in classification and regression settings to perform arbitrary tasks. In this work we suggest repurposing autoencoding neural networks as musical audio synthesizers. We offer an interactive musical audio synthesis system that uses feedforward artificial neural networks for musical audio synthesis rather than for discriminative or regression tasks. In our system an ANN is trained on frames of low-level features. A high-level representation of the musical audio is learned through an autoencoding neural net. Our real-time synthesis system allows one to interact directly with the parameters of the model and generate musical audio in real time. This work therefore proposes the exploitation of neural networks for creative musical applications.
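
The core idea the abstract describes, training an autoencoder on frames of low-level features and then driving the decoder directly to generate audio, can be sketched in a few lines of NumPy. The sketch below is a minimal illustration, not the authors' implementation: the single-hidden-layer topology, layer sizes, toy training signal, and training loop are all illustrative assumptions. After training, the hidden-unit values are set by hand (as a performer might with sliders) and decoded into a magnitude-spectrum frame; a separate phase-reconstruction step (e.g. Griffin-Lim) would be needed to turn decoded frames back into audio.

import numpy as np

rng = np.random.default_rng(0)

# Toy "low-level features": magnitude-spectrum frames of a synthetic two-partial tone.
n_fft, hop, n_hidden = 1024, 256, 8
t = np.arange(4 * 44100) / 44100.0
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
frames = np.stack([
    np.abs(np.fft.rfft(signal[i:i + n_fft] * np.hanning(n_fft)))
    for i in range(0, len(signal) - n_fft, hop)
])
frames /= frames.max()              # scale to [0, 1] for the sigmoid output layer
n_in = frames.shape[1]              # n_fft // 2 + 1 spectral bins

# Autoencoder parameters: encoder (W1, b1) and decoder (W2, b2).
W1 = rng.normal(0, 0.01, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, (n_hidden, n_in))
b2 = np.zeros(n_in)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Train with plain batch gradient descent on squared reconstruction error.
lr = 0.1
for epoch in range(200):
    h = sigmoid(frames @ W1 + b1)   # hidden "code" for each frame
    y = sigmoid(h @ W2 + b2)        # reconstructed spectral frame
    grad_y = (y - frames) * y * (1 - y)
    grad_h = (grad_y @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_y / len(frames)
    b2 -= lr * grad_y.mean(0)
    W1 -= lr * frames.T @ grad_h / len(frames)
    b1 -= lr * grad_h.mean(0)

# "Play" the model: set the hidden units directly and decode a spectral frame.
code = np.zeros(n_hidden)
code[0] = 1.0                       # e.g. one control fully up, the rest at zero
spectrum = sigmoid(code @ W2 + b2)  # the decoder acts as the synthesizer
print("decoded spectral frame:", spectrum.shape)

In a real-time setting, the decode step would run once per audio frame as the user moves the hidden-unit controls, so only the decoder half of the network needs to be evaluated during performance.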
Item Type: Article
Item ID: 17628
Date Deposited: 01 Apr 2016 13:31
Last Modified: 29 Apr 2020 16:16
Peer Reviewed: Yes, this version has been peer-reviewed.