Using Interactive Machine Learning to Sonify Visually Impaired Dancers’ Movement

Katan, Simon. 2016. Using Interactive Machine Learning to Sonify Visually Impaired Dancers' Movement. In Proceedings of MOCO'16, July 5-7, 2016, Thessaloniki, Greece.

MOCO16_SKATAN_noCP.docx - Accepted Version (613kB)
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Abstract or Description

This preliminary research investigates the application of Interactive Machine Learning (IML) to sonify the movements of visually impaired dancers. Using custom wearable devices with localized sound, our observations demonstrate how sonification enables the communication of time-based information about movements, such as phrase length and periodicity, and of nuanced information, such as magnitudes and accelerations. The work raises a number of challenges regarding the application of IML to this domain. In particular, we identify a need for ensuring even rates of change in regression models when performing sonification, and a need to consider how to convey machine learning approaches to end users.
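The abstract's point about even rates of change can be illustrated with a minimal sketch, not drawn from the paper itself: a regression model, trained interactively on example pairs of sensor features and sound parameters, drives a synthesis parameter at runtime, and a slew limiter caps the parameter's per-frame change so the sonification evolves at an even rate. The feature layout, the choice of k-nearest-neighbours regression, and the slew-rate value are all illustrative assumptions.

    # Sketch of IML-style sonification with rate-limited regression output.
    # Assumptions (not from the paper): 2-D features such as accelerometer
    # magnitude and angular velocity; kNN regression; max_step slew rate.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Interactively recorded examples: feature vector -> desired sound
    # parameter (e.g. a normalized filter cutoff in [0, 1]).
    X_train = np.array([[0.1, 0.0], [0.5, 0.2], [0.9, 0.8], [1.2, 1.0]])
    y_train = np.array([0.0, 0.3, 0.7, 1.0])

    model = KNeighborsRegressor(n_neighbors=2).fit(X_train, y_train)

    def slew_limit(previous, target, max_step=0.05):
        """Cap per-frame change so the parameter moves at an even rate."""
        return previous + float(np.clip(target - previous, -max_step, max_step))

    param = 0.0
    for features in np.array([[0.2, 0.1], [1.1, 0.9], [0.3, 0.1]]):
        target = float(model.predict(features.reshape(1, -1))[0])
        param = slew_limit(param, target)
        print(f"features={features} -> smoothed parameter={param:.3f}")

Without the slew limiter, a regression model can jump abruptly between output values for nearby poses; rate-limiting the mapped parameter is one simple way to keep the resulting sound legible to a dancer relying on it for feedback.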

Item Type:

Article

Identification Number (DOI):

https://doi.org/10.1145/2948910.2948960

Keywords:

Interactive Machine Learning; Accessible Interfaces; Dance; Sonification.

Departments, Centres and Research Units:

Computing > Embodied AudioVisual Interaction Group (EAVI)

Dates:

Date          Event
6 June 2016   Accepted
5 July 2016   Published

Item ID:

18854

Date Deposited:

02 Sep 2016 13:20

Last Modified:

20 Jun 2017 10:35

Peer Reviewed:

Yes, this version has been peer-reviewed.

URI:

https://research.gold.ac.uk/id/eprint/18854
