MEML is funded by the UK Arts and Humanities Research Council. It is an investigation into the musically expressive potential of machine learning (ML) when embodied within physical musical instruments. It proposes ‘tuneable ML’, a novel approach to exploring the musicality of ML models when they can be adjusted, personalised and remade using the instrument itself as the interface.

ML has been highly successful in allowing us to build novel creative tools for musicians: for example, generative models that bring new approaches to sound design, or models that allow musicians to build complex, nuanced mappings from musical gestures. These instruments offer new forms of creative expression because they are configurable in intuitive ways, using data that can be created by musicians themselves. They can also offer new modes of control, through techniques such as latent space manipulation. Currently, standard practice for training an ML model is to collect data (e.g. sound or sensor data) and to create and pre-test the model within a data science environment, before testing it with the instrument. This distributed approach creates a disconnection between the instrument and the machine learning processes. With ML embodied within an instrument, musicians will be able to take a more creative and intuitive approach to making and tuning models, one that will also be more inclusive of those without expertise in ML. Musicians can get the most value from ML if the whole process of machine learning is accessible; there are many creative possibilities in the training and tuning of models, so it is valuable for the musician to have access to the curation of data, the curation of models, and to methods for ongoing retuning of models over their lifetime.
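As a concrete, hypothetical illustration of that conventional ‘distributed’ workflow (not MEML’s own toolchain), the sketch below assumes gesture recordings saved to a CSV file and uses scikit-learn to train and export a small gesture-to-parameter mapping model offline, away from the instrument:

```python
# Hypothetical illustration of the conventional workflow described above:
# sensor data is collected from the instrument, a mapping model is trained
# offline in a data science environment, and the result is exported for
# later testing on the instrument. File names, channel counts and the model
# choice are placeholder assumptions, not MEML's toolchain.
import numpy as np
from sklearn.neural_network import MLPRegressor
import joblib

# 1. Load previously recorded sensor gestures (inputs) and the synth
#    parameters the musician paired with them (targets).
data = np.loadtxt("gesture_recordings.csv", delimiter=",")
X, y = data[:, :8], data[:, 8:]   # e.g. 8 sensor channels -> synth parameters

# 2. Train and pre-test the mapping model away from the instrument.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
model.fit(X, y)
print("training score:", model.score(X, y))

# 3. Export the trained model so it can be transferred to the instrument.
joblib.dump(model, "gesture_mapping.joblib")
```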

We have reached the point where ML technology can run on lightweight embedded hardware at rates sufficient for audio and sensor processing. This opens up innumerable possibilities for our electronic, digital, and hybrid augmented acoustic instruments. Our instruments will contain lightweight embedded computers with ML models that shape key elements of an instrument’s behaviour, for example sound modification or gesture processing, responding to sensory input from the player and/or the environment. This project will demonstrate how tuneable ML creates novel musical possibilities, as it allows the creation of self-contained instruments that can evolve independently of the complex data science tools conventionally used for ML.
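To make the idea of embedded, self-contained inference concrete, here is a minimal hypothetical sketch of the on-instrument side, reusing the exported model from the previous example; the sensor and synthesis interfaces are placeholders, not real instrument APIs:

```python
# Hypothetical sketch of embedded inference on the instrument itself.
# The sensor and synth functions are placeholders standing in for the
# instrument's real I/O (e.g. sensor reads and an audio engine).
import time
import numpy as np
import joblib

model = joblib.load("gesture_mapping.joblib")  # model exported in the sketch above

def read_sensors():
    """Placeholder: return the current 8-channel sensor frame."""
    return np.random.rand(8)

def set_synth_parameters(params):
    """Placeholder: send predicted parameters to the sound engine."""
    print(params)

while True:
    frame = read_sensors().reshape(1, -1)
    set_synth_parameters(model.predict(frame)[0])
    time.sleep(0.01)  # roughly 100 Hz control rate
```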

The project asks how instruments can be designed to make effective and musical use of embedded ML processes, and it questions the implications for instrument designers and musicians when tuneable processes are a fundamental driver of an instrument’s musical feel and behaviour.

Team:

Chris Kiefer

Andrea Martelloni

Nic Seymour-Smith

Anna Thomas

MEML is supported by the Sussex Humanities Lab (SHL).
