AI & Music by Robert Laidlow

Dec 18, 2020 | NWCDTP Research, PhD profiles

Robert Laidlow is a PhD student at the Royal Northern College of Music. His research focusses on developing methodologies for incorporating artificial intelligence into the compositional process; this usually results in new pieces of music, exploring the interaction between cutting-edge technology and live human performance in various ways.

First, a little background on the field of AI & Music. AI (in this article, I am referring to the subfield of machine learning) has been used across the music industry for a variety of tasks, though not always the most inspiring ones for a creative practitioner. AI is very good at identifying and replicating patterns, which makes it useful for music-related tasks such as automatically identifying which song is playing or recommending new music based on what you already like. It is much more difficult to train AI to generate new music, particularly new music that is not trying to replicate an existing composer. However, my aim as a researcher is not to create algorithms that can write music all on their own (replacing the composer), but rather to work with computer scientists on algorithms that can enhance, catalyse or subvert the creative process, leading to new and exciting results that could never have emerged otherwise.
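As a toy illustration of the pattern-matching side of this, the sketch below ranks a music library by similarity to a track the listener already likes; all track names and feature vectors here are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of the kind of pattern-matching task machine learning
# handles well: recommending music by comparing feature vectors.
# All data here is hypothetical.
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical audio-feature vectors (e.g. tempo, brightness, energy).
library = {
    "Track A": np.array([0.8, 0.2, 0.5]),
    "Track B": np.array([0.1, 0.9, 0.4]),
    "Track C": np.array([0.7, 0.3, 0.6]),
}
liked = np.array([0.75, 0.25, 0.55])  # a track the listener already enjoys

# Rank the library by similarity to the liked track.
for name, features in sorted(library.items(),
                             key=lambda kv: -cosine_similarity(liked, kv[1])):
    print(name, round(cosine_similarity(liked, features), 3))
```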

The first major project I worked on utilised AI that can write notated music (in contrast to an AI that generates audio). An initial field test was Turing Test//Prelude, a short piece for solo keyboardist first performed at the Barbican Centre in March 2019. I used an algorithm that had been trained to generate music “in the style” of the composer Bach and stitched its output together with original music by the real Bach so that the two alternated. The audience had a game to play: they had to identify, in real time, when the music was Bach, and when it was an AI imitation (1). This was an excellent way to get the feel of an algorithm before committing to writing a full piece of my own using such technology; that was the next task.
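To give a flavour of how such a guessing game might be assembled, here is a minimal sketch in Python; the passage filenames and shuffling logic are purely illustrative assumptions, not the method used in the actual piece.

```python
# A purely illustrative sketch of the Turing-test game structure:
# passages of real Bach alternate with AI imitations, and the order
# within each pair is hidden from the audience. Filenames are hypothetical.
import random

real_passages = ["bach_passage_1.mid", "bach_passage_2.mid", "bach_passage_3.mid"]
ai_passages = ["ai_imitation_1.mid", "ai_imitation_2.mid", "ai_imitation_3.mid"]

programme = []
for real, imitation in zip(real_passages, ai_passages):
    pair = [("Bach", real), ("AI", imitation)]
    random.shuffle(pair)  # conceal which of the pair comes first
    programme.extend(pair)

# In performance only the music is heard; labels are revealed afterwards.
for label, passage in programme:
    print(passage, "->", label)
```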

For the RNCM Future Music Festival 2019, I composed the work Three Entistatios, but this time the AI (developed by Christine Payne at OpenAI) was imitating my own music rather than Bach’s. I found this more rewarding from a creative standpoint: in places the music became a genuine conversation, a push-and-pull between me and the algorithm imitating me. It was also fascinating and enlightening to analyse what the AI found easy to imitate in my music, and what it found very difficult. More than anything, working closely with the results of such a sophisticated algorithm forced me to reconsider some very basic elements of music, based on the wildly unexpected way that AI treats things we consider simple (such as the structure of a piece of music, or the way music might repeat itself). Through Turing Test//Prelude and Three Entistatios, I began to develop a working methodology for using AI, which I have written about in a chapter for an upcoming volume on music & technology and presented at several conferences, including the NWCDTP Student Conference and the RMA Student Conference. (2) (3)

The next major project I worked on was called Alter, a work for mezzo-soprano and ensemble commissioned by the Barbican Centre for PRiSM (the RNCM Centre for Practice & Research in Science & Music). My PhD at the RNCM is based at PRiSM, so I was delighted to work closely alongside my supervisors and other colleagues on this piece. Alter was premiered as part of an event focussed on the life and work of Ada Lovelace, who in the mid-19th century predicted that computers might one day compose music of their own. For this piece, then, I decided to collide the Victorian, steampunk-esque vision of the computer that Lovelace and her contemporaries had with the reality of machine learning today, presenting the two side by side. To do this, I worked with mechanical engineer Jonathan Morris (CDP) to create a unique 3D-printed percussion instrument called the Lovelace Engine, styled after the Analytical Engine that inspired Lovelace. This instrument was played alongside music written by or inspired by AI, building upon my previous experience. For the first time in my work, we also set AI-generated text to music for mezzo-soprano, using two algorithms: one developed with my supervisor David de Roure, and one (called GPT-2) developed by OpenAI. I am very proud of the result, which was not only a successful trial and development of many different AI systems and algorithms in live performance, but also engaged and intrigued the audience, many of whom caught me in the interval or afterwards to enquire about the research behind both the new instrument and the artificial intelligence. The Lovelace Engine particularly captured people’s imaginations and was even the cover photo of the concert’s review in the Guardian. (4) (5) (6) (7)
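As an illustration of the kind of text generation involved, here is a minimal sketch using the publicly released GPT-2 model via the Hugging Face transformers library; the library choice, model size and prompt are my assumptions for illustration, not the setup used in Alter.

```python
# Minimal sketch: generating text with GPT-2 via Hugging Face transformers.
# An illustration only -- the prompt and sampling settings are hypothetical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The Analytical Engine weaves algebraical patterns"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; temperature and top_k control how adventurous it is.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```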

2020 has been a little quieter, for obvious reasons. I was fortunate to fit in an exciting project with the BBC Philharmonic Orchestra in February, where two new pieces of mine that used AI in different ways were performed: one for string quartet, and one for solo violin. This was an interactive event, and it was brilliant to speak with members of the audience about this technology before they heard the pieces, so they could judge for themselves the success or otherwise of AI and the techniques that use it. I even managed to set up a station where audience members could compose their own music using AI, which was then played by members of the orchestra. Over the lockdown period, I have been writing up as much as I can, since live performance is on hold for a while and there has been no opportunity to travel to collaborate with computer scientists and other partners.

My plans for the rest of the PhD include a study visit to Silicon Valley in California, postponed from March this year. I am also writing music for the BBC Philharmonic Orchestra, who generously support my PhD, pushing the techniques I have developed over the last 18 months much further. RNCM PRiSM has recently released a new implementation of SampleRNN, an algorithm that generates raw audio, which I am excited to work with and incorporate into future pieces of music.
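For readers curious what "generating audio" means here, the sketch below illustrates the core idea behind sample-level autoregressive models such as SampleRNN: each new audio sample is predicted from a window of previous samples. This is a conceptual toy, not PRiSM's implementation; the stand-in "model" and all parameters are assumptions.

```python
# Conceptual sketch of sample-level autoregressive audio generation,
# the idea behind SampleRNN. Not PRiSM's implementation: the "model"
# below is a toy stand-in for a trained neural network.
import numpy as np

def generate(predict_next, seed, n_samples, window_size=1024):
    """predict_next: any function mapping a window of past samples to the
    next sample (in SampleRNN, a hierarchy of RNNs plays this role)."""
    audio = list(seed)
    for _ in range(n_samples):
        window = np.asarray(audio[-window_size:])
        audio.append(predict_next(window))
    return np.asarray(audio)

# Toy stand-in: the next sample gently decays towards the window mean.
toy_model = lambda window: 0.99 * window.mean()

seed = np.random.randn(1024) * 0.01               # hypothetical seed noise
out = generate(toy_model, seed, n_samples=16000)  # ~1 second at 16 kHz
```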

That’s a short tour of the research I’ve been up to. I would welcome any questions or comments – please get in touch if you have any thoughts!

I’m hugely grateful to the NWCDTP for their continued support of this research, which often requires very specialist equipment, expertise, or travel to tech centres with the necessary resources to run demanding AI algorithms.


(1) To hear Turing Test//Prelude performed by the Chineke! Orchestra, follow this link: https://youtu.be/u5mNa6KE0lA?t=3400

(2) To hear Three Entistatios, follow this link: https://www.youtube.com/watch?v=839rF8pucmY&ab_channel=RobertLaidlow

(3) I wrote more about Three Entistatios here: robertlaidlow.co.uk/entistatios

(4) I wrote more about Alter here: robertlaidlow.co.uk/alter

(5) I also wrote about it here: https://royalphilharmonicsociety.org.uk/rps_today/news/alter-artificial-intelligence-and-the-modern-polymath

(6) Jonathan & I wrote in more detail about the creation of the Lovelace Engine here: https://www.rncm.ac.uk/research/research-centres-rncm/prism/prism-blog/inventing-the-lovelace-engine/

(7) To hear Alter, follow this link: https://youtu.be/L1mQGaNmfUM
