Coding Chaos Concert


The Coding Chaos concert was a collaboration with Spectrum Music performed on May 2, 2019 at The Tell in Toronto, Canada.

Part of Spectrum's 2019 concert series on the Jungian archetypes, this concert represented The Creator, which Spectrum describes as follows:

Where will we be when what we create can create us? Technology continues to grow at an exponential rate as we discover new ways to improve our everyday lives - but at what point do we lose ourselves to a technology advanced far beyond our control? In a digital world where technology constantly blurs the lines between fact and fabrication, the Spectrum composers probe the possibility of losing our sense of identity and grasp on reality - or will technology give us room to grow?

Along with six composers and three musicians, we developed and performed music created with, or performed with the help of, artificial intelligence.

Spectrum composers included Mason Victoria, Chelsea McBride, Suzy Wilde, and Jackson Welchner, joined by guest composers Harrison Argatoff and Nebyu Yohannes. Zoe Brown, the Assistant Director of Spectrum Music, coordinated and managed the concert.

Musicians were Bruce Cassidy, Larnell Lewis, and Chris Pruden.

Many thanks to everyone; this was a great collaboration in which we all learned what it is like to work with AI tools.

You can find all the source code and technical details for the concert on GitHub. Visuals were performed using the Hydra live coding editor created by Olivia Jack, heavily customized for the performance.

Pieces

Each piece used different technologies, described below.

Love - Nebyu Yohannes

Love - Nebyu Yohannes - still from live video

  • Audio performance: Magenta AI Duet
  • Poetry: Img2poem and Kanye West AI
  • Handwriting: Handwriting Synthesis with Recurrent Neural Networks
  • Cathedral and sunrise visuals: Learning to See AI trained on 1500 images from Flickr.

This love song was one of the most fun to play and uses AI in all its aspects. I start by live-drawing a castle- or cathedral-type image, which the AI interprets as a cathedral. At the same time I trigger the stanzas of the poem, written by combining two different AIs and presented as animated AI-generated handwriting. Cathedrals transition to sunrises, and the AI then interprets a variety of found items as a picturesque morning landscape: a pipe cleaner served as the horizon, a pine cone made a good mountain, dice worked as clouds, and small wooden cubes read as beach rocks. Videos of water drops on window panes and on a naked torso match the feeling of longing the poem inspires and are superimposed on the AI-generated images.

  • Keypad triggers and controls poem stanzas (see the sketch after this list)
  • Draw cathedrals on paper with pen
  • Rearrange a pipe cleaner, a pine cone, 6x 6-sided dice, and 20x 10mm wood cubes for the sunrise
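
As a rough illustration, the keypad control amounts to stepping an index through the poem and handing each stanza to the handwriting layer. This is a minimal sketch, not the performance code; showHandwriting is a hypothetical stand-in for the layer that animates the pre-generated handwriting:

    // Minimal sketch: step through poem stanzas on keypad presses.
    // showHandwriting is a hypothetical stand-in for the layer that
    // animates the pre-generated handwriting strokes.
    const poem = document.getElementById('poem').textContent;
    const stanzas = poem.split('\n\n');     // stanzas are blank-line separated
    let current = -1;

    document.addEventListener('keydown', (e) => {
      if (e.key === 'Enter') {              // keypad Enter advances one stanza
        current = (current + 1) % stanzas.length;
        showHandwriting(stanzas[current]);  // hypothetical helper
      }
    });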

Love

by Nebyu Yohannes, Img2poem and Kanye West AI

    Come daily to the very same
    The beginning of the spiritual light
    Its way
    Its water line
    It seems to stop trying to get

    The spirit of love come on me suddenly
    Climbs up to Your faith goodness when I am sad
     and feel You are precious and honored in my body
    Sweet love, renew thy force; be with me
     as with that fiesta of sunset in the sight of God and man

    A heart whose love is as strong as death
    It does not dishonor others, it is not made perfect in love
    It does not envy, it does not exist, nor You
    And so we know and rely on the tablet of Your lashes
    Or the wrinkled body of the sky of roses

    I have been a sinful man
    but teach me not forget
    what hast Thou done
    to seek them well

    I will know Your name

    the world is a beautiful thing
    just like a butterfly rose
    it is like a melody
    a feeling drawn to its warm heart
    feeling its own way

    i want to leave my heart up to my soul
    let me be your heart and joy
    embrace in your heart
    you won't find a lot
    just a dream
    just as you walk in your light a moment in time
    for love is innocent
    my love is yours

    love you can see
    you can see
    a smile in a smile
    like a puma in the distant mountain tops
     and you decide to leave me at the red rare deer
    four red roebuck at a green mountain
    horn at hip went my love feeds on your arm
    weeping may stay for the pale stones of your heart

    you are you
    and love
    look at your heart
    like a butterfly in the sky
    let us go into the sky
    let us walk
    i hear her voice in leaves
    she lies in a wild flower
    a butterfly tune
    i love you as if everything that exists

    you are the smallest one thing
    you are a beautiful conversation
    your body makes me hold your heart

    i know that rarity precedes extinction
    like that of the purple orchid in my garden
    its jealousy unyielding as the Tuscan mid the snow
    As the perfumed tincture of the heart where I have roots
    For I want as deep a dye

    the sun shines on the hills
    and all the wind blows terribly and everything is there
    remember the sun does not know
    the moon is a little bird
    who makes us fly
    like a bridge of fire
    to fly away from the hills

    if you could be a dream
    if you could be found in your mind
    it is not a dream of light
    it carries your heart

    the sun simmers
    elemental rays
    time is always there
    is always
    and the sun does not reach

    you know what you are
    you are always falling leaves on the ground
    don't you know your life is like a tree
    i know that i am not a poem i want to be bothered by

    i see softly the light
    i feel the scent
    Dancing in the wind
    the sun is going
    we go down
    the sea is going
    we go down
    the sky grows in a warm light
    i looked at night and round the sun
    and the sun does not reach

As Though I Knew - Harrison Argatoff

As Though I Knew - Harrison Argatoff - still from live video

  • Audio performance: Magenta AI Duet, VOCALOID5
  • Tree visuals: Learning to See AI, trained on 200 photos of trees by Harrison Argatoff

Harrison supplied me with around 200 of his own photos of tree tops, which I used to train the neural net and also incorporated directly into the visuals. We wanted to keep the visuals as abstract as possible, so using Hydra I heavily transformed the tree images that appeared directly in the visuals. The AI-generated trees had numerous problems: the photo resolution was very high, but the AI can only work at very low resolutions, so after a number of failed attempts to improve its handling of the input I came up with some hacks and workarounds that gave an impression of more detail than was actually possible. It was also very difficult to find the right objects to represent leaves to the AI. After dozens of failed tests I was getting desperate and, completely frustrated, just grabbed some sesame seeds from my kitchen - which worked fantastically, along with some sunflower seeds to add negative space. The seeds were a bit too sticky and small to handle, but a reusable metal straw worked nicely to blow them around. I enjoyed being the wind blowing the leaves in time to the music.

  • Keypad triggers display of AI leaves, the AI leaf kaleidoscope, and tree photos and their effects
  • Metal straw, sesame seeds, sunflower seeds, and thread represent trees for the AI
  • Audio drives the movement of the kaleidoscope effects (see the sketch after this list)
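
To give a sense of the audio-driven kaleidoscope, here is a minimal Hydra sketch; the image name and scaling constants are assumptions, and a.fft is Hydra's built-in audio analysis:

    // Minimal Hydra sketch (assumed filename): the lowest audio bands
    // drive the kaleidoscope segment count and rotation, so the
    // "leaves" move with the music.
    s0.initImage('ai-trees.png')
    a.show()   // enable Hydra's audio analysis (a.fft)

    src(s0)
      .kaleid(() => 3 + Math.floor(a.fft[0] * 6))   // segments follow the bass
      .rotate(() => time * 0.05 + a.fft[1] * 0.2)   // slow drift plus audio wobble
      .out(o0)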

Sesame seed trees

Blowing leaves

Past Machinery - Jackson Welchner

Past Machinery - Jackson Welchner - still from live video

  • Audio performance: Magenta AI Duet, VOCALOID5

The video is found footage of time-lapse flowers blooming. Similar to Source Activate, this was a very fun instrument to play, allowing me to jam rhythmically with the band. Jackson wanted nature to be the star, and after numerous overly complicated attempts, this focus on the movement and shapes of blooming spoke to us both.

  • Keypad controls the flower kaleidoscope effect and rotation
  • Keypad controls the flower video: jumping back and forward and changing playback speed (sketched below)
  • Audio drives the background kaleidoscope effect
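
A minimal sketch of the keypad video controls, assuming the footage is loaded with Hydra's s0.initVideo() so that s0.src exposes the underlying HTML video element (the filename and key bindings are assumptions):

    // Jump and speed controls on the time-lapse footage (assumed filename).
    s0.initVideo('flowers-timelapse.mp4')

    document.addEventListener('keydown', (e) => {
      const v = s0.src;                     // the underlying video element
      if (!v) return;
      if (e.key === 'ArrowRight') v.currentTime += 2;      // jump forward
      if (e.key === 'ArrowLeft')  v.currentTime -= 2;      // jump back
      if (e.key === 'ArrowUp')    v.playbackRate *= 1.25;  // speed up
      if (e.key === 'ArrowDown')  v.playbackRate /= 1.25;  // slow down
    });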

The Process - Chelsea McBride

The Process - Chelsea McBride - still from live video

  • Audio performance: Magenta AI Duet

Heavily inspired by, and a remixing of, Max Cooper's work. Chelsea felt her piece connected with the "geometricity" of Max's work. The Process has six sections, and I tried to match a visual to each while adding further layers of shapes, keeping the final form obscured until the end.

  • Keypad triggers each section
  • Webcam reads red shapes as “windows” overlaying the video (a rough sketch follows this list)
  • Audio drives the movement of the shapes overlay
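
A rough Hydra sketch of the webcam “windows”, assuming the section's video sits in s1 (the actual performance code was more involved): the red channel of the camera image is thresholded into a mask that punches through to the video.

    // Webcam red shapes become a mask over the video (assumed filenames).
    s0.initCam()                    // webcam watching the red shapes
    s1.initVideo('process.mp4')    // assumption: the section's video layer

    src(s1)
      .mask(
        src(s0)
          .color(1, 0, 0)           // keep only the red channel
          .thresh(0.5, 0.04)        // binarize: bright red becomes white
      )
      .out(o0)

Note that bright white objects would also pass this filter; in practice the red shapes need to be the only strongly red things in frame.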

Source Activate - Mason Victoria

Source Activate - Mason Victoria - still from live video

  • Audio performance: Magenta AI Duet

Visuals are both an homage to “Alex”, the piece Mason and I did previously in Creo Animam, and a metaphor for the process: one of many layers, often taking a step (or layer) backwards.

  • Layer colours randomly generated each run
  • Layers can be added and removed, either one at a time or rapidly, using the keypad (loosely sketched below)
  • Audio drives the changes of the effects and the background kaleidoscope
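
A loose sketch of the layering idea, not the performance code: a fresh palette is generated every run, and a keypad handler (not shown) changes how many layers are stacked.

    // Per-run random layer colours; `layers` is changed by the keypad.
    const palette = Array.from({ length: 6 }, () =>
      [Math.random(), Math.random(), Math.random()]);
    let layers = 2;

    solid(0, 0, 0)
      .layer(shape(4, 0.9).luma(0.1).color(...palette[0]))
      .layer(shape(4, () => (layers > 1 ? 0.6 : 0)).luma(0.1).color(...palette[1]))
      .layer(shape(4, () => (layers > 2 ? 0.3 : 0)).luma(0.1).color(...palette[2]))
      .out(o0)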

Love-bot - Suzy Wilde

Love-bot - Suzy Wilde - still from live video

  • Audio performance: TTSReader
  • Poetry: Voicebox by BOTNIK
  • Handwriting: Handwriting Synthesis with Recurrent Neural Networks

One of the hardest pieces to “play”: it requires practicing the timing (particularly for triggering the poetry stanzas) and the video controls. The video combines an advertisement for a large doll with a collection of black-and-white science fiction film clips. I was heavily inspired by the artificial voice Suzy found to narrate the poem.

  • Keypad controls poem stanzas
  • Keypad controls video speed, jumping back and forward, and a mark and return-to-mark control (sketched below)
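
The mark-and-return control is simple in principle; a sketch, again assuming the footage sits behind Hydra's s0.initVideo() (the key bindings are assumptions):

    // Mark a point in the video and jump back to it on demand.
    let mark = 0;
    document.addEventListener('keydown', (e) => {
      const v = s0.src;                         // the underlying video element
      if (!v) return;
      if (e.key === 'm') mark = v.currentTime;  // set the mark
      if (e.key === 'r') v.currentTime = mark;  // return to the mark
    });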

Love-bot

by Suzy Wilde and Voicebox

    Redwood stretched. Shellfish bristled the floor.
    And a masquerade of birds in limousines waited in the sky.
    My radiant love-bot stood tall and said,
    "Life, in even the simplest form,
     has always been a matter of finding the energy...
    but you already know that, don’t you, Mister Jones?

    Come and sit by the blood of the seasons with fast affection
     and let me witness your beauty.
    In the spirit of this song, may it rise up pure and free.
    I will not be battered by misuse, misguided trust and strong abuse,
     at least the men I chose were real and had the power to love and feel.
    Emotionless, apparently, but, bearing closer scrutiny, one can see
     indestructible passion to explore freely and at will."
    But of all the lovers I recall...
    You are "the robot".

    Love-bot, can you really feel it? Love itself?
    Are you someone I can come home to when my exhausting day is through?
    Count yourself a well-worn shoe?
    A friend that I can slip into?

    See temptation caving in me. I don’t care. I keep a little house in there.
    Deep in those push-up brassieres, tight dresses and rhinestone rings.
    They are simulated illusions, but Love-Bot, you got me singing.
    Sweet playground swing love.

    Your heart in the headlines,
    Shelves at the newspaper stands overflowing with your digital love.
    Go melt back down the white sand road.
    Board planes to mars with engines shining.
    Go fly like a stainless seagull,
     high above the outlets and electrical streams.
    Go fly and when you do, Love-bot,
     the first thing I know is that the sky will be lonely.

Bit Buddy - Bruce Cassidy

  • Audio performance: Magenta AI Duet

I actually used Hydra in a more typical “live coding” fashion for this improvised jam. Bruce is well known for his improvisations, so it was an honor and pleasure to be able to participate.

Development and Process

Mason Victoria was one of the composers I worked with on the Creo Animam concert in 2015. In 2018 he contacted me to ask if I was interested in collaborating with Spectrum Music on a concert involving AI. I'd get to work with the composers to help them integrate AI and machine learning into their pieces. Plus, as it turned out, the venue had a huge wall for projections, so I could create visuals (also involving AI in some cases). At the time I hadn't actually done any AI development, but I knew Python and was willing to give it a go. I had never done live visuals either, so I was starting from scratch. It took about six months of research, a month of part-time work, and a month of full-time work to put the concert together.

After reaching out for help, I was very graciously offered free tickets to the Deep Learning Summit Toronto (thanks to John at RE•WORK) and a weekend machine learning beginners course (thanks to Maja at the Nova Institute).

As part of our research we also emailed and chatted with a number of very generous experts and artists, including Yotam Mann, Sageev Oore, and Alex Tgen. Many thanks to them for all the help and support!

Part of the process was putting together an intro-to-AI video for the composers. I hosted an evening information session where we watched the video and discussed the possibilities. Over the course of a month I provided support and tools to the composers to help them realize their visions. Meanwhile, Mason and I had been working hard to develop the music performance AI. We settled on Google's Magenta toolkit, mainly because it was relatively easy for Mason to work and experiment with via the AI Jam interface. I sourced approximately 1,000 jazz MIDI tracks and trained a version of Magenta's Melody (Attention) RNN model on them.

Mason had chosen musicians whose instruments could produce MIDI output so that we could feed it into the music performance AI. Mason could control the AI in real time by changing which inputs were being “listened” to, when the AI should output a response, and the length of that response. In addition, he could control the “temperature” of the AI: essentially how much randomness affected the output, where low randomness meant output closer to the training data.
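
Temperature is easiest to see in code. The sketch below only illustrates the idea (it is not Magenta's implementation): logits are divided by the temperature before the softmax, so temperatures below 1 sharpen the distribution toward the training data and temperatures above 1 flatten it toward randomness.

    // Illustration of temperature sampling (not Magenta's actual code).
    function sampleNote(logits, temperature) {
      const scaled = logits.map((l) => l / temperature);
      const max = Math.max(...scaled);               // for numerical stability
      const exps = scaled.map((l) => Math.exp(l - max));
      const total = exps.reduce((a, b) => a + b, 0);
      let r = Math.random() * total;
      for (let i = 0; i < exps.length; i++) {
        r -= exps[i];
        if (r <= 0) return i;                        // index of the sampled note
      }
      return exps.length - 1;
    }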

After securing The Tell as our venue, we wanted to take advantage of their wall-sized projection system, so I started developing the visuals. I immediately fell in love with both the Hydra live coding editor, created by Olivia Jack, and the Learning to See project, by Memo Akten. Without the work these two brilliant artists have done, this show wouldn't have been possible. I wanted to combine the two, so I set out to learn and integrate both.

The visuals also had further requirements: the music was the focus of the concert, so the visuals needed to support it without drawing too much attention. This meant that, instead of live coding as normally done with Hydra, the code would be hidden and I would control relatively minimalist visuals using the keypad or webcam. In addition, Hydra visuals can easily cause flashing and strobing effects, so I wrote code to damp the rapid changes that lead to strobing and flashing (a simplified sketch follows).
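
The idea, in simplified form: rather than feeding the raw audio level into the visuals (it jumps from frame to frame and can flip parameters fast enough to strobe), a low-pass filter slews each control signal. Hydra's built-in a.setSmooth() does something similar; the constants here are arbitrary choices.

    // Low-pass the audio level so no control signal changes abruptly.
    a.show()    // enable Hydra's audio analysis (a.fft)
    let level = 0;
    const smoothedLevel = () => {
      level += (a.fft[0] - level) * 0.05;   // small factor = slow response
      return level;
    };

    osc(8, 0.05)
      .kaleid(() => 3 + Math.floor(smoothedLevel() * 5))  // changes only gradually
      .out(o0)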

Mason led the development and live control of the performative music AI while I manned the visuals. We lugged both of our desktop towers out to each rehearsal. These AI tools felt like a combination of a new instrument and a collaborator. For the visuals, each song had a unique visual instrument that had to be created, and then I had to learn to play it. These tools required a great deal of experimentation to find the strengths and weaknesses of the AI and how best to “communicate” with it to get the output we wanted. It continued to surprise us, including in the final concert, where the music performance AI was more “virtuosic” and “flamboyant” than in rehearsal.

The amount of electronics in the concert added new challenges, including blowing a fuse twice during our sound check and several instances where computers decided to reboot or stop working.

Rehearsal setup

Concert setup

Learnings

AI / machine learning / deep learning tools are the “wild west” of software development at the moment. It is wonderful to see such fast-paced development done in a generally open manner. This concert couldn’t have been done without all the open source software available. I strongly believe that free and open source software is a requirement for ethical, innovative software and art practice.

The rapid pace of development leads to a lack of good documentation and many cases of “it worked when I wrote it” syndrome, where code quickly becomes out of date and difficult to get working again. I struggled for a long time to get a proper development environment running and to make it easy to train new deep learning models. There doesn't yet seem to be a good answer for repeatability in the AI space. I played with Docker containers and Google Colab, including a very promising project that would allow for local Colab containers with Nvidia support. Eventually, it was easiest to get things running on my Ubuntu system (Intel CPU, Nvidia GPU) using installed packages and Conda. Compatibility between different versions of software libraries and drivers is very limited (with most of the blame on Nvidia, although improvements came during the life of the project). I would really like to see better support for AMD hardware and a more robust set of solutions for easily replicating deep learning projects.

AI as an art tool and collaborator

Using AI has aspects beyond developing more traditional art software (a process that for me is very iterative and often involves a great deal of algorithm design). In addition to the design of the learning algorithm (which I didn't do, instead selecting from existing open source designs) there is the training phase, which feels like curation. Finding training data can be difficult, and the data often requires further selection, pruning, or filtering. The training itself is fairly hands-off, can take many hours (or days for larger models), and feels very trial-and-error. Especially when the output of the tool is subjective, it takes a great deal of time to evaluate whether the model you've trained is useful or interesting. Mason spent many hours experimenting with the models I trained to find out if they were usable. This extremely slow iteration loop makes the process quite frustrating.

When training the AI used for the visuals, it was hard to know how much data was required to generate an interesting result. The process of training and testing the AI is slow enough that in a time-limited situation, as was the case for this concert, there weren't opportunities to try many alternatives. Once there was a seemingly usable tool, there was still a long learning process to get familiar with its details, i.e., what sort of input would generate the most interesting output. This process felt more like learning how to collaborate with another artist than like figuring out all the menu options in a complicated piece of software.

For me personally the most successful AI tool was the sunrise generator. This was able to capture the impression and colours of a sunrise in an almost magical way. Playing with the components in the box to generate the sunrise image was one of the most fun controllers I have made and used.