Musebots were initially conceived by our own 'Chief Musebot Evangelist' Arne Eigenfeldt, along with Ollie Bown, an Australian composer whose research also deals with generative systems. From the start, the project has been defined by two ambitious goals: to get more people involved with cutting-edge musical intelligence, and to experiment with musical autonomy. Toward the first goal, Musebots have always been open-source, meaning developers share their code with one another and with the public. In fact, many bots are already freely available online through the Musical Metacreation research group, which Arne also runs.
As a firmly analogue performer and musician, I was curious about the second goal: experimenting with 'musical autonomy'. My fears of an AI apocalypse aside, I wondered what that might look like in a concert situation. I also wondered what kind of challenges Musebot autonomy might create for human collaborators.
I asked Arne about the Musebots he's been working on, and what we could expect from the works he will be premiering at Play Nice: Musical Collisions Between Humans and Intelligent Machines, Friday, July 28 at the Gold Saucer Studio.
BA: Tell me a bit about the Musebot-Human collaboration you’re working on. What can audiences expect to hear and see?
AE: I'll be working with three different performers: Peggy Lee on cello, Matt Ariaratnam on prepared guitar, and Nathan Marsh on prepared guitar.
The work with Peggy is my latest Musebot exploration, where a dozen or more identical Musebots generate a very ambient sound. Musebots send messages back and forth to negotiate musical aspects; in this case, the results of their discussions are presented to Peggy as suggestions in traditional musical notation: "these are some pitches that would work with what we're doing". Peggy's performance is then taken into account in their future decisions, so a complex feedback loop emerges.
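For readers curious what that negotiation might look like under the hood, here is a minimal Python sketch of the general idea: a handful of identical bots each propose pitches, the group keeps the pitches most of them agree on, and whatever the performer actually plays is folded back into each bot's preferences. The class names, message format, and voting scheme are invented for illustration; Arne's actual Musebots and the Musebot messaging protocol are considerably more sophisticated.

```python
import random

# Toy sketch of the negotiation idea described above. Names and message
# formats are invented; this is not the real Musebot protocol.

class ToyMusebot:
    def __init__(self, name):
        self.name = name
        self.preferred = random.sample(range(48, 72), 5)  # MIDI pitches

    def propose(self):
        # Each bot broadcasts the pitches it would like the ensemble to use.
        return {"from": self.name, "pitches": self.preferred}

    def update_from_performer(self, heard_pitches):
        # Fold what the human performer actually played back into this bot's
        # preferences, closing the feedback loop.
        self.preferred = list(set(self.preferred) | set(heard_pitches))[:7]


def negotiate(bots):
    # Tally every proposed pitch and keep the ones most bots agree on.
    votes = {}
    for msg in (bot.propose() for bot in bots):
        for pitch in msg["pitches"]:
            votes[pitch] = votes.get(pitch, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:4]


bots = [ToyMusebot(f"bot{i}") for i in range(12)]
suggestion = negotiate(bots)
print("Suggested pitches for the cellist:", suggestion)

# Whatever the cellist actually plays is fed back to every bot.
for bot in bots:
    bot.update_from_performer([60, 62])
```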
The work with Matt and Nathan will be quite different, exploring more of a noise aesthetic. We're going to have two separate performances using the same Musebots, in an attempt to show how the Musebots react differently to different situations. Matt and Nathan are going to improvise on their guitars in unusual ways, using the guitar as a sound-generating tool: drumming on the strings, attaching devices to create new sounds, and so on.
Both Matt and Nathan have given me recordings of their "typical" improvisations, and the Musebots have analyzed these recordings in an effort to learn certain characteristics of their playing. For example, given a recording of playing on the strings with chopsticks, the Musebots will have learned to recognize that particular playing style. The bots have access to a large library of recordings that they "know" intimately, and they can decide to play something in response to the live performer based upon their understanding of the sound environment. This is essentially the same kind of software that Apple or Spotify uses to recommend music; however, while those companies obviously want their recognition to be accurate, I'm more interested in the inaccuracies: the liminal space between machine perfection and apparent failure. What happens when Matt or Nathan does something that the Musebots haven't heard before? The bots may respond with something quite unexpected, which, like the work with Peggy, will produce a complex feedback network between human and machine.
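As a rough illustration of that recognition idea, the sketch below matches a live feature vector against a small library of labelled playing styles using a simple nearest-neighbour distance, and flags anything too far from every known style as unfamiliar. The feature values, style labels, and threshold are made up; a real system would extract spectral features from the actual recordings Matt and Nathan supplied, and the Musebots' own analysis surely differs.

```python
import math

# Hypothetical feature vectors summarizing analyzed recordings; in practice
# these would come from audio analysis (spectral centroid, MFCCs, etc.).
LIBRARY = {
    "chopsticks_on_strings": [0.82, 0.10, 0.55],
    "drumming_on_body":      [0.20, 0.75, 0.30],
    "ebow_drone":            [0.05, 0.05, 0.90],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(live_features, threshold=0.6):
    # Find the nearest known playing style; if nothing is close enough,
    # report that the bots are hearing something new.
    best_style, best_dist = min(
        ((style, distance(live_features, feats)) for style, feats in LIBRARY.items()),
        key=lambda pair: pair[1],
    )
    if best_dist > threshold:
        return None, best_dist  # unfamiliar territory: respond unpredictably
    return best_style, best_dist

style, dist = classify([0.78, 0.15, 0.50])
print(style or "unknown", round(dist, 3))
```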
How do the Musebots you’ve designed reflect your own personality or interests?
I'm a composer rather than an improviser myself, so I tend to think in large structures and organization. In the work with Peggy, the Musebots are generating an entire musical structure, and Peggy is considered a part of it; however, if Peggy decides not to play anything, the Musebots will happily complete the work on their own.
In the work with prepared guitar, I've set up a space for interaction. I think an important aspect is that I'm personally not part of the process once the performance begins. I know how the Musebots work, and I could influence the performance in that respect; Matt and Nathan don't know the logic behind the bots, and can only react to what they hear the Musebots do. This is kind of like how improvisers interact: they don't know what the other musician is going to do, or why they did what they did, but can only react to the sound as it is.
How do you think Musebots reflect their own interests?
In the work with Peggy, the Musebots are indifferent to the live performer. They have internal desires to create a musical surface over the structure they have agreed upon. The live performer can influence their actions, but not really control them.
In the works with live guitar, the Musebots' intentions and desires are much simpler: react to what they hear. If the guitarist doesn't play anything, the Musebots will do something, but only half-heartedly.
What's so challenging about bringing Musebots together with live human performers?
Ironically, not much from my end. I used to work exclusively with live performers and responsive systems. Musebots were developed to be autonomous and NOT rely upon humans. The other developers have previously created Musebots that follow this paradigm, so it will be more of a challenge to see how they decide to bring performers into the Musebot loop.
Musebots have, for the most part, been modelled upon human interactions: beat makers, bass players, drone makers, and so on. The Musebots tell each other what they are doing, rather than forcing the other Musebots to extract that information on their own. They can also tell other Musebots what they intend to do, which is something human performers cannot do (at least while playing). How the other developers and performers resolve these differences will be quite exciting.
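To make that distinction concrete, here is a purely hypothetical pair of messages: one announcing what a bot is doing right now, the other announcing what it intends to do later. The field names and values are invented; the real Musebot protocol defines its own message vocabulary over a network broadcast.

```python
# Invented example messages; not the actual Musebot message format.

status_message = {
    "from": "bassbot_01",
    "type": "status",
    "now_playing": {"role": "bass", "density": 0.4, "root": "D"},
}

intent_message = {
    "from": "bassbot_01",
    "type": "intent",
    "in_8_bars": {"role": "bass", "density": 0.9, "root": "A"},
}

# Other bots can plan around intent_message before anything happens; a human
# improviser only ever gets the equivalent of status_message, by ear.
```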
Do Musebots do it better?
As I mentioned, I'm a composer interested in creating large musical structures. A long time ago, I was a jazz bass player, and I have always loved the mercurial nature of live improvisation. Musebots allow me to design virtual performers to do things that humans may not do for me, like play the same thing for a long time, or make a certain kind of algorithmic variation on a given musical phrase. I can provide the Musebot with the specific intelligence that I want for a given musical situation, and make them ignore certain aspects. I can make three (or twelve) copies of a Musebot and hear the result, which is a bit more difficult with humans!