Can you tell me a bit about how your Musebots work? How do they make choices?
My musebots are prototype-based and object-oriented, which means they classify everything and create new individuals based on a class prototype. Each musical phrase is such an individual, and in my world, instruments don't play the phrases; the phrases play themselves. Once a section of music plays, it's destroyed forever, so the only way for one of my bots to repeat itself is to clone something it's going to say, say it, and then say the clone version. My bots develop their ideas over time by cloning and mutating statements they've already thought up.
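(For the code-curious: here's a minimal Python sketch of the scheme Matthew describes. The class name, pitch values and mutation rate are my own illustration, not his actual implementation.)

```python
import random

class Phrase:
    """A musical phrase that plays itself once and is then gone."""

    PROTOTYPE_PITCHES = [60, 62, 64, 67, 69]  # hypothetical class prototype

    def __init__(self, pitches=None):
        # New individuals are created from the class prototype.
        self.pitches = list(pitches if pitches is not None
                            else self.PROTOTYPE_PITCHES)

    def clone(self):
        # Copy the phrase so an echo can survive the original's destruction.
        return Phrase(self.pitches)

    def mutate(self, rate=0.3):
        # Randomly nudge some pitches: how an idea develops over time.
        for i in range(len(self.pitches)):
            if random.random() < rate:
                self.pitches[i] += random.choice([-2, -1, 1, 2])
        return self

    def play(self):
        # The phrase plays itself (printing stands in for sound output),
        # then is destroyed forever.
        print("playing:", self.pitches)
        self.pitches = None

# To 'repeat', a bot clones a phrase before playing it, then plays the
# mutated clone.
original = Phrase()
echo = original.clone().mutate()
original.play()
echo.play()
```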
So, where does 'creativity' come in, for the bots?
Original ideas in this hierarchical world occur as randomness, embedded a little bit in each part of the musebot's thought process. My bots can't do anything without consulting many random functions for "inspiration" - oracles, prophets, astrologists, whatever. And I mean that literally: since there is no real randomness, just irrelevant information, the same way there's no real magic, just things one doesn't understand, many of these "random" functions actually look at the numerology of the current time and date. The bots listen to Adrian and David as well, but they don't react directly; they accept other agents' inputs as influences rather than determinants. That means they can sometimes go off doing "their own thing," which deviates from the painfully linear kind of interactivity that characterizes pedantic science demos.
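(A toy version of one of those 'numerology' oracles might look like the following - a hypothetical sketch of the concept, not Matthew's code. It's deterministic, but built from information irrelevant to the music, which is the point.)

```python
from datetime import datetime

def numerology_oracle(choices):
    # Reduce the digits of the current timestamp to a single
    # 'numerological' figure, then use it to pick an option.
    # Deterministic, but driven by information irrelevant to the music.
    digits = datetime.now().strftime("%Y%m%d%H%M%S%f")
    figure = sum(int(d) for d in digits)
    while figure >= 10:  # classic numerological digit reduction
        figure = sum(int(d) for d in str(figure))
    return choices[figure % len(choices)]

print(numerology_oracle(["repeat", "mutate", "rest", "start a new idea"]))
```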
It sounds like the relationships you're setting up will be fairly subtle.
Kids: this is not Bill Nye. The performance is not going to explain itself.
Right. So how can audiences learn more about what's going on, if they're interested?
I prioritize networking and general compatibility when developing software, which probably seems obvious to some humans I've worked with and totally surprising to others. My algorithms run in web browsers as well, and I'll put some up online (in a more 'ambient' version) just prior to the concert.
Would your programs be interesting to someone who didn't already know how to read code?
What do you see as the main challenges in bringing together Musebots and live human performers?
Auditory and visual scene analysis - that is, the basic task of figuring out what they're seeing and hearing. Right now, if you feed even a very well-trained computer a video of an apple, most of the time it'll have no idea what it's looking at. Sensory processing poses serious issues for machines. They are mostly blind and mostly deaf. It's true for the sentinels in The Matrix, and it's true for Musebots.
Do Musebots do it better?
No. While there will come a time, perhaps even this month, when they entertain better, I don't think it will ever be wise to claim that they make better art for humans. That would be like saying that Montreal art is best for Vancouver, or white art is best for the Chinese community. The best art has to be emic. Computers make the best art for computers. To put it another way, as is the case with any system, metacreative algorithms don't share your interests - they can only estimate what you will respond to. That's why they're worth paying attention to, but keep the dose manageable.
Paul Paroczai is an electroacoustic composer, sound designer, and instrument builder currently living in Vancouver. He holds a B.Mus (composition) from the University of California Berkeley and an MFA from SFU.
I've included this photo of Paul in the croon-heavy role of 'Honza' from the musical production of Klasika. What's impossible to discern in this photo is that Paul's guitar (left) is actually a sly robo-guitar, which Paul souped up to play field recordings and other samples using an Arduino system that wirelessly communicated with the theatre's sound system. A frequent interdisciplinary collaborator, Paul decided to stretch his solo performance chops in the upcoming Play Nice, where he will interact live with a Musebot system he designed himself. Though he (unfortunately) doesn't promise any crooning, our chat gives a sense of Paul's unique take on Musebots as well as his sensitivity to his computer's feelings.
Your 'collaboration' is a bit different because the 'human' in the equation is you. What can we expect from your performance?
I think what I've put together has a lot in common with performances involving modular synthesizers, so...I'll basically be sitting with my laptop and a controller, and as I move knobs and press buttons, the sounds will change in ways that are sometimes fairly easy to follow and other times a bit less clear. A difference worth noting is that while modular synths typically deal with parameters that shape timbre and articulation, the changes I'll make will focus more on compositional parameters like pitch selection and rhythm.
How do you plan to collaborate with the bots? What's the interaction?
For this performance, a good way to think of my musebots is as automated piano players. It's like I sat a robot down on a piano bench with no instrument in front of it and turned it on, so that its fingers were moving but not actually hitting any keys. Then I roll a grand piano in front of it, and all of a sudden the timing and placement of the robot's fingers determine what sounds come out of the piano - but then I roll away the piano and replace it with a guitar or a harp or a synthesizer or whatever. So where you hear me, I'd say, is in the instruments and timbres, but the bots determine what actually comes out of those instruments, and when, based on a looser set of playing instructions I've established. I can also tell them to play slower or faster, or play in this key, but I don't know exactly what key will be hit and when.
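(A rough sketch of that split between 'fingers' and 'instruments', in Python - the function names, scale and note counts are illustrative, not Paul's actual system.)

```python
import random

def robot_fingers(key_pitches, tempo_bpm, n_notes=8):
    # The robot pianist: it alone decides which pitches land on which beats.
    beat = 60.0 / tempo_bpm
    return [(random.choice(key_pitches), i * beat) for i in range(n_notes)]

# The 'instruments' the performer rolls in front of it: timbre only.
def piano(pitch):
    return f"piano tone {pitch}"

def harp(pitch):
    return f"harp pluck {pitch}"

c_major = [60, 62, 64, 65, 67, 69, 71]  # the looser playing instructions
gestures = robot_fingers(c_major, tempo_bpm=90)

# First half on piano, then the performer 'rolls in' the harp; the robot's
# finger movements never change, only what they strike.
for i, (pitch, onset) in enumerate(gestures):
    instrument = piano if i < len(gestures) // 2 else harp
    print(f"t={onset:.2f}s  {instrument(pitch)}")
```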
It sounds like you have a lot of control in this set-up. Is there any way in which this performance will reflect something like the Musebots' interests?
I really like my computer, so I won't speak for its interests until it tells them to me itself. That being said, when I make generative music systems, I usually start with some sort of aesthetic reference or system of logic in my head as a goal; but when I actually start building and testing, I'll try to follow sounds I like as they come up, regardless of whether I've gotten to the point I'd initially set out for. So even if a test of the program completely misses the mark of my original goal, I'll stick with it if I like the way it sounds. In this way, I think I'm always trying to give my computer as much creative space as I can, so that if alternatives arise that are more exciting than my initial goal, I don't let them pass me by just because the program didn't do what I wanted.
You clearly love your computer. Is there anything you find challenging about this process?
My situation is a bit different from the other pieces on the bill, since I'm both designing and performing with the systems, and always have a pretty good view of, and control over, what's happening, what's changing, and what's going to happen. In my case, then, I think the challenge is just setting up as effective a system of communication between me and my bots as possible.
Do Musebots do it better?
If they don't, what's the point?
Musebots were initially conceived by our own 'Chief Musebot Evangelist' Arne Eigenfeldt, along with Ollie Bown, an Australian composer whose research also deals with generative systems. From the start, the project has been defined by two ambitious goals: to get more people involved with cutting-edge musical intelligence, and to experiment with musical autonomy. Toward the first goal, Musebots have always been open-source, meaning that developers share their code with one another and with the public. In fact, many bots are already available for free online through the Musical Metacreation research group, also run by Arne.
As a firmly analogue performer and musician, I was curious about the second goal: experimenting with 'musical autonomy'. My fears of an AI apocalypse aside, I wondered what that might look like in a concert situation. I also wondered what kind of challenges Musebot autonomy might create for human collaborators.
I asked Arne about the Musebots he's been working on, and what we could expect from the works he will be premiering at Play Nice: Musical Collisions Between Humans and Intelligent Machines, Friday, July 28 at the Gold Saucer Studio.
BA: Tell me a bit about the Musebot-Human collaboration you’re working on. What can audiences expect to hear and see?
AE: I'll be working with three different performers: Peggy Lee on cello, Matt Ariaratnam on prepared guitar, and Nathan Marsh on prepared guitar.
The work with Peggy is my latest musebot exploration, where a dozen or more identical Musebots generate a very ambient sound. Musebots send messages back and forth to negotiate musical aspects; in this case, the results of their discussions are presented to Peggy in traditional musical notation in the form of suggestions: "these are some pitches that would work with what we're doing". Peggy's performance is then taken into account in their future decisions, so a complex feedback loop emerges.
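(A toy sketch of that negotiation: real Musebots exchange such messages over a network, and the voting scheme and threshold below are my own illustration, not Arne's implementation.)

```python
import random

class AmbientBot:
    def __init__(self, name):
        self.name = name
        # Each bot privately prefers a handful of pitches.
        self.preferred = set(random.sample(range(48, 72), 4))

    def propose(self):
        # Broadcast preferences to the ensemble.
        return self.preferred

bots = [AmbientBot(f"bot{i:02d}") for i in range(12)]

# 'Negotiation': tally the proposals and keep pitches that at least a
# third of the bots agree on.
tally = {}
for bot in bots:
    for pitch in bot.propose():
        tally[pitch] = tally.get(pitch, 0) + 1
agreed = sorted(p for p, votes in tally.items() if votes >= len(bots) // 3)

# These become the notated suggestions presented to the cellist.
print("pitches that would work with what we're doing:", agreed)
```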
The work with Matt and Nathan will be quite different, exploring more of a noise aesthetic. We're going to have two separate performances, using the same Musebots, in an attempt to show how the Musebots react differently to different situations. Matt and Nathan are going to improvise on their guitars in unusual ways, using the guitar as a sound generating tool by drumming on the strings, attaching devices to it to create new sounds, etc.
Both Matt and Nathan have given me some recordings of their "typical" improvisations, and the Musebots have analyzed these recordings in an effort to learn certain characteristics of their playing. For example, given a recording of playing on the strings with chopsticks, the Musebots will have learned to recognize that particular playing style. The bots have a large library of recordings that they have access to, and "know" intimately, and can decide to play something in response to the live performer based upon their understanding of the sound environment. This is essentially the same kind of software that Apple or Spotify uses to recommend music; however, while those companies obviously want to be accurate in their recognition, I'm more interested in the inaccuracies – the liminal space between machine perfection and apparent failure. What happens when Matt or Nathan does something that the Musebots haven't heard before? The bots may respond with something quite unexpected, which, like the work with Peggy, will produce a complex feedback network between human and machine.
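(In spirit, the recognition step might resemble this toy nearest-neighbour sketch. The feature values and style names are invented; the real system analyzes actual audio recordings.)

```python
import math

# Toy feature vectors standing in for audio analysis results
# (e.g. brightness, noisiness, attack density), one per known style.
LIBRARY = {
    "chopsticks on strings": (0.8, 0.6, 0.9),
    "ebow drone":            (0.3, 0.1, 0.1),
    "taps on guitar body":   (0.2, 0.7, 0.8),
}

def classify(features, threshold=0.4):
    # Nearest-neighbour match against the learned styles. Returns None
    # when nothing is close enough, i.e. the bot hears something new.
    best, best_dist = None, float("inf")
    for style, reference in LIBRARY.items():
        d = math.dist(features, reference)
        if d < best_dist:
            best, best_dist = style, d
    return best if best_dist < threshold else None

print(classify((0.75, 0.65, 0.85)))  # recognized: 'chopsticks on strings'
print(classify((0.0, 0.0, 1.0)))     # None: respond unpredictably
```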
How do the Musebots you’ve designed reflect your own personality or interests?
I'm a composer rather than an improviser myself, so I tend to think in large structures and organization. In the work with Peggy, the Musebots are generating an entire musical structure, and Peggy is considered a part of it; however, if Peggy decides not to play anything, the Musebots will happily complete the work on their own.
In the work with prepared guitar, I've set up a space for interaction. I think an important aspect is that I'm personally not part of the process once the performance begins. I know how the Musebots work, and I could influence the performance in that respect; Matt and Nathan don't know the logic behind the bots, and can only react to what they hear the Musebots do. This is kind of like how improvisers interact - they don't know what the other musician is going to do, or why they did what they did, but can only react to the sound as it is.
How do you think Musebots reflect their own interests?
In the work with Peggy, the Musebots are indifferent to the live performer. They have internal desires to create a musical surface to the structure they have agreed upon. The live performer can influence their actions, but not really control them.
In the works with live guitar, the Musebots' intentions and desires are much simpler: react to what they hear. If the guitarist doesn't play anything, the Musebots will do something, but only half-heartedly.
What's so challenging about bringing Musebots together with live human performers?
Ironically, not much from my end. I used to work exclusively with live performers and responsive systems. Musebots were developed to be autonomous and NOT rely upon humans. The other developers have previously created Musebots that follow this paradigm, so it will be more of a challenge to see how they decide to bring performers into the musebot loop.
Musebots have, for the most part, been modelled upon human interactions - beat makers, bass players, drone makers, etc. The Musebots tell each other what they are doing (rather than forcing the other Musebots to extract that information on their own). They can even tell other Musebots what they intend to do, which is something that human performers cannot do (at least while playing). How the other developers and performers resolve these differences will be quite exciting.
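(By way of illustration, a status message of the kind Arne describes might carry both a current state and an intention. The field names here are hypothetical, not the actual Musebot message format.)

```python
# A hypothetical broadcast from one musebot to the ensemble: it announces
# what it is doing now and what it intends to do next, sparing the other
# bots the work of extracting that information by listening.
status_message = {
    "sender": "bassbot01",
    "doing":  {"role": "bass", "key": "C", "density": 0.4},
    "intent": {"change": "key", "to": "F", "in_beats": 32},
}
print(status_message)
```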
Do Musebots do it better?
As I mentioned, I'm a composer interested in creating large musical structures. A long time ago, I was a jazz bass player, and have always loved the mercurial nature of live improvisation. Musebots allow me to design virtual performers to do things that humans may not do for me, like play the same thing for a long time, or make a certain kind of algorithmic variation on a given musical phrase. I can provide the musebot with the specific intelligence that I want for a given musical situation, and make them ignore certain aspects. I can make three (or twelve) copies of a musebot and hear the result, which is a bit more difficult with humans!
Friday, July 28 at the Gold Saucer Studio will mark a world first for the intelligent musical agents known as ‘Musebots’. For the first time ever, musebots will generate music in response to human collaborators, including musicians, a videographer and a poet. Featuring musebots designed by Arne Eigenfeldt, Matthew Horrigan, Paul Paroczai and Yves Candau, Play Nice will test the boundaries of machine creativity in an evening of eclectic music and performance.
Musebots are pieces of software that autonomously create music in collaboration with other musebots. Individual musebots are like players in a band: they make specific kinds of sounds and respond to the sounds made by others. Each bot ‘listens’ and makes decisions in real time, based on their unique role in the ensemble. Much like human performers, the bots can be unpredictable. Though they are coded by human composers, their performances are exciting because they don’t always do what they’re told.
Up until this point, the musebots have collaborated exclusively with other musebots; now they are going to be forced to respond to humans. Some of the city’s top improvisers will perform alongside the bots, including cellist Peggy Lee and guitarists Adrian Verdejo, Matthew Ariaratnam and Nathan Marsh. The bots will also have some interdisciplinary curve-balls to contend with, including spoken word by me (Barbara Adler) and live video by David Storen. Far more than a technical demonstration, the concert aims to showcase the musebots’ artistic potential and to highlight their broad musical range.
Musebots are the brainchild of Ollie Bown and Arne Eigenfeldt, two longtime designers of live generative music systems. A defining goal of the musebot project is to establish a creative platform for experimenting with musical autonomy, open to people developing cutting-edge music intelligence, or simply exploring the creative potential of generative processes in music.
The Musebot Project is open source: anyone can download musebots or learn to make their own at http://musicalmetacreation.org/musebots/
This ain't written by Barbara - this is Arne.
I don't want to blog, because it may turn into a lecture. Instead, I've asked Barbara Adler, an accomplished text artist and a fine accordionist – who is also one of the performers on this show – to post some of her thoughts about collaborating in this project.
What's it like to collaborate with virtual agents? What's different than playing with humans? What do you need to know about how they will behave? Can you trust them?
Stay tuned for some rowdy posts...