Can you tell me a bit about how your Musebots work? How do they make choices?
My musebots are prototype-based and object-oriented: they classify everything and create new individuals by cloning a prototype. Each musical phrase is such an individual, and in my world, instruments don't play the phrases; the phrases play themselves. Once a section of music plays, it's destroyed forever, so the only way for one of my bots to repeat itself is to clone something it's about to say, say it, and then say the clone. My bots develop their ideas over time by cloning and mutating statements they've already thought up.
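The lifecycle described above can be sketched in a few lines. This is a minimal illustration, not the actual musebot code: the class name `Phrase`, the pitch-list representation, and the `spent` flag are all my own assumptions, standing in for whatever the real system uses.

```python
import copy
import random

class Phrase:
    """A self-playing musical phrase, created by cloning a prototype."""

    def __init__(self, pitches):
        self.pitches = list(pitches)  # hypothetical representation: MIDI pitches
        self.spent = False            # once played, a phrase is gone for good

    def clone(self):
        # The only route to repetition: copy a phrase before it plays.
        return copy.deepcopy(self)

    def mutate(self, amount=1):
        # Develop an idea by nudging a few notes in a cloned copy.
        twin = self.clone()
        for i in random.sample(range(len(twin.pitches)),
                               k=min(amount, len(twin.pitches))):
            twin.pitches[i] += random.choice([-2, -1, 1, 2])
        return twin

    def play(self):
        # The phrase plays itself, then is destroyed forever.
        assert not self.spent, "a played phrase cannot sound again"
        print("playing:", self.pitches)
        self.spent = True

prototype = Phrase([60, 62, 64, 67])
statement = prototype.clone()
echo = statement.clone()        # clone *before* playing, or the idea is lost
statement.play()
echo.mutate(amount=2).play()    # the "repeat" is really a mutated descendant
```

The point of the sketch is the constraint, not the notes: nothing can sound twice, so variation is the only form memory takes.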
So, where does 'creativity' come in, for the bots?
Original ideas in this hierarchical world occur as randomness, embedded a little in each part of the musebot's thought process. My bots can't do anything without consulting many random functions (oracles, prophets, astrologists, whatever) for "inspiration." And I mean that literally: since there is no real randomness, just irrelevant information (the same way there's no real magic, just things one doesn't understand), many of these "random" functions actually read the numerology of the current time and date. The bots listen to Adrian and David as well, but they don't react directly; they accept other agents' inputs as influences rather than determinants. That means they can sometimes go off and do "their own thing," which deviates from the painfully linear kind of interactivity that characterizes pedantic science demos.
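Both ideas here - a "random" oracle that is really numerology, and listening that influences without determining - are easy to sketch. The function names `oracle` and `next_pitch`, the digit-sum scheme, and the `openness` parameter are my own illustrative assumptions, not the bots' actual formulas.

```python
from datetime import datetime

def oracle(offset=0):
    """A 'random' value read from the numerology of the current moment.
    No real randomness here, just irrelevant information: sum the digits
    of the date and time, fold into a pitch class 0-11."""
    now = datetime.now()
    digits = f"{now.year}{now.month}{now.day}{now.hour}{now.minute}{now.second}"
    return (sum(int(d) for d in digits) + offset) % 12

def next_pitch(own_idea, heard_pitch, openness=0.3):
    """Treat another agent's input as an influence, not a determinant:
    lean toward what was heard, weighted by openness, but never just copy it."""
    return round(own_idea * (1 - openness) + heard_pitch * openness)
```

With `openness=0` the bot ignores its collaborators entirely and goes off doing its own thing; at intermediate values it drifts toward what it hears without ever being driven by it.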
It sounds like the relationships you're setting up will be fairly subtle.
Kids: this is not Bill Nye. The performance is not going to explain itself.
Right. So how can audiences learn more about what's going on, if they're interested?
I prioritize networking and general compatibility when developing software, which probably seems obvious to some humans I've worked with and totally surprising to others. My algorithms also run in web browsers, and I'll put some up online (in a more 'ambient' version) just prior to the concert.
Would your programs be interesting to someone who didn't already know how to read code?
What do you see as the main challenges in bringing together Musebots and live human performers?
Auditory and visual scene analysis, that is, the basic task of figuring out what they're seeing and hearing. Right now, if you feed even a very well-trained computer a video of an apple, most of the time it'll have no idea what it's looking at. Sensory processing poses serious issues for machines. They are mostly blind and mostly deaf. It's true for the sentinels in The Matrix, and it's true for Musebots.
Do Musebots do it better?
No. While there will come a time, perhaps even this month, when they entertain better, I don't think it will ever be wise to claim that they make better art for humans. That would be like saying that Montreal art is best for Vancouver, or that white art is best for the Chinese community. The best art has to be emic. Computers make the best art for computers. To put it another way, as with any system, metacreative algorithms don't share your interests; they can only estimate what you will respond to. That's why they're worth paying attention to, but keep the dose manageable.