Musebots do it better?

5 Questions with Matt Horrigan

7/21/2017

Musebot designer Matt Horrigan is an instrumental and electroacoustic composer, improviser, writer, and enthusiastic maker of sounds. In his multidisciplinary and often experimental work, Matt draws on collage, pastiche, and expressionism, as well as a research practice that explores music’s function as a referential art form. Accurately dubbed the Musebot project's "Chief Obfuscator", Matt has a dry sense of humor that comes through in our conversation, which touches on 'pretentious heavy metal', JavaScript poetry, and the limits of Musebot intelligence. Catch him and his Druid-inspired Musebots in Play Nice, Friday, July 28 at The Gold Saucer.
[Image caption: "Kids: This is not Bill Nye." (but it is inspired by Spinal Tap...)]


BA: I stumble into The Gold Saucer mid-way through your set. What do I see?

MH: Adrian Verdejo riffing on the guitar, David Storen projecting occult symbols, my computer issuing heavily distorted melodies. We're working off the sort of "monolithic," forbidding, highly pretentious heavy metal that went out of style about forty years ago but keeps getting dug up again by drony people like Sunn O))). Something about Spinal Tap's "The Druids" inspires me.
Can you tell me a bit about how your Musebots work? How do they make choices?

My musebots are prototype-based and object-oriented, which means they classify everything and create new individuals based on a class prototype. Each musical phrase is such an individual, and in my world, instruments don't play the phrases; the phrases play themselves. Once a section of music plays, it's destroyed forever, so the only way for one of my bots to repeat itself is to clone something it's going to say, say it, and then say the clone version. My bots develop their ideas over time by cloning and mutating statements they've already thought up.
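
To make that concrete, here is a minimal sketch in JavaScript (the language Matt says he works in) of a prototype-based, self-playing phrase and the clone-and-mutate loop he describes. It is my own illustration, not his code; the names Phrase, perform, clone, and mutate are invented for the example.

    // Hypothetical sketch: each phrase is an individual cloned from a prototype,
    // plays itself once, and is then gone; repetition means cloning first.
    const Phrase = {
      notes: [60, 62, 64],
      perform() {
        this.notes.forEach(n => console.log("play", n)); // the phrase plays itself
      },
      clone() {
        const copy = Object.create(this); // a new individual, with this as its prototype
        copy.notes = [...this.notes];     // and its own copy of the material
        return copy;
      },
      mutate() {
        const i = Math.floor(Math.random() * this.notes.length);
        this.notes[i] += Math.random() < 0.5 ? 1 : -1; // nudge one pitch
        return this;
      },
    };

    let idea = Phrase.clone();          // a fresh individual from the class prototype
    const echo = idea.clone().mutate(); // clone before performing...
    idea.perform();
    idea = null;                        // ...because a performed phrase is destroyed forever
    echo.perform();                     // the developed "repetition"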

So, where does 'creativity' come in, for the bots?

Original ideas in this hierarchical world occur as randomness, embedded a little bit in each part of the musebot's thought process. My bots can't do anything without consulting many random functions (oracles, prophets, astrologists, whatever) for "inspiration." And literally so: since there is no real randomness, just irrelevant information - the same way there's no real magic, just things one doesn't understand - many of those "random" functions actually look at the numerology of the current time and date. The bots listen to Adrian and David as well, but they don't react directly; they accept other agents' inputs as influences rather than determinants. That means they can sometimes go off doing "their own thing," which deviates from the painfully linear kind of interactivity that characterizes pedantic science demos.
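
As a toy version of that date-and-time "numerology" (my own guess at the flavor, not Matt's implementation), an oracle function might digest the current timestamp into a value the bot consults for inspiration:

    // Hypothetical oracle: no real randomness, just irrelevant information.
    // The digit-summing scheme is my assumption, purely illustrative.
    function oracle() {
      const digits = String(Date.now()).split("").map(Number);
      const sum = digits.reduce((a, b) => a + b, 0); // crude numerological digest
      return (sum % 100) / 100;                      // "inspiration" in [0, 1)
    }

    if (oracle() > 0.5) {
      console.log("the omens favor a new phrase");
    }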

It sounds like the relationships you're setting up will be fairly subtle.

Kids: this is not Bill Nye.  The performance is not going to explain itself.

Right. So how can audiences learn more about what's going on, if they're interested?

I prioritize networking and general compatibility when developing software, which probably seems obvious to some humans I've worked with and totally surprising to others. My algorithms also run in web browsers, and I'll put some up online (in a more 'ambient' version) just prior to the concert.

Would your programs be interesting to someone who didn't already know how to read code?

Because I work in JavaScript, my programs look a little bit like poetry in English. I name my variables as intuitively as possible, so you can read some of the code as if it were natural language, particularly some of the basic stuff about thinking: if (livingConditions == true) { playVigorously(); } else { this.cancel(); }
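
For anyone who wants to run that line, here is a self-contained version; livingConditions, playVigorously, and cancel are the names from Matt's quote, but the surrounding stub bodies are my own invention:

    // Stub context so the quoted line executes; the method bodies are invented.
    const bot = {
      livingConditions: true,
      playVigorously() { console.log("playing vigorously"); },
      cancel() { console.log("cancelled"); },
      think() {
        if (this.livingConditions == true) { this.playVigorously(); } else { this.cancel(); }
      },
    };

    bot.think(); // logs "playing vigorously"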

What do you see as the main challenges in bringing together Musebots and live human performers?

Auditory and visual scene analysis, that is, the basic task of figuring out what they're seeing and hearing. Right now, if you feed even a very well-trained computer a video of an apple, most of the time it'll have no idea what it's looking at. Sensory processing poses serious issues for machines. They are mostly blind and mostly deaf. It's true for the sentinels in The Matrix, and it's true for Musebots.

Do Musebots do it better?

No. While there will come a time, perhaps even this month, when they entertain better, I don't think it will ever be wise to claim that they make better art for humans. That would be like saying that Montreal art is best for Vancouver, or white art is best for the Chinese community. The best art has to be emic. Computers make the best art for computers. To put it another way, as is the case with any system, metacreative algorithms don't share your interests - they can only estimate what you will respond to. That's why they're worth paying attention to, but keep the dose manageable.




    Author

    This blog is written by Barbara Adler, one of the performers on the bill for Play Nice. What's it like to work with artificial agents? What assumptions can you bring into the collaboration? Who buys the beer?
