Paul Paroczai is an electroacoustic composer, sound designer, and instrument builder currently living in Vancouver. He holds a B.Mus. (composition) from the University of California, Berkeley, and an MFA from SFU.
I've included this photo of Paul in the croon-heavy role of 'Honza' from the musical production of Klasika. What's impossible to discern in this photo is that Paul's guitar (left) is actually a sly robo-guitar, which he souped up to play field recordings and other samples using an Arduino system that wirelessly communicated with the theatre's sound system. A frequent interdisciplinary collaborator, Paul decided to stretch his solo performance chops in the upcoming Play Nice, where he will interact live with a Musebot system he designed himself. Though he (unfortunately) doesn't promise any crooning, our chat gives a sense of Paul's unique take on Musebots as well as his sensitivity to his computer's feelings.
Your 'collaboration' is a bit different because the 'human' in the equation is you. What can we expect from your performance?
I think what I've put together has a lot in common with performances involving modular synthesizers, so... I'll basically be sitting with my laptop and a controller, and as I move knobs and press buttons the sounds will change in ways that are sometimes fairly easy to follow and other times a bit less clear. A difference worth noting is that while modular synths typically deal with parameters that shape timbre and articulation, the changes I'll make will be focused more on compositional parameters like pitch selection and rhythm.
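To give a rough idea of that distinction, here's a minimal sketch (my own, not Paul's actual patch) of what it might look like to map two controller knobs to compositional parameters rather than timbral ones; every name and value here is hypothetical.

```python
# Hypothetical sketch: mapping two controller knobs to *compositional*
# parameters (pitch set, tempo) rather than timbral ones (filter cutoff, envelope).
# None of these names or values come from Paul's actual setup.

PITCH_SETS = {
    "C major":    [60, 62, 64, 65, 67, 69, 71],
    "A minor":    [57, 59, 60, 62, 64, 65, 67],
    "whole tone": [60, 62, 64, 66, 68, 70],
    "pentatonic": [60, 62, 64, 67, 69],
}

def map_knobs(knob_a: int, knob_b: int) -> dict:
    """Translate two 0-127 knob values into loose instructions for the bots."""
    names = list(PITCH_SETS)
    # Knob A chooses which pitch set the bots may draw from.
    set_name = names[min(knob_a * len(names) // 128, len(names) - 1)]
    # Knob B scales tempo between 40 and 200 BPM.
    tempo_bpm = 40 + (knob_b / 127) * 160
    return {"pitch_set": PITCH_SETS[set_name], "tempo_bpm": round(tempo_bpm)}

print(map_knobs(30, 90))  # -> {'pitch_set': [60, 62, ...], 'tempo_bpm': 153}
```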
How do you plan to collaborate with the bots? What's the interaction?
For this performance a good way to think of my musebots is as automated piano players. It's like I sat a robot down on a piano bench with no instrument in front of it and turned it on, so that its fingers were moving but not actually hitting any keys. Then I roll a grand piano in front of it, and all of a sudden the timing and placement of the robot's fingers determine what sounds are coming out of the piano; but then I roll away the piano and replace it with a guitar or a harp or a synthesizer or whatever. So where you hear me, I'd say, is in the instruments and timbres, but the bots determine what actually comes out of those instruments, and when, based on a looser set of playing instructions I've established. I can also tell them to play slower or play faster or play in this key, but I don't know exactly which note will be hit and when.
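To make that piano-rolling metaphor a little more concrete, here's a rough sketch (mine, not Paul's code) of the division of labour he describes: the bot decides what to play and when from a loose instruction set, while the performer independently decides which "instrument" those decisions are routed to.

```python
import random
import time

# Hypothetical sketch of the split Paul describes: the bot generates
# pitch and timing from loose instructions; the performer only chooses
# which instrument renders those decisions.

class MuseBot:
    """Generates notes from a loose instruction set; knows nothing about timbre."""
    def __init__(self, pitch_set, tempo_bpm):
        self.pitch_set = pitch_set
        self.tempo_bpm = tempo_bpm

    def next_note(self):
        # The bot chooses pitch and duration; the performer never does.
        pitch = random.choice(self.pitch_set)
        duration = 60.0 / self.tempo_bpm * random.choice([0.5, 1, 2])
        return pitch, duration

def perform(bot, get_current_instrument, n_notes=8):
    """The performer 'rolls instruments in front of' the bot by changing
    what get_current_instrument() returns mid-performance."""
    for _ in range(n_notes):
        pitch, duration = bot.next_note()
        instrument = get_current_instrument()  # piano, harp, synth...
        print(f"{instrument} plays MIDI note {pitch} for {duration:.2f}s")
        time.sleep(duration)

# The bot keeps 'moving its fingers' while the performer swaps instruments.
bot = MuseBot(pitch_set=[60, 62, 64, 67, 69], tempo_bpm=90)
instruments = iter(["piano", "piano", "harp", "harp",
                    "synth", "synth", "guitar", "guitar"])
perform(bot, get_current_instrument=lambda: next(instruments))
```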
It sounds like you have a lot of control in this set-up. Is there any way in which this performance will reflect something like the Musebots' interests?
I really like my computer so I won't speak for its interests until it tells me itself. That being said, when I make generative music systems, I usually start with some sort of aesthetic reference or system of logic in my head as a goal, but when I actually start building and testing, I'll try to follow sounds I like as they come up, regardless of whether I've gotten to the point I'd initially set out for. So even if a test of the program completely misses the mark of my original goal, I'll stick with it if I like the way it sounds. In this way I think I'm always trying to give my computer as much creative space as I can, so that if alternatives arise that are more exciting than my initial goal, I don't let them pass me by just because the program didn't do what I wanted.
You clearly love your computer. Is there anything you find challenging about this process?
My situation is a bit different from the other pieces on the bill since I'm both designing and performing with the systems, and can always have a pretty good view of, and control over, what's happening, what's changing, and what's going to happen. In my case, then, I think the challenge is just setting up as effective a system of communication between me and my bots as possible.
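For context on what that performer-to-bot communication can look like: Musebot ensembles in the broader Musebot ecosystem typically exchange OSC messages over a local network. Below is a minimal sketch, using the python-osc library, of sending the kind of loose instruction Paul mentions ("play faster", "play in this key"); the address patterns and port are made up for illustration and are not Paul's actual setup.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical sketch: sending loose playing instructions to a bot as OSC
# messages. The address patterns and port below are invented for illustration.
client = SimpleUDPClient("127.0.0.1", 9000)
client.send_message("/musebot/tempo", 132)           # "play faster"
client.send_message("/musebot/pitch_set", "A minor")  # "play in this key"
```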
Do Musebots do it better?
If they don't what's the point?