Play Nice:
Musical collisions between humans and intelligent machines

July 28, 2017
Gold Saucer, Vancouver
  • Program
  • Designers
  • Performers
  • Video
  • Software
Moments: Monochromatic
Arne Eigenfeldt, musebot developer
Peggy Lee, cello

Hewn from Living Rock
David Storen, video musebot
Matthew Horrigan, audio musebots
Adrian Verdejo, guitar

Indifference Engine vs. Matthew Ariaratnam
Arne Eigenfeldt, musebot developer
Matthew Ariaratnam, prepared guitar

Convolved shadowgraph
Yves Candau, musebot developer and movement
Barbara Adler, text

Indifference Engine vs. Nathan Marsh
Arne Eigenfeldt, musebot developer
Nathan Marsh, prepared guitar

vulcan_a
Paul Paroczai, musebot developer and live performance

Sonoclasm
Arne Eigenfeldt, audio musebot developer; Matthew Horrigan, video musebot developer

Additional Credits:
Publicity: Barbara Adler
Photography: Ethan Eigenfeldt
Video documentation: Vilhelm Sundin
Video camera: Vilhelm Sundin and Ethan Eigenfeldt

Arne Eigenfeldt

I approach my software tools as extensions of my compositional thought process; this differs from many in the generative music field, who consider the potential for software improvisation. As such, musebots can be considered tiny versions of myself, executing my musical wishes, building (and re-building) structures and surfaces from germinal ideas instantiated into their intentions and desires.

Matthew Horrigan

  Ifn
vulnerability is inthereof (vincibility),
op-
                (pression) is
                ?posit
                                Excess thereof.

Yves Candau

Yves Candau is a movement and sound artist, coming to these experiential disciplines after completing graduate degrees in mathematics and cognitive science.  Yves seeks to integrate his research and artistic endeavours through a number of cross-disciplinary fascinations: mindfulness as the foundation of creative processes, working from found materials and through emergent forms, and leveraging the transformational potential of somatic practices. 
Paul Paroczai
​Paul Paroczai is an electroacoustic composer, sound designer, and instrument builder currently living in Vancouver. He holds a B.Mus (composition) from the University of California Berkeley and an MFA from SFU.

Peggy Lee


cello

Cellist, improviser, composer Peggy Lee is a member of the New Orchestra Workshop, and collaborates frequently with Ron Samworth and Tony Wilson and with her husband, drummer Dylan van der Schyff, as well as leading/co-leading The Peggy Lee Band, Waxwing, and Beautiful Tool.

Matthew Ariaratnam


prepared guitar

Matthew Ariaratnam is a composer, improviser, guitarist, and music educator. His music and research focuses on listening, field recordings, sonic textures, graphic/alternative scoring, prepared instruments, electroacoustic music, relational aesthetics, musicking, and songwriting.

David Storen


video

David Storen is a composer, improviser, and multimedia artist based in Vancouver, British Columbia.​ David's work as a composer is highly influenced by the visual arts and by the continual search for ways to integrate them into his practice.

Adrian Verdejo


guitar

Adrian Verdejo is a Canadian classical guitarist living in Vancouver. He performs as a soloist and with the Victoria Guitar Trio. Adrian has also played with the Vancouver Symphony Orchestra, Turning Point, Aventa Ensemble and many more. 

Barbara Adler


text

Barbara Adler is an interdisciplinary artist whose work brings together literary performance, composition, live event production and arts education to explore the intersections between text, music, sound and theatre. Her work has been presented through multiple solo and band albums, publication in spoken word anthologies and performances at major music and literary festivals, including The Vancouver Folk Festival, The Vancouver Writers Festival, The Winnipeg Folk Festival, and the Vienna Literature Festival.​

Nathan Marsh


prepared guitar

Nathan Marsh is a sound artist, educator and performer based in Vancouver who is concerned with the concept of experience and “objecthood” in music. He emphasizes collaboration and physical involvement in the creation of his works, intending to draw people into the physicality of music as a lived experience rather than as a predominantly auditory phenomenon.

Software

Moments: Monochromatic
Arne Eigenfeldt

This is, by far, the most complicated musebot setup I use, and there is no guarantee that it will work on anyone else's machine, unfortunately. It requires a number of third-party Max objects, Max running in 64-bit mode to open the musebots, Soundflower, and Ableton Live running in 64-bit mode with Max for Live.

Audio routing
  • SoundflowerBed
    • no outputs
  • Ableton Live
    • input from Soundflower 64
    • output to audio interface
  • MultiBOT
    • output to Soundflower 64
      • (it sends chan 1/2 to ListenBOT, and 3/4 to Ableton)
  • ListenBOT
    • input from Soundflower 64
  • LivePerformerBOT
    • input from audio interface (i.e. a microphone)
    • output to Soundflower 64
      • (it sends chan 1/2 to ListenBOT, and 3/4 to Ableton)

Conductor
Monochrome_Conductor
Lots of changes and additions. 
  • the performance is actually initiated by the OrchestratorBOT; thus, when OrchestratorBOT sends /broadcast/newComposition (via its Launch button), the Conductor turns on (after a delay, in which the Conductor waits for a chord /plan message to arrive from PCsetBOT, and parameters to arrive from ParamBOT).
  • when an ensemble is launched from OrchestratorBOT, the Conductor opens up the ports to those musebots only. This is borrowed from Moments: Polychromatic, which uses many audio musebots that remain open during successive performances, but not necessarily active. In Monochromatic, all audio is produced in MultiBOT. A button is available that will make all open musebots active (and thus receive messages from the Conductor).
  • broadcast messages are filtered in two ways: first, not all messages get through (e.g. /tempo, /activeBot); second, messages are only passed to active musebots (see the sketch after this list).
  • control over Ableton's tempo (required for tempo-based delays) is sent via MIDI, as well as turning Ableton on and off. Tempo is received from OrchestratorBOT's ensemble file.
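
A minimal sketch of this two-stage filtering, in Python rather than Max; treating /tempo and /activeBot as the blocked addresses, and the structure of the musebot list, are assumptions rather than the actual patch logic:

    # Sketch only: two-stage filtering of /broadcast messages.
    FILTERED = {"/tempo", "/activeBot"}        # assumed to be the blocked addresses

    def forward_broadcast(address, args, musebots):
        """Relay a broadcast message, dropping filtered addresses and
        sending only to musebots currently marked as active."""
        if address in FILTERED:
            return
        for bot in musebots:
            if bot.get("active"):
                bot["send"](address, args)     # stand-in for the OSC send to that bot's port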

Musebots
LivePerformerBOT
Application must be run in 64-bit mode. Requires MaxScore to run (available as a Package in Max 7).
Audio input would most likely come from an audio interface. Output is sent to Soundflower 64: internally, channels 1 and 2 are sent to ListenBOT, while channels 3 and 4 are sent to Ableton. Audio must be on, and the gain raised to a good level.
This musebot translates some of the messages being passed around into musical notation for a live performer.
  • from PCsetBOT: it displays current available pitches. These are displayed within the instrument's range (cello, by default), and transposed, if necessary;
  • active pitches from the audio musebots (MultiBOT).
For this performance, the live cellist also viewed the ParamBOT, and could see when section changes were upcoming.
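
As a rough illustration of that range/transposition step (the actual notation is handled by MaxScore; the MIDI range values below are an assumption), a small sketch:

    def realize_in_range(pitch_class, low=36, high=81):
        """Sketch: place a pitch class (0-11) into a playable register,
        here a rough cello range in MIDI note numbers (C2 to A5)."""
        return [n for n in range(low, high + 1) if n % 12 == pitch_class]

    # e.g. realize_in_range(2) -> [38, 50, 62, 74]  (all the Ds within range)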

MultiBOT
Audio out must be set to Soundflower (64ch): internally, it sends channels 1 and 2 to ListenBOT, and channels 3 and 4 to Ableton. Other than that, no user interface controls.
This musebot generates all the audio using resonant noise set to frequencies received from PCsetBOT. The number of RezNoizBot subpatchers (which are actual independent musebots) is set within OrchestratorBOT's ensemble, including the style: Londyn (long drones); Milan (medium melodic durations); Siena (short tones).
Like many of my musebots, how the individual musebot reacts to its environment is dependent upon internal parameters (or personalities):
  • impatience (how long it takes to begin playing);
  • persistence (how long it will play once it begins);
  • vitality (how much energy it has to add extra voices);
  • consistency (how much it will change over the course of a section);
  • compliance (how closely it will attempt to match the goal parameters from ParamBOT);
  • repose (whether it prefers to play in sparse sections, or more active sections).
These parameters are all set by the ensemble score, sent from OrchestratorBOT.
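
A toy sketch of how two of these parameters might be applied; the parameter names come from the list above, but the scaling and timing logic is illustrative only, not the MultiBOT code:

    import random

    # Hypothetical personality values (0.0-1.0) for one voice, as set by the ensemble score.
    personality = {"impatience": 0.3, "persistence": 0.7, "vitality": 0.5,
                   "consistency": 0.8, "compliance": 0.6, "repose": 0.4}

    def entry_delay(p, max_wait=60.0):
        # Higher impatience means a shorter wait before the voice begins playing.
        return max_wait * (1.0 - p["impatience"]) * random.random()

    def blended_arousal(p, goal_arousal):
        # Compliance blends the voice's own tendency toward ParamBOT's goal parameter.
        own = random.random()
        return p["compliance"] * goal_arousal + (1.0 - p["compliance"]) * own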

ParamBOT
This musebot generates the overall structure for each composition. When it receives a /newComposition message from OrchestratorBOT, it generates a number of moments/sections for the specified duration (default is 600 seconds, or ten minutes). For each moment, consistent or varying parameters are generated, including:
  • speed (interpreted by the Conductor);
  • arousal (interpreted differently by different musebots, but loosely the activity level);
  • valence (interpreted differently by different musebots, but loosely the pleasantness);
  • volume;
  • spectrum (target Bark band slices for each section).
These parameters are sent as messages every few seconds, along with a continuous value (0.0-1.0) indicating progress through the section.
Parameters can be generated by hand by clicking the "generate" button.
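
A simplified sketch of that structure generation; the section-length range, and treating each parameter as a single random target rather than a consistent-or-varying curve, are simplifications:

    import random

    def generate_moments(total_dur=600, min_len=30, max_len=120):
        """Split the composition into moments, each with its own target parameters."""
        moments, t = [], 0
        while t < total_dur:
            length = min(random.randint(min_len, max_len), total_dur - t)
            moments.append({"start": t, "length": length,
                            "speed": random.random(), "arousal": random.random(),
                            "valence": random.random(), "volume": random.random()})
            t += length
        return moments

    def progress(moment, now):
        # The continuous 0.0-1.0 "progress through the section" value.
        return min(1.0, max(0.0, (now - moment["start"]) / moment["length"]))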

PCsetBOT
This musebot generates a single pitch set for the entire composition, using the lowest valence level (interpreted as the most complex) provided by the ParamBOT. Then, for each section, a subset of this complete set is selected, depending upon the section's valence.
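
A sketch of the idea, with the mapping from valence to subset size assumed rather than taken from the patch:

    import random

    def section_pitch_set(full_set, valence, min_size=3):
        """Lower valence (more complex) keeps more of the composition's single
        pitch set; higher valence keeps a smaller subset."""
        size = max(min_size, round(len(full_set) * (1.0 - valence)))
        return sorted(random.sample(full_set, min(size, len(full_set))))

    # e.g. section_pitch_set([0, 1, 3, 4, 6, 8, 10], valence=0.7) might return [1, 4, 6]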

ListenBOT
This musebot analyzes the combined audio output of the MultiBOTs and the live input using 24-band Bark analysis. It compares these values to the requested timbre from ParamBOT for the current section, and then sends out a /spectralDifference message containing the difference between the two.
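
A minimal sketch of that comparison (whether the message carries per-band differences or a single summed value is not specified above, so per-band is assumed):

    def spectral_difference(measured_bark, target_bark):
        """Per-band difference between the measured Bark spectrum and ParamBOT's
        requested spectrum for the current section; this list would be sent out
        as the /spectralDifference message."""
        return [m - t for m, t in zip(measured_bark, target_bark)]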

OrchestratorBOT
This musebot loads ensembles, which are text files describing which musebots are active in a performance, along with individual parameter settings (i.e. impatience values). Ensemble files allow me to curate the relationships between audio musebots; for example, placing many musebots in an ensemble, but all with fairly low impatience parameters, would still create fairly sparse music. In other performances, OrchestratorBOT continually loads new ensembles, and limits the duration of specific ensembles; in Moments: Monochromatic, only a single ensemble plays in a performance. In the case of Play Nice, it was ensemble 5, which contains 10 audio musebots.
Once all musebot applications are loaded, and the Ableton Live set is running, click on "launch" to begin a performance. This sends the ensemble data to the MultiBOTs, and triggers a newComposition message.
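
The ensemble file format itself is not documented here, so the following is a purely hypothetical sketch of the idea (a text file mapping musebot names to parameter settings), not the actual format:

    def load_ensemble(path):
        """Hypothetical format: one musebot per line, e.g.
        'RezNoizBot-1 impatience=0.2 persistence=0.8 repose=0.5'."""
        ensemble = {}
        with open(path) as f:
            for line in f:
                if not line.strip() or line.startswith("#"):
                    continue
                name, *params = line.split()
                ensemble[name] = {k: float(v) for k, v in (p.split("=") for p in params)}
        return ensemble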

Moment_Monochromatic_Ableton
This is the Ableton Live set. Its only purpose is to collect the Soundflower audio, and add a single cohesive set of processing: chorusing and reverb.


Hewn from Living Rock
David Storen (video musebot); Matthew Horrigan (audio musebots)


Conductor
Matt_Max_Musebot_Conductor

Musebots
ds_vidBOT

MidiBOT

MIDIGuitarInputBOT

mh_VideoBOT



Indifference Engine vs. Matthew Ariaratnam
Arne Eigenfeldt

Requires SoundflowerBed for Mac. In performance, live audio (i.e. from an audio interface) is collected in AudioAnalyzerBOT. catartBOT sends its audio via Soundflower 2chan to effectsBOT, which sends its audio back out through the audio interface.


Conductor
Max_Musebot_Conductor
A standard Conductor, with a few idiosyncratic changes: 
• I've included a timer to display the elapsed time for a performance (in [p timer]).
• Rejigged /broadcast/statechange (i.e. coming from a musebot) to turn it into /mc/statechange (i.e. coming from the Conductor). The musebot spec suggests that statechange is an /mc message, assuming that someone is controlling the Conductor and then telling all musebots to change their states. This is actually the first time I've ever used statechange, and it is coming from MIDIcontrollerBOT: thus, I needed to change the Conductor itself (since musebots cannot send /mc messages). This underlines the movement (mine, at least) away from the Conductor potentially having some smarts toward it doing nothing more than routing messages.


Musebots
AudioAnalyzerBOT
This musebot analyzes audio input, and sends feature analysis as messages. 
Click on adc~ to select the input source, select "mic on" from the menu, and raise the input gain. Note that it is possible to play a soundfile for testing purposes, and monitor that soundfile as well.
Four types of audio feature analysis are executed: spectral centroid (the spectral centre of the audio, rather than a pitch analysis; it works very well for non-pitched material), loudness, sparse/active (is the current loudness greater or less than the average of the last few seconds?), and spectral flux (the amount of spectral variation in the audio). Smoothing plays an important role in these analyses, because the limits are reset every few seconds: centroid doesn't have a fixed low and high frequency; instead, it is the relative centroid for the last few seconds. [set min] for loudness can be used if the incoming signal has background noise (in my case, the constant hum of the amplifier).
The features can be remapped to other values (e.g. arousal or valence). Lastly, an averaged Bark analysis (25 bands) is also sent.
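
A sketch of this relative scaling and the sparse/active test; the window size is an assumption:

    from collections import deque

    class RelativeFeature:
        """Rescale a feature against its own recent range rather than fixed limits."""
        def __init__(self, window=50):            # roughly "the last few seconds" of frames
            self.history = deque(maxlen=window)

        def update(self, value):
            self.history.append(value)
            lo, hi = min(self.history), max(self.history)
            return 0.5 if hi == lo else (value - lo) / (hi - lo)

    def sparse_or_active(current_loudness, recent_loudness):
        # Active if the current loudness exceeds the average of the last few seconds.
        avg = sum(recent_loudness) / max(1, len(recent_loudness))
        return "active" if current_loudness > avg else "sparse"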

effectsBOT
Five AudioUnit effects – Delay, Pitch (transposition), NewTimePitch, RingMod, and Distortion. I stored a series of presets using the pattr system, and interpolate between them based upon incoming valence and arousal messages (from AudioAnalyzerBOT).
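
A much-simplified sketch of the interpolation idea (the real patch uses pattr preset interpolation driven by both valence and arousal; this reduces it to two presets and one control):

    def interpolate_presets(preset_a, preset_b, t):
        """Linear interpolation between two stored effect presets
        (dicts of parameter name -> value), with t in 0.0-1.0."""
        return {k: (1.0 - t) * preset_a[k] + t * preset_b[k] for k in preset_a}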

catartBOT
This requires IRCAM's CataRT and FTM software. It only works as a Max patch in Max 6.
Diemo Schwarz's concatenative real-time synthesis software analyses soundfiles, and displays them as (approximately) 250 ms grains in an XY grid based upon analysis features; I use pitch/loudness. I wrote a granular synthesis engine to play grains based upon an XY input (inside the bpatcher grain_sub). In this case, the XY input is a musebot message built from AudioAnalyzerBOT's centroid and loudness features.
I provided 1 GB of material, organized into six sets of sounds – bells, cello, ethnic, matt, strings, voice. All soundfiles were pre-analyzed by CataRT, and could quickly be individually (and autonomously) loaded during performance. Folders automatically changed on statechange messages, which were sent from AudioAnalyzerBOT when activity went to 0.0. Individual soundfiles were selected from within the folder using a distance function based upon the real-time Bark band analysis (which was also done for every soundfile).
Thus, realtime spectral analysis was done on the live performer, and catartBOT was trying to play the closest soundfile in its library; however, that selection was limited by its current context: the sample folder. My first rehearsals with Matthew only used recordings of his previous improvisations; catartBOT was very successful at playing back very similar material. However, I found it immensely more interesting when the musebot couldn't play back exact copies, and was forced to look for something similar in a folder where there was a possibility that nothing was very similar.
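
A sketch of that selection step; the file names and the use of a plain Euclidean distance over the Bark bands are assumptions:

    import math

    def closest_soundfile(live_bark, folder_analysis):
        """Choose the soundfile in the current folder whose stored Bark profile
        is nearest the live analysis; folder_analysis maps file name -> band values."""
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(folder_analysis, key=lambda name: distance(live_bark, folder_analysis[name]))
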
Convolved Shadowgraph
Yves Candau

Conductor
Patch
The changes I made to the Conductor included...


Musebots
Bot#1
This bot does...

Bot#2
This bot does...
Indifference Engine vs. Nathan Marsh
Arne Eigenfeldt

Requires SoundflowerBed for Mac. In performance, live audio (i.e. from an audio interface) is collected in AudioAnalyzerBOT. catartBOT sends its audio via Soundflower 2chan to effectsBOT, which sends its audio back out through the audio interface.

Conductor
Max_Musebot_Conductor
A standard Conductor, with a few idiosyncratic changes: 
• I've included a timer to display the elapsed time for a performance (in [p timer]).
• Rejigged /broadcast/statechange (i.e. coming from a musebot) to turn it into /mc/statechange (i.e. coming from the Conductor). The musebot spec suggests that statechange is an /mc message, assuming that someone is controlling the Conductor and then telling all musebots to change their states. This is actually the first time I've ever used statechange, and it is coming from MIDIcontrollerBOT: thus, I needed to change the Conductor itself (since musebots cannot send /mc messages). This underlines the movement (mine, at least) away from the Conductor potentially having some smarts toward it doing nothing more than routing messages.


Musebots
AudioAnalyzerBOT
This musebot analyzes audio input, and sends feature analysis as messages. 
Click on adc~ to select the input source, select "mic on" from the menu, and raise the input gain. Note that it is possible to play a soundfile for testing purposes, and monitor that soundfile as well.
Four types of audio feature analysis are executed: spectral centroid (the spectral centre of the audio, rather than a pitch analysis; it works very well for non-pitched material), loudness, sparse/active (is the current loudness greater or less than the average of the last few seconds?), and spectral flux (the amount of spectral variation in the audio). Smoothing plays an important role in these analyses, because the limits are reset every few seconds: centroid doesn't have a fixed low and high frequency; instead, it is the relative centroid for the last few seconds. [set min] for loudness can be used if the incoming signal has background noise (in my case, the constant hum of the amplifier).
The features can be remapped to other values (e.g. arousal or valence). Lastly, an averaged Bark analysis (25 bands) is also sent.

effectsBOT
Five AudioUnit effects – Delay, Pitch (transposition), NewTimePitch, RingMod, and Distortion. I stored a series of presets using the pattr system, and interpolate between them based upon incoming valence and arousal messages (from AudioAnalyzerBOT).

catartBOT
This requires IRCAM's CataRT and FTM software. It only works as a Max patch in Max 6.
Diemo Schwarz's concatenative real-time synthesis software analyses soundfiles, and displays them as (approximately) 250 ms grains in an XY grid based upon analysis features; I use pitch/loudness. I wrote a granular synthesis engine to play grains based upon an XY input (inside the bpatcher grain_sub). In this case, the XY input is a musebot message built from AudioAnalyzerBOT's centroid and loudness features.
I provided 1/2 GB of material, organized into six sets of sounds – construction, machines, nathan, soundscape, tools, traffic. All soundfiles were pre-analyzed by CataRT, and could quickly be individually (and autonomously) loaded during performance. Folders automatically changed on statechange messages, which were sent from AudioAnalyzerBOT when activity went to 0.0. Individual soundfiles were selected from within the folder using a distance function based upon the real-time Bark band analysis (which was also done for every soundfile).
Thus, realtime spectral analysis was done on the live performer, and catartBOT was trying to play the closest soundfile in its library; however, that selection was limited by its current context: the sample folder. My first rehearsals with Nathan only used recordings of his previous improvisations; catartBOT was very successful at playing back very similar material. However, I found it immensely more interesting when the musebot couldn't play back exact copies, and was forced to look for something similar in a folder where there was a possibility that nothing was very similar.

vulcan_a
Paul Paroczai

Conductor
Patch
The changes I made to the Conductor included...


Musebots
Bot#1
This bot does...

Bot#2
This bot does...

Sonoclasm
Arne Eigenfeldt (audio musebot); Matthew Horrigan (video musebots)

Conductor
Matt_Max_Musebot_Conductor
We used Matt's Conductor which allows for network communication over WiFi. All users need to use the same Conductor on their own machines, but only one should be running and sending time code.
  1. Find your IP address using System Preferences > Network > Advanced > TCP/IP. Other computers need to enter this into the "Client IP address to request" number boxes.
  2. Enter the other computers' IP addresses into this field on your Conductor.
  3. Click "Try to create port..."
Messages sent within your musebot ensemble will be sent over the network to the other computers running this Conductor, and their ensembles' messages will be passed through your Conductor.
Note that Matt's Conductor will rewrite the config.txt file for every musebot loaded: the id of the musebot will no longer match the directory name, and other Conductors will not be able to launch the musebot. 


Musebots
AutechreBOT
A beatbot that produces odd subdivisions: bar lengths are consistent, but calculations range from 3 to 19 subdivisions. The bot produces only kick, snare, and hihat: the higher the vdensity, the more likely all three parts will play. Incoming hdensity controls how many onsets per measure will be calculated.
AutechreBOT also produces a randomly varying amount of degradation and bit-crunching. Lastly, AutechreBOT loads samples from a directory of eight samplebanks, choosing a new sample-set each time it runs.
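
A sketch of how one such bar might be generated; the exact onset and part-selection logic is an assumption, only the 3-to-19 subdivision range and the hdensity/vdensity roles come from the description above:

    import random

    def generate_bar(hdensity, vdensity, parts=("kick", "snare", "hihat")):
        """One bar: a fixed bar length divided into between 3 and 19 steps, with the
        onset count driven by hdensity and the chance of each part playing driven
        by vdensity (both assumed to be in 0.0-1.0)."""
        steps = random.randint(3, 19)
        onsets = max(1, round(hdensity * steps))
        pattern = {}
        for part in parts:
            if random.random() < vdensity:
                pattern[part] = sorted(random.sample(range(steps), onsets))
        return steps, pattern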


RunglerBOT-1
RunglerBOT-2
Two separate noisy synthbots using jvkr's rungler algorithm. This algorithm takes four input values and produces unpredictable output when the values change. I created a series of LFOs whose speed is controlled by incoming arousal, and whose amount is controlled by incoming valence; these parameters are additionally mapped to signal degradation. I used two separate instances (which require two separate musebot folders), since, due to its chaotic nature, the same algorithm reacts differently to the same input.
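
A sketch of that mapping only (not the rungler algorithm itself); the 8 Hz rate ceiling is an assumption:

    import math

    def lfo(depth, rate_hz, t):
        # A single LFO: depth scales the output, rate_hz sets its speed.
        return depth * math.sin(2 * math.pi * rate_hz * t)

    def map_controls(arousal, valence, max_rate_hz=8.0):
        # Arousal drives LFO speed, valence drives LFO amount.
        return {"rate_hz": arousal * max_rate_hz, "depth": valence}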

MIDIcontrollerBOT
I used a Korg NanoController to perform in concert, mapping four sliders to four musebot messages (arousal, valence, hdensity, vdensity), as well as another four to directly control the volume of the musebots.

AudioAnalyzerBOT

Rather than send my controller messages to the videoBOT, I decided to send the output of a real-time audio analysis. All audio musebots (Autechre and two Runglers) sent their audio output to Soundflower; this musebot received its input from Soundflower. (Note that SoundflowerBed was running, and also sent the audio output from channels 1/2 to my audio interface.) Also note that the microphone still had to be turned on (it actually controls audio input, rather than only mic input), and the level raised, in order to receive audio from Soundflower. Lastly, I mapped activity => activity, and flux =>, since I didn't want to have two inputs for arousal and valence going to the RunglerBOTs.

ReassignMessagesBOT
Finally, Matt's videoBOT required a single message, "density", which my musebots don't actually send. Thus, the ReassignMessagesBOT allows you to take any message (I happened to use "centroid" from the AudioAnalyzerBOT) and give it another message name, in this case "density".
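
A minimal sketch of that renaming step; the message names follow the description above, everything else is illustrative:

    MESSAGE_MAP = {"centroid": "density"}     # incoming name -> outgoing name

    def reassign(address, args):
        """Pass a message through under a new name, here relabelling
        AudioAnalyzerBOT's centroid as the density the videoBOT expects."""
        return MESSAGE_MAP.get(address, address), args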