Creating an audiovisual set: Performing with hydra while also performing music

The process of promoting our second Bit Graves album, Murmur, led us down a couple of new paths. First, we started using the hydra video synth to create visualizers for the music. Second, we planned to perform the album live, and we started thinking about what kind of show we could put together.



We decided to try to do it all at once: we would perform music from Murmur against the backdrop of some live hydra video art that matched our album promo videos.



In total, our goals were:

  • Project some pre-coded, live-synthesized hydra visual art to the audience while we perform music

  • Make the art audioresponsive, both on a micro scale (reacting to small rhythmic changes in the music) and a macro scale (coinciding with large structural changes in the music)

  • Touch and adjust the visuals as little as possible, so we can focus on performing the music



There's some subtlety to these goals. Our music is semi-improvised. That meant that if we wanted the visuals to be audioresponsive, we couldn't just press play on a pre-recorded video (and that would be no fun anyway). The challenge became: give ourselves the smallest set of visual controls that let us respond to the music and the conditions of the venue, without distracting us from actually playing the music.





On September 15, 2023, we made this happen, performing with some friends at The Chapel in Seattle. Here is a video of the entire Bit Graves set from that night.



This post describes the hodgepodge of technical solutions we cobbled together to get hydra to do what we wanted. There are surely better ways of doing this, and we'd love to hear different practices from other people. Going forward, we'll assume the reader has some familiarity with hydra.



Getting started: Adding midi and building a visual setlist



We wanted a way for the performer on stage to navigate the audience-facing video between pre-coded scenes, roughly matching up with different songs from our album. We quickly realized that using a midi controller to direct hydra would make more sense than squinting at a laptop keyboard in the dark.





To our extreme fortune, hydra has a midi extension that we found to be excellent, so getting hydra to respond to an external controller was easy. Using a physical midi interface also allowed us to hide hydra's own interface during the performance, since there would be no need to type or click anything.



We found that this pattern worked fine for swapping between scenes:



mpd218 = midi.input(0).channel(9);
scene1 = () => solid(1, 0, 0).out();
scene2 = () => solid(0, 1, 0).out();
mpd218.onNote(36, scene1);
mpd218.onNote(37, scene2);

Tap a midi pad to render scene 1. Tap a different pad to render scene 2.



This approach only renders a single scene at a time. An alternative approach would run all scenes simultaneously but fade between them. This would become computationally expensive as the number of scenes increased, but would allow for more interesting transitions.
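
We didn't pursue the crossfade route, but a rough sketch might look something like the lines below, where each scene returns a source chain instead of calling .out() and `fade` is a hypothetical value (which could just as easily be mapped to a midi knob):

// hypothetical crossfade between two continuously-evaluated scenes
fade = 0; // 0 shows sceneA, 1 shows sceneB
sceneA = () => osc(10, 0.1, 1.2);
sceneB = () => noise(3, 0.2);
sceneA().blend(sceneB(), () => fade).out();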



We ended up with a lot more than two scenes, and it didn't make sense to have a different pad for each one. In order to minimize the midi interface, we arranged all the scenes into a setlist and only exposed controls for previous and next:



sceneIndex = 0;
setlist = [scene1, scene2, scene3];

nextScene = () => {
  if (sceneIndex == setlist.length - 1) sceneIndex = 0;
  else sceneIndex++;
  setlist[sceneIndex]();
};
mpd218.onNote(37, nextScene);
// ...complementary logic for `prevScene`
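
The complementary prevScene handler mirrors nextScene and wraps around in the other direction; a minimal sketch (the pad number here is just an example):

prevScene = () => {
  if (sceneIndex == 0) sceneIndex = setlist.length - 1;
  else sceneIndex--;
  setlist[sceneIndex]();
};
mpd218.onNote(36, prevScene);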





Macro-responsiveness: Evolving scenes over time



We wanted each hydra scene to evolve over the course of several minutes as the music did the same. If we were devoting our full attention to performing the visuals, we would have mapped continuous hydra parameters to knobs on the midi controller and carefully adjusted the knobs to fit the music. But we wanted to focus on playing music and mostly ignore the visuals, so we decided to prearrange slow, measured changes.



Hydra allows you to use time as a parameter:



solid(() => 0.5 + Math.sin(time / 30) * 0.5, 0, 0).out()



By default, this measures time since hydra loaded in the browser. For us, it made more sense to measure time since the specific scene loaded, so that we could produce consistent musical transitions from the beginning of the matching song. To do that, we inserted this line during every scene change:



choo.state.hydra.hydra.synth.time = 0; // reset time
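
Concretely, that just means the reset lives inside the scene-change handlers. A minimal sketch of nextScene with the reset added:

nextScene = () => {
  if (sceneIndex == setlist.length - 1) sceneIndex = 0;
  else sceneIndex++;
  choo.state.hydra.hydra.synth.time = 0; // restart scene-local time
  setlist[sceneIndex]();
};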



As mentioned at the start, our songs aren't identical every time, and the timings written into the hydra patches were only approximate. So we exposed another pair of midi pads that scrub hydra-time forward or backward by a few percent per tap, letting us reactively align the sonic and visual energy with minimal effort:



offset = 0;
offsetLerp = 0;
// hydra's built-in `update` runs each frame
update = () => {
  offsetLerp += (offset - offsetLerp) * 0.1;
};
tt = () => Math.max(0, time * (1.0 + offsetLerp));
mpd218.onNote(38, () => (offset -= 0.05)); // scrub backward
mpd218.onNote(39, () => (offset += 0.05)); // scrub forward
// now use `tt()` instead of `time` inside all scenes
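
For example, the time-based scene from earlier would now read:

solid(() => 0.5 + Math.sin(tt() / 30) * 0.5, 0, 0).out()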





Micro-responsiveness: Incorporating the environmental FFT



Hydra includes a built-in way to visualize the spectrum of the computer's live audio input (by default its microphone):



shape(4, () => 0.5 + a.fft[0] * 0.5).out()



This is all one needs to make the scenes respond to rhythmic changes. However, we couldn't predict how loud the performance venue would be, and we wanted to be able to react if the microphone needed to be more or less sensitive. So we added a pair of midi pads that coarsely increase or decrease the sensitivity:



_aScale = 10;
a.setScale(_aScale);
mpd218.onNote(40, () => { // bigger scale, less sensitive mic
  _aScale /= 0.75;
  _aScale = Math.min(40, _aScale);
  a.setScale(_aScale);
});
mpd218.onNote(41, () => { // smaller scale, more sensitive mic
  _aScale *= 0.75;
  _aScale = Math.max(5.625, _aScale);
  a.setScale(_aScale);
});



This turned out to be really important. Several of our scenes contain fft-dependent additive video feedback, and if the audio signal was strong, the visuals would quickly wash out to pure white. We actually wanted them to wash out... but only occasionally. These controls helped us make one or two critical adjustments during the show to ensure our audioresponsive visuals sat just at the right threshold.
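
To illustrate the failure mode (this is a hypothetical sketch, not one of our actual scenes): with additive feedback like the following, a hot enough input signal makes the accumulated brightness climb toward pure white within a few frames.

src(o0)
  .scale(1.01)                              // slowly zoom the previous frame
  .add(shape(4, () => a.fft[0] * 0.5), 0.2) // add a pulse sized by the mic fft
  .out(o0)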





Adding graceful transitions



As mentioned before, we chose to only load a single scene at a time. By default, that introduced a hard cut between scenes. To offer some smoother options, we wrapped every scene in a global, midi-controlled fade to black. For variety's sake, we added another control that noise-washes the scene without darkening it.



wrapMainScene = (sceneFn) => {
  sceneFn()
    .modulateScrollY(noise(200, 2), cc(15).range(0, 0.9)) // noise-wash amount on cc 15
    .mult(cc(14)) // fade to black on cc 14
    .out();
};
scene1 = () => solid(1, 0, 0); // scenes now return a source instead of calling .out()
setlist = [scene1, ...];
nextScene = () => { ... wrapMainScene(setlist[sceneIndex]); };

Note: This code sample also requires the hydra-arithmetics extension.



These were the only two continuous controls exposed to the performer. Between them, we could gradually illuminate, darken, or obfuscate the visuals between scenes. This was particularly useful for making the visuals stop neatly whenever the music stopped.





Loading everything at once



By this point, our hydra script contained all of the following:

  • Eight different hydra scenes

  • Midi controls to load the previous/next scene

  • Controls to scrub time forward/backward

  • Controls to increase/decrease microphone sensitivity

  • Controls to transition in and out of scenes

  • A bit of initial boilerplate to load the required hydra extensions and textures



When this project started, we were editing everything directly inside hydra and using hydra's encoded url to save our work. However, by this point we had so many scenes, with enough glue between them, that it amounted to a large program (by hydra's minimal standard). Eventually hydra refused to run, complaining that the encoded url was too long.



So, we put everything in a big JS file and loaded that into hydra:



// fetch the script containing all of our scenes and midi bindings
const response = await fetch(...url to script containing all scenes...);
const bgjs = await response.text();
// evaluate it in the hydra context, then run our entry point
eval(bgjs);
initBitgraves();

It's probably better to use hydra's loadScript here instead of fetch, but for some reason it wasn't working for us and we didn't invest in fixing it.



This also required adding some boilerplate in the fetched file to expose initBitgraves() to the window.
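
That boilerplate amounts to assigning the entry point onto window at the end of the fetched file, something like:

// at the end of the fetched script
window.initBitgraves = initBitgraves;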



At the beginning of our show, we pasted those few lines into hydra, then hid the hydra code interface, and we were ready to go. You can view the final JS file on our GitHub. Note that if you want to try it yourself, you'll need a midi controller, and the midi constants in our file are programmed to match our Akai MPD218. In case it makes a difference, we ran hydra inside a fullscreen Firefox tab on macOS.



The night of the show



Approaching the show, we ran through everything a couple times in our rehearsal space, but we had no idea how things would play out in a real performance. Among all the different moving parts, we did our best to predict what we'd need on stage. But you never really know.



Right before we began our set, our friend and album producer Ben Marx told the audience a story about our Analog Heat failing on stage at a past show. It was a great reminder that just playing our music, without also trying to manage the visuals, can be a complete house of cards.



Somehow, everything went according to plan, and our pile of hydra kludges survived the night without any issues. It felt magical to see them on a huge, bright screen behind us in a dark room with a big sound system. Flashes of color illuminated our synthesizers and silhouetted our bodies to emphasize big moments in the music. The laptop did not crash. The Analog Heat and the monome norns did not crash. Nothing, as far as we know, crashed.





After the end of our set, as we stood on stage, the applause of the audience continued to trigger swells and glows in the audioresponsive noise behind us. Mentally, too, the moment right after a performance is always a blur. (Watch the show here.)



We're new to the world of visual synthesis, and this is the only audiovisual show we've attempted so far. We want to start incorporating camera and recorded video sources, and we haven't yet explored any hydra-like alternatives. If you've worked on anything like what we're doing here, please share your ideas.



After this show, we couldn't help but wonder: maybe next time we should get a fog machine.





This blog is part of a series about our album Murmur. Read the previous blogs, listen to the album, or contact us via bitgraves.com.

Published by Ben Roth on Tuesday, September 26, 2023.
