At many of the larger theatres, as a sound designer and composer, I don't actually program the sound machinery myself. The Festival Theatre sound system is a behemoth of networked devices spread out over the entire building – computers, screens sharing and controlling other computers, amps, fibre optic cable… it's a massive system, and there is no way I could get my head around it all to use it optimally.

My sound operator and programmer is William Griff (you can see him in the photo below), and he manages the technical end of the sound system and makes my material sound good. I provide the material and the cues, and he makes sure it all goes out through the system in the ways that I want (and suggests some ways I haven't thought of that invariably make it sound even better).

some of the many sound files I've made from the audio I've composed; these are both "music" and "sound design", and often straddle both categories. (There are many, many more than in this picture.)

Once we set everything up and get all the computers talking to each other, there's a phase of making sure that the ways in which you've translated your work actually function and speak in this new environment of multiple speakers and multiple computers. We spend time ensuring that the architecture works – that the sound is being transmitted – and developing a way of working together: a shorthand and an audio picture that make sense for the room and the story. We experiment with placing this sound here and that part of the cue there, and learn how designer and technician think and work so that we can communicate efficiently.

The shift I was speaking about in the previous post has a second stage as well: you work through the technical-architecture phase back into the idea and artistry phase again. It's a kind of beginner's mind that reveals a lot you didn't hear before in your quest to make the thing you imagined.

The thing I actually like about not programming (and not everyone likes this) is that it frees me to listen and move into a listening mindset. A lot of our programming yesterday was me standing in various places in the hall, letting cues run, calling out some thoughts and ideas to Will to translate into programming and settings.

Will, translating

To free myself to think about sound and story, to imagine the bodies onstage moving through space (the actors are not present for programming – it's a slow and tedious process, and their time is better spent elsewhere). To pay attention to the ways the sound unfolds in the room, and the qualities of the storytelling it offers.

To be sure, the architecture and the programming and the technical nerding out is fun, and useful, and does serve the story as well. But ultimately you need to step out from behind the desk and sit in a seat, and listen.

It’s the best.

The Shift

my station at the Festival theatre during tech and onstage rehearsals

There comes a time in the rehearsal process when the sound designer and composer's discoveries from the rehearsal hall have to move into the space. Pre-production and the rehearsal hall are about looking for story elements and frames, and perhaps getting a rough draft of the logistics, but the hall (aka the theatre) is where we operationalize these discoveries (and make new ones besides).

I’ve mentioned before (perhaps not here) that the work of the designer (any designer) is to figure out what the world is – what are the parameters of the world in which the characters are telling the story? What are its limits and constraints? What are its possibilities? What are its surprises? These are things that I can explore and discover in theory through observing rehearsals and discussing the story with the creative team.

At some point, though, the discoveries that you make in the rehearsal hall have to be translated into the theatre. The first step of this process is a kind of fundamental re-imagination of how one’s audio speaks to the software available.

The industry-standard playback software is Qlab, which is quite different from ordinary audio players like iTunes (you can see the interface in the top right monitor in the picture above). What Qlab allows you to do is treat your audio as a series of gestures, similar to a programming language – although instead of writing code, you're telling Qlab what to do with your audio (or MIDI, OSC, cameras, lights, etc. – Qlab can control all of these things, although theatres usually use it for audio and projections, at least for now).

This isn’t a post about Qlab – there are great places to learn about it, not least at the figure53 website – but more about how I make the shift from discovery to execution mode.

a screenshot of Qlab

When you build audio, the timeline is linear – you start at time 0 and as the playhead advances, the sound plays:

  • The sound of rain fades in
  • A car drives in from the left, and stops in the centre
  • The engine stops, and the door opens and closes
  • Footsteps move from beside the car towards you
  • A door opens and shuts
  • The sound of rain fades out
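As an illustration of that linear model, here's a toy sketch in Python (entirely my own invention – it has nothing to do with any real DAW's internals): events sit at fixed times, and playback is simply a playhead advancing from 0 and emitting whatever it has passed. The times are made up for the example.

```python
# Toy model of a linear timeline: each event has a fixed start time (seconds),
# and "playing" means collecting every event the playhead has reached.

timeline = [
    (0.0,  "rain fades in"),
    (4.0,  "car drives in from the left, stops centre"),
    (9.0,  "engine stops; door opens and closes"),
    (11.0, "footsteps move from the car towards you"),
    (15.0, "a door opens and shuts"),
    (17.0, "rain fades out"),
]

def play(timeline, until):
    """Return every event whose start time the playhead has passed."""
    return [event for t, event in timeline if t <= until]

print(play(timeline, 10.0))
# with the playhead at 10 seconds, everything up to the engine stopping
# has played; the remaining events simply wait on the clock
```

The point is that nothing in this model waits for a person: time alone decides what happens next.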

In Qlab, you have to think of each of these things as an action, and program your Qlab session accordingly:

  • Cue 1: The sound of rain fades in
    • a seamless loop of rain plays when the cue is triggered, but the volume starts at minus infinity (i.e. silent)
      • the audio loops forever or a predetermined # of times
    • the audio of the rain fades in to a predetermined level
  • Cue 2: A car drives in from the left, and stops in the centre
    • the audio of the car starts quietly and is built so it sounds as if the car is moving from far away on the left to just offstage (e.g. offstage left). When it reaches its final position, the engine stops
    • This audio plays once
    • (the sound of the rain continues through this)
  • Cue 3: The door opens and closes
    • This audio is made in another DAW and imported into Qlab
    • (the sound of the rain continues)
  • Footsteps move from beside the car towards you
    • This is not a cue – this is the actor entering from offstage, on the same side where the car stopped
  • A door opens and shuts
    • This is not a cue – this is the actor opening and closing a door on the set
  • Cue 4: The sound of rain fades out
    • A cue to fade the audio in cue 1 is triggered

The key is to consider the audio unfolding as a stack of gestures, like Scratch or a computer program. Every thing that happens, sound-wise, is a cue: a sound starting, a sound moving from one side to another, a sound fading in volume, a sound stopping.
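To make that stack-of-gestures idea concrete, here's a minimal sketch in Python. To be clear: this is my own toy illustration, not Qlab or anything like its actual interface, and the names (`Cue`, `CueStack`) and the dB level are invented. Each cue is just a named action, and pressing "go" fires the next one in the stack.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Cue:
    number: str
    description: str
    action: Callable[[], str]  # what this cue does when triggered

@dataclass
class CueStack:
    cues: List[Cue] = field(default_factory=list)
    playhead: int = 0              # index of the next cue to fire
    log: List[str] = field(default_factory=list)

    def go(self) -> str:
        """Fire the next cue in the stack, like pressing GO in playback software."""
        cue = self.cues[self.playhead]
        self.playhead += 1
        result = f"Cue {cue.number}: {cue.action()}"
        self.log.append(result)
        return result

# The rain/car sequence from above, expressed as discrete gestures.
stack = CueStack(cues=[
    Cue("1", "rain in",  lambda: "rain loop starts silent, fades in to a set level"),
    Cue("2", "car",      lambda: "car audio pans from far left to offstage left, plays once"),
    Cue("3", "door",     lambda: "car door open/close sample plays (rain continues underneath)"),
    Cue("4", "rain out", lambda: "fade targets the rain loop from Cue 1 and stops it"),
])

stack.go()  # GO on the rain fading in
stack.go()  # GO on the car arriving
# (footsteps and the set door are actor actions, not cues – nothing fires here)
stack.go()  # GO on the car door sound
stack.go()  # GO to fade out the rain
```

Notice that nothing happens until someone triggers it – the show advances gesture by gesture, not along a clock, which is exactly the re-imagining the shift demands.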

Which means in the movement from the recording software to Qlab, you have to re-imagine how your audio is working in relation to the actor action onstage.

I could go on and on about this whole process (in fact I teach semester-long courses on it), but essentially, the shift I'm speaking of means that I have to move from thinking linearly to thinking sequentially. It's a fun shift, and full of a lot of new discoveries too, which I'll speak about in another post.

because of course by now