There comes a time in the rehearsal process when the sound designer and composer's discoveries in the rehearsal hall have to move into the space. Pre-production and the rehearsal hall are about looking for story elements and frames, and perhaps getting a rough draft of the logistics; the theatre itself is where we operationalize these discoveries (and make new ones besides).
I’ve mentioned before (perhaps not here) that the work of the designer (any designer) is to figure out what the world is – what are the parameters of the world in which the characters are telling the story? What are its limits and constraints? What are its possibilities? What are its surprises? These are things that I can explore and discover in theory through observing rehearsals and discussing the story with the creative team.
At some point, though, the discoveries you make in the rehearsal hall have to be translated into the theatre. The first step of this process is a fundamental re-imagining of how your audio speaks to the software available.
The industry-standard playback software is Qlab, which is quite different from everyday sound players like iTunes or whatnot (you can see the interface on the top-right monitor in the picture above). What Qlab allows you to do is treat your audio as a series of gestures, a bit like a programming language: instead of coding, you're telling Qlab what to do with your audio, MIDI, OSC, cameras, lights, and so on. Qlab can control all of these things, although usually theatres use it for audio and projections, at least for now.
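To make that gestural model concrete: Qlab cues can also be triggered from outside the app over OSC. Here's a minimal sketch using the python-osc library, assuming Qlab's default incoming OSC port (UDP 53000) and a workspace with a cue numbered 1; the cue numbers are placeholders.

```python
# A minimal sketch of triggering Qlab cues over OSC.
# Assumes Qlab is listening on its default port (UDP 53000)
# and that the open workspace has a cue numbered "1".
from pythonosc.udp_client import SimpleUDPClient

qlab = SimpleUDPClient("127.0.0.1", 53000)  # the machine running Qlab

qlab.send_message("/go", [])           # fire whatever cue is standing by
qlab.send_message("/cue/1/start", [])  # or start a specific cue by number
qlab.send_message("/cue/1/stop", [])   # and stop it again
```

In a show, of course, the operator usually just presses Qlab's own GO button on the stage manager's call; the point is that every sonic event is a discrete, triggerable gesture.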
This isn’t a post about Qlab – there are great places to learn about it, not least at the figure53 website – but more about how I make the shift from discovery to execution mode.
When you build audio in a DAW, the timeline is linear: you start at time 0 and, as the playhead advances, the sound plays:
- The sound of rain fades in
- A car drives in from the left, and stops in the centre
- The engine stops, and the door opens and closes
- Footsteps move from beside the car towards you
- A door opens and shuts
- The sound of rain fades out
In Qlab, you have to think of each of these things as an action, and program your Qlab session accordingly:
- Cue 1: The sound of rain fades in
  - a seamless loop of rain starts playing when the cue is triggered, but the volume is infinitely quiet (-INF dB)
  - the audio loops forever, or a predetermined number of times
  - the rain audio fades in to a predetermined level
- Cue 2: A car drives in from the left, and stops in the centre
  - the car audio starts quietly and is built so that it sounds as if it's moving from far away to just offstage (e.g. offstage left); when it reaches its final position, the engine stops
  - this audio plays once
  - (the sound of the rain continues through this)
- Cue 3: The door opens and closes
  - this audio is made in another DAW and imported into Qlab
  - (the sound of the rain continues)
- Footsteps move from beside the car towards you
  - this is not a cue: it's the actor entering from offstage, on the same side where the car stopped
- A door opens and shuts
  - this is not a cue: it's the actor opening and closing a door on the set
- Cue 4: The sound of rain fades out
  - a fade cue is triggered that takes the rain loop from Cue 1 down to silence
The key is to consider the audio unfolding as a stack of gestures, like Scratch or a computer program. Everything that happens, sound-wise, is a cue: a sound starting, a sound moving from one side to another, a sound fading in volume, a sound stopping.
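If it helps to see that stack spelled out as a program, here's a toy sketch in Python. It isn't how Qlab works internally, and every name in it is hypothetical; it just models the rain-and-car sequence above as a stack of gestures, fired one GO at a time.

```python
# A toy model of a cue stack: each cue is a named gesture,
# and each press of GO fires the next one in order.
cue_stack = [
    ("Cue 1", "rain loop starts at -INF dB and fades in"),
    ("Cue 2", "car drives in from the left, stops just offstage; engine off"),
    ("Cue 3", "car door opens and closes"),
    # (the footsteps and the onstage door are actor business, not cues)
    ("Cue 4", "rain fades out to silence"),
]

playhead = 0  # which cue is standing by

def go():
    """Fire the next cue, like pressing GO in Qlab."""
    global playhead
    number, gesture = cue_stack[playhead]
    print(f"{number}: {gesture}")
    playhead += 1

for _ in cue_stack:  # four GOs, one per cue
    go()
```

Nothing here knows about time 0; each gesture just waits its turn in the stack until something (or someone) triggers it.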
Which means that in the move from the recording software to Qlab, you have to re-imagine how your audio works in relation to the actors' action onstage.
I could go on and on about this whole process (in fact I teach semester-long courses on this), but essentially the shift I'm speaking of means moving from thinking linearly (a continuous timeline) to thinking sequentially (a stack of discrete, triggered gestures). It's a fun shift, and full of new discoveries too, which I'll speak about in another post.