wilding AI in Montreal

To wild: to make wild. To see and hear the wild underneath the tame, to know it is there and embrace it fully, pull it out from under, with love.

I am honoured and grateful to be part of a cohort of artists and thinkers in the research cohort Wilding AI. Over the last few years, we have met in different locales – online and around the world – to think about what it means to work and create in this so-called algorithmic moment.

We all have (as collectives do) many differing ideas and interests (I’ve written about our residency at Fiber Festival in Amsterdam earlier this year in a blog post here). We recently finished a residency at Université de Montréal’s Laboratoire formes · ondes, and presented a collection of works at a sold-out show at SAT Montreal.

This project was the rubber-meets-the-road moment for our many discussions, Google Meets, emails, scholarly articles, residencies, dancing, meals, and hugs over the last few years. It was the first time we all sat down to make something that we could share with others.

The process of discussion between us, and the many open labs we have hosted around the world, have been an incredible watering of the soil. We’ve wrestled with software, with plugins, GitHub repos, jet lag (so much jet lag), Google Colab and Jupyter notebooks, microphones, the world on fire, speakers, guitars and drumsticks, our own bodies and voices in the moment of making. We’ve all approached what it means to make with tools that are (supposedly) efficient, that prioritize efficiency and automation and want us to see and use them in that way.

Some of us did that, some of us didn’t (or maybe it would be more accurate to say we all did to varying degrees).

But what we also did was to invite ourselves and each other to be wild: to ask ourselves what is down deep, what is wild in our own minds, hands, bodies, ears, hearts, and to allow that wildness to sing in our making with the machine. Each song was different; each song was wild. And the discussions we had among ourselves and with others were wild too.

Every moment we talked with others about being wild in this machine space – about rejecting some part of the workflow, or listening to the composition and what it needed despite/beyond/outside the math, to hear the algorithm break down and embrace the possibility of that endpoint – I saw it land with people. I saw it dawn on them that there is room, that the machine imperative can be lived with instead of lived under. I saw it land that they can listen to their bodies and their ancestors, that they can be slow, they can be human and live in the gap between machine math and the mess and wildness of what it means to be on this earth, together.

And that is what we, WAI, did in Montreal – we were together, on this earth, listening, sharing and being wild.

(the image for this post comes from SpatGRIS, the spatialization software used at the Laboratoire. More info here).

radical sonic futures

What does “radical sonic futures” mean?

To me, it’s a term about what it means to consider or reach for radical outcomes as regards our relationships to sound and what we hear it offer us. Outcomes that are not yet present, but nascent, humming tones that are dimly sensed.

And through that listening, that reaching, we come to a deeper understanding of what it means to inhabit the world around us, now, in this present moment. That we seek a path to engage with our world as it is, not as something outside ourselves, but through being a part of it, in it.

And that the future is the place and time where they blossom. And the things they blossom into are radical, much more radical than our relationship to sound and each other in the present moment (and especially in the present moment).

And when I say “the future”, I mean the tick, the moment, the sample after this one.

I was recently commissioned to write an essay on the topic of “audible futures” for an upcoming publication – a massive project that will involve many current (and radical) voices in the field of sound studies. I thought a lot about what that word “future” meant. What did they mean by “future”? What do any of us mean? I thought too about what it might mean to be invited into this community of sound people, many of whom I knew as names on a page, writing ideas that drive the things I think about and share with you here. What can I offer as part of this gathering? What can I say that hasn’t been said before, and better?

To their credit (and my endless gratitude), the editors wanted to hear what I thought, rather than defining the term or the circumstance for me*. They were clear that they were asking me to imagine, and they spoke passionately about the community of thought they were trying to build. They urged me to reach for a fantastical dreaming of our aural future(s), something hidden or around a corner.

So I thought about it. What is around the corner, present but just out of sensing? What is hidden? How can I share it?

Thinking, I keep coming back to the thing I always come back to: that sound is a way for us to connect to the moment, to the reality of being, to being in relationship. It is a way to forge new understandings and openness. That to be open to sound, to receive it with all your senses, is to be attuned to the universe in a way that reveals the rich interconnectedness of atoms we call reality, that we call friendship, that we call the sunshine on our skin, that we call joy, that we call love.

You know: radical sonic futures.


*

We all know that defining terms can sometimes collapse them – see the beautiful “listening protocols” article from Dr Salomé Voegelin et al.’s project Listening Across Disciplines at the University of the Arts London:

“…the aim to compile such a vocabulary soon collided with the desire not to stultify listening and hearing in a lexical definition. In other words, not to turn the heard into a visual object and not to deprive the sonic of its fluidity, ephemerality, and even unreliability, upon which, after all, its particularity and its knowledge gain relies. And so, while there was a desire to develop shared words, to improve a cross-disciplinary understanding and use of the sonic, there was also a caution against what words do or do not permit the doing of, once written down and lexically defined.” (Voegelin; link)

wilding AI @ fiber festival

A number of us from the ongoing Wilding AI project just returned from the Fiber Festival in Amsterdam, where we had an open lab day and residency as part of our research into using machine learning and AI in workflows that centre sound. Many of us are investigating the idea of “sonic agents” (e.g. Maurice Jones’ project “feral ai”) but we all have our own takes on the concept of “wilding” and how it might apply to the research-creation AI space.

And therein lies our strength, and the strength of the project: that we all bring our own perspectives, interests and opinions to the group, and that we listen to each other; that we get inspired and stay curious; and that we all move and grow together in our own ways. Like a tree maybe – each branch has its own pattern, size, shape, density, but every branch is a part of the tree, and they are ways the tree stays nourished, shoots its roots into the ground, reaches for the sun, drinks in the rain, provides shelter and shade when the days grow too hot.

Every one of us has different levels of bewilderment and surety in this very broad field of what we currently call “AI”. Each of us has our own set of refusals and embraces in what we want to do and how we want to do it (here’s a link to a PowerPoint of the talk I gave at the open lab day). And we by no means all agree all the time. But what each person brings to the group is welcomed, listened to, sifted, expanded and deepened through being together in all the different ways.

I know I keep coming back to listening and ways of being together, sometimes in ways that don’t make sense on the surface when discussing specifics about, say, rehearsal rooms in Canadian theatre, or machine learning, or sound designing. But I do that because I keep learning the same lesson, over and over, everywhere – the ways in which we show up in a room (virtual or IRL) have a profound effect, more than any tool, any software, any code.

It’s the thing that I know, and that I keep knowing, and what I always find down at the bottom of everything, lifting us up, softly and strongly.

Daniela Huerta (MX/DE; IG @babyvulture) leading us through a Pauline Oliveros score, during a working group online meeting in preparation for our 2025 MUTEK Montreal residency with some of the members of Wilding AI (Gadi Sassoon (IT), Maurice Jones (CA/DE), Alexandre Saunier (CH/FRA), and ds)

working in tandem, in immediacy

I’m working with visual artist Tazeen Qayyum on a new performance project with sound and performative drawing. Tazeen’s practice stems from her training in Mughal miniature painting, and she has taken this tradition and exploded it outwards into many different expressions. One of her modes of working results in performative drawings that she constructs in her studio, but also live. She works by minutely drawing one word, inscribing it thousands of times in patterns, resulting in a document of meditation that is incredibly striking.

We’ve long been fans of each other’s work, and have been looking for a way to work together, me with my instruments and electronics and her with her drawing. What was particularly interesting to both of us was not just performing in the same space at the same time, but looking for some kind of way for us to communicate and transform our practices together. What we did at her most recent (as of time of writing) exhibition was host an open rehearsal, where we actually exposed the process of investigation to outside viewers in real time.

This was an absolutely extraordinary experience, to share our thoughts and ideas out loud with others present. It was so interesting to involve and incorporate people’s observations of what they experienced as we tried new strategies in working together.

I am tremendously excited by the project’s potential. We’ll be doing a lot more of this (our schedules permitting) and there are many new paths to discover.

Travels

I spent some time as a guest composer at Elektronmusikstudion (EMS) in Stockholm recently:

It was a tremendously inspiring time, and while I was there for 10 days, I never really got over my jet lag, going into the studios at all hours. There were a number of fantastic spaces there, all of which had slightly different capabilities in terms of gear and sound.

The thing that I found so incredible, though, was the very strong commitment from the staff that EMS is a public good. This goes beyond the funding they receive from the Swedish government (the studio celebrates its 60th year this year); it is rooted very strongly in the genesis of the institution (housed initially in a workers’ building and undergoing many shifts in outlook and equipment). At every turn I heard and experienced the studio as a gathering place for new sounds and ideas. The composers-in-residence do not pay anything for access to the studios, and are invited on the basis of the ideas they wish to explore there.

As a result, you get a very broad range of practices and age groups, and the potential for cross-inspiration is great. I thoroughly enjoyed my time there, not just for my own work but for the conversations I had with the staff and the artists there.

Looking forward to working on the piece I started there. Stay tuned.

Subharmonies

Putting on my touring musician hat again after some time to play at Cluster in Winnipeg. I’ll be performing my quadraphonic Rückstreuung project, which made its debut at the Museum of Contemporary Art Toronto. Here’s an excerpt from that performance:

live excerpt of the quadraphonic project Rückstreuung @ MOCA, Toronto in 2022

As you can see from the video, it’s really about architecture and space. The quadraphonic diffusion is explicitly chosen to allow the tones to intersect and create rhythms, and to allow the listener to discover tones located in specific places. Moving from one spot to another reveals different sound interactions.
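The rhythms those intersecting tones create are beats: two close frequencies played together pulse at the difference between them. A tiny sketch of the arithmetic (plain Python, frequencies in Hz; the tunings here are just illustrative, not the actual tunings in the piece):

```python
def beat_frequency(f1, f2):
    """Perceived beat rate (in Hz) of two pure tones sounding together."""
    return abs(f1 - f2)

# Two low drones tuned half a hertz apart pulse once every two seconds.
print(beat_frequency(55.0, 55.5))  # 0.5
```

Stand near one speaker and you hear one tone; stand between two and you hear the pulse – which is why moving through the space changes the piece.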

The project was created at Akademie der Künste during a residency at the Studio for Electroacoustic Music, where I was able to experiment on the venerable Subharchord, one of the few still functioning:

the subharchord @ Akademie der Künste SEM

Here’s where I ended up (thanks to Robert Lippok for taking the video):

Since I don’t have a Subharchord (boo) I’m using a Moog Subharmonicon, which uses the same principle of audio synthesis. The “sub” in both those names refers to the fact that instead of generating overtones above a fundamental pitch (which is what many synths do), these instruments divide the frequency to create subharmonics below the fundamental. Tuning these sub levels results in intersecting tones, and a filter further allows one to shape the quality of the sound.
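A minimal sketch of that division, assuming pure integer ratios (plain Python, frequencies in Hz): the overtone series multiplies the fundamental, while the subharmonic (undertone) series divides it.

```python
def overtones(fundamental, n):
    """First n overtones: fundamental * 2, * 3, * 4, ..."""
    return [fundamental * k for k in range(2, n + 2)]

def subharmonics(fundamental, n):
    """First n subharmonics: fundamental / 2, / 3, / 4, ..."""
    return [fundamental / k for k in range(2, n + 2)]

f = 220.0  # A3
print(overtones(f, 3))     # [440.0, 660.0, 880.0]
print(subharmonics(f, 3))  # roughly [110.0, 73.3, 55.0]
```

Note that the subharmonics fall between the notes of equal temperament (220/3 ≈ 73.3 Hz is no piano key), which is part of what makes detuning the sub levels against each other so rich.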

I’m really looking forward to going back to Winnipeg. I’m playing at a venue whose opening I was at, the West End Cultural Centre – in fact, I think my mom performed at that opening. I’ve played there many times since, and it’s going to be nice to go home and explore that space with sound.

See you there?

UPDATE: I found this so I’m going to try and build it in Max/MSP. It’ll probably take me a year. ↓

quasi schematic from subharchord.com

And so, on

This weekend we will open Romeo and Juliet for the 2024 Stratford Festival season, and so this series will end, to be replaced by other posts about current projects, ideas, sounds, and thoughts.

I’ve been so surprised/delighted/humbled by how many of you have mentioned following the blog during this project. I hope I’ve been able to give you some new perspectives on how I work in theatre. It’s not the only way, and definitely not the definitive one – just one way. As I often tell my students, there are 6 ways to do any one thing – the key is to find the way that makes sense to you, and that gives you the results you want. Hopefully some of these posts have given you some ideas for your practice.

I hope you’ll continue to come back here – I have many interesting projects in the pipeline, in particular a stint at Elektronmusikstudion in Stockholm. You can read more about that here.

Feel free to email me at deb < at > debsinha dot com – I’d love to hear from you.

Have a great summer.

d

Moving through

the view from The Balcony. Yes, that one

It’s been quiet on the blog of late, because things are moving apace. Our tech time in the theatre is moving fast, trying cues, balancing, making sure we have even coverage (or as even as we can – there are over 300 speakers in this hall but even so, not every seat has even sound).

We are now into our Previews, which are for the paying audience. Before that we had:

  • Onstage rehearsals
  • Onstage rehearsals with tech (lights and sound)
  • Tech rehearsal (a first draft of the light and sound elements)
  • Tech dress (a second draft but with costumes)
  • Quick change rehearsal (where the dressers and actors practice any wardrobe changes that have to be, um, quick) (this is usually just before the Tech Dress)
  • Dress 1 (we should be close now)
  • Dress 2 (often with invited guests and the company)
  • Previews 1–3, with 4-hour rehearsals following – designers are released after the rehearsal that follows the 3rd preview

We should note that the acting company continues with rehearsals in the rehearsal hall as well, to nail down acting notes and needs.

preview 1! I deliberately take seats away from where I usually sit to double check

Time somehow moves faster during these 3 previews. Very often I’m juggling multiple timetables here at the Festival, because I often work on 2 or more shows each season. This season I’m only on R and J (as it is affectionately called) so I thought there would be a little more space to breathe! However, that has gone the way of all illusions…

With the advent of an audience, the show takes on a life of its own. It becomes what it is, rather than what you think it is. There’s something about the conglomeration of a plethora of consciousnesses (uh, if that’s a word) that can – not always, but can – completely turn things on their head, some shows more than others. In this case, it seems the sound and music are operating like they should, but there are a lot of new things I’ve found sitting in a house with others. Often (like this time) it’s levels – things being too loud, or improperly balanced with all the sound-absorbing meat sacks (aka people) in the house. I had some of that today, which I’ve been able to address during the night’s rehearsal.

Sometimes, though (thankfully not this time, and not often) seeing a show with others completely changes scenes, cues or maybe huge decisions that were made, and you have to go back to the drawing board. At this point, and at this festival, more often than not this doesn’t happen, because everyone knows how things shift and change and they make decisions that keep that in mind. But sometimes, you need to completely rework some of the ideas you thought you had.

The key there is to be able to move fast – to know your tools, to extrapolate, to have a strong picture in your mind of how sound is moving through the space and through the story. Some of this you can do with practice, on your own; some of it takes root as you work in the room; some of it can only be guessed at, and you try on the next preview. It’s tricky. The key is to make sure that you’ve done your due diligence – that you know how to use your tools quickly and efficiently, that you can make decisions and sense the musicality and the sound through the space in a very deep way, that you can make an offer that makes sense, that you refrain from flailing, and that you are deliberate and care-full (but also fast).

It takes time to prepare, but there’s prep and there’s prep. To know how to move fast, to move well, and to move while listening – that’s the key.

Sound

At many of the larger theatres, as a sound designer and composer, I don’t actually program the sound machinery, necessarily. The Festival Theatre sound system is a behemoth of networked devices spread out over the entire building – computers, screens sharing and controlling other computers, amps, fiber optic cable… it’s a massive system and there is no way I would be able to get my head around it all to use it optimally.

My sound operator and programmer is William Griff (you can see him in the photo below), and he’s managing the technical end of the sound system and making my material sound good. I provide the material and the cues, and he makes sure it all goes out the system in the ways that I want (and suggests some ways I haven’t thought of, which invariably make it sound even better).

some of the many sound files that I made out of the audio I’ve composed; these are both “music” and “sound design” and often straddle both categories. Uh, there are many, many more than in this picture

Once we set up everything and get all the computers talking to each other, there’s a phase of making sure that the ways in which you’ve translated your work actually work and speak in this new environment of multiple speakers and multiple computers. We spend time just ensuring that the architecture works – that the sound is being transmitted – and learning a way of working together, a shorthand and an audio picture that makes sense for the room and the story. Experimenting with placing this part of the cue here and that part there, and learning how you and the technician think and work so that you can communicate efficiently.

The shift I was speaking about in the previous post has a second stage as well: you end up working through the technical-architecture phase back into the idea and artistry phase again. A kind of beginner’s mind that reveals a lot you didn’t hear before in your quest to make the thing you imagine.

The thing I actually like about not programming (and not everyone likes this) is that it frees me to listen and move into a listening mindset. A lot of our programming yesterday was me standing in various places in the hall, letting cues run, calling out some thoughts and ideas to Will to translate into programming and settings.

Will, translating

To free myself to think about sound and story, to imagine the bodies onstage moving through space (the actors are not present for programming, it’s a slow and tedious process and their time is better spent elsewhere). To pay attention to the ways the sound unfolds in the room, the qualities of the storytelling it offers.

To be sure, the architecture and the programming and the technical nerding out is fun, and useful, and does serve the story as well. But ultimately you need to step out from behind the desk and sit in a seat, and listen.

It’s the best.

The Shift

my station at the Festival theatre during tech and onstage rehearsals

There comes a time in the rehearsal process for the sound designer and composer when the discoveries from the rehearsal hall have to move into the space. Pre-production and the rehearsal hall are about looking for story elements and frames, and perhaps getting a rough draft of logistics, but the hall (aka the theatre) is where we operationalize these discoveries (and make new ones besides).

I’ve mentioned before (perhaps not here) that the work of the designer (any designer) is to figure out what the world is – what are the parameters of the world in which the characters are telling the story? What are its limits and constraints? What are its possibilities? What are its surprises? These are things that I can explore and discover in theory through observing rehearsals and discussing the story with the creative team.

At some point, though, the discoveries that you make in the rehearsal hall have to be translated into the theatre. The first step of this process is a kind of fundamental re-imagination of how one’s audio speaks to the software available.

The industry standard playback software is Qlab, which is quite different from many sound players like iTunes or whatnot (you can see the interface in the top right monitor in the picture above). What Qlab allows you to do is treat your audio as a series of gestures, similar to a programming language – although instead of coding, you’re telling Qlab what to do with your audio (or MIDI/OSC/camera/lights/etc. – Qlab can control all these things, though usually theatres use it for audio and projections, at least for now).

This isn’t a post about Qlab – there are great places to learn about it, not least at the figure53 website – but more about how I make the shift from discovery to execution mode.

a screenshot of Qlab

When you build audio, the timeline is linear – you start at time 0 and as the playhead advances, the sound plays:

  • The sound of rain fades in
  • A car drives in from the left, and stops in the centre
  • The engine stops, and the door opens and closes
  • Footsteps move from beside the car towards you
  • A door opens and shuts
  • The sound of rain fades out

In Qlab, you have to think of each of these things as an action, and program your Qlab session accordingly:

  • Cue 1: The sound of rain fades in
    • a seamless loop of rain plays when the cue is triggered, but the volume starts at silence
      • the audio loops forever or a predetermined # of times
    • the audio of the rain fades in to a predetermined level
  • Cue 2: A car drives in from the left, and stops in the centre
    • the audio of the car starts quietly and is created so it sounds as if it is moving from far left to near offstage (e.g. offstage left). When it reaches its final position, the engine stops
    • This audio plays once
    • (the sound of the rain continues through this)
  • Cue 3: the door opens and closes
    • This audio is made in another DAW and imported into Qlab
    • (the sound of the rain continues)
  • Footsteps move from beside the car towards you
    • This is not a cue – this is the actor entering from offstage, on the same side where the car stopped
  • A door opens and shuts
    • This is not a cue – this is the actor opening and closing a door on the set
  • Cue 4: The sound of rain fades out
    • A cue to fade the audio in cue 1 is triggered

The key is to consider the audio unfolding as a stack of gestures, like Scratch or a computer program. Every thing that happens, sound-wise, is a cue: a sound starting, a sound moving from one side to another, a sound fading in volume, a sound stopping.

Which means in the movement from the recording software to Qlab, you have to re-imagine how your audio is working in relation to the actor action onstage.

I could go on and on about this whole process (in fact I teach semester-long courses on this) but essentially, the shift I’m speaking of means that I have to move from thinking linearly to thinking sequentially. It’s a fun shift, and full of a lot of new discoveries too, which I’ll speak about in another post.
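The stack-of-gestures idea can be sketched as a toy program (hypothetical names throughout – this has nothing to do with Qlab’s actual internals): each cue is a discrete action fired by the operator’s GO, rather than a point on a timeline.

```python
# Toy model of cue-stack thinking: cues fire in order on "go",
# independent of any timeline. (Illustrative sketch only.)
class CueList:
    def __init__(self):
        self.cues = []      # ordered (name, action) pairs
        self.position = 0   # the next cue to fire
        self.log = []       # what has happened so far

    def add(self, name, action):
        self.cues.append((name, action))

    def go(self):
        """Fire the next cue in the stack and advance."""
        name, action = self.cues[self.position]
        self.log.append(action())
        self.position += 1

cues = CueList()
cues.add("Cue 1", lambda: "rain loop fades in")
cues.add("Cue 2", lambda: "car drives in from left, engine stops")
cues.add("Cue 4", lambda: "rain fades out")

cues.go()  # operator presses GO at the top of the scene
cues.go()  # ...and again when the stage manager calls it
```

Notice that the footsteps and the door never appear in the stack – they belong to the actor, not the playback system, which is exactly the re-imagining the move into Qlab demands.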

because of course by now