Designing Synthetic Sirens for a Seven-Month Run
How we built a show control system inside TouchDesigner that made a seven-month museum exhibition as reliable as a lighting console.
A seven-month exhibition is a different problem from a three-week show. When MSU Broad confirmed the run length for Synthetic Sirens, the first conversation we had wasn't just about the visuals; it was about failure modes.
Most generative installations work as autonomous loops: the system runs, the system drifts, the system self-corrects (or doesn’t). That model works well for short runs where someone is in the room. It doesn’t work for seven months in a museum where the piece needs to open every morning, run all day, and close every night without an operator present.
The problem with generative loops
The instinct with a piece like this is to build a self-sustaining system — sensor input drives a generative process, the process drives visuals and audio, everything feeds back. Clean conceptually. Hard to operate. When something goes wrong in a loop that’s been running for six weeks, you’re debugging accumulated state rather than a discrete event.
We also needed the piece to have shape. The work is about algorithmic societies. About being watched, categorised, sorted by systems that have their own logic. That idea demanded a system that had explicit states, not just a loop that might drift into something unintended by week four.
Scoring the performance
We built the installation as a scored performance rather than a generative patch. The operational core is a show control system inside TouchDesigner: a preset engine that can snapshot any combination of parameter states across the whole network, a cue list that sequences those presets in order, and a tweener that handles interpolated transitions between states with configurable easing curves.
An operator steps through the piece the way they would a cue stack on a lighting console. Each go command fires a timed, interpolated transition into the next state. Parameters animate smoothly to their target values, and the system knows exactly where it is at every moment. If the machine restarts overnight, loading the last cue puts everything back where it was. There's nothing to drift.
A full TouchDesigner network exposes far more parameters than any operator can track by hand. The preset engine's job is to collapse all of that into a single number: the cue index. That abstraction is what made seven months manageable.
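To make that shape concrete, here is a minimal standalone sketch of the idea, not the production system: names like CueEngine, snapshot, and ease_in_out are illustrative, and the real engine drives live TouchDesigner operators rather than a plain dictionary of values.

```python
import json
import time

def ease_in_out(t):
    """Cubic ease-in-out: slow start, slow finish, for t in 0..1."""
    return 4 * t ** 3 if t < 0.5 else 1 - ((-2 * t + 2) ** 3) / 2

class CueEngine:
    """Collapses every parameter in the network into one number: the cue index."""

    def __init__(self, params, cues, state_file="last_cue.json"):
        self.params = params          # live parameter values, e.g. {"blur/size": 0.0}
        self.cues = cues              # ordered list of presets (dicts of target values)
        self.state_file = state_file
        self.index = -1               # no cue fired yet

    def snapshot(self):
        """Capture the current parameter state as a new preset."""
        return dict(self.params)

    def go(self, duration=2.0, steps=60):
        """Fire the next cue: tween every parameter to its target value."""
        self.index = min(self.index + 1, len(self.cues) - 1)
        target = self.cues[self.index]
        start = dict(self.params)
        for step in range(1, steps + 1):
            t = ease_in_out(step / steps)
            for name, end in target.items():
                begin = start.get(name, end)
                self.params[name] = begin + (end - begin) * t
            time.sleep(duration / steps)
        # Persist the cue index so an overnight restart lands in the same state.
        with open(self.state_file, "w") as f:
            json.dump({"index": self.index}, f)

    def restore(self):
        """After a restart, jump straight to the last cue with no tween."""
        try:
            with open(self.state_file) as f:
                self.index = json.load(f)["index"]
        except FileNotFoundError:
            return
        self.params.update(self.cues[self.index])

# Usage: two presets, an operator pressing go twice.
engine = CueEngine(
    params={"blur/size": 0.0, "feedback/level": 0.2},
    cues=[{"blur/size": 4.0, "feedback/level": 0.5},
          {"blur/size": 0.5, "feedback/level": 0.9}],
)
engine.go()
engine.go()
```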
What the camera sees
Audience sensing runs on a camera feed processed inside TouchDesigner. Rather than skeleton tracking, which is brittle at the scale of a full gallery and sensitive to lighting conditions, we use optical flow to build a live map of motion in the space. That motion map feeds a particle system which translates crowd movement into visual material.
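Inside TouchDesigner this is node-based work, but the underlying technique is easy to sketch. The OpenCV version below is illustrative only: it reads a camera, computes dense optical flow between consecutive frames, and reduces it to a coarse motion map of the kind that could drive a particle system. The grid size and camera index are arbitrary assumptions, not values from the installation.

```python
import cv2
import numpy as np

GRID = (16, 9)          # coarse motion-map resolution (columns, rows); arbitrary choice
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense (Farneback) optical flow: per-pixel motion vectors between frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

    # Motion magnitude per pixel, downsampled to a coarse grid.
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    motion_map = cv2.resize(magnitude, GRID, interpolation=cv2.INTER_AREA)

    # In an installation, a map like this would drive particle emission and forces.
    print("mean motion per column:", np.round(motion_map.mean(axis=0), 2))
```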
Rendering at scale
The visual output runs in Unreal Engine. The space at MSU Broad demanded rendering quality that TouchDesigner’s native renderer couldn’t provide at the required output resolution and frame rate. Unreal handles the final image; TouchDesigner routes sensor data into it, manages the show state, and keeps everything synchronised.
The integration is straightforward once you commit to it: TouchDesigner owns the control layer, Unreal owns the rendering layer, and the boundary between them is a stream of parameters and triggers. Neither system needs to know much about the other’s internals.
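The article doesn't specify the transport, so treat the sketch below as one plausible shape for that boundary: OSC messages carrying named parameters and discrete cue triggers, sent here with the python-osc library. The addresses, parameter names, and port are made up for illustration; in practice TouchDesigner's own OSC operators (and Unreal's OSC plugin) would sit on either side of this line.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical host, port, and address scheme for the Unreal listener.
UNREAL_HOST, UNREAL_PORT = "127.0.0.1", 7000
client = SimpleUDPClient(UNREAL_HOST, UNREAL_PORT)

def send_parameters(params):
    """Stream continuous control values (crowd motion, transition progress) to Unreal."""
    for name, value in params.items():
        client.send_message(f"/siren/param/{name}", float(value))

def send_trigger(cue_index):
    """Fire a discrete event; Unreal reacts on its own side of the boundary."""
    client.send_message("/siren/cue", int(cue_index))

# Example frame: the control layer pushes state, the rendering layer never looks back.
send_parameters({"crowd_motion": 0.42, "transition_progress": 0.8})
send_trigger(3)
```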
The piece closes at MSU Broad in July 2026 after seven months of daily operation. As of this writing, it has run with zero downtime.