Agora
2026
Five voices from the future, speaking from totems in a room. You walk up, you talk. They have been waiting a long time.
Agora places five AI agents in a public space, each embodying a perspective from a future generation. Nature, Technology, Love, Economics, the Universe. Visitors approach each totem and speak. The agents respond, remember, and continue the conversation across visits.
The system runs entirely on local hardware. TouchDesigner acts as the main brain, and each of the five agents is a self-contained actor inside the patch, identical in structure but individually configured. A custom-built state machine manages each actor's lifecycle: listening, processing, speaking. Each actor's system prompt and Q&A training data are stored directly inside the patch, including character-specific datasets used to prime the model's persona.
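The listening–processing–speaking lifecycle can be sketched as a small state machine. This is an illustrative outline, not the actual patch code: the class and method names are hypothetical, and the real version lives inside TouchDesigner callbacks.

```python
from enum import Enum, auto

class ActorState(Enum):
    LISTENING = auto()    # waiting for a visitor to speak
    PROCESSING = auto()   # transcript handed to the LLM
    SPEAKING = auto()     # playing back synthesized speech

class Actor:
    """One totem agent; a hypothetical mirror of the per-actor state machine."""

    def __init__(self, name: str, system_prompt: str):
        self.name = name
        self.system_prompt = system_prompt
        self.state = ActorState.LISTENING

    def on_transcript(self, text: str):
        # VAD fired and transcription finished: hand off to the LLM.
        if self.state is ActorState.LISTENING:
            self.state = ActorState.PROCESSING

    def on_reply_ready(self, reply: str):
        # LLM produced a response: start voice synthesis and playback.
        if self.state is ActorState.PROCESSING:
            self.state = ActorState.SPEAKING

    def on_playback_done(self):
        # Audio finished: return to listening for the next visitor.
        if self.state is ActorState.SPEAKING:
            self.state = ActorState.LISTENING
```

Because the transitions are guarded, stray events (for example a late transcript arriving while the actor is already speaking) are simply ignored rather than corrupting the lifecycle.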
The AI inference backend is llama.cpp running a 12B model with five parallel instances and a 20,000-token context window. Speech-to-text uses whisper.cpp, with Silero Voice Activity Detection gating transcription until the visitor has finished speaking. Voice synthesis uses Kokoro, streaming audio chunks over httpx and asyncio and writing them as arrays into a script component, keeping synthesis latency below the point where it would break the sense of presence. All three backends start automatically when the installation launches.
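The streaming step above can be sketched with httpx and asyncio: frames are pushed downstream as soon as each chunk arrives, so playback can begin before synthesis finishes. The endpoint URL, request payload, frame size, and `on_frame` callback are assumptions for illustration; the real values depend on how Kokoro is served and how the TouchDesigner script component consumes audio.

```python
import asyncio

# Hypothetical local Kokoro endpoint and frame size; both are assumptions.
KOKORO_URL = "http://127.0.0.1:8880/tts"
FRAME_BYTES = 2048  # fixed-size frames handed to the script component

def frames_from_bytes(buf: bytes, frame_bytes: int = FRAME_BYTES):
    """Split a raw byte buffer into fixed-size frames.

    The tail that does not fill a whole frame is returned so it can be
    prepended to the next network chunk, keeping frames aligned.
    """
    frames = [buf[i:i + frame_bytes]
              for i in range(0, len(buf) - frame_bytes + 1, frame_bytes)]
    consumed = len(frames) * frame_bytes
    return frames, buf[consumed:]

async def stream_speech(text: str, on_frame):
    """Stream synthesized audio chunk by chunk and emit aligned frames."""
    import httpx  # local import so the pure helper above needs no dependencies
    pending = b""
    async with httpx.AsyncClient() as client:
        async with client.stream("POST", KOKORO_URL,
                                 json={"text": text}) as resp:
            async for chunk in resp.aiter_bytes():
                frames, pending = frames_from_bytes(pending + chunk)
                for frame in frames:
                    on_frame(frame)  # e.g. write into the script component
```

The re-framing matters because HTTP chunk boundaries are arbitrary: without carrying the tail over, frames would drift out of alignment with the audio sample format.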
Developed in collaboration with Analyse & Tal. Shown at SNART, Thoravej 29, Copenhagen, March – October 2026.