Thu, Jan 23 | Berlin

CTM Festival: Wilding AI Lab

Created in collaboration with the initiative »Wilding AI«, this four-day lab will assemble a group of participants to learn about the application of generative AI in spatial audio and collectively explore the wilder territories of AI.

Time & Location

Jan 23, 2025, 11:00 a.m. – Jan 26, 2025, 5:00 p.m.

Veteranenstraße 21, 10119 Berlin, Germany

About the event

Set within the 4DSOUND environment of MONOM, Berlin’s centre for spatial sound, the Wilding AI Lab is designed as a public experiment for sound artists and musicians to learn about and experiment with some of the latest AI systems bridging large language models and generative sound, all within the spatialised audio environment. The lab follows artist and researcher Beth Coleman’s call to imagine an AI »that can be free—if not to imagine, then to generate—speeding through possibilities, junctures that are idiotic until they are not.«

Over four days, the lab features a series of morning skill-sharing workshops for lab participants and interested members of the public, plus afternoons of hands-on experimentation reserved for lab participants only. The sessions and lab will be hosted by artists and researchers including Portrait XO, Beth Coleman, Alexandre Saunier, and Maurice Jones, who, among others, form the artistic core of the ongoing research-creation project »Wilding AI.«

Launched in August 2024 at the MUTEK festival in Montreal, »Wilding AI« offers an open space to reopen the black boxes of generative AI. It gathers people to encounter each other and the manifold artistic and technological experiments, interventions, and provocations that not only imagine but materially manifest wild AI futures. The initiative continued its experiments at MUTEK.MX in October 2024 before landing at MONOM in Berlin just ahead of CTM Festival.

Wilding AI Lab Calendar

23 January – Day 1: Word

Day 1 (Word) centres on questions of storytelling and world-building in times of generative AI. Beth Coleman will introduce Wilding AI as a critical approach to artistic research-creation in the age of algorithmic culture. Following this, Maurice Jones will lead a skill-sharing session exploring the critical application of large language models in and through artistic practice.

11:00–13:00 Public Skill-Sharing Session: Introduction to Wilding AI (Beth Coleman), followed by an introduction to the critical application of large language models in and through artistic practice (Maurice Jones)

14:00–19:00 Wilding AI Lab (open call participants only): Artistic experimentation with large language models for storytelling and world-building

24 January – Day 2: Sound

Day 2 (Sound) focuses on the latest developments in generative sound. Led by independent artist and researcher Portrait XO, the morning session will provide a crash course ranging from the latest AI tools to practices of data sonification.

11:00–13:00 Public Skill-Sharing Session: Introduction to generative sound (Portrait XO)

14:00–19:00 Wilding AI Lab (open call participants only): Artistic experimentation with generative sound in the spatial audio environment

25 January – Day 3: Space

Day 3 (Space) focuses on translating word and sound into the spatial audio environment. Led by Alexandre Saunier, the morning skill-sharing session will introduce the latest AI-driven tools for sound spatialisation developed by the Wilding AI collective.

11:00–13:00 Public Skill-Sharing Session: Introduction to 4DSOUND (William Russell) and AI-driven tools for spatialisation (Alexandre Saunier)

14:00–19:00 Wilding AI Lab (open call participants only): Artistic experimentation, collective prototyping, and composition utilising large language models, generative sound, and spatial audio techniques

26 January – Day 4: Open Lab

Day 4 (Open Lab) invites audiences to a series of prototype presentations, artistic interventions, and talks sharing both the process and the outcomes of the lab and its participants.

Open Call Conditions

The call is open to artists worldwide. Applicants should have an existing practice that ties into this lab’s focus on critically dissecting multi-modal applications of generative AI in word, sound, and motion, as well as clear ideas or questions about the next steps they would like to take in expanding this practice.

Up to 12 participants will be selected by lab mentors Beth Coleman, Maurice Jones, Alexandre Saunier, and Rania Kim (Portrait XO).

Selected participants must pay a fee of €50 to take part in the lab. Participation includes a CTM 2025 festival pass and catered lunch for the lab’s duration, including the day of the public presentations. We can also provide selected participants with letters of invitation for visa or funding applications, or other professional reasons. Unfortunately, we cannot cover travel or accommodation costs.

Open Call Schedule and Deadlines

  • Application Deadline: 29 November 2024
  • Selected participants announced: before 20 December 2024
  • Lab dates: 23–25 January 2025
  • Public output: 26 January 2025