Jamais Vu is an interactive installation designed to evoke the sensation of unfamiliarity with the familiar. It incorporates a generative soundscape using crowdsourced audio from Montreal, collected via a web app and physical devices. The project draws on Viktor Shklovsky's concept of 'defamiliarization' and Baudrillard’s ideas on hyperreality to prompt critical reflection on how our data-driven world reinterprets personal experiences. Using a blend of technology, sound, and visual projections, the installation challenges perceptions of reality and empowers participants to shape the soundscape. Inspired by hauntology, it encourages reimagining the present and envisioning new futures.
Design Narrative
The installation features a central projection area of suspended fabric sheets onto which audio-reactive visuals are projected, representing a digitized portrayal of Montreal. Participants are invited to wander through the space and interact with it from multiple angles. The room is enveloped by a spatialized generative soundscape built from our participatory archive of crowdsourced audio submitted to our website. These sounds, everyday recordings of the city, are passed through a machine learning process that fragments, distorts, and reassembles them into a reimagined algorithmic city. Handheld recording devices placed throughout the installation allow participants to contribute new sounds in real time, providing a tactile way to engage while inside the installation. This interplay of mediums and modes of participation is designed to create a space for interpretation and subjective sensory experience that resists being reduced to a singular, data-driven narrative. Each iteration of Jamais Vu is unique: a new combination of sounds from past and present, always changing in response to the data fed in by participants and returning agency to the audience through the collection process itself. In this way, the installation becomes a kind of haunted space, a simulated environment in which the boundaries between real and unreal, past and present, are fluid.
Notes on Machine Learning
AudioStellar was used to recreate urban sounds from our own small dataset of field recordings, along with the audio collected through the web application. AudioStellar's clustering algorithm let us extract distinct clusters of sound from the audio files, and its built-in tools, such as sequencers and effects, allowed us to do sound design within the software and to randomize and mutate sequences over time, keeping the sound fresh and new for each iteration.
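AudioStellar performs this clustering internally, but the general approach can be sketched outside the software. The snippet below is an illustration only, not AudioStellar's implementation: it summarizes each clip with MFCC features, projects them onto a 2D map, and clusters that map using librosa and scikit-learn. The recordings/ folder and all parameter values are assumptions for the example.

```python
# Illustrative sketch of latent-space clustering of field recordings.
# Not AudioStellar's code; it approximates the same idea with open tools.
from pathlib import Path

import numpy as np
import librosa
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

files = sorted(Path("recordings").glob("*.wav"))  # hypothetical folder of crowdsourced clips

# Summarize each clip as the mean of its MFCC frames.
features = []
for f in files:
    y, sr = librosa.load(f, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    features.append(mfcc.mean(axis=1))
features = np.array(features)

# Project the features to a 2D map and cluster the map,
# analogous to AudioStellar's sound-map view.
coords = TSNE(n_components=2, perplexity=min(30, len(files) - 1)).fit_transform(features)
labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(coords)

for f, label in zip(files, labels):
    print(f.name, "-> cluster", label)
```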
A Max/MSP patch was designed and developed to receive audio files from the device. First, a Node server ran continuously to receive the Argon device's audio files and store them on the host computer. Alongside it, a Python script kept track of the incoming files and placed them in a queue; each item holds the queue for the length of the audio file plus roughly 15 seconds, the duration of the machine learning stage. The script then triggered the Max/MSP patch with the name of the audio file to be played. The sound first played back unaltered, then reverb and delay effects gradually took over, disintegrating and decaying the original recording until it was ultimately replaced by the machine learning model's decoded sound. The model we used was Isis, trained by IRCAM and run through RAVE as an encoder/decoder. It was useful because it was trained on speech, which let us re-synthesize the speech recordings we received from the device. The patch also sent OSC messages to TouchDesigner, where the visuals were rendered: whenever the ML model was active, its sound frequency was sent over to TouchDesigner to drive special visual effects.
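As a rough sketch of the queueing logic described above (not the exact script used in the installation), the following Python example watches an incoming folder, notifies the Max/MSP patch over OSC, and holds the queue for the clip's duration plus the roughly 15-second machine learning stage. The folder name, the /play OSC address, and port 7400 are assumptions for illustration; the actual server and patch may use different conventions.

```python
# Minimal sketch of the queueing script, assuming the Node server drops
# incoming recordings into "incoming/" and the Max/MSP patch listens on port 7400.
import time
from pathlib import Path

import soundfile as sf
from pythonosc.udp_client import SimpleUDPClient

INCOMING = Path("incoming")   # hypothetical drop folder written by the Node server
ML_DURATION = 15.0            # seconds reserved for the RAVE encode/decode stage

client = SimpleUDPClient("127.0.0.1", 7400)  # OSC link to the Max/MSP patch (assumed port)
seen = set()

while True:
    for f in sorted(INCOMING.glob("*.wav")):
        if f in seen:
            continue
        seen.add(f)

        # Tell the Max/MSP patch which file to play (assumed /play address).
        client.send_message("/play", str(f.name))

        # Hold the queue for the clip's length plus the ML processing window.
        info = sf.info(f)
        time.sleep(info.duration + ML_DURATION)
    time.sleep(1.0)
```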