Description
The first iteration of the project unveiled a metamorphosis of real human beings, a symbiosis of pixels and emotion. Within this audiovisual journey, four digital portraits manifested, each accompanied by a symphony of sound. In the regenerated edition, the work invites viewers to take part in the deconstruction.
The generative AI models produce unexpected results from each participant’s input, mainly data taken from their faces/selfies. The base model is instructed to deconstruct the faces and match them with elements that represent a face in a virtually manipulated domain. Whereas the first edition was made entirely by hand by the creator, the collaboration with the AI model now allows a wide range of results to be produced.
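The description does not specify which model or framework drives this step. As a purely illustrative sketch of the kind of selfie-to-deconstructed-portrait process described above, an image-to-image diffusion pipeline could be prompted to fragment a participant’s selfie; the model name, prompt, and parameters below are assumptions, not the artist’s actual setup.

```python
# Hypothetical sketch: deconstructing a participant's selfie with an
# image-to-image diffusion pipeline (model and prompt are assumptions).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an assumed base model; the project's actual model is not named.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The participant's selfie is the only input the text describes.
selfie = Image.open("participant_selfie.jpg").convert("RGB").resize((512, 512))

# "strength" controls how far the output may drift from the original face,
# i.e. how aggressively the portrait is deconstructed.
result = pipe(
    prompt="deconstructed human face, fragmented digital portrait, "
           "abstract elements standing in for facial features",
    image=selfie,
    strength=0.75,
    guidance_scale=7.5,
).images[0]

result.save("deconstructed_portrait.png")
```

Because each participant supplies a different selfie and the diffusion process is stochastic, every run of a pipeline like this yields a different result, which mirrors the unpredictability the project attributes to the collaboration with the AI model.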
This audiovisual project offers an immersive experience in which viewers/participants reflect on their own identity and how it is shaped within a digital environment. Their physicality is first and foremost manipulated by the AI models, reflecting how much authority these models can hold in a post-generative-AI world. The final alteration also considers how “selfies” were understood in the past, how they are understood now, and how generative AI will shape their future. This collaboration between the artist, a neural network, and the audience opens new areas of thinking about the new wave of cybernetics, observing systems, and creation as a whole.