Francesca Panetta, Halsey Burgund, Magnus Bjerg, Shehani Fernando / United Kingdom, Denmark, United States / 2023
You’ve gone viral in a social media video, but it isn’t “you”: it’s a digital version that looks and sounds just like you, making statements you would never say.
As AI and deepfake technologies develop at a startling pace, evidence itself is becoming destabilised, and we must prepare ourselves. Unstable Evidence explores a world of AI-produced mis- and disinformation that is just around the corner. Using perhaps the strongest material possible – the audience members themselves and the subject matter they most abhor – it delivers a shocking wake-up call about the personal and societal consequences of AI-driven synthetic media.
The work uses AI-enhanced techniques – digital humans, voice cloning and synthetic lip-syncing – to present audiences with a series of social-media-style videos in which they see themselves making controversial statements that they strongly disagree with. From anti-vax propaganda to cancel culture, white nationalism to gay marriage and gun rights, audience members quickly and viscerally discover how easily their own image, video or voice can be manipulated.
Through the experience’s educational component, audiences become more aware of synthetic media and come to understand the pros and cons of this rapidly developing technology. They learn what is needed to minimise the risks to themselves and to society at large. From legislation and transparency to technological interventions that mitigate harm, audiences come away understanding what it will take to steer this technology in a pro-social direction and keep their own identities safe.