Epistemic Collapse?
Synthetic Media, Truth Decay and the New Autonomy Threat
Imagine waking up to a video of an Irish party leader apparently admitting to vote tampering. It looks real. It sounds real. It carries a watermark from a familiar platform and is being shared by people you trust. By lunchtime, it has been debunked as a deepfake. By dinner, a new problem appears: people start saying that a completely genuine corruption video is “probably AI” as well.
That is epistemic collapse in miniature: not just believing false things, but losing confidence that we can know anything at all. Synthetic media is accelerating what researchers call “truth decay”: a diminished role for facts and analysis in public life, growing disagreement about basic facts, blurred boundaries between fact and opinion, and declining trust in institutions that once anchored public debate.
In a world where anything can be faked, it becomes tempting to treat everything as fake. For autonomy, understood as the capacity to form stable, justified beliefs and act on them, that is an existential problem.
From misinformation to truth decay
Misinformation is not new. What is new is the ease and speed with which convincing fabricated audio, images and video can now be produced and distributed at scale. Deepfake tools are no longer the preserve of specialist labs or film studios. Free or low-cost tools allow anyone with a laptop or phone to clone a voice from a short audio sample, generate photorealistic faces and bodies, or fabricate “evidence” of events that never occurred.
Industry reports now estimate hundreds of thousands of deepfake videos circulating online, with year-on-year growth measured in multiples, not percentages. The majority of documented cases so far relate to fraud, financial crime and non-consensual sexual imagery, but politics and harassment are catching up fast. At the same time, researchers studying truth decay point to four structural shifts in our information environment:
more disagreement about established facts
a blurred line between fact and opinion
the dominance of emotion and anecdote over data and analysis
collapsing trust in journalism, science and public institutions
Synthetic media does not create these trends, but it pours fuel on them.
Deepfakes and the crisis of knowing
It is tempting to see deepfakes as a simple “fake news” problem: people see a fabricated clip, believe it, and vote the wrong way. The reality is more subtle and, in some ways, more dangerous. Experimental studies on synthetic political video have found that deepfakes do at times deceive people. But even when viewers don’t fully believe a fake, exposure can increase uncertainty and undermine trust in news more generally. After seeing a deepfake, some participants become less confident in all video evidence, including authentic footage.
Real-world election case studies are beginning to show the same pattern. In one recent European election, last-minute deepfake audio clips of party leaders, circulated via encrypted messaging apps, did not simply mislead supporters of one party. They contributed to a broader mood of confusion and suspicion – a sense that “everything is manipulated” and nobody really knows what is going on.
UNESCO has described this as a “crisis of knowing”: when deepfakes blur the boundary between real and artificial, citizens need more than technical detection tools. They need new skills and supports to navigate uncertainty in an AI-mediated information space.
This crisis has two faces:
Naive belief – people accept fabricated material as real and act on it.
Radical doubt – people begin to treat even authentic evidence as suspect, giving authoritarians, abusers and corrupt actors an easy escape route (“that video is just AI”).
Both undermine autonomy. In the first case, our choices rest on false premises. In the second, we become too disoriented or cynical to choose at all.
Autonomy in an environment of engineered doubt
Autonomy is not just having options on a menu. It depends on an environment in which people can distinguish signal from noise, assess reasons and evidence, and build or revise their beliefs in good faith. Synthetic media interferes with that environment in several ways.
1. Attacking evidential trust
When you cannot trust your own eyes and ears, you are pushed to outsource judgement. You rely instead on platform labels, “community notes”, influencers or partisan fact-checkers. That transfer of epistemic authority makes people more vulnerable to manipulation by whoever controls the filters.
2. Weaponising emotional salience
AI-generated content is often optimised for outrage, fear, desire or sympathy. It bypasses careful reasoning and goes straight for the emotional levers that shape our attention and behaviour. Even if we later discover that a clip was fake, the emotional impact has already done its work.
3. Normalising cynicism
When doctored clips, out-of-context images and AI-fabricated “receipts” circulate constantly, a defensive posture of “believe nothing” begins to feel rational. But permanent suspicion is not the same as critical thinking. Autonomy requires more than cynical detachment. It needs enough shared reality for joint decision-making and public reason.
These threats are not confined to high politics. Deepfake voices are used in scams that impersonate loved ones in distress. Fraudsters clone CEOs’ voices to instruct staff to transfer funds. Young people, especially girls and LGBTQ+ teens, are targeted with deepfake sexual images and reputational smears.
Each of these harms operates on the same vulnerability: our dependence on recognisable human signals – faces, voices, gestures – as anchors of trust.
Law is starting to respond – but slowly and unevenly
Regulators are beginning to recognise synthetic media as a structural risk, not just another “content moderation” headache. In the EU, the AI Act includes specific transparency obligations for deepfakes and synthetic media. Providers of AI systems that generate realistic content will be required to clearly disclose that content is AI-generated or manipulated and to implement technical measures (such as watermarking or metadata tags) to make such content detectable.
The European Commission has followed up with work on content provenance and a draft code of practice for AI-generated content. Some Member States are already proposing national rules on deepfakes, including criminal penalties where synthetic media is used for fraud, harassment or non-consensual imagery. These steps matter. They begin to treat synthetic media as an infrastructural issue, not a mere user-choice problem.
But three limits are already visible:
Label fatigue
If every second image or clip is labelled “synthetic”, users may start to ignore the labels. And once content is downloaded, cropped, re-uploaded or shared in private channels, those labels can easily fall away (a short illustration follows this list).
Global mismatch
EU rules cannot fully bind non-EU platforms or actors, nor can they easily govern what flows through encrypted messaging apps, where many of the most impactful deepfakes circulate.
Focus on single items, not cumulative effect
Most legal tools target individual pieces of content. Autonomy, however, is damaged cumulatively – through the constant drip of confusion, cynicism and emotional manipulation over time.
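To make the first limit concrete, here is a minimal sketch using the Python imaging library Pillow (a recent version), with an ordinary EXIF ImageDescription tag standing in, purely for illustration, for a synthetic-content label. An image that is simply cropped and re-saved loses the label unless whoever re-shares it deliberately carries the metadata across.

```python
from PIL import Image  # pip install Pillow

# Stand-in "AI-generated" image with a label written into its EXIF metadata.
# The ImageDescription tag (0x010E) is used purely for illustration; real
# disclosure schemes rely on dedicated watermarks or provenance manifests.
labelled = Image.new("RGB", (64, 64), "grey")
exif = Image.Exif()
exif[0x010E] = "synthetic: AI-generated"
labelled.save("labelled.jpg", exif=exif)

# A typical re-share path: open the file, crop it, save a fresh copy.
# Pillow does not carry EXIF over on save unless it is passed back explicitly,
# so the label silently disappears.
reshared = Image.open("labelled.jpg").crop((0, 0, 32, 32))
reshared.save("reshared.jpg")

print(Image.open("labelled.jpg").getexif().get(0x010E))   # "synthetic: AI-generated"
print(Image.open("reshared.jpg").getexif().get(0x010E))   # None: the label is gone
```

The same dynamic affects more sophisticated watermarking and manifest schemes: a label survives only as long as every step in the sharing chain co-operates, which is precisely what private channels and hostile actors do not do.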
Towards autonomy-preserving information systems
If epistemic collapse is the disease, transparency rules are only one part of the treatment. At the Centre for Digital Ethics, we argue that we need to think in terms of autonomy-preserving information systems: ecosystems that protect people’s ability to know, deliberate and choose, rather than simply flooding them with disclosures.
That means working on at least three layers:
1. Technical and infrastructural measures
Robust watermarking and provenance standards that travel with content across platforms (a minimal sketch follows this list).
Default tools that allow users to see where a piece of media came from and how it has been altered.
Public-interest verification infrastructure that is not controlled solely by the largest platforms or governments.
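As a rough illustration of the first item above (not an implementation of C2PA or any other real standard; the record fields, the provenance_record helper and the origin value "newsroom-camera-01" are invented for this sketch), a provenance record only does its job if it is bound to the exact content it describes and can be re-checked wherever that content turns up:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(media_bytes: bytes, origin: str, edits: list[str]) -> dict:
    """Build a minimal provenance record bound to the exact bytes of a media file."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "origin": origin,            # e.g. a capture device or a generator model
        "edit_history": edits,       # human-readable list of declared alterations
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def still_matches(media_bytes: bytes, record: dict) -> bool:
    """Re-verify a file against its record; any undeclared change breaks the match."""
    return hashlib.sha256(media_bytes).hexdigest() == record["content_sha256"]

original = b"...raw media bytes stand in for a video file..."
record = provenance_record(original, origin="newsroom-camera-01", edits=["colour correction"])
print(json.dumps(record, indent=2))

print(still_matches(original, record))          # True: bytes and record agree
print(still_matches(original + b"!", record))   # False: the record no longer applies
```

Real provenance standards go further, cryptographically signing the record so that anyone can check who issued it and what chain of edits it attests to; the point here is simply that provenance must travel with, and be bound to, the content itself rather than live only in one platform’s database.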
2. Institutional and legal safeguards
Stronger duties on very large platforms to assess and mitigate systemic risks from synthetic media, not just traditional “fake news”.
Clear liability pathways where synthetic media is used to perpetrate fraud, harassment, extortion or reputational harm, especially against children and other vulnerable groups.
Protection and funding for journalists, researchers and civil society organisations who track synthetic campaigns and expose coordinated manipulation.
3. Autonomy literacy
We need to move beyond “media literacy” as a set of individual skills for spotting false headlines. The deeper task is autonomy literacy: helping people understand how their attention, emotions and beliefs are shaped in an environment of engineered doubt – and equipping them to push back.
That includes:
basic understanding of how synthetic media and deepfakes are created
recognising emotional “hooks” and slowing down before reacting or sharing
building social norms around “healthy uncertainty”: being willing to say “I don’t yet know if this is real, and I won’t act on it until I do.”
Crucially, autonomy literacy is not about blaming individuals for systemic design failures. It is about giving citizens the tools to remain agents, not just targets, in a heavily mediated world.
The deeper democratic risk
Some commentators point out that generative AI’s measurable impact on recent elections has so far been “muted” in terms of fact-checked incidents or swing-vote changes. Even if that is true for the moment, focusing narrowly on vote totals misses the deeper risk. Synthetic media and truth decay erode the background conditions that make democratic disagreement possible:
a shared sense that facts exist
that evidence can be tested and challenged
that institutions, while imperfect, are at least trying to tell the truth
If we lose that, we do not simply get more lies. We get a civic culture in which nothing can be trusted, where manipulation thrives precisely because people have given up on the possibility of knowing. At that point, epistemic collapse is no longer just a media problem. It becomes a constitutional problem.
Where CDE fits in
The Centre for Digital Ethics was founded on a simple conviction: that digital systems should support, not erode, human autonomy and democratic life.
On synthetic media and truth decay, our work focuses on:
Research – mapping how deepfakes and synthetic content are affecting autonomy, trust and vulnerability in Ireland and Europe.
Policy engagement – contributing to EU and Irish regulatory debates on AI governance, platform accountability and autonomy-focused rights.
Education and training – developing autonomy literacy workshops and materials for schools, professionals and communities.
Advocacy – amplifying the voices of those most affected by synthetic media harms, including children, marginalised groups and survivors of digital abuse.
If you are interested in collaborating, hosting a workshop, or supporting this work, we would love to hear from you.
Because if we want a future where people can still think clearly, trust wisely and choose freely, we cannot treat synthetic media as a niche technical curiosity.
It is a frontline autonomy issue, and it is arriving faster than most of our institutions are prepared to handle.