The Synthetic Reality Problem
In September 2023, two days before Slovakia's parliamentary elections, an audio recording surfaced on social media. In it, a candidate appeared to discuss plans to rig the vote and raise the price of beer. The recording was a deepfake -- an AI-generated synthetic audio clip -- and it spread during the country's 48-hour pre-election media blackout, when fact-checkers and journalists were legally prohibited from responding. The candidate's party lost. Whether the deepfake was decisive is debatable. That it was deployed with strategic precision is not [1].
This is no longer a hypothetical scenario. In December 2024, Romania's Constitutional Court annulled the first round of its presidential election, citing evidence of a coordinated influence campaign that used social media manipulation -- including AI-generated content -- to boost a previously unknown far-right candidate from under 1% in polls to winning the first round [2]. In India's 2024 general election, tens of millions of dollars were reportedly spent on AI tools to segment voters, generate personalized robocalls, and create synthetic video endorsements in multiple languages [3]. In the United States, a deepfake robocall impersonating President Biden urged New Hampshire primary voters to stay home in January 2024 [4].
The World Economic Forum's Global Risks Report 2024 ranked misinformation and disinformation as the number one global risk over the next two years -- ahead of extreme weather, armed conflict, and economic downturn [5]. This was not alarmism. It was an assessment that synthetic media, powered by generative AI, has crossed from nuisance to systemic threat.
"The threat is not that people will believe a specific deepfake. The threat is that the existence of deepfakes gives everyone -- politicians, propagandists, and citizens alike -- permission to disbelieve anything."
-- The "liar's dividend," as described by Chesney and Citron [6]
Under the Hood: How AI Generates Synthetic Media
To understand the threat, you need to understand the machinery. I will explain the core technologies as an engineer would: with enough precision to grasp what makes them powerful, without requiring a PhD in machine learning to follow.
Generative Adversarial Networks (GANs): The First Wave
The deepfake era began with Generative Adversarial Networks, introduced by Ian Goodfellow in 2014 [7]. The architecture is elegant and adversarial by design. Two neural networks are trained simultaneously:
- The Generator creates synthetic images (or audio, or video) from random noise, trying to produce outputs that look real.
- The Discriminator examines both real and generated samples and tries to distinguish between them.
Think of it as a counterfeiter and a detective locked in an arms race. The counterfeiter gets better at forgery because the detective keeps catching flaws; the detective gets sharper because the counterfeiter keeps improving. After millions of training iterations, the generator produces outputs so realistic that the discriminator -- and often human observers -- cannot reliably tell them apart.
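For readers who want to see the arms race in code, below is a minimal PyTorch sketch of the adversarial training loop. It uses a toy one-dimensional "real data" distribution instead of images, and the tiny networks are illustrative stand-ins, not anything resembling a production face-swap model.

```python
# Minimal GAN training loop (toy, 1-D data): illustrates the generator/discriminator
# arms race described above; not a face-swap or image model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" samples: a Gaussian centered at 4.0 stands in for the real-data distribution.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator output 1 for fakes.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 4.0, fooling the discriminator.
print(generator(torch.randn(5, 8)).detach().squeeze())
```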
GAN-based face-swap systems like DeepFaceLab and FaceSwap dominated from 2017 to roughly 2022. They required significant compute and technical skill, which initially limited the threat to state-level actors and determined hobbyists. That constraint has collapsed.
Diffusion Models: The Revolution
The generative AI revolution that began in 2022 was powered not by GANs but by diffusion models -- a fundamentally different approach that has proven more stable, more controllable, and far more capable [8].
Here is how diffusion models work, in plain language:
During training, a diffusion model is shown real images that have been progressively corrupted with random noise, and it learns to undo that corruption one small step at a time. At generation time, the process runs in reverse: the model starts from pure noise and iteratively denoises it, guided by a text prompt, until a coherent image emerges. The key insight is that diffusion models do not generate images in a single pass like GANs; they iteratively refine, which makes them more stable during training and capable of producing far more diverse, high-quality outputs. Systems like Stable Diffusion (open source, Stability AI), DALL-E 3 (OpenAI), and Midjourney can generate photorealistic images in seconds on consumer hardware [8][9].
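The sketch below shows the shape of that iterative denoising loop (DDPM-style ancestral sampling). The noise-prediction network here is an untrained placeholder acting on a tiny 8x8 array, so the output is meaningless; in a real system this network is a large model trained on enormous image datasets and conditioned on both the text prompt and the timestep.

```python
# Skeleton of DDPM-style ancestral sampling: start from noise, repeatedly ask a
# noise-prediction model to estimate the noise in the current sample, and take a
# small denoising step. The "model" here is an untrained placeholder that ignores
# the timestep and any prompt; a real system uses a large trained network.
import torch
import torch.nn as nn

T = 1000                                  # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)     # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder noise predictor for a tiny 8x8 single-channel "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

@torch.no_grad()
def sample():
    x = torch.randn(1, 1, 8, 8)                # x_T: pure Gaussian noise
    for t in reversed(range(T)):
        eps_hat = model(x).reshape_as(x)       # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # x_{t-1}: one small denoising step
    return x                                   # x_0: the generated sample

print(sample().shape)
```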
For disinformation purposes, the critical development is not image quality alone -- it is controllability. Techniques like ControlNet allow precise manipulation of pose, composition, and facial expression. IP-Adapter and InstantID enable identity transfer: feed the model a few photos of any public figure, and it can generate that person in any setting, with any expression, apparently doing anything. Inpainting allows surgical modification of real photographs -- adding or removing people, changing text on signs, altering evidence.
Video Synthesis: The Moving Deepfake
Image generation was concerning. Video generation is transformative. Models like Sora (OpenAI), Runway Gen-3, Kling (Kuaishou), and Veo 2 (Google DeepMind) can generate photorealistic video clips from text prompts or reference images. The underlying approach extends diffusion to the temporal dimension: instead of denoising a single image, the model denoises a sequence of frames while maintaining temporal coherence -- consistent motion, lighting, and physics [10].
As of early 2026, generated videos still exhibit telltale artifacts under close inspection: inconsistent finger counts, physics violations in complex scenes, temporal flickering in long sequences. But the quality trajectory is steep. What required a Hollywood visual effects studio five years ago can now be approximated by a consumer with a GPU and an open-source model.
Voice Cloning: The Invisible Deepfake
Audio deepfakes may be the most dangerous modality because they are the hardest for humans to detect and the easiest to deploy at scale. Modern voice cloning systems like ElevenLabs, Resemble AI, and open-source alternatives like XTTS and Bark can clone a voice from as little as 3-15 seconds of reference audio [11].
The technical pipeline works in three stages:
- Speaker embedding extraction: A neural network analyzes the reference audio and produces a compact mathematical representation (an "embedding") of the speaker's vocal characteristics -- timbre, pitch patterns, speaking rhythm, accent.
- Text-to-speech synthesis: A generative model (typically a neural codec language model or a diffusion-based vocoder) produces speech from arbitrary text, conditioned on the speaker embedding. The output sounds like the target speaker saying words they never said.
- Prosody transfer: Advanced systems can transfer emotional tone, emphasis patterns, and natural speech disfluencies ("um", "uh", pauses) to make the output sound conversational rather than robotic.
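To make stage 1 concrete, the toy sketch below summarizes a (synthetic) voice clip as a fixed-length vector and compares voices by cosine similarity. The crude spectral statistics here are a stand-in for the learned neural speaker encoder a real cloning system would use, and the waveforms are simple harmonic tones, not real speech.

```python
# Toy illustration of stage 1 (speaker-embedding extraction): summarize a voice
# clip as a fixed-length vector and compare voices by cosine similarity.
# Real systems use trained neural encoders; here, crude per-band spectral
# energies computed with NumPy stand in for that learned embedding.
import numpy as np

SR = 16_000  # sample rate in Hz

def toy_voice(pitch_hz: float, seconds: float = 3.0) -> np.ndarray:
    """Synthetic 'voice': a pitch plus harmonics, as a stand-in for real audio."""
    t = np.arange(int(SR * seconds)) / SR
    return sum(np.sin(2 * np.pi * pitch_hz * k * t) / k for k in range(1, 6))

def toy_embedding(wave: np.ndarray, bands: int = 32) -> np.ndarray:
    """Average log-energy in `bands` frequency bands of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(wave))
    chunks = np.array_split(spectrum, bands)
    return np.log1p(np.array([c.mean() for c in chunks]))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ref = toy_embedding(toy_voice(120.0))        # "reference speaker", ~3 s of audio
same = toy_embedding(toy_voice(122.0))       # same speaker, slightly varied pitch
other = toy_embedding(toy_voice(210.0))      # a different speaker

print(f"same speaker:  {cosine(ref, same):.3f}")   # close to 1.0
print(f"other speaker: {cosine(ref, other):.3f}")  # noticeably lower
```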
The Slovakia deepfake audio was effective precisely because audio deepfakes exploit a cognitive vulnerability: we evolved to trust voices. A fake photograph triggers visual skepticism. A familiar voice speaking naturally does not trigger the same alarm. A Regula survey in 2024 found that 49% of organizations worldwide had encountered audio or video deepfakes, with deepfake incidents increasing 118% year-over-year [12].
LLM-Powered Text Disinformation: The Scale Machine
While deepfake images and audio capture headlines, the most scalable vector for AI-powered disinformation is text generated by large language models. An LLM like GPT-4, Claude, or Llama can generate thousands of unique, contextually adapted, linguistically fluent disinformation messages per hour -- each tailored to a specific audience, platform, and narrative frame.
The operational implications are staggering. Where a traditional troll farm required human operators (the Internet Research Agency employed roughly 400 people at its peak), an LLM-powered operation can:
- Generate diverse content across dozens of personas, writing styles, and languages simultaneously
- Adapt in real time to trending topics, breaking news, and platform moderation signals
- Produce "astroturf" engagement: comments, replies, reviews, and forum posts that create an illusion of organic consensus
- Craft micro-targeted messaging by combining demographic data with prompt engineering to produce messages that resonate with specific voter segments
An SMU Data Science Review study (Spring 2025) found that bots made up only 0.06% of social media users but were responsible for 3.5% of all comments -- and their content was disproportionately inflammatory and politically polarizing [13]. With LLMs, the volume, quality, and targeting precision of bot-generated content will increase by orders of magnitude.
The most dangerous scenarios combine all four modalities: an LLM generates a narrative and scripts dialogue; a diffusion model produces a photorealistic video of a politician; a voice cloning system adds their actual voice; and the package is distributed through bot networks that simulate organic sharing. Each component is commercially available. The assembly requires no state-level resources.
The Democratic Impact: Case Studies
The theoretical threat has become operational reality. Below are documented cases where AI-generated content intersected with democratic processes -- presented not as a catalog of doom, but as an evidence base for understanding the threat model.
Slovakia, September 2023
A deepfake audio recording of Progressive Slovakia leader Michal Šimečka discussing election rigging circulated during the 48-hour pre-election media blackout. The timing was strategic: fact-checkers could not respond within the blackout period, and the audio was plausible enough to seed doubt. Šimečka's party lost. The incident became a landmark case study in election-targeted deepfakes [1].
Romania, November-December 2024
Romania's Constitutional Court took the unprecedented step of annulling the first round of the presidential election after declassified intelligence revealed a coordinated campaign -- involving bot networks, algorithmic manipulation on TikTok, and AI-assisted content generation -- that propelled far-right candidate Călin Georgescu from under 1% in polls to winning the first round. This was the first time a European democratic election was annulled on grounds that included AI-enabled information manipulation [2].
India, April-June 2024
India's general election saw AI deployed at industrial scale by multiple parties. Deepfake videos of Bollywood celebrities endorsing candidates went viral. AI-generated audio of deceased politicians was used in campaign robocalls. Parties spent tens of millions on AI-powered voter micro-targeting, generating personalized messages in dozens of languages and dialects. The scale was unprecedented: India's election involved 970 million eligible voters across 543 constituencies [3].
United States, January 2024
A robocall using a cloned voice of President Biden reached thousands of New Hampshire voters, urging them not to vote in the primary. The FCC traced the call to a political consultant who used ElevenLabs' voice cloning technology. The incident led to the FCC's February 2024 declaratory ruling that AI-generated voices in robocalls violate the Telephone Consumer Protection Act [4].
The "Liar's Dividend": The Deeper Damage
The most corrosive effect of synthetic media may not be any specific deepfake, but the epistemic damage to public trust. Legal scholars Bobby Chesney and Danielle Citron identified this as the "liar's dividend": once deepfakes are widespread, anyone can dismiss genuine evidence as fabricated [6]. A real recording of a politician saying something compromising? "It's a deepfake." Authentic footage of police misconduct? "AI-generated." The existence of synthetic media provides a universal alibi. This erosion of shared evidentiary standards is, in many ways, more damaging to democracy than any individual fake.
Disinformation operates on an asymmetry: generating synthetic content takes seconds; verifying it takes hours or days. A deepfake can go viral in minutes, reaching millions before any fact-check is published. Even when debunked, research shows the initial false impression persists -- a phenomenon psychologists call the "continued influence effect" [14]. This temporal asymmetry is the core challenge for democratic resilience.
The Arms Race: Detection and Its Limits
The natural policy response to synthetic media is: "build better detectors." This is necessary but insufficient, and policymakers must understand why.
How Detection Works (Today)
Current deepfake detection systems operate on several principles:
- Artifact analysis: Early GAN-generated images left statistical fingerprints -- spectral artifacts in the frequency domain, inconsistent noise patterns, unnatural skin textures. Detectors trained on these features achieved 90%+ accuracy against 2020-era fakes. But diffusion models produce fundamentally different artifacts, and detection models trained on GAN-generated content fail catastrophically against diffusion-generated content [15]. (A toy illustration of the frequency-domain idea follows after this list.)
- Physiological inconsistency detection: Analyzing blinking patterns, pulse signals visible in skin color changes, head-pose consistency, and lip-sync alignment. These work well against low-effort fakes but are increasingly defeated by higher-quality generation.
- Neural network classifiers: Training deep learning models (typically CNNs or Vision Transformers) directly on large datasets of real vs. synthetic media -- an approach popularized by benchmark efforts such as Meta's Deepfake Detection Challenge. In the UK, the AI Safety Institute and the Alan Turing Institute have invested heavily in detection research [16].
- Multimodal inconsistency: In video, checking whether audio matches lip movements, whether lighting is physically consistent, whether reflections in eyes match the environment. These approaches are more robust but computationally expensive.
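As promised above, here is a toy illustration of the artifact-analysis idea: a single frequency-domain statistic, the share of an image's spectral energy at high spatial frequencies, computed on synthetic arrays standing in for natural and artifact-laden images. Real detectors are trained classifiers over many such cues; this one number is only meant to show where such features come from.

```python
# Crude frequency-domain statistic of the kind early artifact-based detectors
# built on: the share of an image's spectral energy at high spatial frequencies.
# Real detectors are trained classifiers over many such features; random arrays
# stand in for images here.
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy beyond `cutoff` * Nyquist (radial frequency)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

rng = np.random.default_rng(0)

# Stand-in "natural image": smooth, mostly low-frequency content.
natural = rng.normal(size=(256, 256))
natural = np.cumsum(np.cumsum(natural, axis=0), axis=1)

# Stand-in "generated image": the same content plus a faint grid-like artifact
# of the sort some GAN upsampling layers introduced.
generated = natural.copy()
generated[::2, ::2] += natural.std() * 0.05

print(f"natural-like:   {high_freq_energy_ratio(natural):.4f}")
print(f"artifact-laden: {high_freq_energy_ratio(generated):.4f}")  # higher ratio
```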
Why Detection Is Losing the Arms Race
Here is the uncomfortable engineering reality: detection is structurally disadvantaged. There are three reasons.
First, the generalization problem. A detector trained on outputs from Stable Diffusion v1.5 may fail against Stable Diffusion XL, DALL-E 3, or Midjourney v6. Each new model architecture produces different statistical signatures. Detectors must be continuously retrained against a moving target, while attackers simply switch to the latest model [15].
Second, the compression problem. Social media platforms compress, resize, re-encode, and add filters to every piece of media uploaded. These transformations destroy many of the subtle artifacts that detectors rely on. A deepfake that is detectable in its original form may become undetectable after being uploaded to Instagram, re-downloaded, and shared on WhatsApp [15].
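A toy demonstration of that erosion: re-encoding an image as JPEG at a typical platform quality strips high-frequency detail, which is exactly where many forensic cues live. The noisy synthetic image and the single "detail energy" statistic below are illustrative stand-ins; the point is the direction of the change, not the numbers.

```python
# Toy demonstration of the compression problem: JPEG re-encoding suppresses
# high-frequency detail. Requires Pillow; the synthetic noise image stands in
# for real media.
import io
import numpy as np
from PIL import Image

def detail_energy(img: np.ndarray) -> float:
    """Crude high-frequency proxy: mean absolute difference between neighbouring pixels."""
    img = img.astype(float)
    return float(np.abs(np.diff(img, axis=0)).mean() + np.abs(np.diff(img, axis=1)).mean())

rng = np.random.default_rng(0)
original = (rng.random((256, 256)) * 255).astype(np.uint8)   # noisy stand-in "image"

# Simulate one platform upload: encode as JPEG at a typical quality, decode again.
buffer = io.BytesIO()
Image.fromarray(original).save(buffer, format="JPEG", quality=70)
buffer.seek(0)
recompressed = np.asarray(Image.open(buffer).convert("L"))

print(f"before upload: {detail_energy(original):.1f}")
print(f"after JPEG:    {detail_energy(recompressed):.1f}")   # noticeably lower
```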
Third, the adversarial robustness problem. Attackers can use the same AI techniques to make deepfakes that specifically evade detection. By treating the detector as the "discriminator" in a new adversarial training loop, an attacker can iteratively refine their fakes until the detector is fooled. This is not theoretical -- it is a standard technique in adversarial machine learning.
This does not mean detection is useless -- it remains an essential layer of defense. But it means that detection alone cannot be the primary strategy. We need complementary approaches: provenance, authentication, and institutional resilience.
Fighting Back: AI-Powered Countermeasures
If AI is the weapon, AI must also be part of the shield. But the most effective countermeasures are not just technical -- they are systemic, combining technology with standards, institutions, and literacy.
Content Provenance: The C2PA Standard
The most promising structural countermeasure is content provenance -- cryptographically proving the origin and editing history of media, rather than trying to detect fakes after the fact. The Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, Microsoft, Google, Intel, BBC, and others, has developed an open technical standard for embedding tamper-evident metadata into media files [17].
Here is how C2PA works:
- At creation: A camera, phone, or software application cryptographically signs the media file at the moment of capture, recording the device, time, location, and settings.
- At each edit: Every modification (crop, filter, AI enhancement) is logged as a new entry in a tamper-evident "manifest" -- a chain of cryptographic hashes that makes any unauthorized alteration detectable.
- At distribution: Platforms and browsers can verify the provenance chain and display a trust indicator to users, similar to the HTTPS padlock in web browsers.
The key insight of C2PA is that it shifts the burden from detection to authentication. Instead of asking "is this fake?", the question becomes "does this have a verified provenance chain?" Media without provenance is not automatically fake, but its absence becomes a meaningful signal -- especially for high-stakes content like political speech, news footage, and legal evidence.
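The sketch below illustrates the chained-manifest idea with a toy tamper-evident edit log built from SHA-256 hashes. It is not the C2PA format itself, which uses signed manifests and X.509 certificate chains embedded in the media file, but it shows why an undisclosed change to either the asset or its history becomes detectable.

```python
# Toy tamper-evident edit manifest in the spirit of C2PA's chained claims.
# The real standard uses signed, certificate-backed manifests; this sketch only
# illustrates how chaining hashes makes undisclosed edits detectable.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_claim(manifest: list[dict], asset: bytes, action: str) -> list[dict]:
    """Append a claim binding the current asset hash to the previous claim."""
    prev = manifest[-1]["claim_hash"] if manifest else ""
    claim = {"action": action, "asset_hash": sha256(asset), "prev_claim_hash": prev}
    claim["claim_hash"] = sha256(json.dumps(claim, sort_keys=True).encode())
    return manifest + [claim]

def verify(manifest: list[dict], asset: bytes) -> bool:
    """Check the chain is internally consistent and ends at the current asset."""
    prev = ""
    for claim in manifest:
        body = {k: v for k, v in claim.items() if k != "claim_hash"}
        if claim["prev_claim_hash"] != prev:
            return False
        if sha256(json.dumps(body, sort_keys=True).encode()) != claim["claim_hash"]:
            return False
        prev = claim["claim_hash"]
    return bool(manifest) and manifest[-1]["asset_hash"] == sha256(asset)

photo = b"raw sensor bytes"                      # stand-in for a captured image
manifest = add_claim([], photo, "captured")      # in real C2PA, signed at capture
edited = photo + b" + crop"                      # a disclosed edit
manifest = add_claim(manifest, edited, "crop")

print(verify(manifest, edited))                  # True: provenance chain intact
print(verify(manifest, edited + b" tampered"))   # False: asset no longer matches
```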
The challenge is adoption. C2PA is technically sound but requires buy-in from device manufacturers (cameras, phones), software companies (editing tools), platforms (social media, messaging apps), and browsers. As of early 2026, adoption is accelerating -- Adobe's Content Credentials are integrated across Creative Cloud, Leica and Sony have shipped C2PA-enabled cameras, and Google and Meta have committed to supporting the standard -- but we are years from universal deployment [17].
Digital Watermarking: SynthID and Beyond
Google DeepMind's SynthID represents the most advanced deployed watermarking system for AI-generated content. SynthID embeds an imperceptible statistical signal into AI-generated images, audio, video, and text that can be detected algorithmically but is invisible to humans [18].
For images, SynthID embeds the signal directly in pixel values, in a way designed to survive common transformations such as crops, filters, screenshots, and moderate compression. For text, it subtly adjusts the token probability distribution during generation, creating a statistical signature in word choices that a trained classifier can detect.
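SynthID's exact text scheme is not fully public, so the sketch below illustrates the general idea using the "green list" approach from published LLM watermarking research: a hash of the preceding word pseudo-randomly marks part of the vocabulary as preferred, generation is biased toward those words, and a detector simply counts how often the text lands on its green list. The vocabulary and "model" here are toys.

```python
# Toy "green list" text watermark, in the spirit of published LLM watermarking
# schemes (SynthID's exact method differs). A hash of the previous word marks
# roughly half the vocabulary "green"; generation is biased toward green words;
# detection measures the green fraction, which is ~50% for ordinary text.
import hashlib
import random

VOCAB = [f"word{i}" for i in range(1000)]   # stand-in vocabulary

def is_green(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0               # ~half the vocabulary is "green"

def generate(n_words: int, bias: float = 0.9, seed: int = 1) -> list[str]:
    rng = random.Random(seed)
    words = ["<start>"]
    for _ in range(n_words):
        candidates = rng.sample(VOCAB, 20)  # stand-in for the model's top tokens
        green = [w for w in candidates if is_green(words[-1], w)]
        # With probability `bias`, prefer a green token; otherwise pick freely.
        pool = green if green and rng.random() < bias else candidates
        words.append(rng.choice(pool))
    return words[1:]

def green_fraction(words: list[str]) -> float:
    hits = sum(is_green(prev, w) for prev, w in zip(["<start>"] + words, words))
    return hits / len(words)

watermarked = generate(200, bias=0.9)
rng2 = random.Random(2)
unwatermarked = [rng2.choice(VOCAB) for _ in range(200)]  # stand-in for human text

print(f"watermarked:   {green_fraction(watermarked):.2f}")    # well above 0.5
print(f"unwatermarked: {green_fraction(unwatermarked):.2f}")  # close to 0.5
```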
The limitation of watermarking is that it only works if the generation tool embeds the watermark. Open-source models like Stable Diffusion have no mandatory watermarking. An adversary using a custom or fine-tuned model will not include watermarks. Watermarking is therefore primarily effective for responsible disclosure by legitimate providers rather than as a defense against malicious actors.
AI-Powered Fact-Checking and Verification
A new generation of AI-powered verification tools is emerging to help journalists and fact-checkers work faster:
- ClaimBuster and Full Fact's AI tools: Automatically identify check-worthy claims in political speech and match them against databases of verified facts [19].
- InVID/WeVerify: A browser plugin backed by the EU's Horizon programme that helps journalists verify video authenticity through reverse image search, metadata analysis, and forensic tools [20].
- EU DisinfoLab's veraAI project: Develops AI methods for detecting local manipulations in synthetic images, analyzing traces left by generative AI, and deepfake detection of persons of interest [21].
These tools do not replace human judgment -- they augment it. A fact-checker using AI can triage hundreds of claims per hour instead of dozens. But the bottleneck is not technology; it is institutional capacity. Most newsrooms and electoral commissions do not have dedicated verification teams, and the few that exist are overwhelmed during election periods.
Policy Frameworks: The Regulatory Landscape
The policy response to AI-enabled disinformation is fragmented but accelerating. Here is where the major jurisdictions stand.
European Union: The AI Act and the Digital Services Act
The EU has the most comprehensive regulatory framework. Two instruments are directly relevant:
The EU AI Act (Regulation 2024/1689) [22] imposes specific transparency obligations on deepfakes. Article 50(4) requires that any person who generates or manipulates AI-generated content that "constitutes a deep fake" must disclose that the content is artificially generated or manipulated. This applies regardless of whether the content is harmful -- the disclosure obligation is triggered by the act of generation itself. AI systems that generate synthetic audio, image, video, or text must be designed so their outputs are "marked in a machine-readable format and detectable as artificially generated or manipulated."
The Digital Services Act (DSA) complements this by imposing obligations on platforms. Very large online platforms (VLOPs) -- those with over 45 million monthly EU users -- must conduct systemic risk assessments that specifically address the dissemination of AI-generated disinformation, and must implement mitigation measures including content labeling, detection tools, and researcher data access [23].
The EU's Strengthened Code of Practice on Disinformation, signed by major platforms under the DSA framework, includes commitments on labeling AI-generated content, providing transparency reports, and supporting independent research on disinformation dynamics.
United States: Patchwork and Momentum
The US regulatory response is fragmented across federal and state levels. At the federal level:
- The FCC's February 2024 declaratory ruling confirmed that AI-generated voices in robocalls violate the Telephone Consumer Protection Act, providing a legal basis for enforcement against audio deepfakes used in political calls [4].
- The DEEPFAKES Accountability Act and the AI Labeling Act have been introduced in Congress but, as of early 2026, have not been enacted.
- Executive orders on AI safety (the October 2023 order, later rescinded and replaced) addressed synthetic content but without binding enforcement mechanisms.
At the state level, over 40 states have introduced or passed legislation targeting deepfakes, primarily focused on non-consensual intimate imagery and election interference. California, Texas, and Minnesota have enacted laws specifically criminalizing the distribution of deceptive deepfakes within a defined period before elections [24].
China: Early and Prescriptive
China's Deep Synthesis Provisions (effective January 2023) were the world's first binding regulations specifically targeting deepfakes. They require providers of "deep synthesis" services to label AI-generated content, obtain consent for face and voice cloning, and maintain logs that enable traceability. The regulations are prescriptive and enforcement-oriented -- a reflection of China's broader approach to internet governance, which prioritizes state control alongside content authenticity [25].
Platform Policies: Uneven and Retreating
The Brennan Center for Justice identified a worrying trend in 2024-2025: major social media companies drastically reduced their content moderation capacity and election integrity teams precisely as AI-generated content volumes surged [26]. Meta disbanded its responsible AI team and relaxed political content policies. X (formerly Twitter) eliminated most of its trust and safety staff. TikTok's moderation of AI-generated political content has been inconsistent despite being the platform most implicated in the Romania case.
This creates a governance gap. Even where regulations exist, enforcement depends on platform cooperation -- and that cooperation is weakening.
| Jurisdiction | Key Instruments | Deepfake-Specific? | Enforcement |
|---|---|---|---|
| EU | AI Act Art. 50, DSA, Code of Practice | Yes -- labeling + disclosure | Phased (2025-2027) |
| US (Federal) | FCC ruling, proposed bills | Partial (robocalls only) | Limited |
| US (States) | 40+ state laws (CA, TX, MN) | Yes -- election + NCII | Variable |
| China | Deep Synthesis Provisions | Yes -- labeling + consent | Active |
| UK | Online Safety Act, DSIT initiatives | Emerging | Building |
A European Framework for Democratic Resilience
Based on the evidence above, I propose a framework organized around five pillars. These draw on my experience advising on AI policy at the intersection of technology and public institutions.
Pillar 1: Mandatory Provenance Infrastructure
The EU should mandate C2PA-compatible content provenance for all AI-generated media within the single market, with a phased implementation timeline:
- Phase 1 (2026): All EU-based AI generation services must embed C2PA-compatible provenance metadata.
- Phase 2 (2027): VLOPs must display provenance indicators for all media and flag content without provenance chains during election periods.
- Phase 3 (2028): Device manufacturers selling in the EU market must support C2PA at the hardware level for cameras and microphones.
This is not censorship. It is authentication infrastructure -- the digital equivalent of requiring that food labels list ingredients. Citizens retain the right to create and share any content; the system simply ensures they can verify what they are consuming.
Pillar 2: Election-Period Rapid Response Capacity
Every EU member state should establish a Rapid Response Unit for AI-enabled election threats, operating under the national electoral commission with authority to:
- Receive and triage synthetic media reports from citizens, journalists, and platforms within 2 hours during election periods
- Issue public verification assessments using standardized credibility ratings
- Coordinate with platforms for expedited review of flagged content, with binding response-time requirements under the DSA
- Operate through media blackout periods -- the Slovakia case demonstrated that blackout rules designed for traditional media create exploitable gaps for synthetic media
Pillar 3: Investment in Detection R&D as Public Infrastructure
Detection is losing the arms race, but the solution is not to abandon it -- it is to fund it at a scale commensurate with the threat. The EU should establish a European Synthetic Media Detection Facility, analogous to cybersecurity CERTs, that:
- Maintains continuously updated detection models trained against the latest generation architectures
- Provides detection-as-a-service APIs for newsrooms, electoral commissions, and law enforcement
- Funds adversarial red-teaming to stress-test detection systems before elections
- Publishes open benchmarks so the research community can measure progress transparently
The Alan Turing Institute's research on AI and election security and the EU DisinfoLab's veraAI project are excellent models for this work. They need to be scaled from research projects to operational infrastructure [16][21].
Pillar 4: Media Literacy as Democratic Infrastructure
Technology alone cannot solve this. Citizens need the cognitive tools to navigate an information environment where synthetic media is pervasive. The EU should fund a continent-wide media literacy programme that teaches:
- Source verification habits: checking provenance, looking for C2PA indicators, using reverse image search
- Emotional manipulation awareness: understanding that AI disinformation increasingly targets emotions rather than facts -- what the EU DisinfoLab's Zea Szebeni calls "deep lore": synthetic images and narratives that build emotional mythologies even when audiences know they are not real [21]
- The "pause before sharing" principle: training the habit of verification before amplification
This is not about making everyone a forensic analyst. It is about building a population-level immune system against information manipulation. Finland's media literacy programme -- integrated into primary education since the 2010s -- provides a proven model that has consistently ranked the country among the most resilient to disinformation in the EU [27].
Pillar 5: Platform Accountability with Teeth
The DSA provides the legal framework. What is missing is enforcement capacity and political will. Specifically:
- Mandatory algorithmic audits during election periods to verify that recommendation systems are not amplifying synthetic disinformation
- Financial penalties proportional to global revenue (not fixed fines) for platforms that fail to implement AI-content labeling commitments
- Researcher data access: enforcing the DSA's Article 40 provisions that require platforms to provide vetted researchers with access to data for studying systemic risks, including disinformation dynamics
- Pre-election stress tests: requiring VLOPs to demonstrate their capacity to handle synthetic media surges before each national or European election
"The question is not whether AI will be used to manipulate democratic processes. It already is. The question is whether democratic institutions will adapt faster than the threat evolves."
The Choice Ahead
The information environment that sustains democracy is under unprecedented pressure. Generative AI has industrialized the production of synthetic media, collapsed the cost of disinformation operations, and created an epistemic crisis where the very concept of evidence is under threat.
But the same AI capabilities that enable synthetic disinformation also power the most promising countermeasures: detection systems that can flag manipulated content, provenance standards that can authenticate the real, verification tools that can accelerate fact-checking, and media literacy programmes that can build societal resilience.
The outcome is not predetermined. It depends on choices -- policy choices, investment choices, institutional design choices -- that are being made right now. Europe, with the AI Act, the DSA, and the institutional depth of its democratic traditions, has the foundation to lead. The question is whether we will match the ambition of the threat with the ambition of the response.
In an era where reality itself can be synthesized, the defense of truth is not a technical problem alone. It is a political commitment, a democratic discipline, and ultimately, a civilizational choice.
What You Can Do
- Check provenance: Look for Content Credentials on images before sharing. Use tools like Content Authenticity Verify (contentcredentials.org/verify).
- Pause before sharing: If content triggers a strong emotional reaction, that is precisely when you should verify before amplifying.
- Support quality journalism: Fact-checking organizations and investigative newsrooms are the frontline defense. They need funding.
- Demand transparency: Ask your representatives to support mandatory AI-content labeling and platform accountability measures.
- Learn the tells: Familiarize yourself with common deepfake artifacts -- unnatural blinking, inconsistent lighting, audio-visual desync. They will not catch everything, but they raise your baseline awareness.
References
- Ajder, H., et al. "The State of Deepfakes." Deeptrace / Sensity AI. See also: AP News. "Slovakia election deepfakes." September 2023. apnews.com/article/slovakia-election-deepfakes-artificial-intelligence
- Romania Constitutional Court, Decision of December 6, 2024, annulling first-round presidential election results. See: Reuters, "Romania court annuls presidential election first round." reuters.com/world/europe/romania-court-annuls-presidential-election
- Goldstein, J., et al. "Generative Language Models and Automated Influence Operations." Stanford Internet Observatory / Georgetown CSET, 2023. arxiv.org/abs/2301.04246. India election coverage: MIT Technology Review, "The Era of AI Persuasion in Elections." December 2025. technologyreview.com/2025/12
- Federal Communications Commission. "FCC Makes AI-Generated Voices in Robocalls Illegal." Declaratory Ruling, February 8, 2024. fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal
- World Economic Forum. "Global Risks Report 2024." January 2024. weforum.org/publications/global-risks-report-2024
- Chesney, R. and Citron, D. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." 107 California Law Review 1753 (2019). papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
- Goodfellow, I., et al. "Generative Adversarial Networks." Advances in Neural Information Processing Systems (NeurIPS), 2014. arxiv.org/abs/1406.2661
- Ho, J., Jain, A., Abbeel, P. "Denoising Diffusion Probabilistic Models." NeurIPS, 2020. arxiv.org/abs/2006.11239
- Rombach, R., et al. "High-Resolution Image Synthesis with Latent Diffusion Models." CVPR, 2022. arxiv.org/abs/2112.10752
- Brooks, T., et al. "Video Generation Models as World Simulators." OpenAI Technical Report (Sora), February 2024. openai.com/index/video-generation-models-as-world-simulators
- Wang, C., et al. "Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers" (VALL-E). Microsoft Research, 2023. arxiv.org/abs/2301.02111
- Regula. "The Deepfake Trends 2024." Industry survey, 2024. regulaforensics.com/resources/deepfake-trends-2024
- SMU Data Science Review. "Bot Activity and Political Discourse on Social Media." Spring 2025. scholar.smu.edu/datasciencereview
- Lewandowsky, S., et al. "Misinformation and Its Correction: Continued Influence and Successful Debiasing." Psychological Science in the Public Interest 13(3), 2012. doi.org/10.1177/1529100612451018
- Groh, M., et al. "Deepfake Detection by Human Crowds, Machines, and Machine-Informed Crowds." PNAS, 2022. doi.org/10.1073/pnas.2110013119
- Alan Turing Institute (CETaS). "From Deepfake Scams to Poisoned Chatbots: AI and Election Security in 2025." Stockwell, S., November 2025. cetas.turing.ac.uk/publications/deepfake-scams-poisoned-chatbots
- Coalition for Content Provenance and Authenticity (C2PA). Technical Specification v2.1. c2pa.org/specifications
- Google DeepMind. "SynthID: Identifying AI-generated content." deepmind.google/technologies/synthid
- Full Fact. "Automated Fact Checking." fullfact.org/about/automated
- InVID / WeVerify. EU Horizon-funded video verification project. invid-project.eu
- EU DisinfoLab. "AI Against Disinformation" and the veraAI project. disinfo.eu/ai-against-disinformation
- European Parliament and Council. "Regulation (EU) 2024/1689 -- The AI Act." Art. 50 on Transparency for Deepfakes. eur-lex.europa.eu/eli/reg/2024/1689/oj
- European Parliament and Council. "Regulation (EU) 2022/2065 -- Digital Services Act." eur-lex.europa.eu/eli/reg/2022/2065/oj
- National Conference of State Legislatures (NCSL). "Deepfakes Legislation Tracker." ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation
- Cyberspace Administration of China. "Provisions on the Management of Deep Synthesis Internet Information Services." Effective January 10, 2023. cac.gov.cn/2022-12/11/c_1672221949354811
- Brennan Center for Justice. "Election Disinformation: The Threat in 2024." brennancenter.org/our-work/research-reports/election-disinformation
- Lessenski, M. "Media Literacy Index 2023." Open Society Institute Sofia / European Policies Initiative. osis.bg/?p=4243&lang=en