The Night a Democracy Was Annulled

On December 6, 2024, Romania's Constitutional Court did something unprecedented in European democratic history: it annulled a presidential election. Not because of ballot fraud. Not because of vote buying. Not because of counting irregularities. It annulled the election because a coordinated artificial intelligence operation — manipulated videos, amplifier bots, algorithmically optimized fake accounts — had contaminated the information ecosystem to the point where the result could no longer be considered the free expression of popular will [1].

No shots were fired. No parliament was occupied. But the election was dead.

Romania is not an isolated case. It is a harbinger. In 2024, more than four billion people in 64 countries went to the polls in what the media christened the "super election year" — and, simultaneously, the first major electoral cycle of the generative AI era. From the deepfake robocall impersonating Joe Biden that reached 25,000 New Hampshire voters in January, to the Chinese state-sponsored synthetic media operation in Taiwan confirmed by Microsoft as the first verified instance of a government using generative AI to influence a foreign election [2], the year produced an inventory of incidents that would have been science fiction three years earlier.

And all of it unfolded while global democracy was at its lowest point in four decades.

  • 91: autocracies worldwide, more than democracies for the first time in 20 years (V-Dem 2025)
  • 8%: share of citizens who feel "very confident" distinguishing real from AI-generated content (Carnegie 2026)
  • EUR 120M: fine imposed on X/Twitter for DSA violations, including ad transparency and data access (EU Commission 2025)
  • 26,000: families harmed by algorithmic discrimination in the Dutch childcare benefits scandal

The V-Dem Institute's Democracy Report 2025 registers 88 democracies against 91 autocracies, with 72% of the world's population living under authoritarian regimes — the highest proportion since 1978 [3]. This is the backdrop against which AI is reshaping democratic life. Not as a distant, speculative threat, but as an operational reality that European institutions are racing to govern.

This article argues that deepfakes, while the most visible threat, are only the surface layer of a deeper transformation. AI is reshaping the entire stack of democratic infrastructure: how citizens form beliefs (Section 2), how algorithms curate public discourse (Section 3), how governments make automated decisions about citizens' lives (Section 4), and how Europe is constructing the world's most comprehensive regulatory response (Section 5). Understanding this full picture is essential for anyone working in AI policy, democratic governance, or the intersection of both.

The Cognitive Attack Surface: What AI Does to Democratic Beliefs

The Apocalypse That Wasn't — and the One That Was

Let us be honest: the fear that 2024 would be the year deepfakes destroyed global democracy was, in large measure, exaggerated. An analysis from Harvard's Ash Center concluded that "there are no clear signals that AI changed the outcome of any election" — with the possible exception of Romania [4]. The Knight First Amendment Institute at Columbia was more direct: "Don't panic (yet)" [5].

But Bruce Schneier and Nathan Sanders, the authors of the Ash Center analysis, added a crucial nuance: the fact that the apocalypse did not arrive does not mean the danger is not real. It means we do not yet know how to measure it. And Nate Persily, of Stanford, offered the most precise formulation:

"The basic rule of AI and democracy is that it amplifies the capabilities of all actors — good and bad — in the system to achieve exactly the same objectives they have always had."

— Nate Persily, Stanford Law School. Ash Center / MIT Media Lab panel, 2024 [8].

AI did not invent disinformation. It did not invent electoral manipulation. What it did was dramatically reduce the cost, massively increase the scale, and, ironically, democratize access. Anyone with a laptop can now produce what previously required a production studio and a team of graphic designers. The barrier to entry collapsed.

The Liar's Dividend: Quantified

The most consequential research finding of the 2024 election cycle did not concern deepfakes that deceive the public. It concerned something more insidious: the possibility that the mere existence of deepfakes — regardless of whether they are used — is sufficient to poison democracy.

This is the "liar's dividend", a term coined in 2018 by Bobby Chesney and Danielle Citron [6]. But it was the study by Schiff, Schiff, and Bueno, published in the American Political Science Review (2024), that quantified it with five experiments and more than 15,000 participants [7]:

The Liar's Dividend — Experimental Evidence

When a politician accused of a scandal claims the evidence is a deepfake, their support increases — more than if they apologize, more than if they remain silent. The strategy works regardless of partisan affiliation. It is more effective against text and audio than video, but it works across all formats. The deepfake era does not only permit fabricating lies that look like truth. It permits dismissing truths by calling them lies.

Source: Schiff, K. J., Schiff, D. S. & Bueno, N. S. "The Liar's Dividend: Can Politicians Claim Misinformation to Evade Accountability?" American Political Science Review, pp. 1-20 (2024). 5 experiments, N > 15,000.

In India, during the 2024 elections, a candidate alleged that an audio recording in which he criticized his own party was a deepfake. Independent fact-checkers confirmed the audio was authentic. But by then, the narrative damage was done: doubt had been planted, and in an ecosystem of distrust, doubt is sufficient. As Persily put it: "The most ubiquitous phenomenon was not that people believed false things. It was that politicians dismissed true things as false" [8].

Three Mechanisms of Cognitive Distortion

Celeste Kidd (UC Berkeley) and Abeba Birhane (Mozilla Foundation) published a study in Science in 2023 that reframed the entire debate [9]:

"No malicious force is required to use the system intentionally to generate misinformation. The harms occur through the normal use of these systems."

— Celeste Kidd and Abeba Birhane, "How AI Can Distort Human Beliefs," Science, vol. 380 (2023) [9].

Their argument is as simple as it is devastating: no malicious actor is needed for AI to distort people's beliefs. Normal use of generative models is sufficient. LLMs hallucinate information, present biases as facts, and do so with a tone of authority that humans — evolutionarily wired to trust confident-sounding sources — absorb without question. The three documented mechanisms:

Three Mechanisms of AI-Driven Cognitive Distortion (Kidd & Birhane 2023)
  1. Confidence Bias: from infancy, humans grant more credibility to sources that exhibit confidence. AI models never doubt, never say "I don't know," never hesitate. This artificial certainty translates into perceived credibility — a heuristic exploit at the neurological level.
  2. Invisible Biases: many biases transmitted by models are undetectable by users, precisely because they consult the model to become informed. This creates a circularity problem: using defective lenses to verify whether the lenses are defective. Training data encodes historical power structures that emerge as "neutral" outputs.
  3. Resistant Beliefs: once a belief forms with high certainty — and AI produces responses that sound extremely confident — it becomes extraordinarily resistant to revision, even when faced with contradictory evidence. This is belief perseverance, amplified by algorithmic authority.

The implication for democracy is structural: disinformation does not need to be deliberately created to propagate. It emerges from the everyday use of tools that hundreds of millions of citizens consult as though they were encyclopedias. The Carnegie Endowment's 2026 survey found that 55% of respondents believe AI amplifies political polarization, 55% believe it increases political violence, and only 50% think it can help citizens participate more effectively in democracy — an exact split reflecting genuine societal uncertainty [10].

The Algorithmic Public Sphere: How Recommender Systems Shape Political Discourse

Before any deepfake reaches a voter, the information architecture has already been shaped by something more fundamental: recommender systems. These are the algorithms that determine what content appears in a citizen's feed on YouTube, TikTok, Instagram, or X — and therefore what political information they encounter, how it is framed, and which perspectives are amplified or suppressed.

The Technical Architecture of Attention Markets

Modern recommender systems are multi-stage pipelines. Taking YouTube as a representative example (based on Covington et al.'s foundational paper [11]), the architecture consists of:

  1. Candidate generation: from a corpus of hundreds of millions of videos, deep neural networks trained on user history narrow the field to hundreds of candidates in milliseconds.
  2. Ranking: a separate neural network scores each candidate using features including watch time, click-through rate, and engagement signals. The objective function is optimized for engagement — not for accuracy, not for democratic value, not for epistemic quality.
  3. Re-ranking: business rules and policy filters adjust the final order, including diversity requirements, freshness, and demotion of borderline content.

The critical insight is in step two. When the objective function maximizes engagement, the system structurally favors content that provokes strong emotional reactions — outrage, fear, moral indignation — because these emotions drive clicks, comments, and shares. Political extremism is, computationally speaking, engaging content.
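The two-stage logic can be made concrete with a deliberately simplified sketch. All item fields, weights, and scores below are invented for illustration; production systems use learned models over thousands of features, not hand-written formulas:

```python
# Illustrative sketch of a two-stage recommender (not any platform's actual code).
# Stage 1 narrows a large corpus; stage 2 ranks by predicted engagement.
# Fields and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    id: str
    topic: str
    predicted_watch_time: float  # minutes, from an upstream model
    predicted_ctr: float         # click-through probability
    outrage_score: float         # emotional-arousal proxy, 0..1

def generate_candidates(corpus, user_topics, k=3):
    # Stage 1: cheap filter from a huge corpus down to a handful of candidates.
    matched = [it for it in corpus if it.topic in user_topics]
    return matched[:k]

def rank_by_engagement(candidates):
    # Stage 2: the objective is engagement, not accuracy or epistemic quality.
    # High-arousal content wins because it drives watch time and clicks.
    def score(it):
        return it.predicted_watch_time * it.predicted_ctr * (1 + it.outrage_score)
    return sorted(candidates, key=score, reverse=True)

corpus = [
    Item("calm-explainer", "politics", 6.0, 0.04, 0.1),
    Item("outrage-clip", "politics", 5.0, 0.06, 0.9),
    Item("cat-video", "pets", 3.0, 0.08, 0.0),
]
feed = rank_by_engagement(generate_candidates(corpus, {"politics"}))
print([it.id for it in feed])  # the outrage clip outranks the calmer explainer
```

Even with these toy numbers, the structural point is visible: multiplying an arousal term into the objective is enough to push inflammatory content to the top of the feed.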

The Radicalization Pathway Debate

Ribeiro et al. (FAT* 2020) conducted the most comprehensive audit of radicalization pathways on YouTube: 330,925 videos across 349 channels, with 72 million comments analyzed [12]. Their key findings: users consistently migrated from milder to more extreme content, Alt-lite content was easily reachable from Intellectual Dark Web channels, and the migration was "not only consistent throughout the years, but also significant in its absolute quantity."

The study has important caveats — the lead author acknowledged that "it's very hard to make the claim that the radicalization is due to YouTube or due to some recommender system," and subsequent research has complicated the picture. But the European policy significance lies not in proving causation for any single radicalization event, but in recognizing that engagement-optimized algorithms create structural incentives for polarization at a systems level — regardless of whether any individual user is "radicalized."

Europe's Response: The DSA and Algorithmic Accountability

The EU's Digital Services Act (DSA) represents the world's first binding regulatory framework for algorithmic accountability. For Very Large Online Platforms (VLOPs) — those with over 45 million monthly active users in the EU — the DSA imposes obligations that directly address the recommender system problem:

DSA Algorithmic Accountability Stack
  • Art. 27: disclose the main parameters of recommender systems in plain language, including user options to modify them. Democratic significance: citizens can understand and control the algorithms curating their information diet.
  • Art. 34: annual systemic risk assessments, including risks to democratic processes and fundamental rights. Democratic significance: platforms must proactively identify how their systems threaten democracy.
  • Art. 35: risk mitigation measures, including adjustment of recommender system parameters. Democratic significance: platforms must act on identified democratic risks, not merely report them.
  • Art. 37: independent audits of compliance with systemic risk obligations. Democratic significance: third-party accountability for algorithmic impacts.
  • Art. 38: offer at least one recommender option not based on profiling as defined in the GDPR. Democratic significance: users can opt out of personalized algorithmic curation.
  • Art. 40: data access for vetted researchers studying systemic risks. Democratic significance: independent scrutiny of algorithmic effects on democracy.

To enforce these obligations, the European Commission established the European Centre for Algorithmic Transparency (ECAT) within its Joint Research Centre in Seville in April 2023. ECAT provides the technical expertise to inspect algorithmic systems, assess VLOP compliance, and develop methodologies for measuring systemic risks [13]. The Commission can order platforms to explain "the design, logic and functioning of their algorithmic systems, including their recommender systems" — and non-compliance carries fines of up to 6% of global annual turnover.

This framework has already produced enforcement action. In 2025, the Commission found X/Twitter in breach of DSA rules on deceptive design, advertising transparency, and researcher data access, imposing a fine of EUR 120 million [13]. The signal is clear: algorithmic accountability in the EU is not aspirational. It is operational.

When Algorithms Govern: The Other Democratic Threat

The public conversation about AI and democracy focuses overwhelmingly on external threats: deepfakes, bots, foreign interference. But there is a second vector, equally consequential and far less discussed: the use of AI by democratic governments themselves to make automated decisions about citizens' lives. When these systems fail, they do not just produce bad outcomes. They erode the legitimacy of democratic institutions from the inside.

The Dutch Childcare Benefits Scandal (Toeslagenaffaire)

The most devastating European case study of algorithmic governance failure is the Dutch childcare benefits scandal. The Dutch Tax Authority used an algorithmic system to flag parents suspected of fraudulently claiming childcare benefits. The system disproportionately targeted families with dual nationality — particularly those of Moroccan, Turkish, and Surinamese descent — in what investigators subsequently characterized as institutional algorithmic discrimination [14].

The consequences were catastrophic. Over 26,000 families were falsely accused of fraud. Many were forced to repay tens of thousands of euros they did not owe. Families lost homes, marriages collapsed, children were placed in foster care. The scale of injustice was so severe that the entire Dutch government of Prime Minister Mark Rutte resigned in January 2021 — the first time in European history that a government fell because of an algorithmic system.

The Structural Lesson

The toeslagenaffaire revealed that the threat AI poses to democracy is not exclusively external. When democratic governments deploy opaque algorithmic systems to make consequential decisions about citizens' welfare, education, housing, or legal status — without adequate transparency, contestability, or redress mechanisms — they undermine the very democratic legitimacy they are meant to serve. The algorithm becomes an instrument of administrative arbitrariness, precisely the kind of arbitrary power that democratic institutions exist to prevent.

The SyRI Ruling: Europe's First Algorithmic Rights Case

In February 2020, the District Court of The Hague struck down SyRI (System Risk Indication), a Dutch government system that cross-referenced data from multiple government databases to identify citizens suspected of welfare fraud, tax evasion, or labor law violations [15]. The court ruled that SyRI violated Article 8 of the European Convention on Human Rights (right to private life) because:

  • The system's risk models were opaque — citizens could not know what data was used or how risk scores were calculated.
  • SyRI was disproportionately deployed in low-income and minority neighborhoods, creating a two-tier system of government scrutiny.
  • There were inadequate safeguards against errors and no effective mechanism for citizens to challenge automated decisions.

The SyRI ruling was a landmark: the first time a European court struck down an algorithmic surveillance system on human rights grounds. It established the principle that algorithmic governance must meet the same standards of transparency, proportionality, and non-discrimination that apply to all exercises of state power.

The Fairness Automation Problem

The academic foundation for understanding these failures was laid by Sandra Wachter, Brent Mittelstadt, and Chris Russell in their influential paper "Why Fairness Cannot Be Automated" (2021) [16]. Their core argument: mathematical definitions of algorithmic fairness are necessarily incomplete because fairness is a contextual, contested, and inherently political concept. No single metric — demographic parity, equalized odds, individual fairness — can capture what fairness means in every context. The choice among fairness criteria is itself a normative choice that algorithms cannot make.
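A toy example makes the incompatibility concrete. With hypothetical decisions and labels for two groups, the same set of decisions can satisfy demographic parity (equal selection rates) while violating equalized odds (unequal true-positive rates):

```python
# Toy illustration (hypothetical data): the same decisions satisfy one fairness
# metric and violate another, so "fair" is not a single computable test.

def rates(decisions, labels):
    # Returns (selection rate, true-positive rate) for one group.
    sel = sum(decisions) / len(decisions)
    pos = [d for d, y in zip(decisions, labels) if y == 1]
    tpr = sum(pos) / len(pos) if pos else 0.0
    return sel, tpr

# Same selection rate in both groups, but qualified people in group B
# are selected less often than qualified people in group A.
dec_a, lab_a = [1, 1, 1, 0], [1, 1, 1, 0]
dec_b, lab_b = [1, 1, 1, 0], [1, 0, 0, 1]

sel_a, tpr_a = rates(dec_a, lab_a)
sel_b, tpr_b = rates(dec_b, lab_b)

print(sel_a == sel_b)  # demographic parity holds: equal selection rates
print(tpr_a == tpr_b)  # equalized odds fails: TPR is 1.0 vs 0.5
```

Choosing which of the two checks should govern a welfare or hiring system is a normative decision, not a mathematical one — exactly Wachter, Mittelstadt, and Russell's point.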

For democratic governance, this has a profound implication: automated decisions about citizens' lives cannot be fully delegated to algorithms. They require human judgment, institutional accountability, and democratic oversight. The EU AI Act's classification of AI systems used in welfare, justice, immigration, and education as "high-risk" (requiring conformity assessments, human oversight, and transparency documentation) is, in part, a regulatory response to this insight.

Europe's Regulatory Architecture: The World's Most Comprehensive Framework

Europe has constructed a regulatory apparatus for AI and democratic processes that has no parallel anywhere in the world. It rests on four mutually reinforcing pillars, each addressing a different dimension of the AI-democracy intersection:

Europe's Four-Pillar AI-Democracy Regulatory Stack
1
EU AI Act
Regulation 2024/1689 [24]. Risk-based classification: prohibited practices (Art. 5), high-risk systems including those used in democratic processes (Art. 6), and transparency obligations for deepfakes and AI-generated content (Art. 50). Fines up to EUR 35M or 7% global turnover.
2
Digital Services Act
DSA [25]. Systemic risk assessments for VLOPs (Art. 34), algorithmic transparency (Art. 27), researcher data access (Art. 40), and content moderation accountability. Enforced via ECAT. Fines up to 6% global turnover.
3
Political Advertising Transparency
TTPA (Transparency and Targeting of Political Advertising) Regulation. Specialized requirements for political advertisements: targeting restrictions, mandatory transparency labels, and prohibition of micro-targeting based on sensitive data categories. Complements the DSA's general advertising transparency rules.
4
European Media Freedom Act
Media provider protections, AI-generated content provisions (Art. 17), audience measurement transparency, spyware protections for journalists [26]. Applied from August 2025.

The AI Act and Democratic Processes: Articles 5 and 50

The EU AI Act's treatment of democratic processes operates at two levels. Article 5 establishes outright prohibitions on AI practices deemed incompatible with EU values — including subliminal manipulation techniques, social scoring by public authorities, and real-time biometric surveillance in public spaces (with narrow exceptions). These prohibitions took effect in February 2025.

Article 50 addresses deepfakes specifically, requiring that providers of AI systems ensure machine-readable marking and detectability of AI-generated or manipulated content, and that deployers disclose when AI is used to create realistic synthetic content. The AI Office has initiated a Code of Practice on AI-Generated Content to operationalize Article 50, proposing a "Common Icon" — an interim label containing the acronym "AI" or its language equivalents ("KI" in German, "IA" in French and Spanish) — pending a finalized EU-wide symbol [17].

However, a significant gap exists. An academic paper by Guilherme Cunha (available on SSRN) argues that AI-generated deepfakes used in electoral contexts should be reclassified from "limited risk" under Article 50 to prohibited practices under Article 5, citing "the polarization of societies and enhancement of disenchantment with electoral processes and democratic institutions" [18]. Under the current framework, electoral deepfakes require only transparency labeling — a measure that critics argue is insufficient given that labels can be removed, ignored, or never seen in viral distribution chains.

Enforcement Reality: The X/Twitter Case

The DSA's credibility as a democratic safeguard depends on enforcement. The Commission's action against X/Twitter in 2025 tested this. The Commission found X in breach of DSA rules on deceptive design (the verified badge system created an "unjustified distinction" between verified and unverified users), advertising transparency (the ad repository failed to meet requirements), and researcher data access (terms of service and pricing structures effectively blocked independent research) [13]. The EUR 120 million fine and compliance orders demonstrated that the DSA enforcement machinery is operational — though critics argue the fine represents a fraction of X's revenue and may not deter future violations.

Global Regulatory Landscape: Europe vs. the Rest

AI and Electoral Integrity: Regulatory Comparison (as of early 2026)
  • European Union: AI Act + DSA + TTPA + EMFA. Scope: comprehensive — prohibited practices, systemic risk, algorithmic transparency, political advertising, media freedom. Status: in force (phased).
  • United States (federal): no binding federal legislation. Scope: FCC ruling on AI-generated robocalls (Feb 2024); no comprehensive framework. Status: stalled.
  • United States (states): state laws (NY, FL, TX, AZ, MN, etc.). Scope: disclosure of AI-generated content in political advertising. Status: fragmented.
  • Brazil: TSE Resolution 23.732 (2024). Scope: prohibits deepfakes in electoral campaigns. Status: in force.
  • United Kingdom: Online Safety Act (2023) + AI Safety Institute. Scope: content moderation obligations; AI testing capabilities. Status: in force.
  • China: Deep Synthesis Provisions (2023) + Generative AI Measures. Scope: mandatory watermarking, content review, real-name registration. Status: in force.

The contrast is stark. Europe has a four-pillar, mutually reinforcing regulatory architecture. The United States has no binding federal legislation and a fragmented patchwork of state laws. This divergence is not merely a difference in regulatory philosophy — it has direct consequences for the protection of democratic processes in an era where AI-generated threats are transnational and platform-mediated.

AI for Democracy: The Other Side of the Ledger

It would be irresponsible — and contrary to the evidence — to tell only half the story. The same technology that threatens electoral integrity has applications that are revitalizing democratic participation in ways that would have been impossible five years ago.

The Habermas Machine: AI-Mediated Deliberation

Researchers from Google DeepMind and other institutions developed the Habermas Machine — a language model trained to facilitate citizen deliberations on divisive topics including immigration and climate policy. The results were striking: statements generated through AI-mediated deliberation were 56% more likely to be accepted by all participants compared to human-only group statements, without silencing minority voices [19]. The system works by identifying bridging positions — formulations that capture common ground across opposing viewpoints — rather than optimizing for engagement or agreement with the majority.

The technical architecture is instructive. Unlike conventional recommender systems that optimize for engagement (which structurally favors polarization), the Habermas Machine uses a bridging-based objective function that rewards cross-partisan agreement. This is the same principle behind Twitter/X's Community Notes system (originally Birdwatch), where notes that receive approval from users across the political spectrum are surfaced preferentially. The insight: the choice of objective function determines whether AI amplifies polarization or facilitates consensus.
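The contrast between the two objective functions can be sketched in a few lines. The two opinion clusters, the binary ratings, and the min-of-approvals rule below are simplifying assumptions for illustration; Community Notes' production algorithm actually uses matrix factorization over rater viewpoints:

```python
# Sketch: engagement objective vs. bridging objective for rating a note.
# Ratings are 1 = helpful, 0 = not helpful, split into two opinion clusters.
# This min-of-approvals proxy is an assumption, not the deployed algorithm.

def engagement_score(ratings):
    # Engagement objective: total interactions, regardless of who agrees.
    return len(ratings["left"]) + len(ratings["right"])

def bridging_score(ratings):
    # Bridging objective: reward agreement from BOTH sides. Taking the minimum
    # of the per-cluster approval rates means one-sided applause scores low.
    def approval(rs):
        return sum(rs) / len(rs) if rs else 0.0
    return min(approval(ratings["left"]), approval(ratings["right"]))

partisan_note = {"left": [1, 1, 1, 1, 1, 1], "right": [0, 0, 0, 0]}
bridging_note = {"left": [1, 1, 1, 0], "right": [1, 1, 0]}

print(engagement_score(partisan_note) > engagement_score(bridging_note))  # True
print(bridging_score(bridging_note) > bridging_score(partisan_note))      # True
```

The partisan note wins under engagement (more total reactions) but loses under bridging (zero cross-cluster approval), which is precisely the inversion the Habermas Machine and Community Notes exploit.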

Decidim: Barcelona's Open-Source Democracy Platform

Barcelona's Decidim platform — Catalan for "we decide" — is perhaps the most mature example of technology-mediated participatory democracy in Europe. Launched in 2016, Decidim is open-source software used by over 450 organizations across 35 countries for participatory budgeting, collaborative legislative drafting, and citizen assemblies [20]. It was used for the EU's Conference on the Future of Europe and has been adopted by cities from Helsinki to Mexico City.

The integration of AI into platforms like Decidim and Pol.is (which uses principal component analysis for opinion mapping) opens new possibilities: automated clustering of citizen proposals, real-time sentiment analysis of deliberation transcripts, and multilingual participation without interpretation bottlenecks. The OECD documented in 2025 that governments using AI to analyze citizen comments in public consultations reduced processing time by 50% while handling volumes that were previously impossible: 4,000 detailed responses on land-use planning in Fort Collins, Colorado; 1,000 testimonies from residents after the 2025 Los Angeles fires, transformed into actionable policy inputs while preserving citizens' original language [21].
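Pol.is-style opinion mapping can be illustrated with a minimal, dependency-free sketch: principal component analysis via power iteration on a hypothetical participant-by-statement vote matrix, where the sign of each participant's projection separates the opinion camps. The vote data is invented, and the real pipeline adds centering, multiple components, and clustering:

```python
# Sketch of Pol.is-style opinion mapping: PCA (via power iteration) on a
# participant x statement vote matrix (+1 agree, -1 disagree, 0 pass).
# Toy votes; the production system is considerably richer.

def principal_axis(rows, iters=200):
    # Power iteration on X^T X to find the first principal direction.
    n_cols = len(rows[0])
    v = [1.0] * n_cols
    for _ in range(iters):
        # w = X^T (X v)
        xv = [sum(r[j] * v[j] for j in range(n_cols)) for r in rows]
        w = [sum(rows[i][j] * xv[i] for i in range(len(rows)))
             for j in range(n_cols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Six participants voting on four statements: two rough opinion camps.
votes = [
    [1, 1, -1, -1],
    [1, 1, -1, 0],
    [1, 0, -1, -1],
    [-1, -1, 1, 1],
    [-1, -1, 1, 0],
    [0, -1, 1, 1],
]
axis = principal_axis(votes)
# Project each participant onto the first principal axis; the sign of the
# projection separates the two camps.
coords = [sum(r[j] * axis[j] for j in range(4)) for r in votes]
camps = [c > 0 for c in coords]
print(camps)  # first three participants land on one side, last three on the other
```

A moderator never reads the raw vote matrix; the projection surfaces the camp structure directly, which is what makes this approach scale to thousands of participants.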

Documented Positive AI Applications in Democratic Processes
  • AI-mediated deliberation (Habermas Machine): +56% consensus
  • Claim verification (ClaimBuster): >80% precision
  • Citizen comment processing (OECD): 50% time saved
  • Multilingual participation: universal access

Sources: Habermas Machine (Google DeepMind / Patterns, 2025); ClaimBuster (Hassan et al., University of Texas, 2017); OECD "Governing with AI" (2025). Interpretation boundary: these are early results from specific deployments, not generalizable across all democratic contexts.

Fact-Checking at Scale

An experimental study by Viorela Dan (2025), with 2,085 participants, quantified something critical: deepfake videos showing a fabricated scandal — whether sexual, corruption-related, or prejudice-related — cause substantial reputational damage to the targeted politician "regardless of whether the technique was cheap or sophisticated" [22]. But the good news: exposure to a journalistic fact-check reduced and in some cases completely eliminated the negative effect. Fact-checking works. But it arrives late.

AI-powered fact-checking tools like ClaimBuster (University of Texas) achieve precision rates above 80% in identifying check-worthy claims in political discourse [23], allowing human fact-checkers to focus their limited capacity on the claims that matter most. The European Digital Media Observatory (EDMO) coordinates fact-checking organizations across EU member states, and the integration of AI verification tools into this network represents one of the most promising applications of AI for democratic resilience.
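The underlying idea — ranking sentences by check-worthiness so that scarce human attention goes to the most checkable claims first — can be shown with a toy scorer. The features and weights below are invented for this sketch; the real ClaimBuster is a supervised classifier trained on labeled political debate sentences, not a rule list:

```python
# Toy check-worthiness scorer (illustrative only; not ClaimBuster's model).
# Sentences with statistics and sweeping factual claims rank high; hedged
# personal opinion ranks low.

import re

def check_worthiness(sentence):
    score = 0.0
    if re.search(r"\d", sentence):
        score += 0.4  # numbers and statistics are checkable
    if re.search(r"\b(increased|decreased|doubled|fell|rose)\b", sentence, re.I):
        score += 0.3  # claims about change over time
    if re.search(r"\b(all|never|every|most)\b", sentence, re.I):
        score += 0.2  # sweeping quantifiers
    if re.search(r"\b(I think|maybe|probably)\b", sentence, re.I):
        score -= 0.3  # hedged opinion, hard to fact-check
    return score

speech = [
    "Unemployment fell by 40% during my term.",
    "I think our country deserves better.",
    "Every school in the region received new funding.",
]
ranked = sorted(speech, key=check_worthiness, reverse=True)
print(ranked[0])  # the statistical claim is routed to human fact-checkers first
```

Triage, not verdicts, is the point: the machine orders the queue, and humans still decide what is true.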

A European Framework for Democratic AI

The regulatory architecture exists. The academic evidence is increasingly clear. The question is whether Europe can translate legal frameworks into operational democratic protection. I propose five priorities, grounded in the evidence reviewed above:

Five Priorities for Democratic AI in Europe
1
Close the Electoral Deepfake Gap
Reclassify AI-generated deepfakes used in electoral contexts from "limited risk" (Art. 50 transparency) to "prohibited practices" (Art. 5) during defined electoral periods. The current labeling requirement is structurally insufficient against viral distribution, as labels are stripped in resharing and never seen by most viewers.
2
Mandate Bridging-Based Alternatives
Require VLOPs to offer at least one recommender system mode that optimizes for cross-partisan bridging rather than engagement — following the Habermas Machine and Community Notes model. DSA Art. 38 already requires a non-profiling option; this extends the principle to objective function transparency.
3
Algorithmic Impact Assessments for Public Sector AI
Every EU member state should mandate public algorithmic impact assessments for AI systems used in welfare, immigration, justice, and education — the domains where the toeslagenaffaire and SyRI failures occurred. The EU AI Act classifies these as high-risk; the next step is mandatory pre-deployment public scrutiny.
4
Fund Democratic AI Infrastructure
Invest in open-source deliberation tools (Decidim, Pol.is-style platforms), AI-powered fact-checking networks (EDMO), and the ECAT institutional capacity to audit algorithmic systems at scale. Democratic AI is a public good that requires public investment.
5
AI Literacy as Democratic Infrastructure
The 8% confidence figure is a democratic emergency. AI literacy — understanding how algorithms curate information, how LLMs hallucinate, how deepfakes work — should be embedded in secondary education curricula across the EU. This is not media literacy as a complement; it is democratic infrastructure, as essential as the ability to read.

Paleolithic Emotions, Medieval Institutions, Divine Technology

Nate Persily closed one of his interventions by borrowing E. O. Wilson's formulation, a phrase that condenses the entire dilemma: "We have paleolithic emotions, medieval institutions, and divine technology" [8].

AI is neither good nor bad for democracy. It is an amplifier. It amplifies the capacity to disinform — but also to verify. It amplifies the reach of manipulation — but also of citizen participation. It amplifies the speed of lies — but also of AI-assisted fact-checking. The liar's dividend degrades trust in all evidence (Schiff et al. 2024); but bridging-based AI can rebuild consensus across political divides (Habermas Machine). Opaque algorithmic governance can destroy 26,000 families' lives (toeslagenaffaire); but transparent algorithmic tools can process 4,000 citizen testimonies into actionable policy (OECD 2025).

Europe is not a bystander in this transformation. It is the jurisdiction that has built the most comprehensive regulatory architecture — AI Act, DSA, TTPA, EMFA — to govern the AI-democracy intersection. Whether this architecture proves sufficient depends on three factors:

  • Enforcement velocity. The EUR 120M X/Twitter fine signals seriousness, but the DSA's credibility will be tested by cases involving larger platforms with more sophisticated compliance teams.
  • Institutional capacity. ECAT in Seville, national Digital Services Coordinators, the AI Office — these institutions need sustained funding, technical talent, and political independence.
  • Democratic investment. The positive-use applications — deliberation tools, fact-checking networks, AI-powered civic participation — require public investment at a scale commensurate with the threat. Regulation alone is defense. Democratic AI infrastructure is offense.

The Bottom Line

The AI-democracy challenge is not primarily a technology problem. It is an institutional problem. Europe has the regulatory frameworks. It has the academic expertise (from V-Dem in Gothenburg to ECAT in Seville to Wachter at Oxford). It has the democratic traditions that give these frameworks legitimacy. What it needs now is the political will to enforce them, the investment to build democratic AI infrastructure, and the civic education to ensure that citizens can navigate an algorithmic public sphere with the same critical capacity they bring to reading a newspaper. The technology is divine. The institutions need not remain medieval.

At Saturdays.AI, we taught 30,000 people across 12 countries that AI is not magic reserved for Silicon Valley — it is a tool for anyone with curiosity and determination. That same principle applies to AI and democracy. Every citizen, in every member state, deserves to understand the algorithms that shape their political reality — and to have democratic institutions capable of holding those algorithms accountable.

Human first. AI frontier. Democracy always.

References

  1. Romania Constitutional Court, Decision of December 6, 2024. See: CNBC/Reuters, "Romanian top court annuls presidential election result." cnbc.com/2024/12/06/romanian-top-court-annuls-presidential-election-result. Atlantic Council analysis: atlanticcouncil.org (Romania election analysis)
  2. Federal Communications Commission. "FCC Makes AI-Generated Voices in Robocalls Illegal." February 8, 2024. fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal. On Taiwan: Microsoft Threat Intelligence, AI-enabled influence operations in Taiwan elections.
  3. Lührmann, A., Lindberg, S. et al. "Democracy Report 2025: 25 Years of Autocratization — Democracy Trumped?" V-Dem Institute, University of Gothenburg, 2025. v-dem.net/documents/44/v-dem_dr2025_highres.pdf
  4. Schneier, B. & Sanders, N. "The Apocalypse That Wasn't: AI Was Everywhere in 2024's Elections." Ash Center, Harvard Kennedy School, 2024. ash.harvard.edu (AI in 2024 elections)
  5. Simon, F. & Altay, S. "Don't Panic (Yet): Assessing the Evidence and Discourse Around Generative AI and Elections." Knight First Amendment Institute, Columbia University, 2025. knightcolumbia.org/content/dont-panic-yet
  6. Chesney, R. & Citron, D. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." 107 California Law Review 1753 (2019). papers.ssrn.com (Chesney & Citron 2019)
  7. Schiff, K. J., Schiff, D. S. & Bueno, N. S. "The Liar's Dividend: Can Politicians Claim Misinformation to Evade Accountability?" American Political Science Review, 2024, pp. 1-20. doi.org/10.1017/S0003055424000285
  8. Persily, N. et al. "The Digitalist Papers." Hoover Institution, Stanford University, 2024. hoover.org/research/digitalist-papers
  9. Kidd, C. & Birhane, A. "How AI Can Distort Human Beliefs." Science, vol. 380, issue 6651, pp. 1222-1223, 2023. doi.org/10.1126/science.adi0248
  10. George, J. & Klaus, I. "AI and Democracy: Mapping the Intersections." Carnegie Endowment for International Peace, January 2026. carnegieendowment.org (AI and Democracy)
  11. Covington, P., Adams, J., & Sargin, E. "Deep Neural Networks for YouTube Recommendations." Proceedings of the 10th ACM Conference on Recommender Systems, 2016. doi.org/10.1145/2959100.2959190
  12. Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W. "Auditing Radicalization Pathways on YouTube." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), Barcelona, Spain. doi.org/10.1145/3351095.3372879
  13. European Commission. "European Centre for Algorithmic Transparency (ECAT)." Joint Research Centre, Seville. algorithmic-transparency.ec.europa.eu. On X/Twitter enforcement: European Commission press release, 2025.
  14. Amnesty International & Platform Investico. "Xenophobic Machines: Discrimination Through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal." 2021. See also: Dutch Parliamentary Inquiry Commission Report, "Ongekend Onrecht" (Unprecedented Injustice), 2020.
  15. District Court of The Hague. "SyRI Judgment." Case C/09/550982 / HA ZA 18-388, February 5, 2020. uitspraken.rechtspraak.nl (SyRI ruling)
  16. Wachter, S., Mittelstadt, B. & Russell, C. "Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI." Computer Law & Security Review, vol. 41, 2021. doi.org/10.1016/j.clsr.2021.105567
  17. European Commission. "Code of Practice on Marking and Labelling of AI-Generated Content." AI Office, 2025. digital-strategy.ec.europa.eu (Code of Practice)
  18. Cunha, G. "Reclassifying Electoral Deepfakes as Prohibited Practices under EU AI Act Article 5." SSRN Working Paper. papers.ssrn.com/abstract_id=5978654
  19. Tessler, M. H. et al. "AI Can Help Humans Find Common Ground in Democratic Deliberation." Science, vol. 386, issue 6719, 2024. See also: Google DeepMind, "The Habermas Machine." doi.org/10.1126/science.adq2852
  20. Decidim. "Free Open-Source Participatory Democracy for Cities and Organizations." decidim.org. See also: European Commission, "Conference on the Future of Europe" digital platform (powered by Decidim).
  21. OECD. "Governing with Artificial Intelligence: AI in Civic Participation and Open Government." OECD Publishing, Paris, 2025. oecd.org (Governing with AI)
  22. Dan, V. "Deepfakes as a Democratic Threat: Experimental Evidence Shows Noxious Effects That Are Reducible Through Journalistic Fact Checks." International Journal of Press/Politics (SAGE), 2025.
  23. Hassan, N. et al. "ClaimBuster: The First-Ever End-to-End Fact-Checking System." Proceedings of the VLDB Endowment, vol. 10, no. 12, 2017. doi.org/10.14778/3137765.3137815
  24. European Parliament and Council. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act)." eur-lex.europa.eu/eli/reg/2024/1689/oj
  25. European Parliament and Council. "Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act)." eur-lex.europa.eu/eli/reg/2022/2065/oj
  26. European Parliament and Council. "Regulation (EU) 2024/1083 Establishing a Common Framework for Media Services (European Media Freedom Act)." eur-lex.europa.eu/eli/reg/2024/1083/oj
  27. Helberger, N. "The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power." Digital Journalism, vol. 8, no. 6, pp. 842-854, 2020. doi.org/10.1080/21670811.2020.1773888
  28. World Economic Forum. "Global Risks Report 2024." January 2024. weforum.org/publications/global-risks-report-2024
  29. European Commission. "European Democracy Action Plan." COM(2020) 790 final, December 2020. eur-lex.europa.eu (EDAP)
  30. EPD / European Partnership for Democracy. "AI and Elections." September 2024. epd.eu (AI and elections PDF)