Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks

Oier Mentxaka, Natalia Díaz-Rodríguez, Mark Coeckelbergh, Marcos López de Prado, Emilia Gómez, David Fernández Llorca, Enrique Herrera-Viedma, Francisco Herrera

This paper introduces a structured, actionable framework for understanding how artificial intelligence (AI) intersects with democratic governance. Rather than treating AI as uniformly beneficial or harmful, the authors develop a dual taxonomy that maps both the risks AI poses to democracy and the positive contributions it can make.

The Dual Taxonomy

1. AIRD: AI Risks to Democracy

Categorised into seven domains that reflect foundational democratic values:

  • Autonomy: AI may undermine personal agency through manipulation, surveillance, or opaque systems.

  • Participation: Algorithmic gatekeeping and misinformation can distort public engagement.

  • Deliberation: Recommendation systems may polarise discourse or fragment the public sphere.

  • Representation: Bias in datasets or models can marginalise groups or distort electoral fairness.

  • Transparency: AI systems are often opaque, limiting democratic oversight.

  • Accountability: Decision-making by AI can blur responsibility.

  • Trust: Misuse or overreach of AI can erode confidence in democratic institutions.

2. AIPD: AI’s Positive Contributions to Democracy

These contributions are mapped across the same democratic values:

  • Enhancing participation via personalised civic engagement tools.

  • Improving deliberation through fact-checking and argument diversity.

  • Boosting efficiency and evidence-based policymaking through data analysis.

  • Reinforcing transparency, accountability, and fairness through AI-assisted auditing and oversight.

Normative & Policy Framework

The paper draws heavily on the European Union’s ethical AI governance framework, particularly the seven requirements for Trustworthy AI proposed by the EU High-Level Expert Group on AI. These are:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination and fairness

  • Societal and environmental well-being

  • Accountability

Each risk in the AIRD taxonomy is aligned with corresponding mitigation strategies grounded in EU regulation and governance principles.
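Since the paper frames this alignment as actionable, the sketch below shows one way the mapping could be encoded for systematic review. It is a minimal illustration in Python: the risk domains and requirement names come from the taxonomy and the HLEG guidelines above, but the specific risk-to-requirement pairings are hypothetical placeholders, not the paper’s own alignment.

from enum import Enum


class RiskDomain(Enum):
    """The seven AIRD risk domains."""
    AUTONOMY = "autonomy"
    PARTICIPATION = "participation"
    DELIBERATION = "deliberation"
    REPRESENTATION = "representation"
    TRANSPARENCY = "transparency"
    ACCOUNTABILITY = "accountability"
    TRUST = "trust"


class Requirement(Enum):
    """The seven EU HLEG requirements for Trustworthy AI."""
    HUMAN_AGENCY_AND_OVERSIGHT = "human agency and oversight"
    ROBUSTNESS_AND_SAFETY = "technical robustness and safety"
    PRIVACY_AND_DATA_GOVERNANCE = "privacy and data governance"
    TRANSPARENCY = "transparency"
    DIVERSITY_AND_FAIRNESS = "diversity, non-discrimination and fairness"
    SOCIETAL_WELLBEING = "societal and environmental well-being"
    ACCOUNTABILITY = "accountability"


# Hypothetical alignment table: each risk domain points to the requirements
# most relevant to mitigating it. The pairings here are illustrative only;
# the paper's own mapping should be substituted.
MITIGATIONS: dict[RiskDomain, list[Requirement]] = {
    RiskDomain.AUTONOMY: [
        Requirement.HUMAN_AGENCY_AND_OVERSIGHT,
        Requirement.PRIVACY_AND_DATA_GOVERNANCE,
    ],
    RiskDomain.REPRESENTATION: [Requirement.DIVERSITY_AND_FAIRNESS],
    RiskDomain.TRANSPARENCY: [Requirement.TRANSPARENCY],
    RiskDomain.ACCOUNTABILITY: [Requirement.ACCOUNTABILITY],
    # Remaining domains would be filled in from the paper itself.
}


def mitigations_for(risk: RiskDomain) -> list[Requirement]:
    """Return the Trustworthy AI requirements aligned with a given risk."""
    return MITIGATIONS.get(risk, [])


if __name__ == "__main__":
    for requirement in mitigations_for(RiskDomain.AUTONOMY):
        print(requirement.value)

Encoding the taxonomy this way lets researchers or auditors query which governance requirements apply to a given democratic risk, which is the kind of systematic evaluation the framework is meant to support.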

Purpose and Use

The framework is intended to:

  • Help researchers evaluate AI’s democratic impact systematically.

  • Equip policymakers with actionable tools for ethical oversight and regulation.

  • Guide technologists in designing AI systems aligned with democratic values.

This approach bridges ethical theory with regulatory practice, offering a conceptual and operational guide for safeguarding democracy in an AI-driven world.
