Responsible Artificial Intelligence Systems: A Roadmap to Society’s Trust Through Trustworthy AI, Auditability, Accountability, and Governance
Andrés Herrera-Poyatos, Javier Del Ser, Marcos López de Prado, Fei-Yue Wang, Enrique Herrera-Viedma, Francisco Herrera
This paper sets out a comprehensive roadmap for the design and deployment of Responsible AI Systems that can earn and sustain societal trust. It argues that AI has moved beyond the experimental stage and now requires robust governance frameworks to ensure its ethical, safe, and accountable use, especially in high-risk domains.
The Four Dimensions of Responsible AI
The paper presents a holistic framework structured around four interdependent dimensions:
1. Regulatory Context
Focuses on the legal frameworks that govern AI use, including horizontal (e.g. GDPR) and vertical (e.g. health-specific) regulations. The authors emphasise the role of the EU AI Act as a model for risk-based governance and the need for harmonised legal oversight.
2. Trustworthy AI Technologies
Explores the technical foundations of responsible AI, including:
Explainability and robustness
Bias mitigation and fairness
Security and privacy-preserving methods
It highlights the importance of standards and testing for building confidence in these technologies, referencing work by ISO/IEC and the EU’s High-Level Expert Group on AI.
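To make the bias-mitigation point concrete, the sketch below computes the demographic parity gap, one common fairness metric used in trustworthy-AI testing. The function, variable names, and toy data are hypothetical illustrations, not artefacts from the paper.

```python
# Illustrative sketch: demographic parity gap, the absolute difference in
# positive-prediction rates between groups. Names and data are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels, parallel to predictions
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: group A receives a positive outcome 75% of the time,
# group B only 25%, so the gap is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap of 0 means both groups receive positive predictions at the same rate; audits of the kind discussed below would typically report such metrics alongside accuracy.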
3. Auditability and Accountability
Auditing is presented as a bridge between technical trustworthiness and real-world oversight. The paper proposes:
Internal and external audits
Independent certification bodies
Transparency in documentation (e.g. model cards, datasheets)
Accountability mechanisms include traceability, redress processes, and clearly assigned responsibility across the AI lifecycle.
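The transparency documentation mentioned above can be sketched as structured data. The minimal schema below is a hypothetical simplification in the spirit of model cards, not a format prescribed by the paper; all field names and values are invented for illustration.

```python
# Illustrative sketch: a minimal model card as structured data that can be
# stored alongside the model and handed to internal or external auditors.
# The schema and example values are hypothetical.

from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="credit-risk-scorer-v2",
    intended_use="Ranking loan applications for human review",
    out_of_scope_uses=["Fully automated loan denial"],
    training_data="Internal loan book, anonymised",
    evaluation_metrics={"AUC": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for applicants under 21"],
)

# asdict() turns the card into a plain dict, ready to serialise to JSON
# or YAML for audit trails and public documentation.
print(asdict(card)["model_name"])  # credit-risk-scorer-v2
```

Keeping such records machine-readable supports the traceability and redress mechanisms above: an auditor can check what the model was approved for and against which metrics it was evaluated.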
4. AI Governance
Extends beyond compliance to include organisational, societal, and global dimensions. The authors argue that effective AI governance:
Embeds ethical reflection throughout design and deployment
Promotes multi-stakeholder engagement
Encourages adaptability to evolving norms and values
Key Contributions
Roadmap for Responsible AI: The paper synthesises the four dimensions into a structured development pathway for organisations.
Ten Lessons Learned: It closes with ten practical insights, highlighting gaps in current practice and priorities for future research, such as:
The need for dynamic governance tools
Embedding responsibility across AI supply chains
Coordinating between regulatory and technical communities
Final Takeaway
Building responsible AI systems is not just a technical challenge but a multi-dimensional societal task. This paper offers both a conceptual framework and practical guidance for aligning AI with values like fairness, transparency, and accountability. Trust is not given; it must be designed into AI systems from the outset.