ADIA Lab Summer School: Responsible AI in the Generative and Agentic AI Era,
in collaboration with the University of Granada, Spain

Agenda & Sessions – ADIA Lab Summer School 2026
Week 1

Monday, May 4

9:00
Registration
9:30 – 11:00
Welcome & Introduction to Responsible AI in the Generative and Agentic AI Era Francisco Herrera, University of Granada
Francisco Herrera Bio
Abstract
Abstract coming soon.
11:00 – 11:30
Coffee Break
11:30 – 13:00
From Generative AI to Agentic AI and Beyond Johannes Schneider, University of Liechtenstein
Johannes Schneider Bio
Abstract
In this talk we discuss how early GenAI models (like ChatGPT 3.5) were extended into modern agentic AI systems capable of planning, self-evaluation, tool use, and leveraging memory and feedback in dynamic environments. We highlight key characteristics of Agentic AI, limitations of GenAI, and how agentic approaches address them. To this end, we cover both foundations and practical system patterns that are increasingly entering industry at scale to execute complex workflows in multi-agent configurations. Furthermore, we will briefly extrapolate existing trends to discuss possible futures and broader implications for society.
13:00 – 14:00
Lunch
14:00 – 15:00
Technical and Conceptual Introduction to Agentic and Generative AI Josef Teichmann & Bastian Bergmann, ETH Zurich
Abstract
Abstract coming soon.
15:00 – 15:30
Coffee Break
15:30 – 16:30
Applied Agentic and Generative AI Josef Teichmann & Bastian Bergmann, ETH Zurich
Abstract
Abstract coming soon.
16:30 – 17:30
Introduction to Student Projects & Problem Assignments

Tuesday, May 5

9:30 – 11:00
Agentic AI in Action: Multi-Agent Systems, Reinforcement Learning and LLMs for Autonomous Decisions Paulo Novais, Universidade do Minho
Paulo Novais Bio
Abstract
This talk outlines the central vision of the Architectures for Agentic AI project, which proposes a unified framework combining multi-agent systems, reinforcement learning, and large language models to support the development of autonomous, adaptable, and interpretable systems. This work argues for a transition from isolated AI components to structured and governable cognitive ecosystems capable of sustaining coherent and responsible behaviour over extended temporal scales. Rather than merely aggregating heterogeneous techniques, the proposed framework provides an architectural model for organising intelligence within autonomous systems. It advances both technical and conceptual contributions: a clear blueprint for system design and coordination, and an accompanying set of governance and ethical guidelines intended to ensure reliability, accountability, and alignment. The architecture is conceived to be domain-agnostic while preserving theoretical rigour, thereby enabling its application across diverse socio-technical contexts.
11:00 – 11:30
Coffee Break
11:30 – 12:15
Efficient Fair Regression via Wasserstein Barycenters and Normalizing Flows Patrick Cheridito, ETH Zurich (FinsureTech Hub)
Patrick Cheridito Bio
Abstract
In many real-world applications, ensuring that predictive models do not discriminate against certain population subgroups is a critical requirement. After discussing the theoretical links between fair regression and Wasserstein barycenters, the talk introduces an efficient numerical method for fair regression using normalizing flows.
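As a minimal numerical sketch of the idea behind this talk (not the speaker's implementation, and with illustrative variable names): in one dimension, the Wasserstein-2 barycenter of group-wise prediction distributions has a quantile function equal to the weighted average of the group quantile functions, and mapping each score to the barycenter via its within-group rank equalizes the score distributions across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "unfair" model scores for two population subgroups
preds_a = rng.normal(0.0, 1.0, 2000)   # group A scores
preds_b = rng.normal(1.0, 1.5, 2000)   # group B scores
w_a, w_b = 0.5, 0.5                    # group proportions

# In 1D, the Wasserstein-2 barycenter's quantile function is the
# weighted average of the group quantile functions.
qs = np.linspace(0.005, 0.995, 199)
q_bar = w_a * np.quantile(preds_a, qs) + w_b * np.quantile(preds_b, qs)

def fair_transform(score, group_preds):
    """Map a raw score to the barycenter via its within-group rank."""
    rank = (group_preds < score).mean()      # empirical CDF value
    return np.interp(rank, qs, q_bar)        # barycenter quantile at that rank

# After the transform both groups share (approximately) the same
# score distribution, i.e. demographic parity holds.
fair_a = np.array([fair_transform(s, preds_a) for s in preds_a])
fair_b = np.array([fair_transform(s, preds_b) for s in preds_b])
```

The talk's contribution replaces the empirical quantile step with normalizing flows, which makes the construction efficient and differentiable; this toy version only illustrates the barycenter mechanics.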
12:15 – 13:00
The Reasoning Stack: Transformers, LLMs, and Autonomous Agents Francisco Herrera, University of Granada
Francisco Herrera Bio
Abstract
Abstract coming soon.
13:00 – 14:00
Lunch
14:00 – 15:30
Agentic AI Platforms: A Practical View Juan Luis Suárez, University of Granada
Abstract
Abstract coming soon.
15:30 – 16:00
Coffee Break
16:00 – 17:30
Group Work

Wednesday, May 6

9:30 – 11:00
Google Cloud Agentic AI Platform Julia Hernández, Google Cloud Spain
Julia Hernández Bio
Abstract
A technical deep dive into building, orchestrating, and deploying production-ready AI agents in Google Cloud. This session explores the end-to-end capabilities of Vertex AI, the Agent Development Kit (ADK), Agent Engine and the Gemini model family. Attendees will learn how to architect multi-agent systems, equip agents with custom tools and memory, and securely scale enterprise agentic workflows from prototype to production.
11:00 – 11:30
Coffee Break
11:30 – 13:00
Building the Next Generation Connectivity Platform for AI Agents Merouane Debbah, Falcon Foundation
Merouane Debbah Bio
Abstract
AI is evolving from standalone models into interconnected agents that perceive, reason, communicate, and act in real time. This shift calls for a new generation of connectivity platforms designed not only for humans and devices, but also for AI agents. In this talk, I will discuss how future communication systems must move beyond data transport to support coordination, distributed intelligence, and low-latency interaction at scale. I will highlight the role of AI-native networks in enabling seamless integration of communication, computation, and learning across edge, cloud, and physical environments. The broader message is that next-generation connectivity will be a foundational platform for intelligent agents to collaborate and operate effectively.
13:00 – 14:00
Lunch
14:00 – 15:30
Agentic and Generative AI for Sustainability Luis Seco, University of Toronto
Abstract
Abstract coming soon.
15:30 – 16:00
Coffee Break
16:00 – 17:30
Group Work

Thursday, May 7

9:30 – 11:00
Opening the Black Box: Understanding and Dissecting Foundation Vision Models Concetto Spampinato, University of Catania
Concetto Spampinato Bio
Abstract
Foundation vision models have reshaped computer vision through large-scale pretraining and transformer-based architectures, achieving strong generalization across tasks. Yet, their internal mechanisms remain largely opaque. This lecture first introduces the principles behind modern foundation models in vision, including transformer architectures, large-scale pretraining, and representation dynamics. It then focuses on recent approaches to explainability and model dissection that operate in a black-box or minimally invasive setting, avoiding traditional gradient-based optimization or model retraining. We will discuss local and global explanation methods, object-level causal interventions, and black-box dissection methods that uncover functional components within deep networks. The session aims to provide methodological, often unconventional tools for analyzing and structurally understanding foundation vision models beyond standard attribution techniques.
11:00 – 11:30
Coffee Break
11:30 – 13:00
Reasoning and Sparsity in Large Language Models Torsten Hoefler, ETH Zurich
Torsten Hoefler Bio
Abstract
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this talk, we survey prior work on sparsity in deep learning and provide an extensive tutorial on sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
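To make the pruning idea concrete, here is a toy sketch of unstructured magnitude pruning, one of the simplest sparsification approaches covered by surveys like this one (the weight matrix and sparsity level are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical dense weight matrix, standing in for one layer of a network
weights = rng.normal(size=(256, 256))

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

sparse = magnitude_prune(weights, 0.9)
kept = (sparse != 0).mean()   # roughly 10% of weights survive
```

In practice, as the abstract notes, pruning is interleaved with training (or followed by fine-tuning), and realizing actual speedups requires sparse formats and hardware support rather than simply storing zeros.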
13:00 – 14:00
Lunch
14:00 – 15:30
Data Competition Jean Herelle
Abstract
Abstract coming soon.
15:30 – 16:00
Coffee Break
16:00 – 16:45
The Problems Arising From the Lack of Explainability in AI Governance: The RisCanvi Use Case Oier Mentxaka, University of Granada
Oier Mentxaka Bio
Abstract
In Catalonia, the RisCanvi case has become a highly representative example of how AI can be integrated into high-impact decisions within the criminal justice system. RisCanvi is a risk assessment system that helps guide decisions on imprisonment, parole and rehabilitation processes. While its early versions relied on criteria defined by experts and a more interpretable scoring scheme, in 2019 it made the leap to a statistical model (logistic regression) deployed within the prison administration’s digital infrastructure. This transition improved the predictive approach but reduced practical transparency about which variables are used and how they influence the outcome, making it difficult for professionals and affected individuals to understand, monitor and challenge the system. The session will use RisCanvi as a case study to show how a lack of explainability can translate into governance failures in high-risk systems and will present a layered intelligibility approach that combines technical explainability, user-centred design and communication tailored to different actors, in line with Trustworthy AI, the AI Act and ISO/IEC 42001.
16:45 – 17:45
Group Work

Friday, May 8 – Sunday, May 10: IEEE Conference

Week 2

Monday, May 11

9:30 – 11:00
Why Do We Need Regulation for AI? Carme Artigas, Harvard University
Carme Artigas Bio
Abstract
Abstract coming soon.
11:00 – 11:30
Coffee Break
11:30 – 13:00
The Emerging Discourse on Responsible AI in Healthcare and Pharmaceutical Drug Development: Technical, Policy, and Societal Interventions Arijit Patra, UCB
Arijit Patra Bio
Abstract
As artificial intelligence transforms drug discovery, clinical trials, and healthcare delivery, urgent questions arise about safety, equity, and governance. This talk will explore how technical advancements, like generative models for molecular design and AI-driven diagnostics, intersect with policy frameworks (e.g., FDA/EMA guidelines) and societal needs, including equitable access and patient trust. We will examine key interventions: explainable AI for transparency, federated learning for data sovereignty, and regulatory sandboxes to balance innovation with risk. The discussion will highlight case studies, from AI-designed drugs in trials to algorithmic biases in diagnostics, to reveal opportunities and pitfalls. We will, as an exercise, deliberate and debate on the strategies for integrating scientific rigor, ethical governance, and public engagement to shape AI’s role in the future of medicine and ensuring it serves global health.
13:00 – 14:00
Lunch
14:00 – 15:30
Evaluation of Generative AI: Benchmarks, Transparency, and Societal Impacts Emilia Gómez, Joint Research Centre, European Commission
Abstract
Abstract coming soon.
15:30 – 16:00
Coffee Break
16:00 – 17:30
Group Work

Tuesday, May 12

9:30 – 11:00
LLM and Agentic AI: Secure, Private, Open and Trustworthy Nils Lukas, MBZUAI
Abstract
Abstract coming soon.
11:00 – 11:30
Coffee Break
11:30 – 13:00
Explainability and Continual Learning in General-Purpose Artificial Intelligence: Neurosymbolic AI to the Rescue Natalia Díaz Rodríguez, University of Granada
Natalia Díaz Rodríguez Bio
Abstract
Recent general-purpose AI systems have shown groundbreaking performance on a wide range of tasks. However, these systems can make errors, be biased, or act harmfully in certain contexts. New approaches are needed that complement existing ones while addressing these safety aspects. Neurosymbolic AI is a novel paradigm offering methods that learn from data yet remain transparent and controllable, achieved by combining data-driven and knowledge-driven approaches. I will discuss several challenges in attaining explainability and in enabling foundation models to learn continually, and how neurosymbolic AI can help.
13:00 – 14:00
Lunch
14:00 – 15:30
Causal AI: The Missing Link Between General-Purpose Intelligence and Agency Marcos Lopez de Prado, ADIA & ADIA Lab
Abstract
Abstract coming soon.
15:30 – 16:00
Coffee Break
16:00 – 17:30
Group Work

Wednesday, May 13

9:30 – 11:00
AI in Healthcare: Hype vs Reality in the Integration of AI in Clinical Workflows Rajat Mani Thomas, Weill Cornell Medicine Qatar
Rajat Mani Thomas Bio
Abstract
Abstract coming soon.
11:00 – 11:30
Coffee Break
11:30 – 13:00
Engineering Safer Care: A Systems Thinking Perspective in AI-Driven Digital Transformation Mecit Can Emre Simsekler, Khalifa University
Abstract
Abstract coming soon.
13:00 – 14:00
Lunch
14:00 – 15:30
Low-Code and AI to Speed Up Software Development with the BESSER Platform Aaron Conrardy, Luxembourg Institute of Science and Technology (LIST)
Aaron Conrardy Bio
Abstract
In this talk, we will discuss how we tame the complexity of software development at LIST by using a low-code approach. By specifying the blueprints of the software to be built, we can (semi)automate its development, including its AI features and components (agents, neural networks, etc.). We will illustrate this approach with BESSER, our own open-source low-code platform for building all types of hybrid systems (e.g. a complete web application integrating an agent). BESSER also embraces vibe modeling, where AI is used to partially generate and refine specifications from natural-language descriptions, further democratizing software development. Following a demonstration of BESSER, participants will be invited to try out BESSER for themselves.
15:30 – 16:00
Coffee Break
16:00 – 17:30
Group Work

Thursday, May 14

9:30 – 13:00
Student Presentations
13:00
Certificates, Final Photo & Farewell Lunch
