ADIA Lab “Explainable AI” Summer School 2025 - Confirmed Speakers


Francisco Herrera Triguero

Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada and Director of the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI). Member of the Royal Academy of Sciences (Spain).

Francisco Herrera received his M.Sc. in Mathematics in 1988 and Ph.D. in Mathematics in 1991, both from the University of Granada, Spain. He is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada and Director of the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI). He is an academician of the Royal Academy of Engineering (Spain).

He has supervised more than 60 Ph.D. students. He has published more than 600 journal papers, receiving more than 170,000 citations (Google Scholar, h-index 190). He has been named a Highly Cited Researcher (in the fields of Computer Science and Engineering, 2014 to present, Clarivate Analytics). He serves on the editorial boards of a dozen journals.

His current research interests include, among others, computational intelligence, information fusion and decision making, trustworthy artificial intelligence, general-purpose artificial intelligence, and data science.

Session: Explainable and Stable LLMs: Bridging Variability, Uncertainty, and Trust in Human-Centered Domains 

Large Language Models (LLMs) have achieved impressive capabilities across a wide range of tasks, yet their integration into human-centered domains—such as healthcare, education, law, and public services—raises urgent questions about trust, explainability, and behavioral consistency. This talk explores the intersection between explainable AI (XAI) and model stability in LLMs, addressing how variability across runs, prompts, and contexts undermines reliability and interpretability. We will examine recent findings on model uncertainty and behavioral drift, showing how even minor perturbations can lead to significant changes in outputs—an issue especially critical in high-stakes settings where humans rely on model recommendations to inform decisions.

To bridge these gaps, we will discuss emerging frameworks that treat explainability not just as a technical add-on but as an epistemic and governance infrastructure. The talk will highlight practical strategies for enhancing explanation faithfulness, supporting causal traceability, and aligning outputs with stakeholder needs. Special emphasis will be placed on the role of model transparency (e.g., via open-weight LLMs and small language models) as a lever for trust and accountability. Participants will gain insight into the limitations of current post-hoc methods, the trade-offs between plausibility and faithfulness, and the need for robust human-in-the-loop evaluation to ensure that LLMs are not only powerful, but also intelligible, stable, and aligned with the values of the communities they serve.
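To make the notion of run-to-run variability concrete, here is a minimal, unofficial sketch that repeatedly samples continuations of the same prompt from an open-weight model and reports a crude pairwise lexical-overlap score. The model, prompt, and overlap metric are illustrative assumptions, not material from the talk.

```python
# Illustrative sketch (not from the talk): quantify run-to-run variability of an
# open-weight LLM by sampling several continuations and measuring pairwise overlap.
from itertools import combinations

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # assumed model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The patient should be advised to"                # assumed prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,                # stochastic decoding => variability across runs
        temperature=0.9,
        max_new_tokens=40,
        num_return_sequences=5,
        pad_token_id=tokenizer.eos_token_id,
    )

completions = [
    tokenizer.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    for seq in outputs
]

def jaccard(a: str, b: str) -> float:
    """Crude lexical agreement between two completions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

scores = [jaccard(a, b) for a, b in combinations(completions, 2)]
print(f"mean pairwise overlap: {sum(scores) / len(scores):.2f}")  # low => high drift
```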

Iván Sevillano García

PhD Student, DaSCI, University of Granada, Spain

Iván Sevillano García is a researcher from Málaga specializing in explainable artificial intelligence (XAI). He holds a double degree in Computer Science and Mathematics from the University of Granada and a master’s in Data Science. Currently pursuing his PhD at the DaSCI Institute, his work focuses on enhancing the transparency and reliability of AI systems.

Iván developed REVEL, a framework for evaluating local explanations in black-box models, particularly in image classification. He also created X-SHIELD, a regularization technique that integrates explanations into model training to boost both performance and interpretability. His research extends to areas like STOOD-X, which investigates explainability in out-of-distribution detection.

Session: Lab on xAI Tools

This talk offers a concise introduction to AI Explainability 360 (AIX360), an open-source library featuring a wide range of local, global, and direct explainability algorithms for various data types. Attendees will learn how to navigate the Trusted-AI/AIX360 GitHub repository and explore example notebooks for techniques like LIME, SHAP, ProtoDash, and counterfactual explainers. The session will cover the toolkit’s taxonomy of explainability methods and introduce proxy metrics, demonstrating how to use evaluation modules to benchmark explanation quality.
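As a taste of what the lab will cover, here is a minimal, self-contained sketch of a local explanation with the standalone lime package, one of the techniques named above. The dataset and classifier are arbitrary illustrative choices, not the AIX360 example notebooks used in the session.

```python
# Minimal local-explanation sketch with LIME on a tabular classifier
# (illustrative only; the session works from the Trusted-AI/AIX360 notebooks).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features push the model towards its chosen class?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature:>35s}  {weight:+.3f}")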

Miguel Hernán

Miguel Hernán is the Director of CAUSALab, the Kolokotrones Professor of Biostatistics and Epidemiology at the Harvard T.H. Chan School of Public Health, and faculty at the Harvard-MIT Division of Health Sciences and Technology. He and his collaborators repurpose real-world data into evidence for the prevention and treatment of infectious diseases, cancer, cardiovascular disease, and mental illness. This work has contributed to shaping health research methodology worldwide.

Miguel teaches causal inference methods to generate and analyze data for health policy and clinical decision making. At Harvard, he has mentored dozens of trainees. His free online course Causal Diagrams and book Causal Inference: What If, co-authored with James Robins, are widely used for the training of researchers.

Miguel has received several awards, including the Rousseeuw Prize for Statistics, the Rothman Epidemiology Prize, and a MERIT award from the U.S. National Institutes of Health. He is elected Fellow of the American Association for the Advancement of Science and the American Statistical Association, member of the Advisory Board of ADIA Lab, and Associate Editor of Annals of Internal Medicine. He was Special Government Employee of the U.S. Food and Drug Administration, Editor of Epidemiology, and Associate Editor of Biometrics, American Journal of Epidemiology, and Journal of the American Statistical Association. Miguel is a Co-Founder of Adigens Health.

Session: Causal "AI" and Automated Causal Inference in the Health Sciences

Causal inference is the type of learning from data that guides decision making. The tools referred to as Causal "AI" may assist, or eventually replace, health researchers who draw causal inferences about the treatment and prevention of disease using healthcare databases. This talk dissects the components of Causal "AI" and discusses its potential to automate causal inference research in the health sciences.
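For readers new to the area, the toy sketch below illustrates one building block any Causal "AI" would need to automate: outcome-model standardization (the parametric g-formula) on simulated data. The data-generating process and model are invented for illustration and are not part of the talk.

```python
# Toy standardization (parametric g-formula) on simulated data: estimate the effect
# of a binary treatment A on outcome Y while adjusting for a confounder L.
# Illustrative only; the simulation and model choices are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000
L = rng.normal(size=n)                         # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))      # treatment depends on L
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)     # true causal effect of A is 2.0

# Naive comparison of treated vs. untreated is confounded by L.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Standardization: fit an outcome model, then average its predictions
# over everyone with A set to 1 and to 0.
X = np.column_stack([A, L])
outcome_model = LinearRegression().fit(X, Y)
y1 = outcome_model.predict(np.column_stack([np.ones(n), L])).mean()
y0 = outcome_model.predict(np.column_stack([np.zeros(n), L])).mean()

print(f"naive difference:      {naive:.2f}")    # biased upward by confounding
print(f"standardized estimate: {y1 - y0:.2f}")  # close to the true effect 2.0
```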

David Fernández-Llorca

Scientific Officer at the European Commission, Joint Research Centre

David Fernández-Llorca is Scientific Officer at the European Commission – Joint Research Centre, and Full Professor at the University of Alcalá. He is also founding member of the European Centre for Algorithmic Transparency (ECAT) of the European Commission. He is co-author of more than 180 publications, including up to 12 patents, and has received more than 15 research and innovation awards, including the IEEE ITS Society Young Researcher Award 2018. He has been principal researcher of more than 35 projects and has been research visitor at several institutions including York University (Toronto, Canada), Daimler Research and Development (Ulm, Germany) and Trinity College Dublin (Dublin, Ireland). Recently, he has contributed to different digital policy files, particularly to the EU AI Act. His research interests include trustworthy AI, trustworthy autonomous vehicles, AI evaluation, and algorithmic transparency.

Session 1: The Role of Explainable AI in the Context of the EU AI Act

The AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework on AI, aiming to promote trustworthy AI in Europe. Since the European Commission’s proposal in April 2021 and throughout the negotiations by the European Parliament and the Council, there has been significant debate about the role of explainable AI (xAI) in relation to the requirements for high-risk AI systems. This session will delve into the key aspects of the EU AI Act (including its sectoral implications), specifically the transparency and human oversight requirements for AI systems and general-purpose AI models. We will provide an overview of the main strengths and limitations of xAI techniques and clarify the possible interpretations of how the AI Act addresses the issue of opaque AI systems, including illustrative examples.

Session 2: Explainable AI in the Context of Autonomous Driving

Although Autonomous Vehicles (AVs) have the potential to bring significant benefits, including improved safety, mobility, and environmental sustainability, their widespread adoption entails addressing substantial technical, political and societal challenges. One crucial aspect is the utilisation of different safety-critical AI systems across multiple operational layers. The use of explainable AI (xAI) techniques in this context can enhance safety, reliability and transparency by providing a deeper understanding of the underlying algorithms and decision-making processes. Following a description of key terminology related to AVs, this session will focus on clarifying the impact of AI on different operational layers, and will delve into the ways in which xAI techniques can improve transparency and user acceptance for AVs, using illustrative examples.

Carlos Vera del Ruste

AI Senior Consultant, Minsait, Spain

Responsible AI Senior Consultant and Privacy Specialist with over 7 years of experience advising on data protection, cybersecurity, and AI governance. Currently supporting Responsible AI compliance and training initiatives at Minsait (Indra Group), following previous roles in privacy and legal advisory at CaixaBank, Indra Sistemas, and SIA. Holds certifications in risk management and data protection, and has a strong academic background in Law, IP, and New Technologies. Experienced in developing regulatory frameworks, managing privacy risks, and representing organizations before supervisory authorities.

Session: AI Regulation Compliance

As Artificial Intelligence becomes central to innovation across sectors, understanding the compliance implications of AI systems is essential. This session will provide a dual perspective: a theoretical introduction to the evolving compliance landscape, and a practical case study.

We will explore what compliance means in the context of AI, why explainability matters from both a legal and operational viewpoint, and how data scientists and legal teams can collaborate to meet regulatory and ethical standards. Key topics will include common mistakes, lessons learned, and practical approaches to building AI systems aligned with the EU AI Act requirements on transparency and accountability.

This session bridges the gap between legal theory and applied practice, offering insights into how to design trustworthy AI systems.

José Roberto Morán

Senior Researcher, Macrocosm, Paris, France

José Moran is a researcher at Macrocosm and an affiliated researcher at the University of Oxford's Institute for New Economic Thinking. He works at the interface of complexity economics and statistical physics. He was trained in applied mathematics and statistical physics at École polytechnique and École Normale Supérieure, and completed his PhD in macroeconomics and statistical physics under Jean-Philippe Bouchaud and Jean-Pierre Nadal.

José has very diverse interests in complexity economics, including the study of the dynamics of wealth and income inequality, the statistics of firm growth, agent-based models, temporal networks and binary decision models. His intellectual pursuits also include agent-based modelling, where his research goes from creating reduced "toy" models or even "models of models" that enable a full understanding of the dynamics at play, to engineering large-scale macroeconomic simulations intended to serve as predictive models.

Session: Scientific Explainability in Agent-Based Modeling

Agent-based models (ABMs) are essential tools for simulating complex systems, where simple local interactions can lead to rich emergent behaviour. Unlike many mathematical models, ABMs are typically not designed for analytical tractability; understanding them scientifically often requires running simulations, identifying emergent patterns, and then developing mathematical models of the models themselves.

This workshop will explore that process, beginning with classical examples from statistical physics—such as the Ising model—and progressing to simple ABMs like Kirman and Föllmer's ant model, the Schelling model, and the minority game.

It will then examine how these methods can be applied to large-scale ABMs and how such models can be calibrated to real-world data using modern AI techniques like automatic differentiation, drawing connections to physics-informed machine learning.
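As a minimal illustration of the emergence the workshop starts from, the sketch below implements a bare-bones Schelling segregation model on a grid and tracks one emergent macro-observable. The grid size, tolerance threshold, and vacancy rate are arbitrary assumptions chosen only to show the simulate-then-explain loop.

```python
# Bare-bones Schelling segregation model: agents of two types relocate to random
# empty cells whenever the share of like-type neighbours falls below a threshold.
# Illustrative sketch only; grid size, threshold and vacancy rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
SIZE, VACANCY, THRESHOLD, STEPS = 50, 0.1, 0.5, 30

# 0 = empty, 1 and 2 = the two agent types
grid = rng.choice([0, 1, 2], size=(SIZE, SIZE), p=[VACANCY, 0.45, 0.45])

def satisfied(g, i, j):
    """An occupied cell is satisfied if >= THRESHOLD of its occupied neighbours match."""
    agent = g[i, j]
    neigh = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
    occupied = (neigh != 0).sum() - 1            # exclude the agent itself
    if occupied == 0:
        return True
    same = (neigh == agent).sum() - 1
    return same / occupied >= THRESHOLD

def mean_similarity(g):
    """Emergent macro-observable: average share of like-type neighbours."""
    vals = []
    for i, j in zip(*np.nonzero(g)):
        neigh = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
        occupied = (neigh != 0).sum() - 1
        if occupied:
            vals.append(((neigh == g[i, j]).sum() - 1) / occupied)
    return float(np.mean(vals))

for step in range(STEPS):
    unhappy = [(i, j) for i, j in zip(*np.nonzero(grid)) if not satisfied(grid, i, j)]
    empty = list(zip(*np.where(grid == 0)))
    rng.shuffle(unhappy)
    for i, j in unhappy:
        if not empty:
            break
        k = rng.integers(len(empty))
        ni, nj = empty.pop(k)
        grid[ni, nj], grid[i, j] = grid[i, j], 0
        empty.append((i, j))
    print(f"step {step:2d}: mean neighbourhood similarity = {mean_similarity(grid):.2f}")
```

Even with mild individual preferences (a 0.5 threshold), the similarity measure climbs well above its initial value, which is the kind of emergent pattern the workshop then explains with reduced "models of models".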

Natalia Ana Díaz Rodríguez

Associate Professor, DaSCI, University of Granada, Spain

Natalia Díaz Rodríguez holds a double PhD from the University of Granada (Spain) and Åbo Akademi University (Finland) and has been an associate professor at the DaSCI Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI.es), in the Department of Computer Science and AI of the University of Granada (Spain), since 2024. Earlier, she was a Marie Curie postdoctoral researcher and professor at the Autonomous Systems and Robotics Lab at ENSTA, Institut Polytechnique de Paris, with the INRIA Flowers team on developmental robotics, working on open-ended learning and continual/lifelong learning for applications in computer vision and robotics. She has worked in industry, academia, and governmental institutions, including in Silicon Valley, at CERN, Philips Research, the University of California Santa Cruz, and NASA. She co-founded the non-profit ContinualAI.org, carried out Responsible AI governance assessments in industry, and contributed to the guidelines for the regulatory sandbox pilot of the AI Act for the Spanish Secretariat of State. She has received multiple prizes (among others, from the Royal Academy of Engineering for Young Research Scientists) and is listed in the «Ranking of the World's Top 2% Scientists» from Stanford University (California), which identifies the world's most influential scientists by the citation impact of their publications.

Session: xAI Techniques and Explanations: Images and Tabular Data

This talk will present several taxonomies of concepts and methodologies for XAI; in particular, it will focus on model-agnostic techniques and techniques for explaining image-based machine learning models. It will also point to practical toolkits and tutorials dealing with concrete techniques for different data types and diverse explanation audiences.
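By way of one concrete model-agnostic technique for tabular data, the sketch below computes permutation feature importance with scikit-learn. The dataset and classifier are placeholders for illustration, not material from the talk.

```python
# Model-agnostic global explanation for tabular data: permutation feature importance.
# Illustrative sketch; dataset and classifier are arbitrary placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranking = result.importances_mean.argsort()[::-1]
for idx in ranking[:5]:
    print(f"{X.columns[idx]:<25s} {result.importances_mean[idx]:.3f}")
```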

Menna El-Assady

Assistant Professor, Computer Science, ETH Zürich, Switzerland

Mennatallah El-Assady is an Assistant Professor of Computer Science at ETH Zürich, where she leads the Interactive Visualization and Intelligence Augmentation (IVIA) lab. Her interdisciplinary research combines data visualization, computational linguistics, and explainable AI, with a focus on interactive systems for human-AI collaboration.

Previously, she was a postdoctoral fellow at the ETH AI Center and held research roles in Germany and Canada. Her PhD on human-AI collaboration received the joint dissertation award of the German, Austrian, and Swiss Informatics Societies and an honorable mention for the VGTC VIS Dissertation Award.

El-Assady co-founded the LingVis.io platform and the human-AI.io framework, and is a co-organizer of workshops such as Vis4DH and VISxAI. She was named a Eurographics Junior Fellow in 2023 and received the 2024 VGTC Significant New Researcher Award and the 2023 EuroVis Early Career Award.

Juan Carlos Trujillo

Full Professor at University of Alicante, Spain

Juan C. Trujillo has five positive research evaluations (four in research activity, one in technology transfer). His work focuses on Big Data, AI, Business Intelligence, KPIs, conceptual modeling, and data warehouses. He has led several European H2020 projects—three as Principal Investigator (two at the University of Alicante and one at the spin-off Lucentia Lab, which he co-founded). He was also a senior researcher on the ERC Advanced Grant Lucretius, led by John Mylopoulos.

Trujillo has published over 80 JCR journal articles and 200+ conference papers, and ranks among the top 20 most cited researchers in his fields—#1 in data warehouses, #14 in conceptual modeling, and #19 in Business Intelligence. His H-index is 25 (Web of Science), 32 (Scopus), and 50 (Google Scholar), making him the most cited computer scientist at the University of Alicante.

In addition to academic research, he has led multiple industry projects with companies like Indra and Google. He currently leads initiatives on Big Data and Machine Learning, including the ENIA Chair (AHDERAI) and Mobility Data Spaces, both starting in 2024.

Session: Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability

Large Language Models (LLMs), characterized by being trained on vast amounts of data in a self-supervised manner, have shown impressive performance across a wide range of tasks. Indeed, their generative abilities have aroused interest in applying LLMs across a wide range of contexts. However, neural networks in general, and LLMs in particular, are known to be vulnerable to adversarial attacks, where an imperceptible change to the input can mislead the output of the model. This is a serious concern that impedes the use of LLMs in high-stakes applications, such as healthcare, where a wrong prediction can have serious consequences. Even though there are many efforts to make LLMs more robust to adversarial attacks, almost no works study how and where the vulnerabilities that make LLMs prone to adversarial attacks arise. Motivated by these facts, we explore how to localize and understand vulnerabilities, and propose a method, based on Mechanistic Interpretability (MI) techniques, to guide this process. Specifically, this method enables us to detect vulnerabilities related to a concrete task by (i) obtaining the subset of the model that is responsible for that task, (ii) generating adversarial samples for that task, and (iii) using MI techniques together with the previous samples to discover and understand the possible vulnerabilities. We showcase our method on a pretrained GPT-2 Small model carrying out the task of predicting 3-letter acronyms to demonstrate its effectiveness in locating and understanding concrete vulnerabilities of the model.
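The abstract describes the authors' MI-based pipeline only at a high level; as a loose, unofficial illustration of the flavour of step (iii), the sketch below runs GPT-2 Small on a clean prompt and a minimally perturbed one and reports, layer by layer, how far the last-token hidden states diverge, a crude way to localize where a perturbation starts to dominate the computation. The prompts and the cosine-similarity probe are assumptions, not the method from the talk.

```python
# Unofficial illustration: locate where a small input perturbation starts to change
# GPT-2 Small's internal computation by comparing last-token hidden states per layer.
# The prompts and the cosine-similarity probe are assumptions, not the talk's method.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

clean = "The acronym for the World Health Organization is"
perturbed = "The acronym for the W0rld Health Organization is"   # one-character change

def last_token_states(text):
    """Hidden state of the final position at every layer (embeddings + 12 blocks)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return [h[0, -1] for h in out.hidden_states]

states_clean = last_token_states(clean)
states_pert = last_token_states(perturbed)

for layer, (a, b) in enumerate(zip(states_clean, states_pert)):
    sim = F.cosine_similarity(a, b, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity = {sim:.3f}")  # drops where they diverge
```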

Luis Seco

Director of the Mathematical Finance Program, Professor of Mathematics at the University of Toronto and Director of Risklab, Toronto, Canada

Prof. Luis Seco is Director of the Mathematical Finance Program and Professor of Mathematics at the University of Toronto, as well as Director of RiskLab, a research lab focused on quantitative finance and asset management. His current work centres on sustainability and climate risk, combining artificial intelligence and finance to tackle global challenges. He chairs the Centre for Sustainable Development at the Fields Institute and is an affiliate at the Vector Institute for AI.

Prof. Seco has authored papers on AI and environmental scoring, and is now applying machine learning to CO₂ emissions and carbon markets. He was named an ADIA Lab Fellow in 2022.

A strong advocate for university-industry collaboration, he received the NSERC Synergy Award for Innovation in 2007 and was appointed Knight of the Order of Civil Merit by the Government of Spain in 2011. He co-founded Sigma Analysis & Management Ltd., managing institutional investments in liquid alternatives for two decades.

Today, he partners with international pension and sovereign wealth funds, and institutions like RiskLab Centre Inc. and JUMP S.a.r.l., to drive innovation in education, finance, and sustainability. His academic journey began at Princeton, with roles at Caltech and the University of Toronto, and adjunct positions in Beijing, Munich, Zurich, Kutaisi, and Miami.

Session: Complexity, Sustainability and Causal AI: Science in the 21st Century

The 21st century presents science with unprecedented challenges: planetary-scale risks, systemic interdependencies, and the urgent need for sustainable transformation. This talk explores how the convergence of complexity science, sustainability imperatives, and causal AI can reshape scientific inquiry and policy design. We argue that addressing ecological, social, and technological crises requires a shift from predictive to explanatory modeling—from statistical association to causal understanding. Causal AI, when embedded within complex systems thinking, enables more robust, transparent, and actionable insights, particularly in domains such as climate mitigation, health systems, and circular economies. At the same time, sustainability demands a normative rethinking of AI development itself, emphasizing energy efficiency, fairness, and epistemic responsibility. Framed within this triad, we outline a vision of science that is not only computationally advanced but ethically grounded and future-oriented.

Bastian Bergman

Executive Director, ETH FinsureTech Hub, D-MATH

Bastian is responsible for the operations and strategic development of the Hub, with a focus on education initiatives and outreach. He is a lecturer focusing on emerging technologies and their responsible use, as well as the philosophy of AI and science.

Josef Teichmann

Professor of Mathematics, Stochastic Finance Group & FinsureTech Hub, D-MATH

In recent work, Josef Teichmann and his co-authors develop machine learning tools for the financial industry. Deep hedging, for instance, is a project conducted jointly with investment bankers, where generic hedging tasks are solved by cutting-edge machine learning technology in a fully realistic market environment, i.e. in the presence of market frictions and trading constraints. Further projects include deep calibration, deep simulation, and deep prediction. Theoretical foundations from approximation theory and stochastic analysis accompany successful concrete implementations to make such approaches suitable for industry applications.

Florian Krach

PostDoc ETH Zurich, Stochastic Finance Group & FinsureTech Hub, D-MATH

Florian Ofenheimer-Krach is a Postdoctoral Researcher in Josef Teichmann's group. His work focuses on the intersection of machine learning and finance, as well as exploring fundamental research questions in machine learning through a mathematical lens.

Session 1: ML, AI and Explainability

We introduce basic concepts of ML and AI, e.g. universal approximation, training algorithms, and regularization, from a mathematical perspective and we present several showcases. We analyze the paradigm changes which accompany machine learning techniques and discuss several aspects of explainability.
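As a small numerical companion to these concepts, the sketch below fits a one-hidden-layer network to a smooth target function, with an L2 penalty as the regularizer. The target function, network width, and penalty strength are illustrative assumptions, not the session's showcases.

```python
# Universal approximation in miniature: a one-hidden-layer network fit to sin(3x),
# with an L2 penalty (alpha) as regularization. All choices are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(2000, 1))
y = np.sin(3 * x).ravel() + 0.1 * rng.normal(size=2000)   # noisy smooth target

net = MLPRegressor(
    hidden_layer_sizes=(64,),   # a single hidden layer already suffices in principle
    activation="tanh",
    alpha=1e-4,                 # L2 regularization strength
    max_iter=5000,
    random_state=0,
)
net.fit(x, y)

x_test = np.linspace(-np.pi, np.pi, 7).reshape(-1, 1)
for xi, yi in zip(x_test.ravel(), net.predict(x_test)):
    print(f"x = {xi:+.2f}   net(x) = {yi:+.3f}   sin(3x) = {np.sin(3 * xi):+.3f}")
```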

Session 2: Rethinking AI: Reasoning, Language and Explainability

The rapid adoption of LLMs and generative AI systems has outpaced our conceptual frameworks for explainability, interpretability, and reasoning. While much of the discourse centers on model transparency and post hoc justification techniques, deeper philosophical questions remain underexplored: What counts as a genuine explanation in AI? Can models that lack intentionality or semantic grounding be said to "reason"? This session draws from philosophy of science and epistemology to interrogate foundational assumptions in current approaches to AI explainability. We explore how classical notions of explanation interact—or clash—with the statistical, sub-symbolic nature of LLMs. Students will understand that integrating philosophical insights is essential not only for clarifying the limits of AI-generated explanations, but for guiding the development of systems that aim to interface meaningfully with human reasoning.

Session 3: An Instance of Explainable AI

We discuss explainable AI in the use case of time series prediction, studying the Neural Jump ODE model. This neural network based model is specifically designed for the task of time series prediction. Theoretical guarantees imply that this model converges to the optimal prediction, which is given by the conditional expectation. This makes the model explainable, since we can predict its behavior by studying the conditional expectation. An important aspect to consider is the metric in which we have convergence, which is inherently connected to the distribution of the training data. Therefore, studying this metric with a view to the training data will show us the limits of explainability. Several extensions of the model can be studied and understood through the lens of explainability. In particular, we can investigate variance prediction, learning from noisy observations, long-term predictions and predicting input-output processes.
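The explainability argument above rests on the fact that the minimizer of the squared-error loss is the conditional expectation. The sketch below checks this numerically on a simulated process, with a plain feed-forward regressor standing in for the Neural Jump ODE (which it is not); the simulation and architecture are assumptions.

```python
# Numerical check of the key fact behind the session's explainability argument:
# a network trained with squared error approximates the conditional expectation E[Y|X].
# A plain MLP stands in for the Neural Jump ODE here; all choices are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 20_000
X = rng.uniform(0.0, 1.0, size=(n, 1))
# Y = X^2 + heteroscedastic noise, so E[Y | X = x] = x^2 exactly.
Y = X.ravel() ** 2 + (0.1 + 0.2 * X.ravel()) * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, Y)   # squared-error training objective

for x in (0.1, 0.5, 0.9):
    pred = model.predict([[x]])[0]
    print(f"x = {x:.1f}   network = {pred:.3f}   E[Y|X=x] = x^2 = {x**2:.3f}")
```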

Maria Garcia Puyol

GenAI Field Solutions Architect @ Google Cloud, Spain

María García Puyol holds a Master's degree in Telecommunications Engineering from the University of Málaga and a PhD in Electronic Engineering from the Technical University of Munich.

María joined Google in 2014 at its main headquarters in Mountain View, California, where she spent 10 years improving positioning services on Android. Following her return to Málaga, María joined Google Cloud, where she develops artificial intelligence solutions. She is currently a software engineer on the Automotive AI Agent team.

In 2018, María was recognized as one of the 35 Innovators Under 35 by MIT Technology Review in Europe in the technology category for her work developing ELS (Emergency Location Service), the Android emergency location service that helps find people in emergency situations more quickly.

Session: Augmenting and Grounding LLMs with Information Retrieval

Move beyond hallucination: Empower your LLMs to deliver accurate, context-aware responses by seamlessly integrating robust information retrieval pipelines.
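To ground the one-line abstract, here is a minimal sketch of the retrieval step of such a pipeline: embed a handful of passages, retrieve the most similar ones for a query, and assemble them into an augmented prompt. The embedding model, passages, and prompt template are illustrative assumptions, and the sketch stops short of calling a generator.

```python
# Minimal retrieval step of a grounding (RAG) pipeline: embed passages, retrieve the
# closest ones for a query, and build an augmented prompt. Model choice, passages and
# prompt template are illustrative assumptions; the generation call is omitted.
import numpy as np
from sentence_transformers import SentenceTransformer

passages = [
    "The summer school features sessions on causal inference, LLMs and explainability.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
    "The Emergency Location Service helps locate Android users during emergencies.",
]
query = "How can an LLM avoid hallucinating about a niche topic?"

embedder = SentenceTransformer("all-MiniLM-L6-v2")      # assumed embedding model
doc_vecs = embedder.encode(passages, normalize_embeddings=True)
query_vec = embedder.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = doc_vecs @ query_vec
top_k = np.argsort(scores)[::-1][:2]

context = "\n".join(f"- {passages[i]}" for i in top_k)
prompt = (
    "Answer using only the context below; say 'I don't know' otherwise.\n"
    f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
)
print(prompt)   # this prompt would then be sent to the LLM
```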

Alicia Troncoso

Full Professor, Data Science & Big Data Lab, Universidad Pablo de Olavide, Spain

Prof. Alicia Troncoso is a Full Professor of Computer Science at Pablo de Olavide University (UPO) in Seville, Spain, where she leads the Data Science and Big Data Laboratory. Her research focuses on artificial intelligence, data mining, machine learning, and their applications in real-world problems. She has published extensively in international journals and conferences and has led numerous national and European research projects. Prof. Troncoso is also active in promoting women in technology and serves on various scientific committees and editorial boards. Her work bridges academic excellence and practical impact, particularly in data-driven decision-making.

Session: Explainability in Time Series Forecasting

This seminar explores the increasingly important field of explainability in time series forecasting. The first part focuses on foundational concepts in explainable AI (XAI), highlighting the unique challenges posed by temporal data, including autocorrelation, lag dependencies, and multivariate dynamics. It offers a comparative overview of major explainability techniques, emphasizing how forecasting tasks differ from traditional classification or regression in terms of interpretability needs. Key topics include local versus global explanations, the interpretive role of lagged features, and current difficulties in evaluating explanation quality across time-dependent models.

The second part introduces a recent methodological advancement titled "A New Metric Based on Association Rules to Assess Feature-Attribution Explainability Techniques for Time Series Forecasting." Central to this segment is RExQUAL, a model-independent metric that evaluates explanation quality using feature attribution and association rule mining. The seminar details how features identified by explainability methods are used to generate association rules and how new metrics—global support and global confidence—are applied to assess their reliability. Through empirical comparisons involving the novel RULEx technique, RExQUAL is presented as a robust, quantitative framework for benchmarking and improving explainability in forecasting applications.
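To make the rule-based idea tangible, the sketch below computes ordinary support and confidence for a rule defined over "feature appears in the top-k attributions" indicators. This is a simplified illustration invented for this summary, with random stand-in data; it is not the published RExQUAL or RULEx definition.

```python
# Simplified illustration of scoring explanation quality with association-rule
# statistics: support and confidence of the rule
#   {lag_1 and lag_7 in the top-k attributions}  =>  {forecast error below threshold}.
# NOT the published RExQUAL/RULEx definitions; the data here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_instances, k = 500, 3
features = ["lag_1", "lag_2", "lag_7", "temperature", "day_of_week"]

# Fake per-instance attribution scores and forecast errors (stand-ins for a real
# explainer and a real forecasting model).
attributions = rng.random((n_instances, len(features)))
errors = rng.random(n_instances)

top_k = np.argsort(-attributions, axis=1)[:, :k]          # indices of top-k features
in_top_k = np.zeros_like(attributions, dtype=bool)
np.put_along_axis(in_top_k, top_k, True, axis=1)

antecedent = in_top_k[:, features.index("lag_1")] & in_top_k[:, features.index("lag_7")]
consequent = errors < 0.25                                  # "the forecast was accurate"

support = np.mean(antecedent & consequent)
confidence = (antecedent & consequent).sum() / max(antecedent.sum(), 1)
print(f"support    = {support:.3f}")    # how often the whole rule holds
print(f"confidence = {confidence:.3f}") # accuracy given the explanation pattern
```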

João Manuel Portela da Gama

Professor Emeritus, Faculty of Economics, University of Porto, and INESC TEC, Portugal

João Gama is an Emeritus Professor at the School of Economics, University of Porto, Portugal. He received his Ph.D. in Computer Science from the University of Porto in 2000. He taught informatics and data science at the School of Economics for more than 30 years and was Director of the Master in Data Analytics for 12 years. He is a EurAI Fellow, IEEE Fellow, and Fellow of the Asia-Pacific AI Association, and a member of the board of directors of LIAAD, a group belonging to INESC TEC. His main scientific contributions are in the area of learning from data streams, where he has an extensive list of publications. He is the Editor-in-Chief of the International Journal of Data Science and Analytics, published by Springer.

Session: Explaining Rare Events and Anomalies - Explainable Predictive Maintenance

Explainability in learning from data streams is a hot topic in machine learning and data mining. In this talk, we present our work in predictive learning, discuss the application of data stream techniques to predictive maintenance, and propose a two-layer neuro-symbolic approach to explain black-box models. The explanations are oriented toward equipment failures.

The system can present global explanations for the black box model and local explanations for why the black box model predicts a failure. We evaluate the proposed system in a real-world case study of Metro do Porto and explain its benefits.
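As a generic, deliberately library-light illustration of the global-versus-local distinction mentioned above, the sketch below explains a failure classifier in two ways: hand-rolled permutation importance for a global view, and single-feature occlusion for a local "why this predicted failure?" view. The sensor data are simulated and the techniques are standard stand-ins, not the neuro-symbolic system described in the talk.

```python
# Generic illustration of global vs. local explanations for a failure classifier.
# Simulated sensor data; stand-in techniques, not the talk's neuro-symbolic system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5_000
sensors = ["vibration", "temperature", "oil_pressure", "humidity"]
X = rng.normal(size=(n, len(sensors)))
# Failures are driven mainly by vibration and temperature.
y = ((1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)) > 2.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# --- Global explanation: permutation importance computed by hand -----------------
baseline = clf.score(X, y)
print("global importance (accuracy drop when a sensor is shuffled):")
for j, name in enumerate(sensors):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])
    print(f"  {name:<12s} {baseline - clf.score(Xp, y):+.3f}")

# --- Local explanation: why does the model predict a failure for this reading? ---
instance = X[np.argmax(clf.predict_proba(X)[:, 1])]          # most failure-like reading
p_fail = clf.predict_proba([instance])[0, 1]
print(f"\nlocal explanation for one predicted failure (p = {p_fail:.2f}):")
for j, name in enumerate(sensors):
    occluded = instance.copy()
    occluded[j] = X[:, j].mean()                              # replace with a typical value
    delta = p_fail - clf.predict_proba([occluded])[0, 1]
    print(f"  {name:<12s} contributes {delta:+.2f} to the failure probability")
```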

Jean Herelle

Founder and CEO of CrunchDAO, Abu Dhabi & New York City

Jean Herelle is founder and CEO of CrunchDAO, a decentralized research company applying collective intelligence and machine learning to quantitative finance. His work focuses on building collaborative forecasting systems that aggregate models and predictions from data scientists worldwide. Jean integrates statistical learning, time series analysis, and meta-modeling techniques into open competitions to drive innovation in systematic forecasting.

Session: Introduction to ADIA Lab & CrunchDAO Data Competition

In this session, Jean Herelle will guide participants through the practical steps of submitting an initial machine learning model for detecting structural breaks in the ADIA Lab Structural Break data science challenge. The workshop will focus on collaborative iteration, helping attendees refine their models through feedback and experimentation, all the way to contributing to the challenge. The goal is to provide a hands-on introduction to structural break detection and to explore how collective approaches can accelerate progress in machine learning.
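For participants who want a head start, the sketch below shows the kind of first-pass baseline one might try: given a univariate series and a candidate boundary point, score the evidence for a structural break with a two-sample Kolmogorov-Smirnov test. The data format and boundary convention are assumptions; the actual submission interface is defined on the CrunchDAO platform.

```python
# A first-pass structural-break baseline: compare the distributions of a series
# before and after a candidate boundary with a two-sample KS test.
# The data format and boundary convention are assumptions; the real challenge
# interface is defined on the CrunchDAO platform.
import numpy as np
from scipy.stats import ks_2samp

def break_score(values: np.ndarray, boundary: int) -> float:
    """Return a score in [0, 1]; higher means stronger evidence of a break at `boundary`."""
    before, after = values[:boundary], values[boundary:]
    result = ks_2samp(before, after)
    return float(result.statistic)

# Simulated example: the second series has a mean shift halfway through.
rng = np.random.default_rng(0)
no_break = rng.normal(0, 1, size=400)
with_break = np.concatenate([rng.normal(0, 1, 200), rng.normal(0.8, 1, 200)])

print(f"score without break: {break_score(no_break, 200):.2f}")
print(f"score with break:    {break_score(with_break, 200):.2f}")
```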

Roberto Confalonieri

Associate Professor at the Department of Mathematics of the University of Padua, Italy

He is an Associate Professor in the Department of Mathematics at the University of Padua, Italy. Previously, he was an Assistant Professor at the Free University of Bozen-Bolzano (2017–2022) and led the eXplainable AI team at Alpha (2018–2019), Telefónica Research’s European Moonshot projects company. In 2017–2018, he served as Co-Director and Technology Transfer Manager of the Smart Data Factory, the university's technology transfer centre, where he led several industry collaborations and research projects.

Throughout his academic career, he has held postdoctoral positions at institutions across Europe, including UPC BarcelonaTech, University of Barcelona, IRIT, Goldsmiths College, and IIIA-CSIC. He earned his Ph.D. in Artificial Intelligence (with distinction) from the Polytechnic University of Catalonia in 2011 and a degree in Computer Science from the University of Bologna in 2005.

His research focuses on Artificial Intelligence, with particular interests in neuro-symbolic AI, trustworthy and explainable AI, and knowledge representation. He is Senior Editor of Cognitive Systems Research (Elsevier) and Associate Editor of the Neurosymbolic AI journal (IOS Press).

Session: Perspectives on xAI

In this talk, we will explore the current state of the art in Explainable AI and discuss future research directions, with particular emphasis on knowledge-enhanced approaches to XAI.

Serge Thill

Director, Donders Centre for Cognition, Donders Institute for Brain, Cognition, and Behavior, Radboud University, Netherlands

Serge Thill is an associate professor in Artificial Intelligence at Radboud University Nijmegen (Netherlands), where he is also the Director of the Donders Centre for Cognition. He is the chair of the Department for Human-Centred Intelligent Systems and leads a research group on foundational aspects of intelligent technology. He has a background in cognitive science and research interests in cognitive systems as well as human interactions with automated systems.

Session: Interactions between Humans and xAI

In this session, we will take more of a cognitive science perspective to understand how humans interact with intelligent technology, and how explanations may factor into these interactions. We will discuss how design and intentional stances apply to intelligent technology, especially in understanding the difference between intrinsic and ascribed abilities. We will consider (over)trust but also rapid disuse, the importance of ensuring that humans reflect on the decisions that AI systems make, and, last but not least, what they might actually be looking for in an explanation.

José Javier Valle Alonso

Advanced Mathematics Senior Scientist at Repsol Technology Lab, Spain.

Since the creation of the Advanced Mathematics discipline in 2018, he has been working on the modeling of industrial assets within various public and private collaborations. Currently, he focuses on developing the reliability of artificial intelligence applied to the industry.

Additionally, José Javier Valle Alonso is part of the team that leads the ENIA IAFER Chair at the University of Granada, which conducts research on and promotes explainable artificial intelligence.

Freddy José Perozo Rondón

Senior Scientist in Artificial Intelligence at Repsol Technology Lab, Spain.

With over 25 years of experience in industrial technological innovation, he specializes in developing and implementing advanced neural architectures that transform complex data into practical insights for the energy sector. He has focused on Deep Learning models that optimize critical industrial processes, with particular emphasis on explainable AI techniques. Additionally, Freddy José Perozo Rondón is the author of numerous scientific publications in fields ranging from reinforcement learning and multivariate analysis to industrial automation systems. He holds a PhD in Telecommunications Engineering from the University of Valladolid and has previously held leadership positions in academic environments where he drove significant technological innovation projects.

Session: xAI - Transparency and Trust in the Strategic Industry

In recent years, the concept of Explainable Artificial Intelligence (XAI) has gained importance within the industry sector. This talk will explore the significance of XAI in increasing transparency, accountability, and trust in AI systems deployed in critical strategic industries. With the growing reliance on AI for decision-making processes, it is crucial to understand and interpret the mechanisms behind AI models to ensure they align with industrial standards and safety considerations. Attendees will gain insights into how XAI can drive innovation while maintaining rigorous safety and compliance standards, ultimately contributing to more reliable and robust AI systems in the strategic industry landscape.