Studying at the University of Verona
Here you can find information on the organisational aspects of the Programme, lecture timetables, learning activities and useful contact details for your time at the University, from enrolment to graduation.
Study Plan
The Study Plan includes all modules, teaching and learning activities that each student will need to undertake during their time at the University.
Please select your Study Plan based on your enrolment year.
1st Year
Modules | Credits | TAF | SSD |
---|---|---|---|
1 module among the following | | | |
2 modules among the following | | | |
2 modules among the following | | | |
2 modules among the following | | | |
Legend | Type of Educational Activity (TAF)
TAF (Type of Educational Activity): all courses and activities are classified into different types of educational activity, indicated by a letter.
Explainable AI (2023/2024)
Teaching code
4S010683
Academic staff
Coordinator
Credits
6
Language
English
Scientific Disciplinary Sector (SSD)
INF/01 - INFORMATICS
Period
Semester 1, from Oct 2, 2023 to Jan 26, 2024.
Single courses
Authorized
Learning objectives
This course builds on knowledge of statistical, machine and deep learning methods and aims to provide the means to understand the "why" and the "how" of their outcomes. After introducing the basic concepts, a taxonomy of existing methods will be presented, followed by the main state-of-the-art approaches to neuro-symbolic AI. The theoretical part will be complemented by practical sessions in which the acquired concepts are put into practice on specific case studies.
At the end of the course, students will have acquired fundamental skills in explainability, interpretability, randomness and causality. They will know the main methods for interpretability (intrinsic, post-hoc, model-specific, model-agnostic, local, global, etc.) and their properties (sensitivity, implementation invariance, separability, stability, completeness, correctness, compactness); the main types of explanations and their properties (accuracy, fidelity, consistency, stability, comprehensibility, certainty and relevance); and the main visualization methods (activation maps, LRP, Grad-CAM). Additionally, students will be expected to demonstrate knowledge of state-of-the-art approaches to neuro-symbolic artificial intelligence, with a main focus on:
- standard deep learning;
- symbolic solvers that use neural networks as subroutines for state estimation;
- hybrid systems in which a neural network and a symbolic system specialize in complementary tasks and interact through input/output;
- symbolic knowledge compiled into the training set of a neural network;
- neural computing systems that contain symbolic reasoning systems (type 1 and type 2 reasoning).
Examination methods
To pass the exam, students must demonstrate that they:
- have understood the theoretical and methodological aspects of the course;
- know how to apply the acquired knowledge to solve application problems presented in the form of exercises, questions and projects.
Prerequisites and basic notions
Fundamentals of machine and deep learning
Program
Program Part 1
- Explainable AI introduction: motivations, definitions and tools
- Causal analysis of time series: from Granger causality to efficient conditional independence testing with PCMCI
- Causal analysis labs: intro to tigramite for causal analysis of time series, and application to offline anomaly detection for a real system (a minimal code sketch follows this list)
- Explainable planning: from the planning domain definition language to logic programming implementation
- Answer Set Programming (ASP) for explainable planning
- Inductive Logic Programming (ILP) under the answer set semantics
- ASP lab: the Clingo solver for autonomous planning in a simple domain (see the planning sketch after this list)
- ILP lab: learning planning specifications from observations
- Seminar lectures on neurosymbolic AI: deep learning meets logic programming and non-monotonic reasoning (Prof. Mohan Sridharan, Univ. of Birmingham, UK)
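To give a flavour of the causal-analysis labs, below is a minimal PCMCI sketch using tigramite on synthetic data. The toy dataset, the ParCorr test and all parameter values are illustrative choices rather than the actual lab material, and import paths vary across tigramite versions (the ones below match the 5.x series).

```python
# Minimal PCMCI run on a synthetic X -> Y (lag 1) system.
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.parcorr import ParCorr  # path differs in older versions

rng = np.random.default_rng(0)
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * x[t - 1] + 0.2 * rng.standard_normal()  # X drives Y at lag 1

dataframe = pp.DataFrame(np.column_stack([x, y]), var_names=["X", "Y"])

# Linear partial correlation as the conditional-independence test.
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=2, pc_alpha=0.05)
pcmci.print_significant_links(
    p_matrix=results["p_matrix"],
    val_matrix=results["val_matrix"],
    alpha_level=0.01,
)
```

On this toy system, the significant-links printout should recover X at lag 1 as the driver of Y, which is the kind of "why" answer the anomaly-detection lab builds on.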
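For the ASP lab, the following sketch shows the shape of a clingo-based planner on an invented two-room domain; the encoding and the goal are ours, for illustration only, and the lab's actual domain will differ.

```python
# Toy ASP planning through clingo's Python API: reach room b by the horizon.
import clingo

PROGRAM = """
#const horizon = 3.
time(0..horizon).
room(a;b).
at(a, 0).

% Choose at most one move per time step.
{ move(R, T) : room(R) } 1 :- time(T), T < horizon.

% Effect of moving, and inertia when no move happens.
at(R, T+1) :- move(R, T).
at(R, T+1) :- at(R, T), not moved(T), time(T), T < horizon.
moved(T)   :- move(R, T).

% Goal: be in room b at the horizon.
:- not at(b, horizon).
#show move/2.
"""

ctl = clingo.Control(["1"])  # "1": stop after the first answer set (plan)
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Plan:", model))
```

Each answer set is a plan, and because the rules are readable logic, the plan explains itself: every `move` atom is justified by the effect, inertia and goal rules above.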
Program Part 2
This part covers the most widespread state-of-the-art methods and tools for explainability and interpretability. It is a guided tour that starts from multivariate linear models and arrives at deep networks.
Introduction (main concepts, taxonomy, showcases)
Interpretable models
- Linear regression, logistic regression, GLMs, decision trees, etc.
Global model-agnostic methods
- Partial Dependence Plots (PDP), Accumulated Local Effects (ALE), feature engineering
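As an example of how these global methods look in practice, here is a partial-dependence sketch with scikit-learn (`PartialDependenceDisplay` assumes scikit-learn >= 1.0); the synthetic dataset and the boosted model are placeholders. ALE is not in scikit-learn itself; packages such as alibi provide it.

```python
# Partial dependence of a boosted regressor on two features (sketch).
import matplotlib.pyplot as plt
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_friedman1(n_samples=500, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Marginal effect of features 0 and 1 on the average prediction.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
plt.show()
```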
Local model-agnostic methods
- Local surrogates (LIME), SHapley Additive exPlanations (SHAP)
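A minimal SHAP sketch on a tabular classifier is shown below; the dataset, the forest model and the beeswarm plot are illustrative choices, and the high-level `Explainer` API assumes a reasonably recent shap release.

```python
# Local Shapley-value attributions, summarized globally (sketch).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# shap.Explainer dispatches to TreeExplainer for tree ensembles.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# Beeswarm of per-sample attributions for the positive class.
shap.plots.beeswarm(shap_values[:, :, 1])
```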
Focus on Deep Neural Networks
- Recap on deep learning
- Feature learning and visualization (connection with Visual Intelligence)
- Pixel attribution and Saliency maps
- Gradient-based methods (Integrated gradients, occlusions)
- Layerwise Relevance Propagation
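To make the gradient-based methods concrete, here is a from-scratch integrated-gradients sketch in PyTorch. The helper and the toy model are ours (libraries such as Captum ship tested implementations); it approximates the path integral of Sundararajan et al. (2017) with a Riemann sum.

```python
# Integrated gradients via a right Riemann sum over the path
# from a baseline to the input (sketch, not a library API).
import torch
import torch.nn as nn

def integrated_gradients(model, x, target, baseline=None, steps=50):
    if baseline is None:
        baseline = torch.zeros_like(x)  # zero baseline, a common default
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps + 1)[1:]:
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[..., target].sum()  # score of the target class
        total += torch.autograd.grad(score, point)[0]
    # Completeness: attributions sum (approximately) to F(x) - F(baseline).
    return (x - baseline) * total / steps

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(1, 4)
print(integrated_gradients(model, x, target=0))  # per-feature attributions
```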
Validation of XAI outcomes
- Association studies
- Proxies
Each lesson will be complemented by a hands-on session during the Lab.
Didactic methods
In-person lectures
Learning assessment procedures
The exam will consist of the discussion of a project.
Evaluation criteria
To pass the exam, students must demonstrate that they:
- have understood the theoretical and methodological aspects covered by the course;
- know how to apply the knowledge acquired to solve application problems presented in the form of exercises, questions and projects.
Criteria for the composition of the final grade
The final grade will be the average of the grades of the two modules.
Exam language
English