Scientific Disciplinary Sector (SSD)
INF/01 - INFORMATICS
The teaching is organized as follows:
The course aims to provide the theoretical and practical foundations for integrating data from possibly heterogeneous sources and for the subsequent extraction of summary information and knowledge. By completing the course, students will be able to tackle complex data mining problems by designing and implementing a full pipeline that integrates the necessary data sources, selects and applies the data mining techniques adequate to a specific problem, and evaluates its performance. Given a data mining problem from a real-world domain, ranging from industry to healthcare, the course enables students to design, apply, and test original solutions, or modifications of existing ones, and to evaluate the feasibility of the proposed solution in a real environment.
Functional Dependencies (FD):
concepts and applications of FDs, enforcing and verifying FDs in PostgreSQL
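As a minimal sketch of FD verification (here in plain Python over a list of dicts, not the course's PostgreSQL implementation; the `patients` relation and attribute names are hypothetical examples):

```python
def holds_fd(rows, lhs, rhs):
    """Check whether the attributes in lhs functionally determine rhs."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        if key in seen and seen[key] != row[rhs]:
            return False  # same LHS values, different RHS value: FD violated
        seen[key] = row[rhs]
    return True

patients = [
    {"ssn": "A1", "city": "Pavia", "zip": "27100"},
    {"ssn": "B2", "city": "Pavia", "zip": "27100"},
    {"ssn": "C3", "city": "Milan", "zip": "20100"},
]
print(holds_fd(patients, ["zip"], "city"))  # True: zip -> city holds here
print(holds_fd(patients, ["city"], "ssn"))  # False: city -> ssn is violated
```

In SQL the same check can be phrased as a `GROUP BY` on the LHS attributes with a `HAVING COUNT(DISTINCT rhs) > 1` filter: the FD holds when the query returns no rows.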
Approximate Functional Dependencies (AFD):
introducing approximation into FDs as a confidence measure. Knowledge extraction using AFDs: examples. AFD analysis.
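One way to quantify the confidence of an AFD is the fraction of tuples that can be kept so that the FD holds exactly (a g3-style measure; the course may adopt a different one, and the `rows` data below is a made-up example):

```python
from collections import Counter, defaultdict

def afd_confidence(rows, lhs, rhs):
    # For each LHS group keep the most frequent RHS value; confidence is
    # the fraction of tuples retained.
    groups = defaultdict(Counter)
    for row in rows:
        groups[tuple(row[a] for a in lhs)][row[rhs]] += 1
    kept = sum(c.most_common(1)[0][1] for c in groups.values())
    return kept / len(rows)

rows = [
    {"dept": "ER", "floor": 1},
    {"dept": "ER", "floor": 1},
    {"dept": "ER", "floor": 2},   # one exception to dept -> floor
    {"dept": "Lab", "floor": 3},
]
print(afd_confidence(rows, ["dept"], "floor"))  # 0.75
```

A confidence of 1.0 means the dependency holds exactly; values close to 1 flag AFDs worth reporting as extracted knowledge.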
Algorithms for extracting AFDs:
minimal AFDs: definition, semantics, and analysis. Theoretical lower bounds on the number of minimal AFDs: the curse of cardinality. Basic algorithm for extracting minimal AFDs. Compact representations of sets of extracted AFDs. Randomized algorithms for extracting minimal AFDs: theory and implementation.
Approximation in presence of measures:
Delta Functional Dependencies (DFDs): definition, application, and verification. Analysis of DFDs extracted from the biomedical domain. Approximate DFDs (ADFDs): definition, applications, and analysis in the biomedical domain (examples). Algorithm for verifying a single ADFD restricted to the case of 2 measures (2ADFD): complexity and implementation. Extraction of minimal 2ADFDs from data.
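A DFD relaxes an FD on a numeric measure: a minimal sketch, assuming the definition "within every group sharing the same LHS values, the measure spans at most delta" (the course's formal definition may differ in details; the `readings` data is a hypothetical biomedical example):

```python
from collections import defaultdict

def holds_dfd(rows, lhs, measure, delta):
    """Check whether lhs delta-determines the given numeric measure."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in lhs)].append(row[measure])
    # The DFD holds if no group's measure values spread more than delta.
    return all(max(v) - min(v) <= delta for v in groups.values())

readings = [
    {"patient": "P1", "temp": 36.8},
    {"patient": "P1", "temp": 37.1},
    {"patient": "P2", "temp": 38.5},
]
print(holds_dfd(readings, ["patient"], "temp", delta=0.5))  # True
print(holds_dfd(readings, ["patient"], "temp", delta=0.1))  # False
```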
Association Rules (ARs):
definition, examples in the biomedical domain. Extraction of ARs: support and confidence. Theoretical analysis: the curse of cardinality. Frequent Itemsets (FIs): definition, role in the extraction of ARs, and algorithm for candidate generation. AR extraction from sets of FIs. Sets of FIs: minimal sets, closed sets.
Strategies for exploring FI lattices. Alternatives to the standard extraction algorithm using specific data structures (hash trees, FP-trees). Evaluation of association patterns: drawbacks of the support/confidence framework; examples of paradoxes. Alternative measures for association pattern analysis: definition and examples.
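The two basic measures can be sketched directly from their definitions (the transaction database below is a made-up biomedical example):

```python
transactions = [
    {"aspirin", "statin"},
    {"aspirin", "statin", "insulin"},
    {"aspirin", "insulin"},
    {"statin"},
]

def support(itemset):
    # Fraction of transactions containing the whole itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # Conditional frequency of the consequent given the antecedent.
    return support(antecedent | consequent) / support(antecedent)

print(support({"aspirin", "statin"}))       # 0.5
print(confidence({"aspirin"}, {"statin"}))  # 0.666...
```

The curse of cardinality shows up immediately: with n distinct items there are 2^n - 1 candidate itemsets, which is why candidate generation prunes the lattice using the anti-monotonicity of support.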
Extraction Transformation and Loading (ETL):
definition, functions, role inside a data warehouse, data flows. Basic entities of ETL procedures and how they work: Job, Transformation, Job Step, Transformation Step. Conceptual modelling of ETL procedures in Business Process Model and Notation (BPMN). Modelling examples: case studies. Embedding external procedures into ETL procedures: communication, staging, and error management. API (Application Programming Interface) usage inside ETL procedures. Short description of XPath constructs and how to use them. Screen scraping of websites in ETL procedures by using XPath. Using Business Intelligence tools to realize ETL procedures.
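XPath selection can be illustrated with the standard library's limited XPath subset (real screen scraping would fetch a live page and typically use a fuller XPath engine such as lxml; the fragment and the `prices` table id below are hypothetical):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<html><body>
  <table id="prices">
    <tr><td>widget</td><td>9.90</td></tr>
    <tr><td>gadget</td><td>4.50</td></tr>
  </table>
</body></html>""")

# Select all rows of the table whose id attribute equals 'prices'.
for row in doc.findall(".//table[@id='prices']/tr"):
    name, price = (td.text for td in row.findall("td"))
    print(name, price)
```

In an ETL flow the same expression would sit inside a scraping step, turning semi-structured web content into rows for a staging table.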
Decision Trees:
introduction to the concept of entropy. Decision trees in the biomedical context. The Iterative Dichotomiser 3 (ID3) classifier: algorithm, examples, and implementation. Discretization of measures. Using ID3 for discretizing measures: problems, modifications, and implementation. Temporal analysis applications.
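Entropy and information gain, the quantities ID3 maximizes when choosing a split attribute, can be sketched as follows (the toy `data` set is a made-up biomedical example):

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy of a class-label multiset, in bits.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target):
    # Expected entropy reduction when splitting on attr: ID3 picks the
    # attribute with the largest gain at each node.
    base = entropy([r[target] for r in rows])
    split = {}
    for r in rows:
        split.setdefault(r[attr], []).append(r[target])
    remainder = sum(len(v) / len(rows) * entropy(v) for v in split.values())
    return base - remainder

data = [
    {"fever": "yes", "flu": "yes"},
    {"fever": "yes", "flu": "yes"},
    {"fever": "no",  "flu": "no"},
    {"fever": "no",  "flu": "yes"},
]
print(information_gain(data, "fever", "flu"))  # ~0.311
```

Discretizing a measure with ID3 amounts to treating candidate cut points as binary attributes and keeping the cut with the highest gain.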
Reporting and OLAP (Online Analytical Processing):
Interactive reporting systems: querying large databases, parametrization of reports. Dynamic retrieval of report information by using ETL transformations. Modelling analyses using OLAP cubes and their implementation: case studies. Using Business Intelligence tools to realize dynamic/interactive reports and OLAP cubes.
Distributed Data Mining:
elements of distributed computing; splitting a data mining problem to solve it in a distributed fashion; modelling and implementing a distributed system for data mining. How to use NoSQL databases for distributed data mining.
Probabilistic Analysis of Processes:
Qualitative analysis of a process using process mining and process discovery
techniques. Extraction and transformation of processes into
probabilistic models (Markov Chains, Markov Decision Processes).
Tools for probabilistic analysis of systems (PRISM model checker).
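The step from an event log to a probabilistic model can be sketched as estimating a Markov chain's transition probabilities from observed successor frequencies (the hospital traces below are a hypothetical example; a tool such as the PRISM model checker would then analyze the resulting model):

```python
from collections import Counter, defaultdict

traces = [
    ["register", "triage", "treat", "discharge"],
    ["register", "triage", "discharge"],
    ["register", "treat", "discharge"],
]

# Count observed transitions between consecutive activities in each trace.
counts = defaultdict(Counter)
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1

# Normalize counts into per-state transition probabilities.
chain = {state: {nxt: c / sum(succ.values()) for nxt, c in succ.items()}
         for state, succ in counts.items()}
print(chain["register"])  # {'triage': 0.666..., 'treat': 0.333...}
```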
D.J. Hand, H. Mannila, P. Smyth
Principles of Data Mining
MIT Press, Cambridge, MA, USA, 2001
Roland Bouman, Jos van Dongen
Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL
Wiley Publishing, Inc.
T. Hastie, R. Tibshirani, J. Friedman
The Elements of Statistical Learning: Data Mining, Inference, and Prediction
example data (in .csv format) for completing the exercises proposed during classes;
implementation of the procedures introduced during the course;
Jupyter notebooks and Docker containers for easily running the algorithms explained during the lectures.
View the bibliography with Leganto, the tool the Library System makes available to retrieve the texts in the exam syllabus in a simple and innovative way.
The exam modality aims to verify the autonomy and the skills of the student in applying the concepts provided during the course for realizing a full end-to-end pipeline for a given Data Mining problem.
The exam consists of an interview on the implementation
of two projects assigned during classes, one for each macro-topic of the course:
1) ETL and OLAP Analysis
2) Data Mining;
The two projects may be completed either as a team or individually. A necessary but not sufficient condition for passing the exam is that both project implementations are complete. Each project is evaluated on a scale from 1 to 15 inclusive; the final grade is the sum of the two project grades.
There is no difference in the exam modality between students who attended the course and students who did not.