Anomaly detection is the task of automatically identifying instances that are dissimilar to the others, which are considered normal. Such instances are called anomalies. Automatic detection of anomalies saves users the time of examining large numbers of normal cases in order to find outliers, and finds application in several fields, such as surveillance and medicine. However, validating why an instance is anomalous is not easy. An explanation of why an instance is classified as an anomaly could help the user focus only on the truly important ones. In the last decade, algorithms that can explain the outcome of a deep learning model have been introduced. These are called explanation methods, and their goal is to make the model they are applied to interpretable, thus fostering the user's trust in the model. Very recently, some efforts have been made to apply explanation methods to the outcome of anomaly detection methods [3, 4], but this is still a field that needs to be explored.
The goal of this thesis is to study recent advances in the field of explanation methods, with particular emphasis on applying these methods to explain anomalies detected by complex models in a real-world problem.
- Initial research on explainable artificial intelligence and explanation methods for deep learning models;
- Research on anomaly detection models;
- Implementation of state-of-the-art explanation methods;
- Application of these methods to an anomaly detection model for a real problem.
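To give a flavor of the kind of pipeline the last two activities involve, here is a minimal illustrative sketch, not the project's actual method: it assumes scikit-learn's IsolationForest as the anomaly detector (the thesis targets more complex models) and uses a crude occlusion-style attribution as the explanation, i.e., it measures how much the anomaly score recovers when each feature is replaced by a typical value.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic data: 200 normal points plus one injected anomaly
# that is extreme only along feature 0.
rng = np.random.RandomState(0)
normal = rng.normal(0.0, 1.0, size=(200, 3))
anomaly = np.array([[8.0, 0.0, 0.0]])
X = np.vstack([normal, anomaly])

# Fit the detector; lower decision_function values mean "more anomalous".
model = IsolationForest(random_state=0).fit(X)
scores = model.decision_function(X)
idx = int(np.argmin(scores))  # index of the most anomalous instance

# Occlusion-style explanation: replace one feature at a time with the
# training median and record how much the anomaly score improves.
# The feature whose replacement helps most is the main "culprit".
baseline = np.median(normal, axis=0)
contrib = []
for j in range(X.shape[1]):
    x = X[idx].copy()
    x[j] = baseline[j]
    contrib.append(model.decision_function(x.reshape(1, -1))[0] - scores[idx])

culprit = int(np.argmax(contrib))
print(f"most anomalous instance: {idx}, dominant feature: {culprit}")
```

Occlusion is one of the simplest attribution ideas; state-of-the-art explanation methods (e.g., gradient-based or Shapley-value-based attributions) refine the same question of which inputs drive the model's output.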
Who we’re looking for
Students who are about to obtain their Master's degree in computer science, computer engineering, mechatronic engineering, mathematical engineering, mathematics, physics, or informatics.
- Proficiency in at least one programming language (Python, Lua, MATLAB, C++, Java); Python is preferred;
- Basic knowledge of machine learning and deep learning algorithms.
Duration of this project: 6-8 months