Artificial Neural Networks are a biologically inspired programming paradigm that enables a computer to learn from observational data.
In the form of Deep Neural Networks (DNNs), they have achieved outstanding performance in a great number of different areas, from computer vision to video games.
The main drawback of applying DNNs in the real world is their lack of explainability: the DNN acts as a black box, providing no detailed information about why it reaches a given classification or regression decision.
Recently there have been many efforts to design explanation algorithms, both for generic machine learning models [4, 5, 6] and specifically for DNNs.
The purpose of this thesis is to analyze and compare recent explanation algorithms for DNNs. The student will conduct extensive experiments on DNNs used for industrial applications.
- Acquire strong knowledge about the most recent DNN architectures and training procedures;
- Investigate, analyze and compare the recently proposed explanation algorithms;
- Conduct extensive experiments to explain the predictions of DNNs used in industrial projects.
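To give a flavor of the kind of algorithm the thesis will study, the sketch below computes a basic gradient-based saliency explanation: the gradient of a network's output score with respect to its input features, whose magnitude indicates which features locally influence the prediction most. The tiny two-layer network and its random weights are placeholders for illustration only; in practice the weights would come from a trained DNN.

```python
import numpy as np

# Placeholder weights for a tiny feed-forward network (input -> hidden -> score).
# In a real setting these would be the parameters of a trained DNN.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 1))   # hidden -> scalar output score

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h, (h @ W2).item()  # scalar score for one class

def saliency(x):
    """Gradient of the output score w.r.t. the input features.

    A large |gradient| for a feature means the prediction is locally
    most sensitive to that feature -- a simple saliency explanation.
    """
    h, _ = forward(x)
    dh = W2[:, 0] * (1.0 - h**2)   # backprop through tanh
    return dh @ W1.T               # chain rule back to the input

x = rng.normal(size=4)
g = saliency(x)
print("feature importance:", np.abs(g))
```

For deep architectures the same gradient is obtained automatically (e.g. via autodiff in standard deep learning frameworks) rather than by hand, but the principle is identical.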
Duration of this Project: 5-6 months.
Competencies to be acquired
The candidate will acquire:
- Expertise on Deep Learning;
- Critical thinking about the interpretation of black-box machine learning algorithms.
Who we’re looking for
Students about to obtain their Master's degree in mathematical engineering, computer science, computer engineering, electronic engineering, mathematics, physics, or physics of complex systems, with:
- Proficiency in at least one programming language (Python, Lua, MATLAB, C++, Java);
- Basic knowledge of machine learning, in particular, supervised learning;
- Good knowledge of linear algebra.