This repository contains the code for the paper *Feature Importance Measure for Non-linear Learning Algorithms* (https://arxiv.org/abs/1611.07567) by Marina Marie-Claire Vidovic, presented at NIPS 2016 in the Workshop on Interpretable ML for Complex Systems.
MFI is an algorithm developed to explain arbitrary classifiers in two ways:
- model-based explanation: what has the classifier learned in total?
- instance-based explanation: given a specific data point (instance), which features of this data point drive the classifier's prediction?
Download or clone the repository:
https://github.com/mcvidomi/MFI.git
Then run `demo.py`. It takes about one minute.
The USPS (United States Postal Service handwritten digits) dataset will be downloaded to `data/usps`. An SVM with an RBF kernel is trained on the data. Afterwards, MFI is computed for the instance-based explanation (four digits were chosen as examples) and for the model-based explanation.
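The training step can be sketched as follows. This is an illustrative stand-in for what `demo.py` does, not the repository's actual code: scikit-learn's bundled 8x8 digits dataset replaces USPS, and the '3'-vs-'8' task and the sign convention of the score are taken from the description below.

```python
# Illustrative sketch (assumption: NOT the repository's demo.py).
# Train an RBF-kernel SVM on a '3'-vs-'8' digit task; sklearn's bundled
# digits dataset stands in for USPS.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
mask = np.isin(digits.target, (3, 8))
X = digits.data[mask]                            # 8x8 images flattened to 64 features
y = np.where(digits.target[mask] == 8, 1, -1)    # +1 -> '8', -1 -> '3'

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Signed prediction score: < 0 -> decide for '3', > 0 -> decide for '8'
scores = clf.decision_function(X_test)
accuracy = clf.score(X_test, y_test)
```

An explanation method like MFI then asks which input features produced these scores.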
The first row shows the raw digits.
The second row overlays the outline of each digit on the corresponding MFI result. Above each image, the classifier's prediction score is plotted: a score < 0 means the classifier decides for '3', a score > 0 for '8'.
The resulting heat map shows the pixels that are important for the classifier when deciding for a '3' instead of an '8'.
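To make the idea of such a heat map concrete, here is a minimal occlusion-sensitivity sketch. This is NOT the MFI measure from the paper, only a generic baseline under the same setup: each pixel of one instance is zeroed in turn, and the change in the SVM's decision score is recorded as that pixel's importance.

```python
# Illustrative occlusion-sensitivity baseline (assumption: NOT the MFI
# algorithm itself). Importance of a pixel = drop in the decision score
# when that pixel is zeroed out.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

digits = load_digits()
mask = np.isin(digits.target, (3, 8))
X = digits.data[mask]
y = np.where(digits.target[mask] == 8, 1, -1)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

x = X[0]                                   # one instance to explain
base = clf.decision_function([x])[0]       # its original prediction score

importance = np.zeros_like(x)
for i in range(x.size):
    occluded = x.copy()
    occluded[i] = 0.0                      # zero out one pixel
    importance[i] = base - clf.decision_function([occluded])[0]

heatmap = importance.reshape(8, 8)         # visualize e.g. with plt.imshow
```

Pixels with large positive values pushed the score toward the predicted class; the MFI heat maps in the figure convey the same kind of per-pixel relevance.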