DeepNNK: Explaining deep models and their generalization using polytope interpolation

Published in arXiv Preprints, 2020

Modern machine learning systems based on neural networks have shown great success in learning complex data patterns while being able to make good predictions on unseen data points. However, the limited interpretability of these systems hinders further progress and their application to several real-world domains. This predicament is exemplified by time-consuming model selection and the difficulties faced in predictive explainability, especially in the presence of adversarial examples. In this paper, we take a step towards a better understanding of neural networks by introducing a local polytope interpolation method. The proposed Deep Non-Negative Kernel regression (NNK) interpolation framework is non-parametric, theoretically simple, and geometrically intuitive. We demonstrate instance-based explainability for deep learning models and develop a method to identify models with good generalization properties using leave-one-out estimation. Finally, we provide a rationalization for adversarial and generative examples, which are inevitable from an interpolation view of machine learning.
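The sketch below illustrates the general idea behind non-negative kernel regression interpolation with a leave-one-out error estimate; it is not the paper's implementation. It assumes a Gaussian kernel, generic feature vectors (e.g., a network's penultimate-layer activations) with scalar or one-hot targets, and hypothetical helper names (`nnk_weights`, `loo_interpolation_error`). The non-negative quadratic program for the weights is solved here via SciPy's NNLS after a Cholesky factorization.

```python
# Minimal sketch (not the authors' code): non-negative kernel regression
# weights for a query point, plus a leave-one-out interpolation error.
import numpy as np
from scipy.optimize import nnls


def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))


def nnk_weights(X_neighbors, x_query, sigma=1.0, reg=1e-8):
    """Non-negative interpolation weights for x_query over its neighbors.

    Solves min_{theta >= 0} 1/2 theta^T K theta - theta^T k, with K the
    neighbor-neighbor kernel and k the neighbor-query kernel, by rewriting
    the QP as a non-negative least-squares problem.
    """
    K = gaussian_kernel(X_neighbors, X_neighbors, sigma)
    k = gaussian_kernel(X_neighbors, x_query[None, :], sigma).ravel()
    L = np.linalg.cholesky(K + reg * np.eye(len(K)))  # K = L L^T
    b = np.linalg.solve(L, k)                         # so ||L^T theta - b||^2 matches the QP
    theta, _ = nnls(L.T, b)
    s = theta.sum()
    return theta / s if s > 0 else theta


def loo_interpolation_error(X, y, k_neighbors=10, sigma=1.0):
    """Leave-one-out squared error of NNK-style label interpolation on (X, y)."""
    n, errors = len(X), []
    for i in range(n):
        rest = np.delete(np.arange(n), i)
        # Restrict the interpolation to the k nearest remaining points.
        d = np.linalg.norm(X[rest] - X[i], axis=1)
        nbrs = rest[np.argsort(d)[:k_neighbors]]
        w = nnk_weights(X[nbrs], X[i], sigma)
        y_hat = w @ y[nbrs]
        errors.append(np.mean((y_hat - y[i]) ** 2))
    return float(np.mean(errors))
```

Under this interpolation view, a lower leave-one-out error on the learned features can serve as a proxy for better generalization, which is the kind of model-selection signal discussed in the paper.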

Download paper here

@article{shekkizhar2020deepnnk,
  title={DeepNNK: Explaining deep models and their generalization using polytope interpolation},
  author={Shekkizhar, Sarath and Ortega, Antonio},
  journal={arXiv preprint arXiv:2007.10505},
  year={2020}
}