Model selection and explainability in neural networks using a polytope interpolation framework

Published in Asilomar Conference on Signals, Systems, and Computers, 2021

Modern machine learning systems based on neural networks have shown great success in learning complex data patterns while making good predictions on unseen data points. However, the limited understanding of these systems hinders further progress and application to several real-world domains. This predicament is exemplified by time-consuming model selection and the difficulties faced in predictive explainability, especially in the presence of adversarial examples. In this paper, we take a step towards a better understanding of neural networks by introducing a local polytope interpolation method. The proposed Deep Non-Negative Kernel regression (NNK) interpolation framework is non-parametric, theoretically simple, and geometrically intuitive. We demonstrate instance-based explainability and develop a method to identify models with good generalization properties using leave-one-out estimation.
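At its core, NNK interpolation predicts a query's label as a convex combination of its neighbors' labels, with non-negative weights obtained by a kernel-space least squares fit. The sketch below is an illustrative assumption of how such a local interpolator could look, not the authors' reference implementation; the function names, the RBF kernel choice, and the `k`-nearest-neighbor pre-selection are all assumptions made for the example.

```python
# Hedged sketch of local NNK-style interpolation. All names and details
# here are illustrative assumptions, not the paper's reference code.
import numpy as np
from scipy.optimize import nnls


def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def nnk_weights(X_train, x_query, k=10, sigma=1.0, eps=1e-8):
    """Sparse non-negative weights over the k nearest neighbors of x_query.

    Solves   min_{theta >= 0}  theta^T K theta - 2 theta^T k_q
    via the equivalent NNLS problem  min || L^T theta - L^{-1} k_q ||,
    where K = L L^T is the Cholesky factorization of the neighbor kernel.
    """
    # Restrict the problem to the k nearest neighbors (Euclidean distance).
    idx = np.argsort(((X_train - x_query) ** 2).sum(-1))[:k]
    K = rbf_kernel(X_train[idx], X_train[idx], sigma) + eps * np.eye(k)
    k_q = rbf_kernel(X_train[idx], x_query[None, :], sigma).ravel()
    L = np.linalg.cholesky(K)
    theta, _ = nnls(L.T, np.linalg.solve(L, k_q))
    return idx, theta


def nnk_interpolate(X_train, y_train, x_query, k=10, sigma=1.0):
    """Predict the query label as a normalized NNK-weighted average."""
    idx, theta = nnk_weights(X_train, x_query, k, sigma)
    if theta.sum() == 0.0:  # degenerate case: fall back to 1-NN
        return float(y_train[idx[0]])
    return float(theta @ y_train[idx] / theta.sum())
```

The same estimator supports the leave-one-out idea from the abstract: interpolating each training point from the remaining ones gives a non-parametric generalization estimate, since the prediction is fully determined by the data and kernel rather than by fitted parameters.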

@article{shekkizhar2020deepnnk,
  title={DeepNNK: Explaining deep models and their generalization using polytope interpolation},
  author={Shekkizhar, Sarath and Ortega, Antonio},
  journal={arXiv preprint arXiv:2007.10505},
  year={2020}
}