Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles.

Authors: Pattie Maes, Utkarsh Sarawgi, Rishab Khincha, Wazeer Zulfikar

DOI:

Keywords: Heteroscedasticity, Mixture model, Machine learning, Artificial neural network, Computer science, Flexibility (engineering), Bayesian probability, Benchmark (computing), Multivariate normal distribution, Artificial intelligence, Black box

Abstract: Understanding and quantifying uncertainty in black box Neural Networks (NNs) is critical when they are deployed in real-world settings such as healthcare. Recent works using Bayesian and non-Bayesian methods have shown how a unified predictive uncertainty can be modelled for NNs. Decomposing this uncertainty to disentangle the granular sources of heteroscedasticity in the data provides rich information about its underlying causes. We propose a conceptually simple approach, deep split ensembles, to disentangle the predictive uncertainties using a multivariate Gaussian mixture model. The NNs are trained with clusters of input features, yielding uncertainty estimates per cluster. We evaluate our approach on a series of benchmark regression datasets, while also comparing with unified uncertainty methods. Extensive analyses using dataset shifts and the empirical rule highlight our inherently well-calibrated models. Our work further demonstrates its applicability in a multi-modal setting using a benchmark Alzheimer's dataset, and also shows how deep split ensembles can highlight hidden modality-specific biases. The minimal changes required to the training procedure, and the high flexibility to group features into clusters, make it readily deployable and useful. The source code is available at https URL
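The core idea in the abstract (one ensemble member per cluster of input features, with per-cluster uncertainty estimates combined as a Gaussian mixture) can be sketched in a few lines. The sketch below substitutes least-squares members for NNs, and the toy data, the feature clustering, and the equal mixture weights are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 4 input features, split into 2 feature clusters.
# (This grouping is an illustrative assumption, not the paper's recipe.)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.3, size=200)
clusters = [[0, 1], [2, 3]]

def fit_cluster(Xc, y):
    """Fit one 'member' model on a single feature cluster; its residual
    variance stands in for the member's heteroscedastic uncertainty."""
    w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ w
    return w, resid.var()

members = [fit_cluster(X[:, c], y) for c in clusters]

def predict(x):
    """Equal-weight Gaussian mixture over the per-cluster members:
    the mixture variance sums the within-member variances and the
    disagreement between member means (standard mixture moments)."""
    mus = np.array([x[c] @ w for c, (w, _) in zip(clusters, members)])
    sig2 = np.array([s2 for _, s2 in members])
    mu = mus.mean()
    var = (sig2 + mus**2).mean() - mu**2
    return mu, var

mu, var = predict(X[0])
print(mu, var)  # predictive mean and mixture variance for one input
```

Because each member sees only its own feature cluster, the per-member variances expose which feature group drives the predictive uncertainty, rather than collapsing everything into a single unified estimate.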

References (3)
Alex Graves. "Practical Variational Inference for Neural Networks." Neural Information Processing Systems, vol. 24, pp. 2348-2356 (2011).
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra. "Weight Uncertainty in Neural Networks." International Conference on Machine Learning, pp. 1613-1622 (2015).
Xin Qiu, Elliot Meyerson, Risto Miikkulainen. "Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel." International Conference on Learning Representations (2020).