Authors: Pattie Maes, Utkarsh Sarawgi, Rishab Khincha, Wazeer Zulfikar
DOI:
Keywords: Heteroscedasticity, Mixture model, Machine learning, Artificial neural network, Computer science, Flexibility (engineering), Bayesian probability, Benchmark (computing), Multivariate normal distribution, Artificial intelligence, Black box
Abstract: Understanding and quantifying uncertainty in black box Neural Networks (NNs) is critical when they are deployed in real-world settings such as healthcare. Recent works using Bayesian and non-Bayesian methods have shown how a unified predictive uncertainty can be modelled for NNs. Decomposing this uncertainty to disentangle the granular sources of heteroscedasticity in the data provides rich information about its underlying causes. We propose a conceptually simple approach, deep split ensemble, to disentangle the predictive uncertainties using a multivariate Gaussian mixture model. The NNs are trained with clusters of input features, with uncertainty estimates per cluster. We evaluate our approach on a series of benchmark regression datasets, while also comparing with unified uncertainty methods. Extensive analyses using dataset shifts and the empirical rule highlight our inherently well-calibrated models. Our work further demonstrates its applicability in a multi-modal setting on an Alzheimer's dataset, and shows how deep split ensembles can uncover hidden modality-specific biases. The minimal changes required to the training procedure, and the high flexibility to group features into clusters, make it readily deployable and useful. The source code is available at https URL
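To illustrate the core idea described in the abstract, the sketch below trains one predictor per cluster of input features and combines the per-cluster predictions and variances as a uniform-weight Gaussian mixture. This is a minimal, hypothetical reconstruction, not the authors' implementation: the feature grouping, the least-squares fit standing in for a trained NN head, and the single residual variance per cluster are all simplifying assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 4 features, linear target with noise.
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=200)

# Hypothetical grouping of input features into two clusters.
clusters = [[0, 1], [2, 3]]

means, variances = [], []
for idx in clusters:
    Xc = X[:, idx]
    # A least-squares fit stands in for the per-cluster trained NN.
    w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    pred = Xc @ w
    means.append(pred)
    # One residual variance per cluster (a crude stand-in for a
    # learned per-cluster uncertainty estimate).
    variances.append(np.var(y - pred))

# Combine per-cluster predictions as a uniform-weight Gaussian mixture:
# mean = sum_k pi_k * mu_k
# var  = sum_k pi_k * (sigma_k^2 + mu_k^2) - mean^2
pi = np.full(len(clusters), 1.0 / len(clusters))
mix_mean = sum(p * m for p, m in zip(pi, means))
mix_var = sum(p * (v + m**2) for p, v, m in zip(pi, variances, means)) - mix_mean**2
```

The mixture variance decomposes into the average per-cluster variance plus the disagreement between cluster means, which is what lets the split ensemble attribute predictive uncertainty to individual feature groups.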