Authors: P.J.G. Lisboa, A. Vellido, H. Wong
DOI: 10.1016/S0893-6080(00)00022-8
Keywords:
Abstract: The Bayesian evidence framework has become a standard of good practice for neural network estimation of class conditional probabilities. In this approach the probability is marginalised over the distribution of the weights, which is usually approximated by an analytical expression that moderates the output towards the midrange. In this paper, it is shown that calibration is considerably improved by marginalising towards the prior distribution. Moreover, marginalisation towards the midrange can seriously bias the estimates of probabilities calculated from the evidence framework. This is especially the case in the modelling of censored data.
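The "analytical expression that moderates the output towards the midrange" is commonly taken to be MacKay's moderated output, in which the logistic output is marginalised over a Gaussian posterior on the network activation. A minimal sketch of that approximation follows; the function names and the example values are illustrative, not taken from the paper:

```python
import math

def sigmoid(a):
    """Logistic output unit."""
    return 1.0 / (1.0 + math.exp(-a))

def moderated_output(a_mp, s2):
    """MacKay's moderated output for a binary classifier.

    Approximately marginalises sigmoid(a) over a Gaussian posterior on the
    activation a, with most-probable value a_mp and variance s2.  The factor
    kappa <= 1 shrinks the activation, so the predicted probability is pulled
    towards the midrange value 0.5 as the posterior uncertainty s2 grows.
    """
    kappa = 1.0 / math.sqrt(1.0 + math.pi * s2 / 8.0)
    return sigmoid(kappa * a_mp)

# With zero activation variance the moderated output reduces to the plain
# sigmoid; with large variance it is pulled towards 0.5.
print(moderated_output(2.0, 0.0))   # equals sigmoid(2.0)
print(moderated_output(2.0, 10.0))  # closer to 0.5 than sigmoid(2.0)
```

This midrange moderation is exactly the mechanism the abstract argues can bias the resulting probability estimates, motivating marginalisation towards the prior distribution instead.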