Author: Wray Buntine
Abstract: This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision-theoretic solution to the learning task. Comparative experiments with this algorithm and several mature AI and statistical families of algorithms currently in use show that it is consistently as good or better, although sometimes at a computational cost. Using the same strategy, we can design algorithms for many other supervised model learning tasks, given just a probabilistic representation of the kind of knowledge to be learned. As an illustration, a second algorithm is derived for learning networks from data. Implications for incremental learning and multiple models are also discussed.
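Note: for context, the "Bayesian decision-theoretic solution" mentioned in the abstract is, under 0-1 loss, the rule that predicts the class with the highest posterior probability, averaging class predictions over candidate trees weighted by their posterior given the data. The sketch below is generic, not the paper's own derivation, and the symbols (x for an example, c for a class, T for a tree, D for the training data) are illustrative rather than the paper's notation.

  \[
    \hat{c}(x) \;=\; \arg\max_{c}\, p(c \mid x, D)
               \;=\; \arg\max_{c} \sum_{T} p(c \mid x, T)\, p(T \mid D)
  \]

A practical algorithm can only approximate this criterion, since the sum over all trees T is intractable; that approximation is the sense in which the abstract's algorithm is described as approximating the Bayesian solution.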