Abstract: The NML (normalized maximum likelihood) universal model has certain minmax optimal properties, but it has two shortcomings: the normalizing coefficient can be evaluated in a closed form only for special model classes, and the model does not define a random process, so it cannot be used for prediction. We present a conditional NML model, which has minmax optimal properties similar to those of the regular NML model. However, unlike NML, it defines a random process that can be used for prediction, and it also admits a recursive evaluation for data compression. The conditional normalizing coefficient is much easier to evaluate, for instance, for tree machines, than the integral of the square root of the Fisher information required by NML. For Bernoulli distributions, the conditional NML model gives a predictive probability that behaves like the Krichevsky-Trofimov predictive probability and is actually slightly better for extremely skewed strings. In some cases it agrees with a conditional probability found earlier by Takimoto and Warmuth as the solution to a different, more restrictive problem. We also calculate the CNML models for generalized Gaussian regression models, in particular for cases where the loss function is quadratic, and show that the model achieves asymptotic optimality in terms of the mean ideal code length. Moreover, for quadratic loss, the CNML predictions represent fitting errors due to noise rather than prediction errors, which are shown to be smaller than what is achieved with the so-called plug-in, or predictive MDL, model.
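To make the Bernoulli comparison concrete, the following is a minimal sketch of one common variant of the conditional NML predictor (often called sequential NML), which normalizes the maximized likelihoods of the two possible continuations; the symbol $k$ for the number of ones among the first $n$ observations is our notation, not the paper's.

```latex
% Sketch: conditional NML predictive probability for a Bernoulli model,
% where k is the number of ones in the prefix x^n (our notation).
\[
  P_{\mathrm{CNML}}(x_{n+1}=1 \mid x^n)
  = \frac{\max_\theta P_\theta(x^n, 1)}
         {\max_\theta P_\theta(x^n, 1) + \max_\theta P_\theta(x^n, 0)}
  = \frac{(k+1)^{k+1}\,(n-k)^{\,n-k}}
         {(k+1)^{k+1}\,(n-k)^{\,n-k} + k^{k}\,(n-k+1)^{\,n-k+1}},
\]
% compared with the Krichevsky--Trofimov (add-1/2) predictive probability
\[
  P_{\mathrm{KT}}(x_{n+1}=1 \mid x^n) = \frac{k + \tfrac{1}{2}}{n + 1}.
\]
```

For an all-zeros prefix with $n=1$ and $k=0$, for example, these formulas give a CNML probability of $4/5$ for another zero versus $3/4$ under KT, which illustrates the slight advantage on extremely skewed strings claimed above.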