Abstract: Often in supervised learning, numerical attributes require special treatment and do not fit the learning scheme as well as one could hope. Nevertheless, they are common in practical tasks and, therefore, need to be taken into account. We characterize the well-behavedness of an evaluation function, a property that guarantees the optimal multi-partition of an arbitrary domain to be defined on boundary points. Well-behavedness reduces the number of candidate cut points that need to be examined in multisplitting numerical attributes. Many commonly used attribute evaluation functions possess this property; we demonstrate that the cumulative functions Information Gain and Training Set Error as well as the non-cumulative functions Gain Ratio and Normalized Distance Measure are all well-behaved. We also devise a method of finding optimal multisplits efficiently by examining the minimum number of boundary point combinations that is required to produce partitions which are optimal with respect to a well-behaved evaluation function. Our empirical experiments validate the utility of optimal multisplitting: it constantly produces better partitions than alternative approaches do and only requires comparable time. In top-down induction of decision trees, the choice of evaluation function has a more decisive effect on the result than the partitioning strategy; optimizing the value of most evaluation functions does not raise the accuracy of the produced trees. In our tests, the construction time using optimal multisplitting was, on average, twice that of greedy multisplitting, which for its part took on average twice the time of binary splitting.
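To illustrate the reduction in candidate cut points described above, here is a minimal sketch (not taken from the paper) of extracting boundary points from a sorted numerical attribute. The function name and the simplified boundary test, that a cut between two adjacent distinct values qualifies unless both value groups are pure and of the same class, are illustrative assumptions rather than the paper's exact formulation.

```python
from collections import Counter

def boundary_cut_points(values, labels):
    """Return midpoints between adjacent distinct attribute values that
    are boundary points, i.e. cut points where the class composition
    changes. A well-behaved evaluation function only needs to examine
    these candidates; cuts inside a class-uniform run can be skipped.
    NOTE: illustrative sketch, not the paper's algorithm."""
    # Group class counts by attribute value.
    by_value = {}
    for v, y in sorted(zip(values, labels)):
        by_value.setdefault(v, Counter())[y] += 1
    vals = sorted(by_value)
    cuts = []
    for a, b in zip(vals, vals[1:]):
        ca, cb = by_value[a], by_value[b]
        # Skip the cut only when both neighbouring value groups are
        # pure and belong to the same single class.
        same_pure = len(ca) == len(cb) == 1 and set(ca) == set(cb)
        if not same_pure:
            cuts.append((a + b) / 2)
    return cuts

# Example: the class changes only twice along the sorted values, so
# only two of the four possible cut points are boundary points.
print(boundary_cut_points([1, 2, 3, 4, 5], ['A', 'A', 'B', 'B', 'A']))
# → [2.5, 4.5]
```

In this toy example, an exhaustive multisplitting search would consider all four gaps between adjacent values; restricting attention to boundary points halves the candidate set, and the savings grow with longer class-uniform runs.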