Authors: G. Chakraborty, N. Shiratori, S. Noguchi
DOI: 10.1109/IJCNN.1993.714175
Keywords: Overtraining, Mathematics, Classifier (UML), Hidden layer, Radial basis function, Finite set, Artificial intelligence, Class membership, Neural network classifier, Pattern recognition
Abstract: The task of any supervised classifier is to assign optimum boundaries in the input space for different class memberships. This is done using information from an available set of known samples. The mapping learned from the positions of these samples in the input space is then used to classify unknown samples. The known samples generally form a finite set, and a boundary defined to fit them exactly is usually not the best one for new samples. We end up with an overfitted boundary, i.e. an overtrained classifier, resulting in poor classification of unseen data. We therefore need a smooth boundary to be able to generalize. Depending on the number of samples and the dimension of the actual solution, there is a certain amount of smoothness that yields the best generalization. In this paper, we focus on this problem and introduce some practical ways to arrive at such a solution for a single-hidden-layer neural network with radial basis functions.
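To make the overfitting-versus-smoothness trade-off concrete, below is a minimal sketch (not the authors' method) of a single-hidden-layer RBF network classifier in Python. The kernel width `sigma`, the ridge penalty `lam`, and the center-selection scheme are assumptions introduced here purely for illustration: a wider kernel or larger penalty yields a smoother decision boundary, which is the kind of control over generalization the abstract discusses.

```python
# Minimal single-hidden-layer RBF network classifier (illustrative sketch).
# `sigma` (kernel width) and `lam` (ridge penalty) are hypothetical knobs
# controlling boundary smoothness; they are not taken from the paper.
import numpy as np


def rbf_design_matrix(X, centers, sigma):
    """Gaussian RBF activations of samples X with respect to the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def fit_rbf_classifier(X, y, n_centers=20, sigma=1.0, lam=1e-3, seed=0):
    """Fit output weights by ridge-regularized least squares on one-hot targets."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_centers, len(X)), replace=False)
    centers = X[idx]                                   # centers picked from the data
    Phi = rbf_design_matrix(X, centers, sigma)         # hidden-layer activations
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float) # one-hot targets
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])       # larger lam -> smoother boundary
    W = np.linalg.solve(A, Phi.T @ T)                  # output-layer weights
    return centers, W, classes, sigma


def predict(X, model):
    centers, W, classes, sigma = model
    scores = rbf_design_matrix(X, centers, sigma) @ W
    return classes[np.argmax(scores, axis=1)]


if __name__ == "__main__":
    # Two noisy Gaussian blobs: a smooth boundary generalizes better than one
    # that threads exactly between every training sample.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    model = fit_rbf_classifier(X, y, n_centers=15, sigma=1.0, lam=1e-2)
    print("training accuracy:", (predict(X, model) == y).mean())
```

In this sketch, shrinking `lam` or `sigma` lets the boundary bend around individual training points (the overtrained case the abstract warns about), while increasing either one enforces the smoothness needed for generalization.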