Abstract: We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost function that minimizes the classification error on the training set. We investigate the properties of the algorithm and assess its performance on several toy data sets and on an optical letter classification task. The results show 1) that annealing in the dispersion parameter of the kernels improves accuracy; 2) that the results are better than those obtained with standard LVQ (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improved the classification capability. Additionally, the principled approach provides an explanation for a number of features of the (heuristic) LVQ methods.
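To make the training procedure concrete, the sketch below shows one plausible instantiation of such an annealed prototype classifier in Python: Gaussian responsibilities over prototypes, a cost equal to the expected misclassification under those responsibilities, plain gradient descent on the prototype positions, and an exponential annealing schedule for the kernel width. The specific cost function, the schedule, and all names (`AnnealedSoftNPC`, `fit`, `predict`, `sigma_start`, `sigma_end`) are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of an annealed soft nearest-prototype classifier.
# Assumed (not from the abstract): cost E = mean_i sum_j P(j|x_i) * [class(j) != y_i],
# with Gaussian responsibilities P(j|x) ~ exp(-||x - w_j||^2 / (2 sigma^2)),
# minimized by gradient descent while sigma is annealed toward a small value.
import numpy as np


def softmax_rows(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


class AnnealedSoftNPC:
    """Soft nearest-prototype classifier trained by annealed gradient descent (illustrative)."""

    def __init__(self, prototypes, proto_labels, lr=0.05):
        self.W = np.asarray(prototypes, dtype=float)   # (m, d) prototype positions
        self.c = np.asarray(proto_labels)              # (m,) class label of each prototype
        self.lr = lr

    def _responsibilities(self, X, sigma):
        # Gaussian assignment probabilities P(j | x) over the prototypes.
        d2 = ((X[:, None, :] - self.W[None, :, :]) ** 2).sum(axis=-1)   # (n, m) squared distances
        return softmax_rows(-d2 / (2.0 * sigma ** 2))

    def fit(self, X, y, sigma_start=2.0, sigma_end=0.2, epochs=100):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        for epoch in range(epochs):
            # Exponential annealing of the kernel width (assumed schedule).
            sigma = sigma_start * (sigma_end / sigma_start) ** (epoch / max(epochs - 1, 1))
            P = self._responsibilities(X, sigma)                        # (n, m)
            wrong = (self.c[None, :] != y[:, None]).astype(float)       # 1 where prototype class != label
            # Gradient of E = mean_i sum_j P_ij * wrong_ij with respect to W_j:
            #   dE/dW_j = mean_i P_ij (wrong_ij - sum_k P_ik wrong_ik) (x_i - W_j) / sigma^2
            err_i = (P * wrong).sum(axis=1, keepdims=True)              # (n, 1) per-sample soft error
            coef = P * (wrong - err_i) / sigma ** 2                     # (n, m)
            grad = (coef[:, :, None] * (X[:, None, :] - self.W[None, :, :])).mean(axis=0)
            self.W -= self.lr * grad                                    # gradient descent step
        return self

    def predict(self, X):
        # Hard nearest-prototype decision after training.
        X = np.asarray(X, dtype=float)
        d2 = ((X[:, None, :] - self.W[None, :, :]) ** 2).sum(axis=-1)
        return self.c[d2.argmin(axis=1)]
```

In this sketch the gradient pulls correct-class prototypes toward a sample and pushes nearby wrong-class prototypes away, with the update concentrating on samples near decision boundaries as sigma shrinks; this attraction/repulsion structure is what gives such soft schemes their LVQ 2.1-like character, consistent with the interpretation of the method as an annealed version of LVQ.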