Authors: Washington Mio, Xiuwen Liu, Yiming Wu
DOI:
Keywords: Dimensionality reduction, Multidimensional analysis, Subspace topology, Feature selection, Contextual image classification, Gradient method, Artificial neural network, Machine learning, Cognitive neuroscience of visual object recognition, Linear subspace, Pattern recognition (psychology), Computer science, Artificial intelligence, Support vector machine
Abstract: Learning data representations is a fundamental challenge in modeling neural processes and plays an important role in applications such as object recognition. Optimal component analysis (OCA) formulates the problem in the framework of optimization on a Grassmann manifold, and a stochastic gradient method is used to estimate the optimal basis. OCA has been successfully applied to image classification problems arising in a variety of contexts. However, the search space is typically very high dimensional, which often incurs expensive computational cost. In multi-stage OCA, we first hierarchically project the data onto several low-dimensional subspaces using standard techniques; learning is then performed from the lowest to the highest level to find a subspace that is optimal for discrimination based on the K-nearest neighbor classifier. One of the main advantages lies in the fact that it greatly improves the efficiency of the algorithm without sacrificing recognition performance, thus enhancing its applicability to practical problems. In addition to the nearest neighbor classifier, we illustrate the effectiveness of the learned representations in conjunction with other classifiers such as neural networks and support vector machines.
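The sketch below is only an illustration of the multi-stage idea described in the abstract, not the authors' implementation: a standard technique (PCA here) first reduces the dimension, and a low-dimensional subspace is then refined on the reduced data using a simple stochastic accept-if-better search with a QR retraction in place of the paper's stochastic gradient flow on the Grassmann manifold. The leave-one-out 1-NN criterion, step size, and all function names are assumptions made for the example.

```python
import numpy as np


def pca_project(X, k):
    """Stage 1: project rows of X onto the top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T


def knn_accuracy(Z, y):
    """Leave-one-out 1-nearest-neighbor accuracy under representation Z
    (an illustrative stand-in for the K-NN based criterion)."""
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.mean(y[d.argmin(axis=1)] == y)


def refine_subspace(X, y, dim, iters=200, step=0.1, seed=0):
    """Stage 2: refine an orthonormal basis U (a point on the Grassmann
    manifold of dim-planes) by random perturbation plus QR retraction,
    keeping moves that do not decrease the 1-NN criterion."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((X.shape[1], dim)))
    best = knn_accuracy(X @ U, y)
    for _ in range(iters):
        cand, _ = np.linalg.qr(U + step * rng.standard_normal(U.shape))
        acc = knn_accuracy(X @ cand, y)
        if acc >= best:
            U, best = cand, acc
    return U, best


if __name__ == "__main__":
    # Toy data: two Gaussian classes in 100 dimensions.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.standard_normal((50, 100)),
                   rng.standard_normal((50, 100)) + 0.8])
    y = np.repeat([0, 1], 50)

    X1 = pca_project(X, 20)                 # coarse reduction with a standard technique
    U, acc = refine_subspace(X1, y, dim=5)  # discriminative subspace on the reduced data
    print(f"1-NN accuracy in learned 5-D subspace: {acc:.2f}")
```

Working in the 20-dimensional PCA space rather than the original 100-dimensional space mirrors the efficiency argument in the abstract: the subspace search operates over a much smaller manifold, while the final representation is still evaluated with the nearest-neighbor criterion.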