Training a 3-node neural network is NP-complete

Authors: Avrim Blum, Ronald L. Rivest

DOI: 10.5555/93025.93033


Abstract: We show that training is NP-complete for many simple two-layer networks whose nodes compute linear threshold functions of their inputs. For any training algorithm for one of these networks, there will be some sets of training data on which it performs poorly: either it runs for more than an amount of time polynomial in the input length, or it produces sub-optimal weights. Thus, these networks differ fundamentally from the perceptron in a worst-case computational sense.
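The network family discussed in the abstract consists of two-layer nets of linear threshold units. A minimal sketch of one such net, assuming a 3-node instance with two hidden threshold units whose outputs are combined by an output unit computing their AND (the specific combination rule here is an illustrative assumption, not necessarily the paper's exact construction):

```python
import numpy as np

def threshold(x, w, t):
    # Linear threshold unit: outputs 1 iff w . x >= t, else 0.
    return 1 if np.dot(w, x) >= t else 0

def three_node_net(x, w1, t1, w2, t2):
    # Two hidden linear threshold units over the inputs; the output
    # node here takes the AND of the hidden units (illustrative choice).
    h1 = threshold(x, w1, t1)
    h2 = threshold(x, w2, t2)
    return h1 and h2

# With suitable weights this net computes XOR, which no single
# perceptron can represent -- illustrating the extra expressive power
# whose training the paper shows to be NP-complete.
w1, t1 = np.array([1, 1]), 1    # fires when x1 + x2 >= 1
w2, t2 = np.array([-1, -1]), -1  # fires when x1 + x2 <= 1
```

Intersecting the two half-spaces above accepts exactly the inputs with `x1 + x2 == 1`, i.e. XOR on Boolean inputs.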

References (13)
Eduardo Sontag, Sigmoids distinguish better than Heavisides. Neural Computation (1989)
T. J. Sejnowski, Parallel networks that learn to pronounce English text. Complex Systems, vol. 1, pp. 145-168 (1987)
Gerald Tesauro, Bob Janssens, Scaling relationships in back-propagation learning. Complex Systems, vol. 2, pp. 39-44 (1988)
Avi Wigderson, Improving the performance guarantee for approximate graph coloring. Journal of the ACM, vol. 30, pp. 729-735 (1983), 10.1145/2157.2158
A. Blum, An O(n^0.4)-approximation algorithm for 3-coloring. Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (STOC '89), pp. 535-542 (1989), 10.1145/73007.73058
P. Raghavan, Learning in threshold networks. Conference on Learning Theory, pp. 19-27 (1988), 10.5555/93025.93035
M. Kearns, M. Li, L. Pitt, L. Valiant, On the learnability of Boolean formulae. Symposium on Theory of Computing, pp. 285-295 (1987), 10.1145/28395.28426
Nimrod Megiddo, On the complexity of polyhedral separability. Discrete & Computational Geometry, vol. 3, pp. 325-337 (1988), 10.1007/BF02187916