Authors: Avrim Blum, Ronald L. Rivest
DOI: 10.5555/93025.93033
Keywords:
Abstract: We show that training is NP-complete for many simple two-layer networks whose nodes compute linear threshold functions of their inputs. For any training algorithm for one of these networks, there will be some sets of data on which it performs poorly, either by running for more than an amount of time polynomial in the input length, or by producing sub-optimal weights. Thus, these networks differ fundamentally from the perceptron in a worst-case computational sense.
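To make the object of the hardness result concrete, here is a minimal sketch of the kind of network the abstract describes: a 2-layer, 3-node network in which two hidden linear threshold units feed one output threshold unit. The function and parameter names (`threshold`, `three_node_net`, `consistent`) are illustrative, not from the paper; the paper's result is that deciding whether *any* weights and thresholds make such a network consistent with a given training set is NP-complete.

```python
import numpy as np

def threshold(x, w, t):
    """Linear threshold unit: outputs 1 iff w . x >= t."""
    return int(np.dot(w, x) >= t)

def three_node_net(x, w1, t1, w2, t2, w_out, t_out):
    """2-layer, 3-node network: two hidden threshold units
    whose outputs feed a single output threshold unit."""
    h = np.array([threshold(x, w1, t1), threshold(x, w2, t2)])
    return threshold(h, w_out, t_out)

def consistent(examples, params):
    """Checks one candidate setting of weights/thresholds against a
    labeled training set.  Verifying a given setting is easy (so the
    decision problem is in NP); finding whether any consistent setting
    exists is what the paper proves NP-complete."""
    return all(three_node_net(x, *params) == y for x, y in examples)

# Example: XOR is realizable by this architecture even though a single
# perceptron cannot compute it.  Hidden unit 1 computes OR, hidden
# unit 2 computes NAND, and the output unit computes their AND.
xor_examples = [
    (np.array([0, 0]), 0),
    (np.array([0, 1]), 1),
    (np.array([1, 0]), 1),
    (np.array([1, 1]), 0),
]
params = (
    np.array([1, 1]), 1,      # hidden unit 1: x1 OR x2
    np.array([-1, -1]), -1,   # hidden unit 2: NOT (x1 AND x2)
    np.array([1, 1]), 2,      # output unit: h1 AND h2
)
assert consistent(xor_examples, params)
```

This sketch only evaluates and verifies a fixed parameter setting; the abstract's claim is about the search problem, for which no worst-case polynomial-time algorithm exists unless P = NP.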