Authors: Jun Wang, B. Malakooti
DOI: 10.1016/S0893-6080(09)80019-1
Keywords:
Abstract: In the majority of existing supervised learning paradigms, a neural network is trained by minimizing an error function using a learning rule. The commonly used learning rules are gradient-based, such as the popular backpropagation algorithm. This paper addresses an important issue on error minimization in neural networks trained with such learning rules. It characterizes the asymptotic properties of training errors for various forms of error functions and discusses their practical implications for designing neural networks via remarks and examples. The analytical results presented in this paper reveal the dependency of training quality on the rank of the training samples and the associated steady activation states. They also address the complexity of achieving zero training error.
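The gradient-based training the abstract refers to can be illustrated with a minimal sketch: batch gradient descent on a sum-of-squared-errors function for a one-hidden-layer sigmoid network. This is not the paper's own analysis or code; the network sizes, toy data, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of gradient-based error minimization (batch backpropagation)
# for a one-hidden-layer sigmoid network with error E = 0.5 * sum((Y - T)^2).
# All sizes, data, and hyperparameters are illustrative, not from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(8, 2))          # training samples (one per row)
T = (X[:, :1] * X[:, 1:] > 0).astype(float)  # toy targets: sign agreement

W1 = rng.normal(scale=0.5, size=(2, 4))      # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))      # hidden -> output weights
eta = 0.5                                    # learning rate

for epoch in range(2000):
    # forward pass
    H = sigmoid(X @ W1)                      # hidden activations
    Y = sigmoid(H @ W2)                      # output activations
    E = 0.5 * np.sum((Y - T) ** 2)           # error function being minimized

    # backward pass: gradient of E w.r.t. each weight matrix
    dY = (Y - T) * Y * (1 - Y)               # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)           # delta back-propagated to hidden layer
    W2 -= eta * (H.T @ dY)                   # gradient-descent updates
    W1 -= eta * (X.T @ dH)

print(f"final training error: {E:.4f}")
```

In a sketch like this, whether the error can be driven to zero depends on properties of the training set (the abstract points to the rank of the samples and the steady activation states), not on the descent rule alone.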