Authors: R. Matthew Kretchmar, Peter M. Young, Charles W. Anderson, Douglas C. Hittle, Michael L. Anderson
DOI: 10.1002/RNC.670
Keywords:
Abstract: Robust control theory is used to design stable controllers in the presence of uncertainties. This provides powerful closed-loop robustness guarantees, but can result in controllers that are conservative with regard to performance. Here we present an approach to learning a better controller through observing the actual controlled behaviour. A neural network is placed in parallel with the robust controller and is trained through reinforcement learning to optimize performance over time. By analysing the nonlinear and time-varying aspects of the neural network via uncertainty models, the procedure results in a controller that is guaranteed to remain stable even while it is being trained. The behaviour of this approach is demonstrated and analysed on two control tasks. Results show that at intermediate stages the system without robustness constraints goes through a period of unstable behaviour that is avoided when the constraints are included. Copyright © 2001 John Wiley & Sons, Ltd.
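To make the architecture described in the abstract concrete, the sketch below shows a neural network acting in parallel with a fixed robust controller, with only the network weights adapted by a simple policy-gradient-style reinforcement learning rule. The plant model, gains, network size, and update rule are illustrative assumptions, not the authors' actual formulation, and no stability analysis via uncertainty models is performed here.

```python
# Minimal sketch (assumptions only): fixed robust controller + neural network
# in parallel; the network is trained by a crude REINFORCE-style update.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order plant: x' = a*x + b*u, discretized with step dt.
a, b, dt = -0.5, 1.0, 0.05

# Fixed "robust" controller: a conservative proportional gain (assumed value).
K_robust = 0.8

# One-hidden-layer neural network contributing an additive control correction.
W1 = rng.normal(scale=0.1, size=(4, 1))
W2 = rng.normal(scale=0.1, size=(1, 4))

def neural_action(err):
    h = np.tanh(W1 @ np.array([[err]]))
    return float(W2 @ h), h

def rollout(noise_scale=0.1):
    """Run one episode; return the accumulated cost and the data for learning."""
    x, cost, traj = 1.0, 0.0, []
    for _ in range(100):
        err = -x                              # regulate the state to zero
        u_nn, h = neural_action(err)
        eps = rng.normal(scale=noise_scale)   # exploration noise on the network output
        u = K_robust * err + u_nn + eps       # parallel combination of both controllers
        x = x + dt * (a * x + b * u)
        cost += x**2 + 0.01 * u**2
        traj.append((err, h, eps))
    return cost, traj

# Only the neural weights are updated; the robust controller stays fixed,
# so its nominal feedback is always present during learning.
baseline, lr = None, 1e-3
for episode in range(200):
    cost, traj = rollout()
    baseline = cost if baseline is None else 0.9 * baseline + 0.1 * cost
    adv = cost - baseline
    for err, h, eps in traj:
        # Move weights so that perturbations which lowered the cost become more likely.
        W2 -= lr * adv * eps * h.T
        W1 -= lr * adv * eps * (W2.T * (1 - h**2)) * err
print("final episode cost:", round(cost, 3))
```

The key design point the abstract emphasizes is that the learned component only adds to, and never replaces, the robust controller; the paper's contribution is bounding the network's nonlinear, time-varying behaviour with uncertainty models so that stability is guaranteed throughout training, which this toy sketch does not attempt.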