Improving performance using robust recurrent reinforcement learning control

Authors: Michael R. Buehner, Charles W. Anderson, Peter M. Young, Keith A. Bush, Douglas C. Hittle

DOI: 10.23919/ECC.2007.7068459

Keywords:

Abstract: A recurrent neural network (RNN) is used inside the feedback loop to improve the closed-loop tracking performance of a nonlinear plant. An actor-critic reinforcement learning algorithm optimizes the RNN actor while the plant operates in real time. Integral Quadratic Constraints (IQCs) are used to guarantee robust stability while the system learns online. Using IQCs, we can handle both model uncertainty and the nonlinear elements of the network within a single unified framework. The RNN provides dynamic capabilities that a feed-forward network could not, which results in a controller able to provide enhanced control.
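As a concrete illustration of the robust-stability machinery the abstract invokes, the general IQC framework (due to Megretski and Rantzer) can be summarized as follows; this is the standard textbook statement, not the specific multipliers used in this paper. An operator \Delta satisfies the IQC defined by a multiplier \Pi if, for all square-integrable v with w = \Delta(v),

    \int_{-\infty}^{\infty}
      \begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix}^{*}
      \Pi(j\omega)
      \begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix}
      \, d\omega \;\ge\; 0,

and the feedback interconnection of a stable LTI system G with \Delta is stable if, for some \epsilon > 0 and all \omega,

    \begin{bmatrix} G(j\omega) \\ I \end{bmatrix}^{*}
      \Pi(j\omega)
      \begin{bmatrix} G(j\omega) \\ I \end{bmatrix}
      \;\le\; -\epsilon I.

Both model uncertainty and the RNN's nonlinearities (e.g., sector-bounded tanh units) can be covered by such \Delta blocks, which is what allows a single unified analysis.

The sketch below shows, under stated assumptions, what an actor-critic loop with a recurrent actor might look like; it is not the authors' implementation. The toy plant, network sizes, learning rates, and Gaussian exploration policy are illustrative placeholders, and the actor gradient is truncated to a single step rather than propagated through time as full real-time recurrent learning would do.

    import numpy as np

    rng = np.random.default_rng(0)

    # RNN actor: one tanh hidden layer with recurrent connections (sizes illustrative).
    n_h = 8
    Wx = rng.normal(scale=0.1, size=(n_h, 2))    # input weights for [tracking error, plant output]
    Wh = rng.normal(scale=0.1, size=(n_h, n_h))  # recurrent weights
    Wo = rng.normal(scale=0.1, size=(1, n_h))    # hidden state -> control action
    v = np.zeros(n_h)                            # linear critic over the hidden state
    alpha_a, alpha_c, gamma, sigma = 1e-3, 1e-2, 0.95, 0.1

    def plant(y, u):
        # Toy nonlinear first-order plant (a placeholder, not the paper's plant).
        return 0.9 * y + 0.1 * np.tanh(u)

    def ref(t):
        # Reference trajectory to track.
        return np.sin(2 * np.pi * t / 200.0)

    y, h = 0.0, np.zeros(n_h)
    for t in range(2000):
        x = np.array([ref(t) - y, y])            # observation: tracking error and output
        h_new = np.tanh(Wx @ x + Wh @ h)         # recurrent state update
        u_mean = float(Wo @ h_new)
        noise = sigma * rng.standard_normal()
        u = u_mean + noise                       # Gaussian exploration around the actor's output
        y_next = plant(y, u)

        # One-step lookahead to evaluate the next state's value estimate.
        x_next = np.array([ref(t + 1) - y_next, y_next])
        h_next = np.tanh(Wx @ x_next + Wh @ h_new)
        reward = -(ref(t + 1) - y_next) ** 2     # negative squared tracking error
        delta = reward + gamma * float(v @ h_next) - float(v @ h_new)  # TD error

        v += alpha_c * delta * h_new             # critic: TD(0) update
        # Actor: Gaussian policy gradient, grad log pi = noise / sigma^2 * grad(mean);
        # the recurrent gradient is truncated to one step for simplicity.
        g = alpha_a * delta * noise / sigma**2
        back = Wo.ravel() * (1.0 - h_new**2)     # backprop through the tanh layer
        Wo += g * h_new.reshape(1, -1)
        Wx += g * np.outer(back, x)
        Wh += g * np.outer(back, h)

        y, h = y_next, h_new

In this sketch the critic's TD error serves as the advantage signal for the actor update, mirroring the actor-critic structure described in the abstract; verifying an IQC-based stability certificate for the updated weights, as the paper does, is a separate analysis step not shown here.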
