Concepts and methods for discrete and continuous time control under uncertainty

Author: Wolfgang J. Runggaldier

DOI: 10.1016/S0167-6687(98)00006-7

Keywords: Dynamic programming; Mathematics; Time control; Mathematical optimization; Control; Stochastic control; Approximate solution; Finite horizon

Abstract: We present some concepts and solution methods for finite horizon control problems under uncertainty in discrete as well as continuous time. We discuss exact and approximate solution methods and mention possible applications. The uncertainty is mainly of a stochastic nature, but other forms are also considered.
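The abstract and keywords point to finite-horizon stochastic control solved by dynamic programming. As a minimal, hypothetical sketch (not the paper's own model or method), the Python snippet below illustrates backward induction for a finite-horizon problem with discrete states and actions; the transition matrices `P`, stage costs `c`, and terminal costs `g` are arbitrary placeholder data chosen only for illustration.

```python
import numpy as np

# Illustrative backward induction for a finite-horizon stochastic control
# problem with discrete state and action spaces. All model data below are
# invented placeholders, not taken from the paper.

rng = np.random.default_rng(0)

T = 5                       # finite horizon (number of decision stages)
n_states, n_actions = 4, 3

# P[a, x, y]: probability of moving from state x to state y under action a.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# Stage cost c[x, a] and terminal cost g[x].
c = rng.random((n_states, n_actions))
g = rng.random(n_states)

# Value function V[t, x] and greedy policy pi[t, x], computed backwards in time.
V = np.zeros((T + 1, n_states))
pi = np.zeros((T, n_states), dtype=int)
V[T] = g                                         # terminal condition

for t in range(T - 1, -1, -1):
    # Q[x, a] = immediate cost + expected cost-to-go under action a.
    Q = c + np.einsum("axy,y->xa", P, V[t + 1])
    pi[t] = Q.argmin(axis=1)                     # minimizing action per state
    V[t] = Q.min(axis=1)                         # Bellman backup

print("Optimal expected cost from each initial state:", V[0])
```

The loop is the standard Bellman recursion V_t(x) = min_a [ c(x,a) + E V_{t+1}(X_{t+1}) ]; approximate methods of the kind surveyed in the paper typically replace the exact expectation or value function in this backup with a computationally cheaper surrogate.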

References (25)
John R. Birge, Roger J.-B. Wets, Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse, Mathematical Programming Studies, vol. 27, pp. 54-102, (1986), 10.1007/BFB0121114
Jürg Kohlas, Paul-André Monney, A Mathematical Theory of Hints, Lecture Notes in Economics and Mathematical Systems, (1995), 10.1007/978-3-662-01674-9
K. Hinderer, On approximate solutions of finite-stage dynamic programs, in: Dynamic Programming and its Applications (Proceedings of the International Conference on Dynamic Programming and its Applications, University of British Columbia, Vancouver, British Columbia, Canada, April 14-16, 1977), pp. 289-317, (1978), 10.1016/B978-0-12-568150-6.50022-2
Martin L. Puterman, Dynamic Programming and Its Applications, Academic Press, Inc., (1979)
Wendell H. Fleming, Logarithmic Transformations and Stochastic Control, Springer Berlin Heidelberg, pp. 131-141, (1982), 10.1007/BFB0004532
W.J. Runggaldier, Onésimo Hernández Lerma, Monotone approximations for convex stochastic control problems, Reporte interno, CINVESTAV, pp. 1-47, (1992)