
Stochastic Dynamic Programming and the Control of Queueing Systems


Wiley Series in Probability and Statistics, Volume 504, 1st edition

By: Linn I. Sennott

€171.99

Publisher: Wiley
Format: PDF
Published: 25.09.2009
ISBN/EAN: 9780470317877
Language: English
Number of pages: 354

DRM-protected eBook. To read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

A path-breaking account of Markov decision processes: theory and computation.

This book's clear presentation of theory, numerous chapter-end problems, and development of a unified method for the computation of optimal policies in both discrete and continuous time make it an excellent course text for graduate students and advanced undergraduates. Its comprehensive coverage of important recent advances in stochastic dynamic programming makes it a valuable working resource for operations research professionals, management scientists, engineers, and others.

Stochastic Dynamic Programming and the Control of Queueing Systems presents the theory of optimization under the finite horizon, infinite horizon discounted, and average cost criteria. It then shows how optimal rules of operation (policies) for each criterion may be numerically determined. A great wealth of examples from the application area of the control of queueing systems is presented. Nine numerical programs for the computation of optimal policies are fully explicated.

The Pascal source code for the programs is available for viewing and downloading on the Wiley Web site at www.wiley.com/products/subject/mathematics. The site contains a link to the author's own Web site and is also a place where readers may discuss developments on the programs or other aspects of the material. The source files are also available via ftp at ftp://ftp.wiley.com/public/sci_tech_med/stochastic

Stochastic Dynamic Programming and the Control of Queueing Systems features:

* Path-breaking advances in Markov decision process techniques, brought together for the first time in book form
* A theorem/proof format (proofs may be omitted without loss of continuity)
* Development of a unified method for the computation of optimal rules of system operation
* Numerous examples drawn mainly from the control of queueing systems
* Detailed discussions of nine numerical programs
* Helpful chapter-end problems
* Appendices with complete treatment of background material
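For orientation, the three criteria named in the description are typically written as follows in standard Markov decision process notation (this notation is generic, not quoted from the book): with state process X_n, actions A_n, one-step cost c(x, a), policy pi, and discount factor alpha,

\begin{align*}
V_N^{\pi}(x) &= \mathbb{E}_x^{\pi}\Big[\textstyle\sum_{n=0}^{N-1} c(X_n, A_n)\Big] && \text{(finite horizon)},\\
V_{\alpha}^{\pi}(x) &= \mathbb{E}_x^{\pi}\Big[\textstyle\sum_{n=0}^{\infty} \alpha^{n} c(X_n, A_n)\Big], \quad 0 < \alpha < 1 && \text{(infinite horizon discounted)},\\
J^{\pi}(x) &= \limsup_{N \to \infty} \tfrac{1}{N}\, \mathbb{E}_x^{\pi}\Big[\textstyle\sum_{n=0}^{N-1} c(X_n, A_n)\Big] && \text{(average cost)}.
\end{align*}

An optimal policy minimizes the chosen criterion over all admissible policies pi.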
Contents:

Optimization Criteria
Finite Horizon Optimization
Infinite Horizon Discounted Cost Optimization
An Inventory Model
Average Cost Optimization for Finite State Spaces
Average Cost Optimization Theory for Countable State Spaces
Computation of Average Cost Optimal Policies for Infinite State Spaces
Optimization Under Actions at Selected Epochs
Average Cost Optimization of Continuous Time Processes
Appendices
Bibliography
Index
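As an illustration of how optimal policies of the kind discussed in the computation chapters are determined numerically, the following is a minimal value iteration sketch for the discounted cost criterion, written in Python. It is not the book's Pascal code, and the random test problem at the bottom is purely hypothetical.

import numpy as np

def value_iteration(P, c, alpha=0.9, tol=1e-8, max_iter=10_000):
    """Discounted-cost value iteration for a finite-state, finite-action MDP.

    P[a] is the |S| x |S| transition matrix under action a,
    c[a] is the length-|S| vector of one-step costs under action a,
    alpha is the discount factor in (0, 1).
    Returns an approximate optimal value function and a greedy policy.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[a, s]: expected discounted cost of taking action a in state s,
        # then continuing with the current value estimate V
        Q = np.array([c[a] + alpha * P[a] @ V for a in range(n_actions)])
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:  # successive approximations converged
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the converged value function
    Q = np.array([c[a] + alpha * P[a] @ V for a in range(n_actions)])
    return V, Q.argmin(axis=0)

# Hypothetical usage: a random 2-action, 4-state MDP
rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(4), size=4) for _ in range(2)]  # row-stochastic matrices
c = [rng.random(4) for _ in range(2)]                      # one-step costs
V, policy = value_iteration(P, c)
print("optimal values:", V)
print("optimal action in each state:", policy)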
There is much to appreciate about this book. It is well written and thoughtfully organized and nicely integrates theory with computation. Its orientation toward SDP theory for buffer control will certainly interest scientists seeking solutions to communication network problems. (Technometrics, August 2000, Vol. 42, No. 3)
Linn I. Sennott, PhD, is Professor of Mathematics at Illinois State University.
