Details

Approximate Dynamic Programming

Solving the Curses of Dimensionality
Wiley Series in Probability and Statistics, Volume 842, 2nd Edition

By: Warren B. Powell

130,99 €

Publisher: Wiley
Format: PDF
Published: September 9, 2011
ISBN/EAN: 9781118029152
Language: English
Number of pages: 656

DRM-protected eBook. You will need, for example, Adobe Digital Editions and an Adobe ID to read it.

Description

Praise for the First Edition

"Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners."
—Computing Reviews

This new edition focuses on modeling and computation for complex classes of approximate dynamic programming problems.

Understanding approximate dynamic programming (ADP) is vital to developing practical, high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines—Markov decision processes, mathematical programming, simulation, and statistics—to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP.

The book continues to bridge the gap between computer science, simulation, and operations research, and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in the design of practical solutions for real problems. The three curses of dimensionality that affect complex problems are introduced, and detailed coverage of implementation challenges is provided. The Second Edition also features:

- A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations
- A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies
- Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for doing active learning in the presence of a physical state, using the concept of the knowledge gradient
- A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and approximating value functions while searching for optimal policies

The coverage of ADP emphasizes models and algorithms, focusing on related applications and computation, while also discussing the theoretical side of the topic, including proofs of convergence and rates of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.

Requiring only a basic understanding of statistics and probability, Approximate Dynamic Programming, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who use dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.
Table of Contents

Preface to the Second Edition xi
Preface to the First Edition xv
Acknowledgments xvii

1 The Challenges of Dynamic Programming 1
1.1 A Dynamic Programming Example: A Shortest Path Problem, 2
1.2 The Three Curses of Dimensionality, 3
1.3 Some Real Applications, 6
1.4 Problem Classes, 11
1.5 The Many Dialects of Dynamic Programming, 15
1.6 What Is New in This Book?, 17
1.7 Pedagogy, 19
1.8 Bibliographic Notes, 22

2 Some Illustrative Models 25
2.1 Deterministic Problems, 26
2.2 Stochastic Problems, 31
2.3 Information Acquisition Problems, 47
2.4 A Simple Modeling Framework for Dynamic Programs, 50
2.5 Bibliographic Notes, 54
Problems, 54

3 Introduction to Markov Decision Processes 57
3.1 The Optimality Equations, 58
3.2 Finite Horizon Problems, 65
3.3 Infinite Horizon Problems, 66
3.4 Value Iteration, 68
3.5 Policy Iteration, 74
3.6 Hybrid Value-Policy Iteration, 75
3.7 Average Reward Dynamic Programming, 76
3.8 The Linear Programming Method for Dynamic Programs, 77
3.9 Monotone Policies*, 78
3.10 Why Does It Work?**, 84
3.11 Bibliographic Notes, 103
Problems, 103

4 Introduction to Approximate Dynamic Programming 111
4.1 The Three Curses of Dimensionality (Revisited), 112
4.2 The Basic Idea, 114
4.3 Q-Learning and SARSA, 122
4.4 Real-Time Dynamic Programming, 126
4.5 Approximate Value Iteration, 127
4.6 The Post-Decision State Variable, 129
4.7 Low-Dimensional Representations of Value Functions, 144
4.8 So Just What Is Approximate Dynamic Programming?, 146
4.9 Experimental Issues, 149
4.10 But Does It Work?, 155
4.11 Bibliographic Notes, 156
Problems, 158

5 Modeling Dynamic Programs 167
5.1 Notational Style, 169
5.2 Modeling Time, 170
5.3 Modeling Resources, 174
5.4 The States of Our System, 178
5.5 Modeling Decisions, 187
5.6 The Exogenous Information Process, 189
5.7 The Transition Function, 198
5.8 The Objective Function, 206
5.9 A Measure-Theoretic View of Information**, 211
5.10 Bibliographic Notes, 213
Problems, 214

6 Policies 221
6.1 Myopic Policies, 224
6.2 Lookahead Policies, 224
6.3 Policy Function Approximations, 232
6.4 Value Function Approximations, 235
6.5 Hybrid Strategies, 239
6.6 Randomized Policies, 242
6.7 How to Choose a Policy?, 244
6.8 Bibliographic Notes, 247
Problems, 247

7 Policy Search 249
7.1 Background, 250
7.2 Gradient Search, 253
7.3 Direct Policy Search for Finite Alternatives, 256
7.4 The Knowledge Gradient Algorithm for Discrete Alternatives, 262
7.5 Simulation Optimization, 270
7.6 Why Does It Work?**, 274
7.7 Bibliographic Notes, 285
Problems, 286

8 Approximating Value Functions 289
8.1 Lookup Tables and Aggregation, 290
8.2 Parametric Models, 304
8.3 Regression Variations, 314
8.4 Nonparametric Models, 316
8.5 Approximations and the Curse of Dimensionality, 325
8.6 Why Does It Work?**, 328
8.7 Bibliographic Notes, 333
Problems, 334

9 Learning Value Function Approximations 337
9.1 Sampling the Value of a Policy, 337
9.2 Stochastic Approximation Methods, 347
9.3 Recursive Least Squares for Linear Models, 349
9.4 Temporal Difference Learning with a Linear Model, 356
9.5 Bellman's Equation Using a Linear Model, 358
9.6 Analysis of TD(0), LSTD, and LSPE Using a Single State, 364
9.7 Gradient-Based Methods for Approximate Value Iteration*, 366
9.8 Least Squares Temporal Differencing with Kernel Regression*, 371
9.9 Value Function Approximations Based on Bayesian Learning*, 373
9.10 Why Does It Work?*, 376
9.11 Bibliographic Notes, 379
Problems, 381

10 Optimizing While Learning 383
10.1 Overview of Algorithmic Strategies, 385
10.2 Approximate Value Iteration and Q-Learning Using Lookup Tables, 386
10.3 Statistical Bias in the Max Operator, 397
10.4 Approximate Value Iteration and Q-Learning Using Linear Models, 400
10.5 Approximate Policy Iteration, 402
10.6 The Actor–Critic Paradigm, 408
10.7 Policy Gradient Methods, 410
10.8 The Linear Programming Method Using Basis Functions, 411
10.9 Approximate Policy Iteration Using Kernel Regression*, 413
10.10 Finite Horizon Approximations for Steady-State Applications, 415
10.11 Bibliographic Notes, 416
Problems, 418

11 Adaptive Estimation and Stepsizes 419
11.1 Learning Algorithms and Stepsizes, 420
11.2 Deterministic Stepsize Recipes, 425
11.3 Stochastic Stepsizes, 433
11.4 Optimal Stepsizes for Nonstationary Time Series, 437
11.5 Optimal Stepsizes for Approximate Value Iteration, 447
11.6 Convergence, 449
11.7 Guidelines for Choosing Stepsize Formulas, 451
11.8 Bibliographic Notes, 452
Problems, 453

12 Exploration Versus Exploitation 457
12.1 A Learning Exercise: The Nomadic Trucker, 457
12.2 An Introduction to Learning, 460
12.3 Heuristic Learning Policies, 464
12.4 Gittins Indexes for Online Learning, 470
12.5 The Knowledge Gradient Policy, 477
12.6 Learning with a Physical State, 482
12.7 Bibliographic Notes, 492
Problems, 493

13 Value Function Approximations for Resource Allocation Problems 497
13.1 Value Functions versus Gradients, 498
13.2 Linear Approximations, 499
13.3 Piecewise-Linear Approximations, 501
13.4 Solving a Resource Allocation Problem Using Piecewise-Linear Functions, 505
13.5 The SHAPE Algorithm, 509
13.6 Regression Methods, 513
13.7 Cutting Planes*, 516
13.8 Why Does It Work?**, 528
13.9 Bibliographic Notes, 535
Problems, 536

14 Dynamic Resource Allocation Problems 541
14.1 An Asset Acquisition Problem, 541
14.2 The Blood Management Problem, 547
14.3 A Portfolio Optimization Problem, 557
14.4 A General Resource Allocation Problem, 560
14.5 A Fleet Management Problem, 573
14.6 A Driver Management Problem, 580
14.7 Bibliographic Notes, 585
Problems, 586

15 Implementation Challenges 593
15.1 Will ADP Work for Your Problem?, 593
15.2 Designing an ADP Algorithm for Complex Problems, 594
15.3 Debugging an ADP Algorithm, 596
15.4 Practical Issues, 597
15.5 Modeling Your Problem, 602
15.6 Online versus Offline Models, 604
15.7 If It Works, Patent It!, 606

Bibliography 607
Index 623
About the Author

WARREN B. POWELL, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas from operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored more than 160 published articles on stochastic optimization, approximate dynamic programming, and dynamic resource management.
