Details

Multi-Agent Machine Learning

A Reinforcement Approach
1st edition

by: H. M. Schwartz

103,99 €

Publisher: Wiley
Format: PDF
Published: 25.08.2014
ISBN/EAN: 9781118884478
Language: English
Number of pages: 256

DRM-protected eBook; to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

The book begins with a chapter on traditional methods of supervised learning, covering recursive least squares learning, mean square error methods, and stochastic approximation. Chapter 2 covers single-agent reinforcement learning; topics include learning value functions, Markov decision processes, and TD learning with eligibility traces. Chapter 3 discusses two-player games, including two-player matrix games with both pure and mixed strategies; numerous algorithms and examples are presented. Chapter 4 covers learning in multiplayer games, stochastic games, and Markov games, focusing on multiplayer grid games: two-player grid games, Q-learning, and Nash Q-learning. Chapter 5 discusses differential games, including multiplayer differential games, the actor-critic structure, adaptive fuzzy control and fuzzy inference systems, the evader-pursuer game, and the game of guarding a territory. Chapter 6 discusses new ideas on learning within robotic swarms and the innovative idea of the evolution of personality traits.

• Provides a framework for understanding a variety of methods and approaches in multi-agent machine learning
• Discusses methods of reinforcement learning such as a number of forms of multi-agent Q-learning (a minimal single-agent sketch follows below)
• Suitable for research professors and graduate students studying electrical and computer engineering, computer science, and mechanical and aerospace engineering
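To give a flavor of the single-agent Q-learning that Chapter 2 builds up to, here is a minimal sketch of tabular Q-learning on a toy corridor world. It is not taken from the book: the world layout, reward values, and the parameters ALPHA, GAMMA, and EPSILON are invented for illustration.

import random

# Minimal tabular Q-learning on a 1-D corridor, in the spirit of the
# grid-world examples used to introduce single-agent RL.
# All numbers below (corridor length, rewards, ALPHA, GAMMA, EPSILON)
# are invented for illustration, not taken from the book.

N_STATES = 5            # states 0..4; state 4 is the terminal goal
ACTIONS = [-1, +1]      # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: deterministic move, reward 1 only at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy per state (the entry for the terminal state 4 is unused)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

Running this prints +1 for every non-terminal state: the agent has learned to head for the goal. Multi-agent variants such as minimax-Q and Nash Q-learning, covered in Chapter 4, replace the single max over actions with a game-theoretic value of the stage game.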
Preface ix

Chapter 1 A Brief Review of Supervised Learning 1
1.1 Least Squares Estimates 1
1.2 Recursive Least Squares 5
1.3 Least Mean Squares 6
1.4 Stochastic Approximation 10
References 11

Chapter 2 Single-Agent Reinforcement Learning 12
2.1 Introduction 12
2.2 n-Armed Bandit Problem 13
2.3 The Learning Structure 15
2.4 The Value Function 17
2.5 The Optimal Value Functions 18
2.5.1 The Grid World Example 20
2.6 Markov Decision Processes 23
2.7 Learning Value Functions 25
2.8 Policy Iteration 26
2.9 Temporal Difference Learning 28
2.10 TD Learning of the State-Action Function 30
2.11 Q-Learning 32
2.12 Eligibility Traces 33
References 37

Chapter 3 Learning in Two-Player Matrix Games 38
3.1 Matrix Games 38
3.2 Nash Equilibria in Two-Player Matrix Games 42
3.3 Linear Programming in Two-Player Zero-Sum Matrix Games 43
3.4 The Learning Algorithms 47
3.5 Gradient Ascent Algorithm 47
3.6 WoLF-IGA Algorithm 51
3.7 Policy Hill Climbing (PHC) 52
3.8 WoLF-PHC Algorithm 54
3.9 Decentralized Learning in Matrix Games 57
3.10 Learning Automata 59
3.11 Linear Reward–Inaction Algorithm 59
3.12 Linear Reward–Penalty Algorithm 60
3.13 The Lagging Anchor Algorithm 60
3.14 LR−I Lagging Anchor Algorithm 62
3.14.1 Simulation 68
References 70

Chapter 4 Learning in Multiplayer Stochastic Games 73
4.1 Introduction 73
4.2 Multiplayer Stochastic Games 75
4.3 Minimax-Q Algorithm 79
4.3.1 2×2 Grid Game 80
4.4 Nash Q-Learning 87
4.4.1 The Learning Process 95
4.5 The Simplex Algorithm 96
4.6 The Lemke–Howson Algorithm 100
4.7 Nash-Q Implementation 107
4.8 Friend-or-Foe Q-Learning 111
4.9 Infinite Gradient Ascent 112
4.10 Policy Hill Climbing 114
4.11 WoLF-PHC Algorithm 114
4.12 Guarding a Territory Problem in a Grid World 117
4.12.1 Simulation and Results 119
4.13 Extension of LR−I Lagging Anchor Algorithm to Stochastic Games 125
4.14 The Exponential Moving-Average Q-Learning (EMA Q-Learning) Algorithm 128
4.15 Simulation and Results Comparing EMA Q-Learning to Other Methods 131
4.15.1 Matrix Games 131
4.15.2 Stochastic Games 134
References 141

Chapter 5 Differential Games 144
5.1 Introduction 144
5.2 A Brief Tutorial on Fuzzy Systems 146
5.2.1 Fuzzy Sets and Fuzzy Rules 146
5.2.2 Fuzzy Inference Engine 148
5.2.3 Fuzzifier and Defuzzifier 151
5.2.4 Fuzzy Systems and Examples 152
5.3 Fuzzy Q-Learning 155
5.4 Fuzzy Actor–Critic Learning 159
5.5 Homicidal Chauffeur Differential Game 162
5.6 Fuzzy Controller Structure 165
5.7 Q(λ)-Learning Fuzzy Inference System 166
5.8 Simulation Results for the Homicidal Chauffeur 171
5.9 Learning in the Evader–Pursuer Game with Two Cars 174
5.10 Simulation of the Game of Two Cars 177
5.11 Differential Game of Guarding a Territory 180
5.12 Reward Shaping in the Differential Game of Guarding a Territory 184
5.13 Simulation Results 185
5.13.1 One Defender Versus One Invader 185
5.13.2 Two Defenders Versus One Invader 191
References 197

Chapter 6 Swarm Intelligence and the Evolution of Personality Traits 200
6.1 Introduction 200
6.2 The Evolution of Swarm Intelligence 200
6.3 Representation of the Environment 201
6.4 Swarm-Based Robotics in Terms of Personalities 203
6.5 Evolution of Personality Traits 206
6.6 Simulation Framework 207
6.7 A Zero-Sum Game Example 208
6.7.1 Convergence 208
6.7.2 Simulation Results 214
6.8 Implementation for Next Sections 216
6.9 Robots Leaving a Room 218
6.10 Tracking a Target 221
6.11 Conclusion 232
References 233

Index 237
“This is an interesting book both as research reference as well as teaching material for Master and PhD students.” (Zentralblatt MATH, 1 April 2015)
Howard M. Schwartz, PhD, received his B.Eng. degree from McGill University, Montreal, Canada, in June 1981, and his MS and PhD degrees from MIT, Cambridge, USA, in 1982 and 1987, respectively. He is currently a professor in systems and computer engineering at Carleton University, Canada. His research interests include adaptive and intelligent control systems, robotics, artificial intelligence, system modelling, system identification, and state estimation.

You might also be interested in these products:

Bandwidth Efficient Coding
by: John B. Anderson
EPUB ebook
114,99 €
Digital Communications with Emphasis on Data Modems
by: Richard W. Middlestead
PDF ebook
171,99 €
Bandwidth Efficient Coding
by: John B. Anderson
PDF ebook
114,99 €