
Robot Learning by Visual Observation




1st edition

by: Aleksandar Vakanski, Farrokh Janabi-Sharifi

105,99 €

Publisher: Wiley
Format: EPUB
Published: 13.01.2017
ISBN/EAN: 9781119091998
Language: English
Number of pages: 208

DRM-protected eBook; to read it you need, for example, Adobe Digital Editions and an Adobe ID.

Descriptions

<p><b>This book presents programming by demonstration for robot learning from observations with a focus on the trajectory level of task abstraction</b></p> <ul> <li>Discusses methods for optimization of task reproduction, such as reformulation of task planning as a constrained optimization problem</li> <li>Focuses on regression approaches, such as Gaussian mixture regression, spline regression, and locally weighted regression</li> <li>Concentrates on the use of vision sensors for capturing motions and actions during task demonstration by a human task expert</li> </ul>
<p>Preface xi</p> <p>List of Abbreviations xv</p> <p><b>1 Introduction 1</b></p> <p>1.1 Robot Programming Methods 2</p> <p>1.2 Programming by Demonstration 3</p> <p>1.3 Historical Overview of Robot PbD 4</p> <p>1.4 PbD System Architecture 6</p> <p>1.4.1 Learning Interfaces 8</p> <p>1.4.1.1 Sensor-Based Techniques 10</p> <p>1.4.2 Task Representation and Modeling 13</p> <p>1.4.2.1 Symbolic Level 14</p> <p>1.4.2.2 Trajectory Level 16</p> <p>1.4.3 Task Analysis and Planning 18</p> <p>1.4.3.1 Symbolic Level 18</p> <p>1.4.3.2 Trajectory Level 19</p> <p>1.4.4 Program Generation and Task Execution 20</p> <p>1.5 Applications 21</p> <p>1.6 Research Challenges 25</p> <p>1.6.1 Extracting the Teacher’s Intention from Observations 26</p> <p>1.6.2 Robust Learning from Observations 27</p> <p>1.6.2.1 Robust Encoding of Demonstrated Motions 27</p> <p>1.6.2.2 Robust Reproduction of PbD Plans 29</p> <p>1.6.3 Metrics for Evaluation of Learned Skills 29</p> <p>1.6.4 Correspondence Problem 30</p> <p>1.6.5 Role of the Teacher in PbD 31</p> <p>1.7 Summary 32</p> <p>References 33</p> <p><b>2 Task Perception 43</b></p> <p>2.1 Optical Tracking Systems 43</p> <p>2.2 Vision Cameras 44</p> <p>2.3 Summary 46</p> <p>References 46</p> <p><b>3 Task Representation 49</b></p> <p>3.1 Level of Abstraction 50</p> <p>3.2 Probabilistic Learning 51</p> <p>3.3 Data Scaling and Aligning 51</p> <p>3.3.1 Linear Scaling 52</p> <p>3.3.2 Dynamic Time Warping (DTW) 52</p> <p>3.4 Summary 55</p> <p>References 55</p> <p><b>4 Task Modeling 57</b></p> <p>4.1 Gaussian Mixture Model (GMM) 57</p> <p>4.2 Hidden Markov Model (HMM) 59</p> <p>4.2.1 Evaluation Problem 61</p> <p>4.2.2 Decoding Problem 62</p> <p>4.2.3 Training Problem 62</p> <p>4.2.4 Continuous Observation Data 63</p> <p>4.3 Conditional Random Fields (CRFs) 64</p> <p>4.3.1 Linear Chain CRF 65</p> <p>4.3.2 Training and Inference 66</p> <p>4.4 Dynamic Motion Primitives (DMPs) 68</p> <p>4.5 Summary 70</p> <p>References 70</p> <p><b>5 Task Planning 73</b></p>
<p>5.1 Gaussian Mixture Regression 73</p> <p>5.2 Spline Regression 74</p> <p>5.2.1 Extraction of Key Points as Trajectories Features 75</p> <p>5.2.2 HMM-Based Modeling and Generalization 80</p> <p>5.2.2.1 Related Work 80</p> <p>5.2.2.2 Modeling 81</p> <p>5.2.2.3 Generalization 83</p> <p>5.2.2.4 Experiments 87</p> <p>5.2.2.5 Comparison with Related Work 100</p> <p>5.2.3 CRF Modeling and Generalization 107</p> <p>5.2.3.1 Related Work 107</p> <p>5.2.3.2 Feature Functions Formation 107</p> <p>5.2.3.3 Trajectories Encoding and Generalization 109</p> <p>5.2.3.4 Experiments 111</p> <p>5.2.3.5 Comparisons with Related Work 115</p> <p>5.3 Locally Weighted Regression 117</p> <p>5.4 Gaussian Process Regression 121</p> <p>5.5 Summary 122</p> <p>References 123</p> <p><b>6 Task Execution 129</b></p> <p>6.1 Background and Related Work 129</p> <p>6.2 Kinematic Robot Control 132</p> <p>6.3 Vision-Based Trajectory Tracking Control 134</p> <p>6.3.1 Image-Based Visual Servoing (IBVS) 134</p> <p>6.3.2 Position-Based Visual Servoing (PBVS) 135</p> <p>6.3.3 Advanced Visual Servoing Methods 141</p> <p>6.4 Image-Based Task Planning 141</p> <p>6.4.1 Image-Based Learning Environment 141</p> <p>6.4.2 Task Planning 142</p> <p>6.4.3 Second-Order Conic Optimization 143</p> <p>6.4.4 Objective Function 144</p> <p>6.4.5 Constraints 146</p> <p>6.4.5.1 Image-Space Constraints 146</p> <p>6.4.5.2 Cartesian Space Constraints 149</p> <p>6.4.5.3 Robot Manipulator Constraints 150</p> <p>6.4.6 Optimization Model 152</p> <p>6.5 Robust Image-Based Tracking Control 156</p> <p>6.5.1 Simulations 157</p> <p>6.5.1.1 Simulation 1 158</p> <p>6.5.1.2 Simulation 2 161</p> <p>6.5.2 Experiments 162</p> <p>6.5.2.1 Experiment 1 166</p> <p>6.5.2.2 Experiment 2 173</p> <p>6.5.2.3 Experiment 3 173</p> <p>6.5.3 Robustness Analysis and Comparisons with Other Methods 173</p> <p>6.6 Discussion 183</p> <p>6.7 Summary 185</p> <p>References 185</p> <p>Index 000</p>
<p><b>ALEKSANDAR VAKANSKI</b> is a Clinical Assistant Professor in Industrial Technology at the University of Idaho, Idaho Falls, USA. He received a Ph.D. degree from the Department of Mechanical and Industrial Engineering at Ryerson University, Toronto, Canada, in 2013. His research interests encompass robotics and mechatronics, artificial intelligence, computer vision, and control systems.</p> <p><b>FARROKH JANABI-SHARIFI</b> is a Professor of Mechanical and Industrial Engineering and the Director of the Robotics, Mechatronics and Automation Laboratory (RMAL) at Ryerson University, Toronto, Canada. He is currently a Technical Editor of <i>IEEE/ASME Transactions on Mechatronics</i>, an Associate Editor of <i>The International Journal of Optomechatronics</i>, and an Editorial Member of <i>The Journal of Robotics</i> and <i>The Open Cybernetics and Systematics Journal</i>. His research interests include optomechatronic systems with a focus on image-guided control and planning.</p>
<p><b>This book presents an overview of the methodology for robot learning from visual observations of human-demonstrated tasks, with a focus on learning at a trajectory level of task abstraction</b></p> <p>The content of <i>Robot Learning by Visual Observation</i> is divided into chapters that address the individual steps in robotic observational learning. The book describes methods for mathematical modeling of a set of human-demonstrated trajectories, such as hidden Markov models, conditional random fields, Gaussian mixture models, and dynamic motion primitives. The authors further present methods for generating a trajectory for task reproduction by a robot, based on generalization of the set of demonstrated trajectories. In addition, the book</p> <ul><li>Discusses methods for optimization of robotic observational learning from demonstrations, such as formulation of the task planning step as a constrained optimization problem</li> <li>Focuses on regression approaches for task reproduction, such as spline regression and locally weighted regression</li> <li>Concentrates on the use of vision sensors for capturing motions and actions demonstrated by a human task expert, and addresses the use of vision sensors for robust task execution by a robot learner</li></ul> <p>Amid growing worldwide demand for automation and robotics, an aging population, and a shrinking workforce, robots that can learn by observation and visually perceive their environment are an increasingly important means of addressing these challenges. The book is a valuable reference for university professors, graduate students, robotics enthusiasts, and companies that seek to develop robots with such abilities.</p>
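To give a flavor of the trajectory-level regression approaches the description mentions, the following is a minimal sketch of locally weighted regression over a pooled set of demonstrated trajectories. It is not taken from the book: the five noisy sine-wave "demonstrations", the Gaussian kernel bandwidth, and the `lwr` helper are all invented for illustration.

```python
import numpy as np

# Synthetic "demonstrations": five noisy recordings of the same 1-D motion.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
         for _ in range(5)]

# Pool all (time, position) samples from the demonstrations.
T = np.tile(t, len(demos))
X = np.concatenate(demos)

def lwr(query_t, bandwidth=0.05):
    """Local linear fit x = a + b*t around query_t, weighted by a Gaussian kernel."""
    w = np.exp(-0.5 * ((T - query_t) / bandwidth) ** 2)
    sw = np.sqrt(w)                                  # weighted least squares via scaling
    A = np.column_stack([np.ones_like(T), T])
    beta, *_ = np.linalg.lstsq(A * sw[:, None], X * sw, rcond=None)
    return beta[0] + beta[1] * query_t

# Generalized reproduction trajectory for the robot learner.
reproduction = np.array([lwr(ti) for ti in t])
```

The kernel bandwidth controls the bias/variance trade-off: a narrow kernel tracks each demonstration's detail (and its noise), while a wide kernel smooths across the whole set toward a single averaged motion.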
