Details

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

IEEE Press, 1st edition

by: Nan Zheng, Pinaki Mazumder

109,99 €

Publisher: Wiley
Format: PDF
Published: 18 October 2019
ISBN/EAN: 9781119507390
Language: English
Number of pages: 296

DRM-protected eBook. To read it you need, for example, Adobe Digital Editions and an Adobe ID.

Description

Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications

This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, providing co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers the fundamentals and essentials of neural networks (e.g., deep learning), as well as the hardware implementation of neural networks.

The book begins with an overview of neural networks. It then discusses algorithms for using and training rate-based artificial neural networks. Next comes an introduction to the various options for executing neural networks, ranging from general-purpose processors to specialized hardware and from digital accelerators to analog accelerators. A design example of building an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithm from the previous chapter. The book concludes with an outlook on the future of neural network hardware.

- Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms
- Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
- Focuses on the co-design of algorithms and hardware, which is especially critical for using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers facing ever-increasing demands on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students about the latest generation of neural networks with powerful learning capabilities.
Table of Contents

Preface xi
Acknowledgment xix

1 Overview 1
1.1 History of Neural Networks 1
1.2 Neural Networks in Software 2
1.2.1 Artificial Neural Network 2
1.2.2 Spiking Neural Network 3
1.3 Need for Neuromorphic Hardware 3
1.4 Objectives and Outlines of the Book 5
References 8

2 Fundamentals and Learning of Artificial Neural Networks 11
2.1 Operational Principles of Artificial Neural Networks 11
2.1.1 Inference 11
2.1.2 Learning 13
2.2 Neural Network Based Machine Learning 16
2.2.1 Supervised Learning 17
2.2.2 Reinforcement Learning 20
2.2.3 Unsupervised Learning 22
2.2.4 Case Study: Action-Dependent Heuristic Dynamic Programming 23
2.2.4.1 Actor-Critic Networks 24
2.2.4.2 On-Line Learning Algorithm 25
2.2.4.3 Virtual Update Technique 27
2.3 Network Topologies 31
2.3.1 Fully Connected Neural Networks 31
2.3.2 Convolutional Neural Networks 32
2.3.3 Recurrent Neural Networks 35
2.4 Dataset and Benchmarks 38
2.5 Deep Learning 41
2.5.1 Pre-Deep-Learning Era 41
2.5.2 The Rise of Deep Learning 41
2.5.3 Deep Learning Techniques 42
2.5.3.1 Performance-Improving Techniques 42
2.5.3.2 Energy-Efficiency-Improving Techniques 46
2.5.4 Deep Neural Network Examples 50
References 53

3 Artificial Neural Networks in Hardware 61
3.1 Overview 61
3.2 General-Purpose Processors 62
3.3 Digital Accelerators 63
3.3.1 A Digital ASIC Approach 63
3.3.1.1 Optimization on Data Movement and Memory Access 63
3.3.1.2 Scaling Precision 71
3.3.1.3 Leveraging Sparsity 76
3.3.2 FPGA-Based Accelerators 80
3.4 Analog/Mixed-Signal Accelerators 82
3.4.1 Neural Networks in Conventional Integrated Technology 82
3.4.1.1 In/Near-Memory Computing 82
3.4.1.2 Near-Sensor Computing 85
3.4.2 Neural Network Based on Emerging Non-volatile Memory 88
3.4.2.1 Crossbar as a Massively Parallel Engine 89
3.4.2.2 Learning in a Crossbar 91
3.4.3 Optical Accelerator 93
3.5 Case Study: An Energy-Efficient Accelerator for Adaptive Dynamic Programming 94
3.5.1 Hardware Architecture 95
3.5.1.1 On-Chip Memory 95
3.5.1.2 Datapath 97
3.5.1.3 Controller 99
3.5.2 Design Examples 101
References 108

4 Operational Principles and Learning in Spiking Neural Networks 119
4.1 Spiking Neural Networks 119
4.1.1 Popular Spiking Neuron Models 120
4.1.1.1 Hodgkin-Huxley Model 120
4.1.1.2 Leaky Integrate-and-Fire Model 121
4.1.1.3 Izhikevich Model 121
4.1.2 Information Encoding 122
4.1.3 Spiking Neuron versus Non-Spiking Neuron 123
4.2 Learning in Shallow SNNs 124
4.2.1 ReSuMe 124
4.2.2 Tempotron 125
4.2.3 Spike-Timing-Dependent Plasticity 127
4.2.4 Learning Through Modulating Weight-Dependent STDP in Two-Layer Neural Networks 131
4.2.4.1 Motivations 131
4.2.4.2 Estimating Gradients with Spike Timings 131
4.2.4.3 Reinforcement Learning Example 135
4.3 Learning in Deep SNNs 146
4.3.1 SpikeProp 146
4.3.2 Stack of Shallow Networks 147
4.3.3 Conversion from ANNs 148
4.3.4 Recent Advances in Backpropagation for Deep SNNs 150
4.3.5 Learning Through Modulating Weight-Dependent STDP in Multilayer Neural Networks 151
4.3.5.1 Motivations 151
4.3.5.2 Learning Through Modulating Weight-Dependent STDP 151
4.3.5.3 Simulation Results 158
References 167

5 Hardware Implementations of Spiking Neural Networks 173
5.1 The Need for Specialized Hardware 173
5.1.1 Address-Event Representation 173
5.1.2 Event-Driven Computation 174
5.1.3 Inference with a Progressive Precision 175
5.1.4 Hardware Considerations for Implementing the Weight-Dependent STDP Learning Rule 181
5.1.4.1 Centralized Memory Architecture 182
5.1.4.2 Distributed Memory Architecture 183
5.2 Digital SNNs 186
5.2.1 Large-Scale SNN ASICs 186
5.2.1.1 SpiNNaker 186
5.2.1.2 TrueNorth 187
5.2.1.3 Loihi 191
5.2.2 Small/Moderate-Scale Digital SNNs 192
5.2.2.1 Bottom-Up Approach 192
5.2.2.2 Top-Down Approach 193
5.2.3 Hardware-Friendly Reinforcement Learning in SNNs 194
5.2.4 Hardware-Friendly Supervised Learning in Multilayer SNNs 199
5.2.4.1 Hardware Architecture 199
5.2.4.2 CMOS Implementation Results 205
5.3 Analog/Mixed-Signal SNNs 210
5.3.1 Basic Building Blocks 210
5.3.2 Large-Scale Analog/Mixed-Signal CMOS SNNs 211
5.3.2.1 CAVIAR 211
5.3.2.2 BrainScaleS 214
5.3.2.3 Neurogrid 215
5.3.3 Other Analog/Mixed-Signal CMOS SNN ASICs 216
5.3.4 SNNs Based on Emerging Nanotechnologies 216
5.3.4.1 Energy-Efficient Solutions 217
5.3.4.2 Synaptic Plasticity 218
5.3.5 Case Study: Memristor Crossbar Based Learning in SNNs 220
5.3.5.1 Motivations 220
5.3.5.2 Algorithm Adaptations 222
5.3.5.3 Non-idealities 231
5.3.5.4 Benchmarks 238
References 238

6 Conclusions 247
6.1 Outlooks 247
6.1.1 Brain-Inspired Computing 247
6.1.2 Emerging Nanotechnologies 249
6.1.3 Reliable Computing with Neuromorphic Systems 250
6.1.4 Blending of ANNs and SNNs 251
6.2 Conclusions 252
References 253

A Appendix 257
A.1 Hopfield Network 257
A.2 Memory Self-Repair with Hopfield Network 258
References 266

Index 269
About the Authors

NAN ZHENG, PhD, received a B.S. degree in Information Engineering from Shanghai Jiao Tong University, China, in 2011, and an M.S. and a PhD in Electrical Engineering from the University of Michigan, Ann Arbor, USA, in 2014 and 2018, respectively. His research interests include low-power hardware architectures, algorithms, and circuit techniques, with an emphasis on machine-learning applications.

PINAKI MAZUMDER, PhD, is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, USA. His research interests include CMOS VLSI design, semiconductor memory systems, CAD tools, and circuit designs for emerging technologies, including quantum MOS, spintronics, spoof plasmonics, and resonant tunneling devices.

You may also be interested in these products:

Bandwidth Efficient Coding
by: John B. Anderson
PDF ebook
114,99 €

Bandwidth Efficient Coding
by: John B. Anderson
EPUB ebook
114,99 €