Book Description
Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications.

This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, and provides co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers the fundamentals and essentials of neural networks (e.g., deep learning), as well as the hardware implementation of neural networks.

The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks, followed by an introduction to the various options for executing neural networks, ranging from general-purpose processors to specialized hardware, and from digital to analog accelerators. A design example on building an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithms introduced in the previous chapter. The book concludes with an outlook on the future of neural network hardware.
- Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms
- Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
- Focuses on the co-design of algorithms and hardware, which is especially critical when using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-increasing requirements on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students on the latest generation of neural networks with powerful learning capabilities.
Table of Contents

Preface xi
Acknowledgment xix
1 Overview 1
1.1 History of Neural Networks 1
1.2 Neural Networks in Software 2
1.2.1 Artificial Neural Network 2
1.2.2 Spiking Neural Network 3
1.3 Need for Neuromorphic Hardware 3
1.4 Objectives and Outlines of the Book 5
References 8
2 Fundamentals and Learning of Artificial Neural Networks 11
2.1 Operational Principles of Artificial Neural Networks 11
2.1.1 Inference 11
2.1.2 Learning 13
2.2 Neural Network Based Machine Learning 16
2.2.1 Supervised Learning 17
2.2.2 Reinforcement Learning 20
2.2.3 Unsupervised Learning 22
2.2.4 Case Study: Action-Dependent Heuristic Dynamic Programming 23
2.2.4.1 Actor-Critic Networks 24
2.2.4.2 On-Line Learning Algorithm 25
2.2.4.3 Virtual Update Technique 27
2.3 Network Topologies 31
2.3.1 Fully Connected Neural Networks 31
2.3.2 Convolutional Neural Networks 32
2.3.3 Recurrent Neural Networks 35
2.4 Dataset and Benchmarks 38
2.5 Deep Learning 41
2.5.1 Pre-Deep-Learning Era 41
2.5.2 The Rise of Deep Learning 41
2.5.3 Deep Learning Techniques 42
2.5.3.1 Performance-Improving Techniques 42
2.5.3.2 Energy-Efficiency-Improving Techniques 46
2.5.4 Deep Neural Network Examples 50
References 53
3 Artificial Neural Networks in Hardware 61
3.1 Overview 61
3.2 General-Purpose Processors 62
3.3 Digital Accelerators 63
3.3.1 A Digital ASIC Approach 63
3.3.1.1 Optimization on Data Movement and Memory Access 63
3.3.1.2 Scaling Precision 71
3.3.1.3 Leveraging Sparsity 76
3.3.2 FPGA-Based Accelerators 80
3.4 Analog/Mixed-Signal Accelerators 82
3.4.1 Neural Networks in Conventional Integrated Technology 82
3.4.1.1 In/Near-Memory Computing 82
3.4.1.2 Near-Sensor Computing 85
3.4.2 Neural Network Based on Emerging Non-volatile Memory 88
3.4.2.1 Crossbar as a Massively Parallel Engine 89
3.4.2.2 Learning in a Crossbar 91
3.4.3 Optical Accelerator 93
3.5 Case Study: An Energy-Efficient Accelerator for Adaptive Dynamic Programming 94
3.5.1 Hardware Architecture 95
3.5.1.1 On-Chip Memory 95
3.5.1.2 Datapath 97
3.5.1.3 Controller 99
3.5.2 Design Examples 101
References 108
4 Operational Principles and Learning in Spiking Neural Networks 119
4.1 Spiking Neural Networks 119
4.1.1 Popular Spiking Neuron Models 120
4.1.1.1 Hodgkin-Huxley Model 120
4.1.1.2 Leaky Integrate-and-Fire Model 121
4.1.1.3 Izhikevich Model 121
4.1.2 Information Encoding 122
4.1.3 Spiking Neuron versus Non-Spiking Neuron 123
4.2 Learning in Shallow SNNs 124
4.2.1 ReSuMe 124
4.2.2 Tempotron 125
4.2.3 Spike-Timing-Dependent Plasticity 127
4.2.4 Learning Through Modulating Weight-Dependent STDP in Two-Layer Neural Networks 131
4.2.4.1 Motivations 131
4.2.4.2 Estimating Gradients with Spike Timings 131
4.2.4.3 Reinforcement Learning Example 135
4.3 Learning in Deep SNNs 146
4.3.1 SpikeProp 146
4.3.2 Stack of Shallow Networks 147
4.3.3 Conversion from ANNs 148
4.3.4 Recent Advances in Backpropagation for Deep SNNs 150
4.3.5 Learning Through Modulating Weight-Dependent STDP in Multilayer Neural Networks 151
4.3.5.1 Motivations 151
4.3.5.2 Learning Through Modulating Weight-Dependent STDP 151
4.3.5.3 Simulation Results 158
References 167
5 Hardware Implementations of Spiking Neural Networks 173
5.1 The Need for Specialized Hardware 173
5.1.1 Address-Event Representation 173
5.1.2 Event-Driven Computation 174
5.1.3 Inference with a Progressive Precision 175
5.1.4 Hardware Considerations for Implementing the Weight-Dependent STDP Learning Rule 181
5.1.4.1 Centralized Memory Architecture 182
5.1.4.2 Distributed Memory Architecture 183
5.2 Digital SNNs 186
5.2.1 Large-Scale SNN ASICs 186
5.2.1.1 SpiNNaker 186
5.2.1.2 TrueNorth 187
5.2.1.3 Loihi 191
5.2.2 Small/Moderate-Scale Digital SNNs 192
5.2.2.1 Bottom-Up Approach 192
5.2.2.2 Top-Down Approach 193
5.2.3 Hardware-Friendly Reinforcement Learning in SNNs 194
5.2.4 Hardware-Friendly Supervised Learning in Multilayer SNNs 199
5.2.4.1 Hardware Architecture 199
5.2.4.2 CMOS Implementation Results 205
5.3 Analog/Mixed-Signal SNNs 210
5.3.1 Basic Building Blocks 210
5.3.2 Large-Scale Analog/Mixed-Signal CMOS SNNs 211
5.3.2.1 CAVIAR 211
5.3.2.2 BrainScaleS 214
5.3.2.3 Neurogrid 215
5.3.3 Other Analog/Mixed-Signal CMOS SNN ASICs 216
5.3.4 SNNs Based on Emerging Nanotechnologies 216
5.3.4.1 Energy-Efficient Solutions 217
5.3.4.2 Synaptic Plasticity 218
5.3.5 Case Study: Memristor Crossbar Based Learning in SNNs 220
5.3.5.1 Motivations 220
5.3.5.2 Algorithm Adaptations 222
5.3.5.3 Non-idealities 231
5.3.5.4 Benchmarks 238
References 238
6 Conclusions 247
6.1 Outlooks 247
6.1.1 Brain-Inspired Computing 247
6.1.2 Emerging Nanotechnologies 249
6.1.3 Reliable Computing with Neuromorphic Systems 250
6.1.4 Blending of ANNs and SNNs 251
6.2 Conclusions 252
References 253
A Appendix 257
A.1 Hopfield Network 257
A.2 Memory Self-Repair with Hopfield Network 258
References 266
Index 269
Trade Policy (Buyer's Notice)
About the product:
- ● Authenticity guarantee: This website is affiliated with China International Book Trading Corporation, ensuring that all books are 100% genuine.
- ● Eco-friendly paper: Most imported books are printed on lightweight, environmentally friendly paper, which is slightly yellowish in color and relatively light in weight.
- ● Deckle-edge editions: The page edges are intentionally left rough and uneven. These are usually hardcover editions and have greater collectible value.
About returns and exchanges:
- Due to the special nature of pre-ordered products, once a purchase order has been formally placed, the buyer may not cancel all or part of the order without cause.
- Due to the special nature of imported books, in the following cases please refuse delivery and have the courier return the goods:
- ● Damaged outer packaging / wrong item shipped / missing items / damaged book exterior / incomplete accessories (e.g., CDs)
Please then contact us on a business day by phone at 400-008-1110.
- After signing for the package, if any of the following occurs, please contact customer service within 5 business days to arrange a return or exchange:
- ● Missing pages / misordered pages / misprints / loose binding
About shipping time:
- Under normal circumstances:
- ● [In stock]: shipped by courier from our Beijing warehouse within 48 hours of ordering.
- ● [Pre-order] / [Pre-sale]: shipped from abroad after ordering, with an estimated arrival time of about 5-8 weeks. The store ships via ZTO Express by default; if you require SF Express, shipping is payable on delivery.
- ● For customers who need an invoice, shipping may be delayed by an additional 1-2 business days (for urgent invoice requests, please call 010-68433105/3213).
- ● If other special circumstances affect shipping times, we will post a notice on the website as soon as possible; please keep an eye out.
About delivery time:
- Once imported books have cleared customs and entered our warehouse, they are shipped via third-party couriers, so we can only guarantee dispatch within the stated time and cannot guarantee an exact delivery date.
- ● Major cities: usually 2-4 days
- ● Remote areas: usually 4-7 days
About phone support hours:
- Calls to 010-68433105/3213 are answered Monday through Friday, 8:30 a.m. to 5:00 p.m. We cannot take calls on weekends or public holidays; we appreciate your understanding.
- At other times you can also reach us by email at customer@readgo.cn; messages are handled with priority on business days.
About couriers:
- ● Paid orders: delivered mainly by ZTO Express and ZJS Express. For order status inquiries, please call 010-68433105/3213.