High Energy Efficiency Neural Network Processor with Combined Digital and Computing-in-Memory Architecture

Author: Jinshan Yue
Publisher: Springer Nature
ISBN: 9819734770
Languages: en
Pages: 128

Book Description

Efficient Processing of Deep Neural Networks

Author: Vivienne Sze
Publisher: Springer Nature
ISBN: 3031017668
Category: Technology & Engineering
Languages: en
Pages: 254

Book Description
This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, that accuracy comes at the cost of high computational complexity. Techniques that enable efficient processing of DNNs, improving key metrics such as energy efficiency, throughput, and latency without sacrificing accuracy or increasing hardware cost, are therefore critical to the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design for improved energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field, as well as a formalization and organization of key concepts from contemporary work that may spark new ideas.

High Performance Computing for Big Data

Author: Chao Wang
Publisher: CRC Press
ISBN: 1351651579
Category: Computers
Languages: en
Pages: 360

Book Description
High-Performance Computing for Big Data: Methodologies and Applications explores emerging high-performance architectures for data-intensive applications, novel efficient analytical strategies to boost data processing, and cutting-edge applications in diverse fields such as machine learning, life science, neural networks, and neuromorphic engineering. The book is organized into two main sections. The first section covers Big Data architectures, including cloud computing systems and heterogeneous accelerators, as well as emerging 3D IC design principles for memory architectures and devices. The second section illustrates emerging and practical applications of Big Data across several domains, including bioinformatics, deep learning, and neuromorphic engineering.

Features:
- Covers a wide range of Big Data architectures, including distributed systems such as Hadoop/Spark
- Includes accelerator-based approaches for Big Data applications, such as GPU-based acceleration techniques and hardware acceleration with FPGAs/CGRAs/ASICs
- Presents emerging memory architectures and devices, such as NVM, STT-RAM, and 3D IC design principles
- Describes advanced algorithms for different Big Data application domains
- Illustrates novel analytics techniques for Big Data applications, along with scheduling, mapping, and partitioning methodologies

Featuring contributions from leading experts, this book presents state-of-the-art research on the methodologies and applications of high-performance computing for Big Data.

About the Editor
Dr. Chao Wang is an Associate Professor in the School of Computer Science at the University of Science and Technology of China. He is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), Applied Soft Computing, Microprocessors and Microsystems, IET Computers & Digital Techniques, and the International Journal of Electronics. He was the recipient of the Youth Innovation Promotion Association of CAS, an ACM China Rising Star Honorable Mention (2016), and a best IP nomination at DATE 2015. He serves on the CCF Technical Committee on Computer Architecture and the CCF Task Force on Formal Methods. He is a Senior Member of IEEE, CCF, and ACM.

Deep In-memory Architectures for Machine Learning

Author: Mingu Kang
Publisher: Springer Nature
ISBN: 3030359719
Category: Technology & Engineering
Languages: en
Pages: 181

Book Description
This book describes recent innovations in deep in-memory architectures for realizing AI systems that operate at the edge of energy-latency-accuracy trade-offs. From first principles to lab prototypes, it provides a comprehensive view of this emerging topic for both the practicing engineer in industry and the researcher in academia. The book is a journey into the exciting world of AI systems in hardware.

Approximate Computing

Author: Weiqiang Liu
Publisher: Springer Nature
ISBN: 3030983471
Category: Technology & Engineering
Languages: en
Pages: 607

Book Description
This book explores the technological developments of the new paradigm of approximate computing at various levels of abstraction. The authors describe the state of the art in a single source, covering the entire spectrum of research activities in approximate computing and bridging the device, circuit, architecture, and system levels. The content includes tutorials, reviews, and surveys of current theoretical and experimental results, design methodologies, and applications developed in approximate computing, addressed to a broad readership as well as specialists.

- Serves as a single-source reference to the state of the art of approximate computing
- Covers a broad range of topics, from circuits to applications
- Includes contributions by leading researchers from academia and industry

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Author: Nan Zheng
Publisher: John Wiley & Sons
ISBN: 1119507391
Category: Computers
Languages: en
Pages: 300

Book Description
Explains current co-design and co-optimization methodologies for building hardware neural networks and algorithms for machine learning applications.

This book focuses on how to build energy-efficient hardware for neural networks with learning capabilities, and provides co-design and co-optimization methodologies for building hardware neural networks that can learn. Presenting a complete picture from high-level algorithms to low-level implementation details, Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design also covers many fundamentals and essentials of neural networks (e.g., deep learning), as well as hardware implementations of neural networks. The book begins with an overview of neural networks. It then discusses algorithms for utilizing and training rate-based artificial neural networks. Next comes an introduction to various options for executing neural networks, ranging from general-purpose processors to specialized hardware and from digital to analog accelerators. A design example of an energy-efficient accelerator for adaptive dynamic programming with neural networks is also presented. An examination of fundamental concepts and popular learning algorithms for spiking neural networks follows, along with a look at the hardware for spiking neural networks. A subsequent chapter offers readers three design examples (two based on conventional CMOS and one on emerging nanotechnology) that implement the learning algorithm found in the previous chapter. The book concludes with an outlook on the future of neural network hardware.

- Includes a cross-layer survey of hardware accelerators for neuromorphic algorithms
- Covers the co-design of architecture and algorithms with emerging devices for much-improved computing efficiency
- Focuses on the co-design of algorithms and hardware, which is especially critical for using emerging devices, such as traditional memristors or diffusive memristors, for neuromorphic computing

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design is an ideal resource for researchers, scientists, software engineers, and hardware engineers dealing with ever-increasing requirements on power consumption and response time. It is also excellent for teaching and training undergraduate and graduate students about the latest generation of neural networks with powerful learning capabilities.

Neuromorphic Engineering

Author: Elishai Ezra Tsur
Publisher: CRC Press
ISBN: 1000421295
Category: Computers
Languages: en
Pages: 340

Book Description
The brain is not a glorified digital computer. It does not store information in registers, and it does not mathematically transform mental representations to establish perception or behavior. The brain cannot be downloaded to a computer to provide immortality, nor can it destroy the world by sending its emergent consciousness traveling through cyberspace. However, studying the brain's core computational architecture can inspire scientists, computer architects, and algorithm designers to think fundamentally differently about their craft. Neuromorphic engineers have the ultimate goal of realizing machines with some aspects of cognitive intelligence. They aspire to design computing architectures that could surpass the performance of existing digital von Neumann-based computing architectures. In that sense, brain research bears the promise of a new computing paradigm. As part of a complete cognitive hardware and software ecosystem, neuromorphic engineering opens new frontiers for neuro-robotics, artificial intelligence, and supercomputing applications. The book presents neuromorphic engineering from three perspectives: the scientist, the computer architect, and the algorithm designer. It zooms in and out of the different disciplines, allowing readers with diverse backgrounds to understand and appreciate the field. Overall, the book covers the basics of neuronal modeling, neuromorphic circuits, neural architectures, event-based communication, and the neural engineering framework.

Resistive Random Access Memory (RRAM)

Author: Shimeng Yu
Publisher: Springer Nature
ISBN: 3031020308
Category: Technology & Engineering
Languages: en
Pages: 71

Book Description
RRAM technology has made significant progress in the past decade as a competitive candidate for the next generation of non-volatile memory (NVM). This lecture is a comprehensive tutorial on metal oxide-based RRAM technology, from device fabrication to array architecture design. State-of-the-art RRAM device performance, characterization, and modeling techniques are summarized, and the design considerations for integrating RRAM into large-scale arrays with peripheral circuits are discussed. Chapter 2 introduces RRAM device fabrication techniques and methods to eliminate the forming process, and shows scalability down to the sub-10 nm regime; it then presents device performance aspects such as programming speed, variability control, and multi-level operation, and finally discusses reliability issues such as cycling endurance and data retention. Chapter 3 discusses the RRAM physical mechanism, the materials characterization techniques used to observe the conductive filaments, and the electrical characterization techniques used to study the electronic conduction processes. It also presents numerical device modeling techniques for simulating the evolution of the conductive filaments, as well as compact device modeling techniques for circuit-level design. Chapter 4 discusses the two common RRAM array architectures for large-scale integration: one-transistor-one-resistor (1T1R) and the cross-point architecture with selector. The write/read schemes are presented and the peripheral circuitry design considerations are discussed. Finally, a 3D integration approach is introduced for building ultra-high-density RRAM arrays. Chapter 5 gives a brief summary and an outlook on RRAM's potential novel applications beyond NVM.

Processing-in-Memory for AI

Author: Joo-Young Kim
Publisher: Springer Nature
ISBN: 3030987817
Category: Technology & Engineering
Languages: en
Pages: 168

Book Description
This book provides a comprehensive introduction to processing-in-memory (PIM) technology, from architectures to circuit implementations across multiple memory types, and describes how PIM can be a viable computer architecture in the era of AI and big data. The authors summarize the challenges of AI hardware systems, along with PIM constraints and approaches, to derive system-level requirements for a practical and feasible PIM solution. The presentation focuses on feasible PIM solutions that can be implemented and used in real systems, including architectures, circuits, and implementation cases for each major memory type (SRAM, DRAM, and ReRAM).

Advanced Memory Technology

Author: Ye Zhou
Publisher: Royal Society of Chemistry
ISBN: 183916994X
Category: Technology & Engineering
Languages: en
Pages: 752

Book Description
Advanced memory technologies are impacting the information era and represent a vibrant research area of great interest in the electronics industry. The demand for data storage, computing performance, and energy efficiency is increasing exponentially and will exceed the capabilities of current information technologies. Alternatives to traditional silicon technology and novel memory principles are expected to meet the needs of modern data-intensive applications such as big data and artificial intelligence (AI). Functional materials and methodologies may play a key role in building novel, high-speed, low-power computing and data storage systems. This book covers functional materials and devices for data storage, alongside electronic devices with new possibilities for future computing, from neuromorphic next-generation AI to in-memory computing. By summarizing different memory materials and devices with an emphasis on future applications, it enables graduate students and researchers to systematically learn and understand the design, material characteristics, device operation principles, specialized device applications, and mechanisms of the latest reported memory materials and devices.