Algorithms and Implementations for Practical and Energy-efficient Polar Decoders

Algorithms and Implementations for Practical and Energy-efficient Polar Decoders PDF Author: Furkan Ercan
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
"Polar codes are a class of error-correcting codes that can provably achieve the channel capacity and have simple encoding and decoding mechanisms. Due to their attractive properties, the interest in polar codes has been increasing rapidly in recent years and they have been adopted for use in the $5^{\text{th}}$ generation (5G) wireless systems standard. Specifically, they have been chosen as the coding scheme for the control channel of enhanced mobile broadband (eMBB) use case, and they are being considered for other use cases within 5G. Successive cancellation (SC) decoding is the primary decoding algorithm of polar codes and has low implementation complexity. The two main problems of SC decoding is its mediocre error-correction performance at practical codeword lengths and its long latency due to its sequential nature. To overcome the latency problem, fast decoding techniques have been introduced to speed up the decoding process by an order of magnitude. Secondly, several SC-based decoding algorithms have been proposed to improve the decoding performance, such as SC-List (SCL) and SC-Flip (SCF) decoding. SCL decoding uses parallel SC decoders to improve error-correction performance and therefore suffers from high implementation complexity. On the other hand, the SCF decoding algorithm uses multiple iterations of SC decoding to improve error-correction performance and maintains a similar implementation complexity to that of SC decoding. Therefore, SCF is a promising low-complexity alternative to SCL decoding.This thesis covers several improvements for SC and SCF-based polar decoders. First, we describe how to utilize the hardware resources of fast SC decoding more efficiently and show how to improve the throughput. Second, we propose a partitioned decoding scheme for the SCF algorithm that is able to improve the error-correction performance and reduce the average number of iterations. Third, we describe how to implement energy-efficient polar decoders using fast SC and fast SCF algorithms. We propose the first fast SCF decoder in hardware and show that an energy-efficient approach with improved throughput is possible. Then, we describe the Thresholded SCF (TSCF) algorithm, which has improved error-correction performance and less computational complexity than the conventional SCF algorithm. We implement fast decoding techniques to create the Fast-TSCF decoder that is able to outperform decoders of similar performance in terms of throughput and area efficiency. Finally, we describe many simplifications and optimizations for the Dynamic SCF (DSCF) decoding algorithm, which is known for its significantly improved error-correction performance but has impractical computations. We replace its transcendental computations with simple approximations, introduce fast decoding techniques, reduce its computational complexity by using a theoretical framework, and demonstrate with hardware implementation. The proposed practical DSCF implementation is able to match the error-correction performance and throughput of SCL-based decoders with large list sizes and stands as a low-complexity alternative"--

High-Speed Decoders for Polar Codes

High-Speed Decoders for Polar Codes PDF Author: Pascal Giard
Publisher: Springer
ISBN: 3319597825
Category : Computers
Languages : en
Pages : 108

Book Description
Polar codes, a new class of provably capacity-achieving error-correction codes, are suitable for many problems, such as lossless and lossy source coding, problems with side information, and the multiple access channel. In this first comprehensive book on the implementation of decoders for polar codes, the authors take a tutorial approach to explain the practical decoder implementation challenges and trade-offs in both software and hardware. They also demonstrate new trade-offs in latency, throughput, and complexity in software implementations for high-performance computing and GPGPUs, and in hardware implementations using custom processing elements, full-custom application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs). Presenting a good overview of this research area and of future directions, High-Speed Decoders for Polar Codes is perfect for any researcher or SDR practitioner looking to implement efficient decoders for polar codes, as well as for students and professors in a modern error-correction class. As polar codes have been accepted to protect the control channel in the next-generation mobile communication standard (5G) developed by the 3GPP, the audience also includes engineers who will have to implement decoders for such codes and hardware engineers designing the backbone of communication networks.

Efficient Encoders and Decoders for Polar Codes

Efficient Encoders and Decoders for Polar Codes PDF Author: Gabi Sarkis
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
"Error-correcting codes enable reliable and efficient data communication and storage and have become an indispensable part of information processing systems. Polar codes are the latest discovery in the quest for more powerful error correction. They are the first codes with an explicit construction to provably achieve the symmetric capacity of memoryless channels. Moreover, this performance is realizable using the low complexity successive-cancellation decoding algorithm. Despite their attractive theoretical properties, polar codes suffer from two major issues hindering practical implementations: a slow decoding algorithm and mediocre error-correction performance at moderate code lengths. Solutions to these problems in the literature have been mutually exclusive. Decoding speed can be increased, but at the cost of degrading error-correction capability. On the other hand, the error-correction performance can be greatly improved using a list decoding algorithm, which incurs a large cost in both decoding speed and memory requirements. This incompatibility in solutions must be resolved before polar codes become practical. This thesis presents novel, compatible solutions to these problems. It introduces a new decoding algorithm that has the same error-correction performance as successive cancellation, but offers significantly lower latency and higher throughput. A corresponding decoder implementation is shown to be an order of magnitude faster than the state-of-the-art in the literature. Next, the speed of successive-cancellation list decoders for polar codes is improved without degrading error-correction performance. The resulting software decoders implementing the proposed algorithm offer throughput and error-correction performance exceeding the best in the literature and meeting the requirements for the 802.11n WiFi standard. This work also brings to light another beneficial property of polar codes that had not been studied before. It presents encoders and decoders that can operate on polar codes of any length and rate, while maintaining low implementation complexity and fast operating speed. Such implementations are important in systems that must adapt to varying channel conditions. Finally, two methods are introduced that improve error-correction performance without incurring the memory overhead of list decoding. The first targets systems where re-transmission is impossible or highly undesirable. The second improves the performance of software decoders using polar codes with rates very close to the channel capacity." --

Fast, Flexible, and Area-efficient Decoders for Polar Codes

Fast, Flexible, and Area-efficient Decoders for Polar Codes PDF Author: Seyyed Ali Hashemi
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
"Polar codes have received a great deal of attention in the past few years to the extent that they are selected to be included in the 5th Generation of Wireless Communications Standard (5G). Specifically, polar codes were selected as the coding scheme for the Enhanced Mobile Broadband (eMBB) control channel which requires codes of short length. The main bottleneck in the deployment of polar codes in 5G is the design of a decoder which can achieve good error-correction performance, with low hardware implementation cost and high throughput. Successive-Cancellation (SC) decoding was the first algorithm under which polar codes could achieve capacity when the code length is very high. However, for finite practical code lengths, SC decoding falls short in providing a reasonable error-correction performance because of its sub-optimality with respect to the Maximum-Likelihood (ML) decoder. Sphere Decoding (SD) is an algorithm that can achieve the performance of ML decoding with a very high complexity. In order to close the gap between SC and ML decoding, Successive-Cancellation List (SCL) decoding keeps a list of candidates and selects the one with the best Path Metric (PM). Although SCL provides a good error-correction performance, it comes at the cost of higher complexity and lower throughput. In this thesis, we first propose a low complexity SD algorithm which provides a good trade-off between the error-correction performance and the complexity of the decoder for polar codes of short lengths. We then propose algorithms to speed up the SCL decoders. We prove that while these algorithms have much higher throughput than the conventional SCL decoder, they incur no error-correction performance loss. We further propose several techniques to reduce the area occupation in the hardware implementation of SC and SCL decoders by reducing their memory requirements. We solve the flexibility issue of fast SC-based decoders and introduce a completely rate-flexible scheme. Hardware architectures for the proposed algorithms are presented and comparisons with state of the art are made. Finally, we evaluate the performance of polar codes in 5G and we show that polar codes can be used in practical applications by proposing a blind detection scheme with polar codes." --

Towards Practical Software Stack Decoding of Polar Codes

Towards Practical Software Stack Decoding of Polar Codes PDF Author: Harsh Aurora
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
"Error correcting codes are essential in realizing reliable communication over noisy channels. Polar codes are a recent class of linear block error correcting codes, and are the first of their kind to have an explicit construction and asymptotically achieve the symmetric channel capacity over binary-input discrete memoryless channels. They have recently been adopted into the 5G standard in the eMBB control channel. The successive cancellation list decoding algorithm yields near-optimal decoding performance at the cost of high implementation complexity. The successive cancellation stack algorithm has been shown to provide similar decoding performance at a much lower computational complexity, but suffers from a large memory requirement that scales quadratically with the code length, rendering it impractical in most applications. This thesis presents several approaches to increase the practicality of the successive cancellation stack decoding algorithm in software implementations. First, multiple copies of decoder memory are replaced with a single memory, and the stack sorting step is replaced by a linear search. While this comes at the cost of an increase in computational complexity, results show that the large memory requirement and sorting are amongst primary culprits in the mediocre throughput performance of the software stack algorithm. Simulations run on a modern CPU clocked at 3.2 GHz show the throughput increase from 14 Kbps to 6.3 Mbps for a polar code of length 1024. This idea is then extended to allow for a tunable number of decoder memories instantiated, mitigating the increase in computational complexity while providing modest increase in throughput. Third, an early termination criterion is investigated that is shown to reduce the number of bit estimates by up to 58%. Finally, the benefits of the fast simplified successive cancellation list decoder are extended to the stack algorithm, resulting in the first reported implementation of a fast simplified successive cancellation stack decoder that reports a throughput of up to 20.44 Mbps." --

Guessing Random Additive Noise Decoding

Guessing Random Additive Noise Decoding PDF Author: Syed Mohsin Abbas
Publisher: Springer Nature
ISBN: 3031316630
Category : Computers
Languages : en
Pages : 157

Book Description
This book gives a detailed overview of a universal Maximum Likelihood (ML) decoding technique, known as Guessing Random Additive Noise Decoding (GRAND), which has been introduced for short-length and high-rate linear block codes. The interest in short channel codes and the corresponding ML decoding algorithms has recently been reignited in both industry and academia due to the emergence of applications with strict reliability and ultra-low latency requirements. A few of these applications include Machine-to-Machine (M2M) communication, augmented and virtual reality, Intelligent Transportation Systems (ITS), the Internet of Things (IoT), and Ultra-Reliable and Low Latency Communications (URLLC), which is an important use case for the 5G-NR standard. GRAND features both soft-input and hard-input variants. Moreover, there are traditional GRAND variants that can be used with any communication channel, and specialized GRAND variants that are developed for a specific communication channel. This book presents a detailed overview of these GRAND variants and their hardware architectures. The book is structured into four parts. Part 1 introduces linear block codes and the GRAND algorithm. Part 2 discusses the hardware architectures for traditional GRAND variants that can be applied to any underlying communication channel. Part 3 describes the hardware architectures for specialized GRAND variants developed for specific communication channels. Lastly, Part 4 provides an overview of recently proposed GRAND variants and their unique applications. This book is ideal for researchers or engineers looking to implement high-throughput and energy-efficient hardware for GRAND, as well as seasoned academics and graduate students interested in the topic of VLSI hardware architectures. Additionally, it can serve as reading material in graduate courses covering modern error-correcting codes and Maximum Likelihood decoding for short codes.
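
GRAND decodes by guessing the noise rather than searching the codebook: putative error patterns are tried in decreasing order of likelihood, and the first pattern whose removal yields a valid codeword is accepted. Below is a minimal Python sketch of the hard-input variant, using a toy (7,4) Hamming code purely as an example codebook; ordering the guesses by Hamming weight corresponds to ML decoding on a binary symmetric channel with crossover probability below 1/2.

```python
from itertools import combinations
import numpy as np

# parity-check matrix of the (7,4) Hamming code, used here only as a toy codebook
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def grand_hard(y, H, max_weight=3):
    """Hard-input GRAND: guess noise patterns in order of increasing Hamming
    weight and accept the first guess whose removal yields a codeword."""
    n = len(y)
    for w in range(max_weight + 1):
        for positions in combinations(range(n), w):
            candidate = y.copy()
            for p in positions:                   # remove the guessed noise
                candidate[p] ^= 1
            if not np.any(H @ candidate % 2):     # zero syndrome => codeword found
                return candidate
    return None                                   # give up (declare an erasure)

# a codeword with a single flipped bit is recovered by the weight-1 guesses
received = np.array([0, 0, 1, 0, 0, 0, 0])
assert np.array_equal(grand_hard(received, H), np.zeros(7, dtype=int))
```

Soft-input variants order the guesses by the reliabilities of the received symbols instead of by Hamming weight, which is what makes the scheme attractive for the high-rate, short-length codes targeted by the book.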

Springer Handbook of Optical Networks

Springer Handbook of Optical Networks PDF Author: Biswanath Mukherjee
Publisher: Springer Nature
ISBN: 3030162508
Category : Technology & Engineering
Languages : en
Pages : 1169

Book Description
This handbook is an authoritative, comprehensive reference on optical networks, the backbone of today’s communication and information society. The book reviews the many underlying technologies that enable the global optical communications infrastructure, and it also explains current research trends targeted at continued capacity scaling and enhanced networking flexibility in support of unabated traffic growth fueled by ever-emerging new applications. The book is divided into four parts: Optical Subsystems for Transmission and Switching, Core Networks, Datacenter and Super-Computer Networking, and Optical Access and Wireless Networks. Each chapter is written by world-renowned experts representing academia, industry, and international government and regulatory agencies. Every chapter provides a complete picture of its field, from entry-level information through a snapshot of the respective state-of-the-art technologies to emerging research trends, offering something useful both for the novice who wants to become familiar with the field and for the expert who wants a concise view of future trends.

Energy-efficient Decoding of Low-density Parity-check Codes

Energy-efficient Decoding of Low-density Parity-check Codes PDF Author: Kevin Cushon
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
"Low-density parity-check (LDPC) codes are a type of error correcting code that are frequently used in high-performance communications systems, due to their ability to approach the theoretical limits of error correction. However, their iterative soft-decision decoding algorithms suffer from high computational complexity, energy consumption, and auxiliary circuit implementation difficulties. It is of particular interest to develop energy-efficient LDPC decoders in order to decrease cost of operation, increase battery life in portable devices, lessen environmental impact, and increase the range of applications for these powerful codes.In this dissertation, we propose four new LDPC decoder designs with the primary goal of improving energy efficiency over previous designs. First, we present a bidirectional interleaver based on transmission gates, which reduces wiring complexity and associated parasitic energy losses. Second, we present an iterative decoder design based on pulse-width modulated min-sum (PWM-MS). We demonstrate that the pulse width message format reduces switching activity, computational complexity, and energy consumption compared to other recent LDPC decoder designs. Third, wepresent decoders based on differential binary (DB) algorithms. We also propose an improved differential binary (IDB) decoding algorithm, which greatly increases throughput and reduces energy consumption compared to recent decoders ofsimilar error correction capability. Finally, we present decoders based on gear-shift algorithms, which use multiple decoding rules to minimize energy consumption. We propose gear-shift pulse-width (GSP) and IDB with GSP (IGSP) algorithms, and demonstrate that they achieve superior energy efficiency without compromising error correction performance." --

Algorithms and Architectures for Efficient Low Density Parity Check (LDPC) Decoder Hardware

Algorithms and Architectures for Efficient Low Density Parity Check (LDPC) Decoder Hardware PDF Author: Tinoosh Mohsenin
Publisher:
ISBN: 9781124509181
Category :
Languages : en
Pages :

Book Description
Many emerging and future communication applications require a significant amount of high-throughput data processing and operate with decreasing power budgets. This need for greater energy efficiency and improved performance of electronic devices demands a joint optimization of algorithms, architectures, and implementations. Low Density Parity Check (LDPC) decoding has received significant attention due to its superior error-correction performance and has been adopted by recent communication standards such as 10GBASE-T 10 Gigabit Ethernet. Currently, high-performance LDPC decoders are designed as dedicated blocks within a System-on-Chip (SoC) and require many processing nodes. These nodes require a large amount of interconnect circuitry whose delay and power are dominated by wires. Therefore, low clock rates and increased area are a common result of the codes' inherently irregular and global communication patterns. As the delay and energy costs caused by wires are likely to increase in future fabrication technologies, new solutions dealing with future VLSI challenges must be considered. Three novel message-passing decoding algorithms, Split-Row, Multi-Split, and Split-Row Threshold, are introduced, which significantly reduce processor logical complexity and local and global interconnections. One conventional and four Split-Row Threshold LDPC decoders compatible with the 10GBASE-T standard are implemented in 65 nm CMOS and presented along with their trade-offs in error-correction performance, wire interconnect complexity, decoder area, power dissipation, and speed. For additional power saving, an adaptive wordwidth decoding algorithm is proposed which switches between a 6-bit Normal Mode and a reduced 3-bit Low Power Mode depending on the SNR and the decoding iteration. A 16-way Split-Row Threshold implementation with adaptive wordwidth achieves improvements in area, throughput, and energy efficiency of 3.9x, 2.6x, and 3.6x, respectively, compared to a MinSum Normalized implementation, with an SNR loss of 0.25 dB at BER = 10⁻⁷. The decoder occupies a die area of 5.10 mm², operates at up to 185 MHz at 1.3 V, and attains an average throughput of 85.7 Gbps with early termination. Low-power operation at 0.6 V gives a worst-case throughput of 9.3 Gbps, above the 6.4 Gbps 10GBASE-T requirement, with an average power of 31 mW.
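
The Split-Row family reduces check-node wiring by letting each partition of a parity-check row compute its minimum locally and exchange only the sign and, in the Threshold variant, a single-bit hint that the local minimum is below a threshold T. The Python sketch below is a paraphrase of that idea for a two-way split; the function name, the clamping rule, and the toy values are my reading of the published approach and may differ in detail from the dissertation's exact formulation.

```python
import numpy as np

def split_row_threshold_check(v2c, T=1.0):
    """Check-node magnitude update with the row split into two partitions,
    in the spirit of Split-Row Threshold: each half computes its minimum
    locally, shares the global sign plus a one-bit "my minimum is below T"
    hint, and clamps its local magnitude to T when the other half asserts it."""
    v2c = np.asarray(v2c, dtype=float)
    half = len(v2c) // 2
    total_sign = np.prod(np.sign(v2c)) or 1.0       # sign wires are cheap to share
    out = np.empty_like(v2c)
    for lo, hi in ((0, half), (half, len(v2c))):
        local = np.abs(v2c[lo:hi])
        other = np.abs(np.concatenate((v2c[:lo], v2c[hi:])))
        hint = other.size > 0 and other.min() < T   # 1-bit signal from the other half
        for k in range(lo, hi):
            rest = np.delete(local, k - lo)         # exclude the target edge
            mag = rest.min() if rest.size else T
            if hint:
                mag = min(mag, T)                   # other half holds something below T
            out[k] = total_sign * np.sign(v2c[k]) * mag
    return out

# toy row of six variable-to-check messages
print(split_row_threshold_check([1.8, -0.6, 2.3, 0.9, -1.4, 2.0], T=1.0))
```

Keeping the magnitude computation local to each partition is what cuts the long row-wide wires that dominate delay and power in a monolithic min-sum check node.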

Resource Efficient LDPC Decoders

Resource Efficient LDPC Decoders PDF Author: Vikram Chandrasetty
Publisher: Academic Press
ISBN: 9780128112557
Category : Technology & Engineering
Languages : en
Pages : 0

Book Description
This book takes a practical hands-on approach to developing low-complexity algorithms and transforming them into working hardware. It follows a complete design approach - from algorithms to hardware architectures - and addresses some of the challenges associated with their design, providing insight into implementing innovative architectures based on low-complexity algorithms. The reader will learn: modern techniques to design, model, and analyze low-complexity LDPC algorithms as well as their hardware implementation; how to reduce computational complexity and power consumption using computer-aided design techniques; and all aspects of the design spectrum, from algorithms to hardware implementation and performance trade-offs.