Low Density Parity Check (LDPC)
What is LDPC?
Low-Density Parity Check (LDPC) codes are a class of error-correcting codes used to correct transmission errors in communication systems. Using LDPC codes, channel capacities close to the theoretical Shannon Limit can be achieved.
Originally known as Gallager codes, they were first proposed by R.G. Gallager in 1962, but the decoding techniques they require were too complex for practical implementation at the time of their conception.
They were then left largely untouched for decades, until being re-discovered in the 1990s, from which point they were introduced into a wide range of wired and wireless communications standards, including digital video broadcasting, powerline networking, WiFi, and 5G-New Radio (5G-NR).
How does LDPC work?
The simplest coding scheme is the single parity check code. This involves the addition of a single extra bit to a binary message, the value of which depends on the bits in the message. In an even parity code, the additional bit added to each message ensures that the XOR sum of the bits in the codeword is 0. Whilst this coding scheme can detect an odd number of bit errors, it is not powerful enough to correct them.
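The even parity scheme described above can be sketched in a few lines. This is a minimal illustration, not production code:

```python
def add_even_parity(bits):
    """Append a parity bit so the XOR sum of the codeword is 0."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def check_parity(codeword):
    """Return True if the codeword passes the even parity check."""
    parity = 0
    for b in codeword:
        parity ^= b
    return parity == 0

message = [1, 0, 1, 1]
codeword = add_even_parity(message)   # [1, 0, 1, 1, 1]
assert check_parity(codeword)

# A single bit error is detected, but its position is unknown,
# so the error cannot be corrected.
corrupted = codeword.copy()
corrupted[2] ^= 1
assert not check_parity(corrupted)
```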
Hamming codes are linear error-correcting codes, built by adding multiple parity bits, which offer the next level of error protection: a Hamming code can detect one-bit and two-bit errors, or correct one-bit errors (without detecting uncorrected two-bit errors).
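As a concrete illustration of how multiple parity bits enable correction, here is a sketch of the classic Hamming(7,4) code, where the syndrome computed from the three parity checks directly gives the position of a single bit error:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.

    Bit positions are 1..7, with parity bits at positions 1, 2 and 4.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome is the 1-based
    position of a single bit error (0 means no error detected)."""
    c = c.copy()
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1   # flip the erroneous bit back
    return c

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[4] ^= 1                         # introduce one bit error
assert hamming74_correct(corrupted) == codeword
```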
LDPC codes are also linear error correcting codes, but the big advantage of these is that they deliver channel capacities close to the Shannon Limit whilst benefiting from a number of appealing features that make them very attractive for implementation.
The LDPC decoding algorithm uses iterative belief propagation decoding, and can be implemented using low-complexity calculations, resulting in a relatively low design and implementation cost for the processing hardware. The inherent parallelism of the LDPC decoding algorithm also maps intuitively to distributed parallel computation units for high-throughput and low-latency implementations.
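The iterative, parallel character of LDPC decoding can be illustrated with the simpler hard-decision bit-flipping algorithm, a lower-complexity relative of belief propagation: every parity check is evaluated independently (and hence can run in parallel), and bits involved in the most failed checks are flipped. The small parity-check matrix `H` below is an arbitrary toy example for demonstration, not a code from any standard:

```python
# Toy low-density parity-check matrix: each row is one parity check.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def bit_flip_decode(received, max_iters=10):
    """Hard-decision bit-flipping decoding over the matrix H."""
    bits = received.copy()
    for _ in range(max_iters):
        # Evaluate every parity check. The checks are independent,
        # which is where the parallelism of LDPC decoding comes from.
        failed = [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]
        if not any(failed):
            return bits  # all parity checks satisfied
        # Count, for each bit, how many failed checks it participates in.
        votes = [sum(f for f, row in zip(failed, H) if row[i])
                 for i in range(len(bits))]
        # Flip the bit(s) involved in the most failed checks.
        worst = max(votes)
        bits = [b ^ (v == worst) for b, v in zip(bits, votes)]
    return bits

# The all-zero word is a valid codeword of any linear code; after one
# bit error, the decoder recovers it.
print(bit_flip_decode([0, 0, 0, 1, 0, 0]))   # [0, 0, 0, 0, 0, 0]
```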
In 1948, Claude Shannon defined how many error-free bits/second can theoretically be sent through a noisy, bandlimited channel. This is often referred to as the Shannon Limit. Shannon proved that:
C = B log2(1 + S/N), where:
- C is the theoretical upper bound on the net bit rate, excluding error-correction codes
- B is the bandwidth of the channel in Hz
- S is the average received signal power over the bandwidth measured in watts
- N is the average power of the noise and interference over the bandwidth, measured in watts
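A quick worked example of the formula above, using illustrative values (a 20 MHz channel at an SNR of 20 dB, chosen as assumptions for demonstration):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Theoretical maximum error-free bit rate, C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)          # 20 dB -> S/N = 100
capacity = shannon_capacity(20e6, snr_linear)
print(f"{capacity / 1e6:.1f} Mbit/s")     # about 133.2 Mbit/s
```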
If there were such a thing as a noise-free analogue channel, it would be possible to transmit unlimited amounts of error-free data over it. Real channels, however, are subject to limitations imposed by bandwidth and noise.
The key uses and applications of LDPC
A low-density parity-check (LDPC) code is a linear error-correcting block code, suitable for correcting errors in large blocks transmitted over very noisy channels. Largely due to its close-to-Shannon-Limit channel capacity performance, it has now been introduced into a range of standards, including:
- 3GPP 5G-NR data channel
- DOCSIS 3.1 Cable modem standard
- IEEE 802.11n WiFi
- DVB-S2/T2/C2 Digital Video Broadcasting standards
- G.hn ITU-T standard for power-line networking
- IEEE 802.3an 10 Gbps Ethernet over twisted pair
- IEEE 802.16e WiMAX
How does LDPC compare with other high-performance error coding schemes?
Similar to LDPC, Turbo codes perform close to the Shannon Limit. When they were introduced, the relatively low complexity for the performance offered put Turbo codes at the core of 3G and 4G. However, when the block size is large, or the code rate is high, LDPC decoders have better performance than Turbo decoders. Moreover, because LDPC codes were first proposed in the early 1960s, by the 1990s this technology was largely available patent-free, a very attractive differentiator when compared to Turbo codes.
Also, in contrast to turbo codes, there is a wide variety of possible algorithms and levels of parallelisation that may be considered for the design of LDPC decoders, presenting designers with a range of options to consider, depending on the desired characteristics.
Polar codes also perform close to the Shannon Limit, and are superior to LDPC for short block lengths; they have been selected for the control channel of 5G-NR. LDPC has better performance and lower computational complexity at longer block lengths, and has been chosen for the 5G-NR data channel.
For more information, please download our Survey of Turbo, LDPC, and Polar Decoder ASIC Implementations.
Implementing LDPC for 5G applications
As mentioned, while the design of the individual processing components for LDPC is relatively simple, the design of a complete LDPC decoder is subject to a complex interaction of different system requirements, such as:
- Processing throughput,
- Processing latency,
- Hardware resource requirements,
- Error correction capability,
- Processing energy efficiency,
- Bandwidth efficiency and flexibility.
These requirements are traded off against one another through design decisions including:
- The architecture,
- The LDPC code employed,
- The parallelism,
- The algorithm used, and the number of decoding iterations.
LDPC codecs for 5G infrastructure applications are often implemented using Field-Programmable Gate Array (FPGA) devices, which facilitate rapid prototyping and fast parallel logic processing. The implementation characteristics of these FPGA-based LDPC decoders are increasingly informing the holistic design of communication systems. Another very practical use of FPGA-based LDPC decoders is in the research environment, where the fast, highly parallel logic resources available on an FPGA are very useful for measuring the Bit Error Rate (BER) performance of various codes. More explicitly, simulations that would take days on a computer can be completed in only hours when using a custom FPGA implementation. However, the majority of available FPGA-based LDPC decoder designs are tied to one specific code, meaning a great deal of extra design work is needed to modify the architecture every time a new LDPC code is required.
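The BER measurement loop that FPGAs accelerate can be sketched in software. In this hedged illustration a trivial repetition-3 code stands in for an LDPC code, and the channel is a binary symmetric channel with crossover probability `p`; an FPGA speeds up exactly this kind of loop by running many channel and decoder instances in parallel:

```python
import random

def ber_repetition3(p, num_bits=100_000, seed=0):
    """Monte Carlo BER estimate for a repetition-3 code over a
    binary symmetric channel with crossover probability p."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(num_bits):
        bit = rng.randint(0, 1)
        # Encode (repeat three times), transmit over the BSC,
        # then decode by majority vote.
        received = [bit ^ (rng.random() < p) for _ in range(3)]
        decoded = 1 if sum(received) >= 2 else 0
        errors += (decoded != bit)
    return errors / num_bits

# For small p the coded BER is roughly 3p^2, well below the raw
# channel error rate p.
print(ber_repetition3(0.01))
```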