In this post, we present ‘Using Trellis Coded Modulation Techniques to Decrease Bit Error Rate Without Bandwidth Compromise’. The content of this blog post is taken directly from our white paper, which you can download here.

The challenge

In Nutaq’s OFDM reference design, HD-quality video is sent over the air between two radio antennas. Digitally, the transmission corresponds to a throughput of roughly 9.6 Mbps using QAM-64 modulation. To achieve a low bit error rate (BER) with such a high-throughput modulation, a high signal-to-noise ratio (SNR) is needed.

The most intuitive way to increase this ratio is to increase the received power, either by increasing the transmitted power or by using amplifiers at the receiver. In practice, however, this is not always a viable option (e.g., the limited battery power of a cellular phone). Error correction codes (ECC) are a well-known way to reach a target BER at a lower SNR. In other words, ECC gives us the possibility to reduce the BER at a given SNR, or to achieve a given BER at a lower SNR, that is, at a lower transmitted/received power. That gain, however, does not come without a cost.

Linear block codes, convolutional codes, turbo codes and even LDPC (low-density parity-check) codes are all ECCs with a coding rate below unity. The coding rate of a code is expressed as follows:

R = k / n

where k is the number of information bits and n is the actual number of bits sent over the medium. In other words, to keep an information bit rate of 9.6 Mbps, the actual bit rate of the system would need to be 9.6 Mbps divided by the coding rate R, which brings the data rate of the system above 9.6 Mbps.

However, in practical applications like the Nutaq FPGA-based OFDM reference design, we face real-world physical constraints (e.g., limited FPGA resources, timing constraints, hardware implementation complexity, ADC sampling rate, etc.). These constraints can put a hard limit on the data rate of the system.

Supposing the system is limited to 9.6 Mbps, we can achieve a maximum information bit rate of only 9.6 Mbps multiplied by the coding rate R (which is below 1).
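As a quick numerical check of this trade-off, here is a minimal Python sketch; the rate R = 2/3 is purely an illustrative value, not the rate of the reference design.

```python
# Coding-rate arithmetic for the 9.6 Mbps example (R = k / n).
# The rate R = 2/3 is only an illustrative value.

info_rate_mbps = 9.6   # information bit rate we want to preserve
R = 2.0 / 3.0          # coding rate k/n of a hypothetical ECC

# Case 1: keep 9.6 Mbps of information -> the channel bit rate must grow.
channel_rate_needed = info_rate_mbps / R
print(f"Channel bit rate needed: {channel_rate_needed:.1f} Mbps")   # 14.4 Mbps

# Case 2: hardware caps the channel at 9.6 Mbps -> the information rate shrinks.
channel_rate_cap_mbps = 9.6
max_info_rate = channel_rate_cap_mbps * R
print(f"Maximum information bit rate: {max_info_rate:.1f} Mbps")    # 6.4 Mbps
```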

In other words, it is likely that HD video quality won’t be achievable under these conditions. That being said, is it possible to implement a coding technique that offers an actual coding gain for this system without sacrificing video quality? The answer is yes: by using a coding scheme called Trellis Coded Modulation (TCM).

Trellis Coded Modulation

Trellis coded modulation, or TCM, was invented by Gottfried Ungerboeck in 1976 as a method to improve the reliability of a digital transmission system without bandwidth expansion or reduction of the data rate. Basically, TCM is the joint effort of a convolutional coder and an M-QAM/M-PSK modulator. The following example illustrates the concept of TCM well:

In Figure 1 a), we can see that by using QPSK modulation, we send 2 bits per symbol, one symbol every T seconds.

In Figure 1 b), we use a rate-2/3 convolutional coder to achieve sequence coding. Since we now need to send 3 bits in order to decode the original 2 information bits, keeping QPSK modulation would bring the symbol rate to 1.5 symbols per T seconds, thus increasing the required bandwidth of the system. By using a higher constellation order (8-PSK), we are able to send the 3 bits without any bandwidth expansion.
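To make the bandwidth bookkeeping of Figure 1 explicit, here is a small Python sketch; the numbers simply restate the example above.

```python
# Bandwidth bookkeeping for the Figure 1 example: 2 information bits every
# T seconds become 3 coded bits once the rate-2/3 convolutional coder is applied.
coded_bits_per_T = 3

for name, bits_per_symbol in {"QPSK": 2, "8-PSK": 3}.items():
    symbols_per_T = coded_bits_per_T / bits_per_symbol
    verdict = "no bandwidth expansion" if symbols_per_T <= 1 else "bandwidth expansion"
    print(f"{name}: {symbols_per_T:g} symbol(s) per T seconds -> {verdict}")
```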


 Figure 1: a) Standard QPSK modulator, b) 8-PSK-TCM modulator

The innovation of the TCM approach arises from the fact that the convolutional coding and the modulation are treated as one operation. In a system using convolutional coding only, the coding and the modulation are performed independently, accepting the resulting bandwidth expansion. At the receiver, the demodulator selects the estimated received symbol in the constellation, and a Viterbi decoder, using the Hamming distance as its metric, then selects the maximum likelihood transmitted sequence (also known as hard-decision Viterbi decoding). In TCM, the modulation is done as shown in the previous figure (coding and mapping performed as one operation) and, at the receiver, the demodulation and the decoding are done at the same time, using a soft-decision Viterbi decoder. It is the same Viterbi algorithm, but now the Euclidean distances to the different constellation symbols are used as the metric to decide on the maximum likelihood transmitted symbol sequence.
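As a minimal sketch of the difference between the two metrics, assuming a unit-energy 8-PSK constellation (the transmitted symbol and noise sample below are made up for illustration, not taken from the reference design):

```python
import numpy as np

# Unit-energy 8-PSK constellation points.
constellation = np.exp(1j * 2 * np.pi * np.arange(8) / 8)

# One noisy received sample (transmitted symbol and noise are made up).
r = constellation[5] + (0.20 - 0.15j)

# Hard decision: slice to the nearest constellation point first; a classical
# Viterbi decoder then works on Hamming distances between bit labels.
hard_symbol = int(np.argmin(np.abs(r - constellation)))

# Soft decision (TCM): keep the squared Euclidean distance to every candidate
# symbol and let the Viterbi decoder accumulate them as branch metrics along
# each path of the trellis.
branch_metrics = np.abs(r - constellation) ** 2

print("hard decision (symbol index):", hard_symbol)
print("soft branch metrics:", np.round(branch_metrics, 3))
```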

However, going back to Figure 1, the 8-PSK symbols are closer to their neighbors than the QPSK symbols are, which intuitively means that in the presence of complex AWGN, achieving a given BER will require a higher SNR than with the QPSK modulation scheme. The question now is: can a TCM coding scheme give us a coding gain high enough to overcome this penalty?
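A quick numeric check of that intuition, assuming unit average energy for both constellations, shows how much closer the 8-PSK neighbors are and gives an idea of the penalty the TCM coding gain has to overcome:

```python
import numpy as np

def min_neighbor_distance(M):
    """Minimum Euclidean distance between points of a unit-energy M-PSK constellation."""
    pts = np.exp(1j * 2 * np.pi * np.arange(M) / M)
    d = np.abs(pts[:, None] - pts[None, :])
    return d[d > 1e-12].min()        # ignore the zero self-distances

d_qpsk = min_neighbor_distance(4)    # sqrt(2)     ~ 1.414
d_8psk = min_neighbor_distance(8)    # 2*sin(pi/8) ~ 0.765
penalty_db = 20 * np.log10(d_qpsk / d_8psk)
print(f"QPSK d_min = {d_qpsk:.3f}, 8-PSK d_min = {d_8psk:.3f}, "
      f"distance penalty ~ {penalty_db:.2f} dB")
```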

To answer this question, we need to evaluate the coding gain of such an approach. As an example, this white paper shows the coding gain achieved by using a QAM-16 constellation mapping with a rate-2/3 convolutional coder, compared to an uncoded 8-PSK modulation scheme. The rate-2/3 convolutional coder used is the following:

Figure 2: A good 2/3 rate systematic convolutional coder for TCM

Figure 3: A good 2/3 rate systematic convolutional coder for TCM, with its uncoded bit

Based on this block diagram, one could ask: “Why not use a rate-3/4 convolutional coder instead of using a rate-2/3 convolutional coder and leaving one bit uncoded?”

This is in fact the approach proposed by Ungerboeck, and it turns out to be the key to larger coding gains in TCM schemes. The goal is to let the uncoded bit (b3) take care of itself through a specific constellation mapping, created using a technique called Set Partitioning. We need to introduce that technique before going further.
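The structure being discussed (and shown in Figures 2 and 3) can be sketched in Python as follows. The parity equation and state update below are hypothetical placeholders chosen only to show the shape of the encoder; the actual “good” coder is the one given in the figures.

```python
# Structural sketch of a systematic rate-2/3 encoder plus one uncoded bit,
# as used for 16-QAM TCM.  The parity equation and state update are
# hypothetical placeholders; the real taps are those of Figures 2-3.

def tcm_encode_step(b1, b2, b3, state):
    """One symbol interval: (b1, b2) go through the coder, b3 stays uncoded."""
    s0, s1 = state
    parity = b1 ^ s1                    # hypothetical parity bit
    next_state = (b1 ^ b2 ^ s0, s0)     # hypothetical shift-register update
    # Systematic outputs (b1, b2) + parity, plus the uncoded bit b3, form the
    # 4-bit label fed to the set-partitioned 16-QAM mapper.
    label = (b3, b2, b1, parity)
    return label, next_state

state = (0, 0)
for b1, b2, b3 in [(1, 0, 1), (0, 1, 1), (1, 1, 0)]:
    label, state = tcm_encode_step(b1, b2, b3, state)
    print((b1, b2, b3), "->", label)
```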


The content of this blog post is taken directly from our new white paper, which you can download here.