What Is the FEC Function of the Optical Module

    What Is the FEC Function of the Optical Module

    FEC (Forward Error Correction) is a coding technique that improves the reliability of communication over unreliable or noisy channels. The core idea is to encode the data at the sending end and to detect and correct errors in the received information at the receiving end, avoiding retransmission. Because the correction is achieved through coding alone, FEC provides what is known as coding gain.

    As optical communication systems evolve toward longer distances, larger capacities, and higher speeds, and especially as the single-wavelength rate moves from 40G to 100G and beyond, fiber impairments such as chromatic dispersion, nonlinear effects, and polarization mode dispersion increasingly limit further improvements in transmission rate and distance. Industry experts therefore continue to research FEC codes with better performance, seeking higher net coding gain (NCG) and stronger error-correction capability to keep pace with the rapid development of optical communication systems.

    Optical modules are currently developing toward higher speeds and longer distances, but as the transmission rate increases, the reach of the signal is limited by factors such as chromatic dispersion, nonlinear effects, and polarization mode dispersion, which prevent rate and distance from increasing simultaneously. Moreover, no real channel is an ideal digital channel: signals are distorted and unevenly delayed as they pass through various media, which manifests as bit errors and jitter. FEC coding improves bit-error performance by processing the signal in a defined format before it enters the channel: redundant check bits derived from the data are added at the sender, and the receiver decodes according to the agreed algorithm, locating and correcting the erroneous bits to reconstruct an essentially error-free data stream without requesting retransmission.
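    The encode-then-correct idea described above can be sketched with the simplest possible FEC scheme, a (3,1) repetition code. This is purely illustrative; real optical modules use far stronger codes, but the principle of adding redundancy at the sender so the receiver can correct errors locally is the same.

```python
# Minimal sketch of the FEC idea: the sender adds redundancy, the
# receiver corrects errors locally, and no retransmission is needed.

def encode(bits):
    """Repeat each bit three times (adds redundant symbols)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(symbols):
    """Majority vote over each group of three corrects any single error per group."""
    return [1 if sum(symbols[i:i + 3]) >= 2 else 0
            for i in range(0, len(symbols), 3)]

message = [1, 0, 1, 1]
codeword = encode(message)
codeword[4] ^= 1                    # the channel flips one transmitted symbol
assert decode(codeword) == message  # the receiver still recovers the data
```

    The price of this correction capability is redundancy: three symbols are transmitted for every data bit, which is exactly the rate-versus-reliability trade-off that practical FEC codes optimize.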

    FEC is suited to high-speed links (25G, 40G, and especially 100G optical modules). Although it both corrects errors and extends transmission distance, the correction process inevitably adds some latency to data packets, so enabling it is not recommended for every high-speed module; for example, it is generally not recommended to enable FEC when using a 100G LR4 optical transceiver. 100G optical modules rely mainly on the device side to perform error correction, so the switch must support FEC, and switches that support it usually have it enabled by default. Note also that if FEC is enabled on the A-end optical module, it must be enabled on the B-end module as well; otherwise the link will not come up.

    It is clear that, in an optical module transmission system, the FEC forward error correction function can greatly improve both the effectiveness and the reliability of communication.

    Soft-decision vs hard-decision FEC

    The performance of optical communication systems depends on the use of forward error correction (FEC) and multilevel modulation (MLM), which can be used individually or in combination. FEC recovers the sensitivity loss resulting from non-binary modulation. Traditionally, optical communication systems have used hard-decision FEC. But recent advances in the field of optical communications have made it possible to use soft-decision FEC.

    The development of Wavelength-Division Multiplexing (WDM) spurred the development of more efficient FEC codes. This second generation of FEC, based on concatenated codes, raised the NCG to around 8 dB. That still fell short of the roughly 10 dB that systems demanded, which led to a major search for improved FEC.

    Soft-decision FEC works by quantizing each received sample into multiple levels rather than a single threshold. The decoder uses this extra information to estimate the probability that each bit is a 0 or a 1. Compared to hard-decision FEC, soft-decision FEC offers an additional coding gain of roughly 3 dB. However, it requires considerably more processing and increases latency, which limits its use in latency-sensitive optical links.
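    A toy sketch shows why keeping per-sample reliability helps. Suppose a bit is sent three times as an analog level, +1 for logic 1 and -1 for logic 0 (illustrative values). A hard-decision decoder thresholds each sample first and then votes; a soft-decision decoder sums the raw samples, so one strongly received sample can outweigh two weakly corrupted ones.

```python
# Hard vs soft decisions on a 3x-repeated bit sent as +1 (for 1) or -1 (for 0).

def hard_decision_decode(samples):
    bits = [1 if s > 0 else 0 for s in samples]  # slice each sample to 0/1 first
    return 1 if sum(bits) >= 2 else 0            # then take a majority vote

def soft_decision_decode(samples):
    return 1 if sum(samples) > 0 else 0          # combine raw reliabilities

received = [0.2, 0.1, -1.5]  # bit 0 was sent; noise barely flipped two samples
assert hard_decision_decode(received) == 1   # hard decoding is fooled
assert soft_decision_decode(received) == 0   # soft decoding recovers the bit
```

    The two weak samples carry little confidence, and the soft decoder weights them accordingly; discarding that confidence at the threshold is precisely where hard-decision FEC loses its few dB of gain.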

    Soft-decision FEC achieves a lower post-correction BER than hard-decision FEC, and it is particularly effective when combined with high-order modulation such as dual-polarization 64QAM. Hard-decision FEC remains the better choice for applications where low latency and low complexity are the primary concerns.

    As optical communication systems are evolving toward longer distances and higher speed, the need for more sophisticated FEC code patterns will increase. As a result, industry experts are researching better FEC code patterns to achieve higher net coding gain and higher error-correction performance.

    NRZ signalling at 25 Gbps

    Non-Return-to-Zero (NRZ) signalling at 25 Gbps is an optical standard for high-performance networks. It is also known as two-level Pulse Amplitude Modulation, and is suitable for low-loss backplane materials. Signal integrity engineers are increasingly looking to improve link speeds.

    NRZ signalling uses two signal levels to represent digital logic: logic 0 and logic 1. One bit of information is transmitted per symbol period, so the symbol rate equals the bit rate. Unlike Manchester coding, NRZ requires only half the baseband bandwidth, although the two have the same passband bandwidth.

    NRZ is equivalent to two-level PAM (PAM2). PAM4, by contrast, uses four discrete voltage levels so that each symbol encodes two bits, with encoding and decoding handled algorithmically in the transceiver. This allows designers to keep using existing channels at higher data rates: where NRZ runs out of channel bandwidth, PAM4 can be used instead.

    PAM4 is more efficient than NRZ signalling at 25 Gbps. It allows twice as much information per symbol cycle and is more efficient for high-speed optical transmission. In addition to reducing the loss of signals in the transmission channel, PAM4 also allows use of existing interconnects and channels.
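    The bandwidth advantage comes down to bits per symbol, which a short sketch makes concrete. The level values below are illustrative, and the PAM4 map uses Gray coding so that adjacent levels differ by one bit, a common design choice.

```python
# NRZ (PAM2): 1 bit/symbol, two levels.  PAM4: 2 bits/symbol, four levels.
# At the same symbol rate, PAM4 carries twice the data.

NRZ_LEVELS = {0: -1, 1: +1}
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded

def nrz_modulate(bits):
    return [NRZ_LEVELS[b] for b in bits]          # one symbol per bit

def pam4_modulate(bits):
    pairs = zip(bits[0::2], bits[1::2])           # two bits per symbol
    return [PAM4_LEVELS[p] for p in pairs]

data = [0, 1, 1, 1, 1, 0, 0, 0]
assert len(nrz_modulate(data)) == 8   # 8 symbols for 8 bits
assert len(pam4_modulate(data)) == 4  # same data in half the symbols
```

    Halving the symbol rate for a given bit rate is what reduces channel loss at the frequencies of interest, but the four levels are packed into the same voltage swing, which is why PAM4 has a worse signal-to-noise margin and typically requires FEC.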

    Non-Return-to-Zero Inverted (NRZI) signalling at 25 Gbps is a related line code in which the data is carried by transitions rather than absolute levels. It is a non-destructive, reversible coding scheme, widely used in data networks and applicable to many different applications.

    Unipolar NRZ signalling presents a number of issues relating to the transmitted DC level. Its power spectrum does not approach zero at zero frequency, so part of the transmitted power is wasted in the DC component, and it requires a DC-coupled transmission line.

    Hamming code

    Hamming codes are error-correcting codes that can reconstruct the original message when errors occur. The classic Hamming(7,4) code, for example, protects four data bits with three overlapping parity bits and can correct any single-bit error. Such codes reduce retransmissions and the resources they waste, and verification experiments confirm that they reduce the bit error rate.

    The code was developed by Richard Hamming, who was interested in increasing both code rate and code distance. During the 1940s he developed several encoding schemes that were dramatic improvements over existing codes, using parity bits that overlap and check one another as well as the data. This error-correction scheme is still used in many applications.
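    The overlapping-parity idea can be shown end to end with the standard Hamming(7,4) construction, in which the three parity checks are arranged so that the syndrome directly spells out the (1-based) position of a flipped bit.

```python
# Hamming(7,4): four data bits, three overlapping parity bits,
# single-error correction via the syndrome.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the bit the syndrome points to
    return [c[2], c[4], c[5], c[6]]       # extract the data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                              # single bit error in the channel
assert hamming74_decode(code) == word
```

    Because each bit position participates in a unique subset of the three checks, the failed checks identify the error location exactly, which is the essence of Hamming's overlapping-parity design.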

    Such codes may be stored in network elements such as an OLT or ONU. The "Repetition (x,1)" code, for example, represents a code that repeats each original bit x times, while "EHamming (a,b)" denotes an extended Hamming code based on the Hamming (a,b) code.

    Convolutional codes

    Convolutional codes are a coding scheme that processes a continuous bitstream rather than fixed blocks. Their rate and effective block length are flexible, and they lend themselves to soft-decision decoding. This makes them well suited to optical modules and other communication systems.

    The Viterbi algorithm is the most popular decoding algorithm for convolutional codes. It is a fast decoding scheme that provides maximum likelihood performance. It is parallelizable and can be implemented in software or VLSI hardware. It is also easy to implement on SIMD instruction sets.
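    A compact sketch of both sides follows, using the common rate-1/2, constraint-length-3 code with generator polynomials 7 and 5 (octal) and hard-decision Viterbi decoding. It assumes the encoder starts in the all-zero state and that two zero tail bits flush the registers at the end, a standard convention.

```python
# Rate-1/2, K=3 convolutional encoder (generators 7, 5 octal) and a
# hard-decision Viterbi decoder over its 4-state trellis.

G1, G2 = 0b111, 0b101   # generator polynomials

def conv_encode(bits):
    state, out = 0, []                   # two-bit shift register
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & G1).count("1") % 2,
                bin(reg & G2).count("1") % 2]
        state = reg >> 1                 # shift the new bit in
    return out

def viterbi_decode(received):
    n = len(received) // 2
    INF = float("inf")
    metrics = [0] + [INF] * 3            # path metric per state; start in state 0
    paths = [[] for _ in range(4)]
    for t in range(n):
        r = received[2 * t:2 * t + 2]
        new_metrics, new_paths = [INF] * 4, [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue
            for b in (0, 1):             # try both branches out of this state
                reg = (b << 2) | state
                expected = [bin(reg & G1).count("1") % 2,
                            bin(reg & G2).count("1") % 2]
                branch = sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                m = metrics[state] + branch
                if m < new_metrics[nxt]: # keep the better (survivor) path
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0]                      # survivor ending in the zero state

message = [1, 0, 1, 1, 0, 0]             # last two zeros are tail bits
coded = conv_encode(message)
coded[3] ^= 1                            # channel flips one coded bit
assert viterbi_decode(coded) == message  # maximum-likelihood path recovers it
```

    Each trellis stage keeps only the best path into each state, which is what makes the algorithm's complexity linear in the message length and proportional to the number of states, rather than exponential in the message length.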

    A convolutional encoder operates on k successive information bits, which are shifted through registers and combined. The example encoder shown in Figure 1 contains three registers and therefore has 8 states; the corresponding decoder trellis uses the same eight states. A notable variant is the recursive systematic convolutional (RSC) code, which is the building block of turbo codes, including multidimensional turbo codes.

    Convolutional coding is widely used in forward error correction (FEC), often alongside other techniques that increase the probability that a message arrives error-free. Among them is whitening, which randomizes the bit stream so that it resembles white noise; this avoids long runs of 1s and 0s, which can confuse synchronization circuits and cause spurious emissions. Convolutionally coded streams are also commonly carried over modulation schemes such as phase shift keying and quadrature amplitude modulation.

    A convolutional encoder that produces two output bits per input bit has a basic code rate of 1/2, which gives good error-correcting performance. At higher signal-to-noise ratios, however, a higher data rate is desirable. Puncturing raises the code rate by omitting some of the coded bits; the decoder reinserts neutral placeholder (erasure) values in place of the omitted bits before decoding.
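    Puncturing amounts to applying a fixed keep/drop pattern to the coded stream. The sketch below uses an illustrative pattern that drops one of every four rate-1/2 coded bits, turning the rate 1/2 into 2/3; the depuncturer restores the stream shape by inserting `None` erasure markers where bits were dropped.

```python
# Puncturing a rate-1/2 coded stream to rate 2/3 and restoring its shape.

PATTERN = [1, 1, 1, 0]   # per 4 coded bits (2 data bits): keep, keep, keep, drop

def puncture(coded):
    return [b for b, keep in zip(coded, PATTERN * (len(coded) // 4)) if keep]

def depuncture(punctured):
    out, it = [], iter(punctured)
    for keep in PATTERN * (len(punctured) // 3):  # pattern keeps 3 of every 4
        out.append(next(it) if keep else None)    # None marks an erasure
    return out

coded = [1, 1, 0, 1, 0, 0, 1, 0]   # 8 coded bits for 4 data bits (rate 1/2)
sent = puncture(coded)             # 6 bits for 4 data bits -> rate 2/3
assert len(sent) == 6
assert depuncture(sent) == [1, 1, 0, None, 0, 0, 1, None]
```

    A Viterbi decoder then treats each erasure as contributing zero branch metric, so the same decoder hardware serves every punctured rate derived from the one mother code, which is the main attraction of the technique.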