As described in the first part of this series, systems with a high channel count (like massive multiple-input/multiple-output (MIMO) radios, medical imaging equipment, and particle accelerators) require low-latency, high-throughput data transfer links between the capturing/generating devices and the processing nodes.

To meet such requirements, Xilinx provides multi-gigabit transceivers (MGTs) in its field-programmable gate arrays (FPGAs). Depending on a device’s family, generation, speed grade, and MGT type (GTX, GTH, GTZ), the operating speed can range from a few gigabits per second to tens of gigabits per second per differential pair. For example, a Virtex-6 GTX speed grade “-1” device [1] (found on the ML605 and the Perseus family of carrier boards) can operate at up to 5 Gbps, while a Virtex-7 GTX speed grade “-2” device [2] (found on the VC707 board) can operate at up to 10.3125 Gbps.

On top of the MGTs, transfer protocols like Aurora 8B/10B [3] can be implemented using Xilinx cores. The 8B/10B encoding produces, on average, the same number of zeros and ones on the line, which keeps the link DC-balanced and enables clock recovery at the receiver. To do so, each 8-bit symbol is mapped to a 10-bit code, resulting in a data-rate efficiency of 80%.
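
As a quick sanity check on that 80% figure, here is a minimal Python sketch (the lane rates are the examples quoted above) that converts a lane’s line rate into its effective payload rate under 8B/10B:

```python
# 8B/10B encoding: every 8 data bits are sent as 10 line bits,
# so the payload rate is 8/10 of the line rate.
EFFICIENCY_8B10B = 8 / 10

def effective_rate_gbps(line_rate_gbps: float) -> float:
    """Payload throughput of one lane running 8B/10B at the given line rate."""
    return line_rate_gbps * EFFICIENCY_8B10B

print(effective_rate_gbps(5.0))      # Virtex-6 GTX at 5 Gbps       -> 4.0 Gbps
print(effective_rate_gbps(10.3125))  # Virtex-7 GTX at 10.3125 Gbps -> 8.25 Gbps
```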

On top of the Xilinx Aurora core, Nutaq provides an easy-to-use interface that decreases the integration time with the user’s custom logic. Nutaq’s core can be seen as a first-in/first-out (FIFO) interface for transmitting data between two boards, with a few control and status signals (a toy model follows the list below):

  1. At compilation time, you select the group of four MGTs that link the two boards together.
  2. After power-up, you initialize the Aurora core on each board.
  3. During run-time, data written in the FIFO-like interface on one board will be available in the FIFO of the other board.
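
To make these steps concrete, here is a toy Python model of the FIFO-like contract. Everything in it, class and method names included, is a hypothetical stand-in for illustration, not Nutaq’s actual signal or API names:

```python
from queue import Queue

class AuroraEndpoint:
    """Toy model of one board's FIFO-like Aurora interface.

    Illustrative only: the real core is FPGA logic, and these
    names are placeholders, not Nutaq's actual interface.
    """

    def __init__(self, tx: Queue, rx: Queue):
        self.tx, self.rx = tx, rx
        self.channel_up = False  # status signal: link trained and ready

    def init_core(self):
        # Step 2: after power-up, initialize the Aurora core on this board.
        self.channel_up = True

    def write(self, word: int):
        # Step 3: data written into the FIFO-like interface on one board...
        assert self.channel_up
        self.tx.put(word)

    def read(self) -> int:
        # ...becomes available in the FIFO of the other board.
        assert self.channel_up
        return self.rx.get()

# Step 1 happens at compile time: a group of four MGT lanes linking the
# two boards is selected. Here, two queues stand in for that physical link.
a_to_b, b_to_a = Queue(), Queue()
board_a = AuroraEndpoint(tx=a_to_b, rx=b_to_a)
board_b = AuroraEndpoint(tx=b_to_a, rx=a_to_b)
board_a.init_core()
board_b.init_core()

board_a.write(0xDEADBEEF)
print(hex(board_b.read()))  # -> 0xdeadbeef
```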

Nutaq’s Aurora core is currently supported on the Perseus family of carrier boards (Perseus 601X and 611X).

Aurora over Pico backplane

The Pico backplane can connect two Perseus 601X boards together using up to three MGT groups (four TX and four RX lanes per group). When instantiated, the core uses the default parameters of the Xilinx Aurora core. In this configuration, a link of up to 5 Gbps per lane can be established between the two Perseus 601X carrier boards.

Figure 1: Aurora inside a Pico product

Figure 1 shows a PicoSDR 4×4, which includes two Perseus 601X boards connected by a Pico backplane. The three red data buses (Aurora 4x 4-7, 8-11, and 17-20) each represent an MGT group. Since the Aurora core uses 8B/10B encoding, an effective 48-Gbps bidirectional link can be established between the two Perseus boards when all three buses are used. At a 5-Gbps lane rate, the default Aurora parameters provide an error-free link, since only a few tens of centimeters separate the two FPGAs.
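
The 48-Gbps figure is straightforward lane arithmetic; a minimal Python check using the counts quoted above:

```python
GROUPS = 3           # Pico backplane MGT groups used
LANES_PER_GROUP = 4  # four TX and four RX lanes per group
LINE_RATE_GBPS = 5.0
EFFICIENCY_8B10B = 8 / 10

effective = GROUPS * LANES_PER_GROUP * LINE_RATE_GBPS * EFFICIENCY_8B10B
print(effective)  # -> 48.0 Gbps in each direction
```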

Aurora over RTM board

The Perseus 611X, shown in Figure 2, is a double-width advanced mezzanine card (AMC) featuring a Virtex-6 FPGA and two high-pin-count FPGA mezzanine card (FMC) sites.

Figure 2: Perseus 611X

The Perseus 611X can be equipped with an MTCA.4 rear-transition module (RTM) that provides seven miniSAS connectors, as shown in Figure 3.

Figure 3: MTCA.4 RTM

These connectors can be used to provide very high-throughput interfaces to devices external to the FPGA board. Each miniSAS connector is connected to an FPGA GTX group of four TX and four RX lanes. Using Aurora 8B/10B at 5 Gbps on all seven groups, an effective bidirectional throughput of 112 Gbps can be obtained. Figure 4 shows several Perseus 611X cards connected together via the RTM miniSAS connectors.
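
The same arithmetic as for the Pico backplane, now with seven miniSAS groups:

```python
GROUPS = 7           # one GTX group per miniSAS connector on the RTM
LANES_PER_GROUP = 4  # four TX and four RX lanes per group
LINE_RATE_GBPS = 5.0
EFFICIENCY_8B10B = 8 / 10

print(GROUPS * LANES_PER_GROUP * LINE_RATE_GBPS * EFFICIENCY_8B10B)  # -> 112.0 Gbps
```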

Figure 4: Perseus 611X cards connected together using miniSAS cables

However, running a 5-Gbps link that goes from an FPGA over an RTM, a miniSAS cable, and another RTM before reaching the second FPGA is more difficult than connecting two Perseus 601X boards over the Pico backplane. The cabled path can exceed one meter, depending on the miniSAS cable length, while the backplane path is only a few tens of centimeters. To achieve reliable communication at this speed over the longer distance, the GTX parameters used inside the Aurora core must be tuned.

In the next blog post of this series, we’ll examine the GTX parameters used by the Aurora core and explain how to tune them to your hardware’s configuration.

References

  1. http://www.xilinx.com/support/documentation/data_sheets/ds152.pdf
  2. http://www.xilinx.com/support/documentation/data_sheets/ds183_Virtex_7_Data_Sheet.pdf
  3. http://www.xilinx.com/support/documentation/ip_documentation/aurora_8b10b/v8_3/pg046-aurora-8b10b.pdf