Data Communication Topology

Within the TitanMIMO-4, the Perseus 6111 AMC FPGA cards, which interface with the Radio420M modules on one side and with the Rear Transition Modules on the other, are organized in subgroups. Each subgroup consists of 3 to 4 Perseus 6111 cards, and each MTCA.4 chassis contains up to 3 subgroups.

Every Perseus 6111 is connected to a subgroup master Perseus 6111, and each subgroup master is connected to the system master (the Perseus 6113 in the TitanMIMO-4S, and the Kermode-XV6 in the TitanMIMO-4D), as shown in the diagrams below. All connections are made at the rear of the system through miniSAS connectors on the Rear Transition Modules (RTMs), and can easily be rearranged into a different configuration at any time via manual cable connections. Each link provides a data rate of 16 Gbps using the Xilinx Aurora high-speed serial interface over the multi-gigabit transceivers of the Virtex-6 FPGA.

Each RTM in the system provides 7 miniSAS connectors. Every Perseus 6111 within a subgroup, with the exception of the subgroup master, therefore has 6 miniSAS connectors available for connecting to the other members of the subgroup in a mesh architecture. This allows upstream/downstream distributed processing within the subgroup, as modelled in the sketch below.
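The following Python sketch is purely illustrative: it models the star-plus-mesh cabling described above (subgroup members meshed together and linked to a subgroup master, and subgroup masters linked to the system master) so the number of Aurora links and the aggregate link budget can be reasoned about. The data structures, function names, and default card counts are assumptions made for illustration, not part of the TitanMIMO software.

# Illustrative model of the cabling topology described above.
AURORA_LINK_GBPS = 16  # per miniSAS link, Xilinx Aurora over the Virtex-6 transceivers

def build_subgroup(subgroup_id: int, num_cards: int = 4) -> dict:
    """Model one subgroup: a master Perseus 6111 plus member cards, each member
    connected to the master and meshed to the other members."""
    master = f"subgroup{subgroup_id}-master"
    members = [f"subgroup{subgroup_id}-perseus{i}" for i in range(1, num_cards)]
    links = []
    # Star links: every member to the subgroup master
    links += [(m, master, AURORA_LINK_GBPS) for m in members]
    # Mesh links among the non-master members (upstream/downstream processing)
    links += [
        (a, b, AURORA_LINK_GBPS)
        for i, a in enumerate(members)
        for b in members[i + 1:]
    ]
    return {"master": master, "members": members, "links": links}

def build_system(num_subgroups: int = 3, system_master: str = "perseus-6113") -> dict:
    """Assemble one MTCA.4 chassis worth of subgroups behind a single system master
    (Perseus 6113 for the TitanMIMO-4S, Kermode-XV6 for the TitanMIMO-4D)."""
    subgroups = [build_subgroup(i) for i in range(num_subgroups)]
    uplinks = [(sg["master"], system_master, AURORA_LINK_GBPS) for sg in subgroups]
    return {"system_master": system_master, "subgroups": subgroups, "uplinks": uplinks}

if __name__ == "__main__":
    system = build_system()
    total_links = sum(len(sg["links"]) for sg in system["subgroups"]) + len(system["uplinks"])
    print(f"{total_links} Aurora links, {total_links * AURORA_LINK_GBPS} Gbps aggregate")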

Data Communication Topology TitanMIMO-4S (Single FPGA Central Processing Engine)
Data Communication Topology TitanMIMO-4D (Distributed FPGA Cluster Central Processing Engine)
Data Communication Topology TitanMIMO-XD-125 (Distributed FPGA Cluster Central Processing Engine)
Data Communication Topology TitanMIMO-XD-250 (Distributed FPGA Cluster Central Processing Engine)


Embedded CPU Communication Topology

The TitanMIMO-4 Massive MIMO testbed features PCIe switching that enables the embedded quad-core Intel Core i7 CPU to communicate with every Perseus FPGA-based AMC as if they all resided in the same chassis. This PCIe switching is provided by the PCIe expansion cards housed in the TitanMIMO-4 chassis.

The embedded CPU is used to control the system. For example, radio parameters such as the tuning frequency or covered bandwidth can be set from the CPU, as sketched below.
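As a minimal, purely hypothetical sketch of this control path, the snippet below shows how such per-channel radio parameters might be set from the embedded CPU. The RadioControl class, its methods, and the card/channel counts are placeholder names and values invented for illustration; they do not correspond to the actual Nutaq board software API.

# Hypothetical host-side control sketch; names are placeholders, not the real API.
class RadioControl:
    """Placeholder wrapper around the PCIe control path to one Radio420M channel."""

    def __init__(self, perseus_id: int, channel: int):
        self.perseus_id = perseus_id
        self.channel = channel

    def set_rx_frequency(self, hz: float) -> None:
        # In a real system this would issue a low-rate command over PCIe
        # to the Perseus 6111 hosting this Radio420M channel.
        print(f"Perseus {self.perseus_id} ch{self.channel}: tune RX to {hz / 1e6:.1f} MHz")

    def set_bandwidth(self, hz: float) -> None:
        print(f"Perseus {self.perseus_id} ch{self.channel}: set bandwidth to {hz / 1e6:.1f} MHz")

# Example: tune every channel in the testbed to the same carrier and bandwidth.
for perseus_id in range(12):          # example count of Perseus 6111 cards
    for channel in range(2):          # example count of RF channels per card
        radio = RadioControl(perseus_id, channel)
        radio.set_rx_frequency(2.45e9)
        radio.set_bandwidth(20e6)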

The PCIe communication links can also be used to record the contents of the Perseus RAM to the solid-state drive (SSD) on the Rear Transition Module. This function can be used to record all RF channels at the maximum RF bandwidth for a given period of time, which is useful for test vector validation.

The data is first transferred in real time to the RAM, which is then offloaded to the SSD via the embedded CPU. In the same way, the embedded CPU can upload data to the Perseus RAM over PCIe.
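A minimal sketch of that record-and-playback flow is shown below. The helper functions (start_ram_capture, offload_ram_to_ssd, upload_ssd_to_ram) and the file paths are hypothetical placeholders; only the ordering of the steps reflects the description above, not any real driver API.

# Hypothetical record-and-offload flow; function names are placeholders.
import time

def start_ram_capture(perseus_id: int, seconds: float) -> None:
    """Placeholder: arm the FPGA to stream baseband samples into on-board RAM."""
    print(f"Perseus {perseus_id}: capturing {seconds} s of RF data to RAM")

def offload_ram_to_ssd(perseus_id: int, path: str) -> None:
    """Placeholder: read the Perseus RAM over PCIe and write it to the RTM SSD."""
    print(f"Perseus {perseus_id}: offloading RAM to {path} over PCIe")

def upload_ssd_to_ram(perseus_id: int, path: str) -> None:
    """Placeholder: push a test vector from the SSD back into the Perseus RAM."""
    print(f"Perseus {perseus_id}: uploading {path} to RAM over PCIe")

capture_seconds = 0.5
start_ram_capture(perseus_id=0, seconds=capture_seconds)
time.sleep(capture_seconds)                       # wait for the real-time capture to finish
offload_ram_to_ssd(perseus_id=0, path="/data/capture_ch0.bin")
upload_ssd_to_ram(perseus_id=0, path="/data/test_vector_ch0.bin")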


The PCIe link has a theoretical speed of 10 Gbps and sustains a tested throughput of 6.4 Gbps. Since the data aggregation for Massive MIMO computation does not use the PCIe backplane, the full PCIe throughput remains available for high-speed MAC data exchange between the Perseus 6113 or Kermode-XV6 central baseband processing engine and the embedded CPU, while low-rate commands are issued to all RF modules through the Perseus 6111 cards.
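As a back-of-the-envelope illustration of these figures, the snippet below computes the link efficiency implied by the 10 Gbps theoretical versus 6.4 Gbps sustained rates, and the time needed to offload an example capture over PCIe. The 4 GiB capture size is an arbitrary assumption, not a system specification.

# Quick arithmetic on the PCIe figures quoted above.
THEORETICAL_GBPS = 10.0
SUSTAINED_GBPS = 6.4

efficiency = SUSTAINED_GBPS / THEORETICAL_GBPS           # 0.64, i.e. 64 % efficiency
capture_bytes = 4 * 1024**3                               # example: 4 GiB held in Perseus RAM
transfer_seconds = capture_bytes * 8 / (SUSTAINED_GBPS * 1e9)

print(f"PCIe efficiency: {efficiency:.0%}")
print(f"Offloading {capture_bytes / 1024**3:.0f} GiB takes ~{transfer_seconds:.1f} s at {SUSTAINED_GBPS} Gbps")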