In this blog post, we outline six major features to consider when selecting a large-scale testbed for 5G research. Our main concerns are maximizing long-term benefits and avoiding as much risk as possible.

1. Scalability (e.g., long-term evolution, upgradability, expandability)

Large-scale systems like testbeds for 5G research on massive multiple-input/multiple-output (MIMO) are typically very expensive. Generally speaking, such systems are based on costly parts like field-programmable gate arrays (FPGAs) and RF integrated circuits (RFICs). As research trends evolve over time, the required frequency or bandwidth coverage may change. If a large-scale system is based on a fixed architecture where major parts must be replaced in order to change a parameter, any modification will likely be very expensive, and the testbed may rapidly become outdated or underused. Therefore, a very important factor to consider when acquiring half-million-dollar equipment is its modularity and scalability. One must ensure that the most likely changes will not be too costly.

The architecture of Nutaq’s TitanMIMO-4, MTCA.4, is based on modular units that originated in the telecommunications industry. The AdvancedTCA architecture evolved into MicroTCA and then into MicroTCA.4 (MTCA.4). MicroTCA and its descendants have been used extensively in the high-energy physics and telecommunications industries. The standards describe systems based on chassis equipped with multi-slot backplanes into which advanced mezzanine cards (AMCs) are inserted. Because of this backplane-and-daughter-card architecture, MicroTCA-based systems are very versatile and modular: one can completely change the architecture of a system by inserting different cards.

The AMC follows a specification of the PCI Industrial Computer Manufacturers Group (PICMG), with hundreds of participating companies. Moreover, the radio front-end of Nutaq’s TitanMIMO-4 is also modular. Based on the FPGA mezzanine card (FMC) described by the VITA 57.1 standard, the radio front-end performing the heterodyning and amplification of the radio signal is fully interchangeable. The AMC hosts an FMC connector normally occupied by Nutaq’s Radio420X. In the near future, however, this RF front-end, which covers 300 MHz to 3.8 GHz with up to 28 MHz of bandwidth, could be substituted with a newer Nutaq front-end offering millimeter-wave coverage at bandwidths up to 100 MHz wide. Replacing the RF front-end can be done at an extremely low cost compared to that of the entire system; around 80% of the parts value would be conserved, as only the radio FMC would have to be substituted.

2. Data throughput

A central theme when selecting hardware is the data throughput of the system. When first considering options for acquiring massive MIMO equipment, one must face the reality that most existing data interfaces are too slow to scale up to a MIMO system with 100×100 channels. In such a system, all the data must be sent to and received from a central processing unit – for algorithmic reasons – and this results in a requirement for an extremely fast data interface, as discussed in this blog post.
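To see why most interfaces fall short, a back-of-the-envelope estimate of the aggregate raw-sample rate is helpful. The sketch below assumes an LTE-like complex sample rate of 30.72 MS/s and 16-bit I plus 16-bit Q per sample; these are illustrative assumptions, not TitanMIMO-4 specifications.

```python
# Rough estimate of the raw data rate a central processor must absorb
# when 100 antenna streams all converge on one node.
# Sample rate and sample width are assumed values for illustration.

NUM_ANTENNAS = 100          # receive chains feeding the central node
SAMPLE_RATE_HZ = 30.72e6    # assumed complex sample rate (LTE-like)
BITS_PER_SAMPLE = 32        # 16-bit I + 16-bit Q per complex sample

def aggregate_rate_gbps(antennas, rate_hz, bits):
    """Raw data rate, in Gbps, of all antenna streams combined."""
    return antennas * rate_hz * bits / 1e9

if __name__ == "__main__":
    rate = aggregate_rate_gbps(NUM_ANTENNAS, SAMPLE_RATE_HZ, BITS_PER_SAMPLE)
    print(f"Aggregate raw throughput: {rate:.1f} Gbps")
```

Under these assumptions the aggregate approaches 100 Gbps, far beyond common lab interfaces such as Gigabit Ethernet or USB, which is exactly why the interconnect architecture matters so much.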

The MTCA.4 chassis configuration described above has its roots in the telecommunications industry and is intended to meet the requirements for the next generation of carrier-grade communications equipment. The MTCA.4 chassis architecture is described in detail in this blog post.

One of its principal characteristics is that it removes the previous MicroTCA chassis’ reliance on the backplane for data communication. The rear transition modules (RTMs) added to the MTCA.4 chassis give direct access to the GTX transceivers, or ‘fat pipes’, of a carrier board’s FPGA, enabling very fast data communication as well as a mesh topology between the sub-units of the system. Because many data links (up to seven) can be made between FPGA boards, throughputs can exceed even the fastest PCIe Gen III data rates used by some competitors. Data communication uses Xilinx’s Aurora protocol. Each Aurora link supports around 6 Gbps of data throughput – a speed comparable to PCIe Gen II – over a lighter protocol that enables point-to-point data streaming. As noted above, seven of these data links are available per FPGA carrier board, for a maximum of 42 Gbps. This is the key to the extremely high data throughput required of systems used for massive MIMO research.
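The per-carrier figure above is simple arithmetic, sketched here for clarity; the link count and per-link rate come from the text, while the helper name is ours.

```python
# Aggregate inter-FPGA streaming capacity from the Aurora links
# described above: up to seven GTX links per carrier board at
# roughly 6 Gbps each.

AURORA_LINKS_PER_CARRIER = 7   # 'fat pipe' links per FPGA carrier board
AURORA_GBPS_PER_LINK = 6.0     # approximate throughput per Aurora link

def carrier_throughput_gbps(links=AURORA_LINKS_PER_CARRIER,
                            per_link=AURORA_GBPS_PER_LINK):
    """Total point-to-point streaming capacity of one carrier board."""
    return links * per_link

print(carrier_throughput_gbps())  # 42.0
```

A full 16-lane PCIe Gen III slot is nominally faster in isolation, but the point of the mesh topology is that every carrier board gets its own 42 Gbps of direct links rather than sharing one central bus.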

3. Processing power

In event-based data acquisition applications, such as neutron detection in high-energy physics, not all the data must be processed by a central node: partial processing is done in distributed nodes, and only a small part of the data is sent to a processing node for imaging or a detection summary. By contrast, in massive MIMO systems all the information must be processed by the central node. Spanning 100×100 channels makes the processing load particularly heavy, so ultra-fast processing is a key requirement. Therefore, when evaluating potential solutions for building a testbed, one must consider the possibility of scaling up the processing power, as requirements may increase in a later phase of the research project. Multi-core architectures using a mesh of CPUs, GPUs or FPGAs to perform parallel processing are a possible solution. Solutions offering sufficient processing power are scarce.
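To give a feel for the central-node load, the sketch below estimates the cost of linear (zero-forcing-style) precoder or detector updates over a 100×100 channel matrix. The flop model (~2/3·n³ for the inversion plus ~2·n³ for the associated matrix products) and the update rate are illustrative assumptions, and this counts only the matrix updates, not per-sample filtering.

```python
# Rough estimate of the sustained compute load for updating a linear
# zero-forcing precoder/detector over a 100x100 channel matrix.
# Flop model and update rate are assumed values for illustration.

N = 100                   # antennas x users in a 100x100 configuration
UPDATES_PER_SEC = 1000    # assumed channel-matrix update rate

def zf_flops_per_update(n):
    """Approximate flops for one pseudo-inverse: ~2/3 n^3 for the
    inversion plus ~2 n^3 for the surrounding matrix products."""
    return (2 / 3) * n**3 + 2 * n**3

if __name__ == "__main__":
    load_gflops = zf_flops_per_update(N) * UPDATES_PER_SEC / 1e9
    print(f"Matrix updates alone: {load_gflops:.1f} GFLOP/s sustained")
```

The matrix updates are only one part of the budget; applying the resulting filters to every complex sample on every channel multiplies the load by orders of magnitude, which is why headroom to scale the processing unit matters.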

Nutaq, in partnership with a Canadian research center, developed the most powerful FPGA-based computing blade ever built. The Kermode XV6 is an AdvancedTCA blade specifically designed to tackle the most demanding signal processing applications. It packs eight Xilinx Virtex-6 SX475T FPGAs, delivering an outstanding 8.8 TeraMACs solely from their DSP48E1 dedicated multiply-accumulate engines. Also, as the processing unit used in Nutaq’s massive MIMO solution is modular, it can be swapped to reduce cost. For example, one could acquire a system with a smaller processing unit and then scale the solution up to a Kermode XV6 later in the project.
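The 8.8 TeraMAC figure can be sanity-checked from public device data: each Virtex-6 SX475T contains 2016 DSP48E1 slices, and each slice performs one multiply-accumulate per clock. The DSP clock rate below is an assumed operating point chosen to match the quoted figure, within the DSP48E1's rated speed.

```python
# Sanity check of the 8.8 TeraMAC figure quoted for the Kermode XV6.
# One MAC per DSP48E1 slice per clock; the clock rate is an assumed
# operating point, not a published Kermode specification.

FPGAS_PER_BLADE = 8
DSP48E1_PER_SX475T = 2016     # DSP48E1 slices in one Virtex-6 SX475T
DSP_CLOCK_HZ = 546e6          # assumed multiply-accumulate clock rate

def blade_teramacs(fpgas, slices, clock_hz):
    """Peak multiply-accumulates per second for the whole blade, in tera."""
    return fpgas * slices * clock_hz / 1e12

if __name__ == "__main__":
    peak = blade_teramacs(FPGAS_PER_BLADE, DSP48E1_PER_SX475T, DSP_CLOCK_HZ)
    print(f"Peak DSP throughput: {peak:.1f} TeraMACs")
```

This is a peak figure; sustained throughput depends on how well an algorithm keeps all 16,128 slices fed, which is precisely the kind of design work the blade's mesh architecture is meant to support.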

4. Community-based development

Who wants to develop alone? Researchers need to be surrounded by a community working in the same environment, because many heads are better than a few when a new challenge arises. It is of prime importance that a community develop with similar tools in order to confront similar challenges.

Nutaq encourages partnership and makes it a priority to provide access to open tools that promote teamwork and accelerate development. Our motto is to accelerate our clients’ development process, and a community-based approach fits perfectly with this philosophy.

5. Installation and training

Researchers want to be able to do their work, and they expect a provider to do the same on its side. Researchers want to start implementing algorithms as quickly as possible, and there is no reason why they should struggle with installing hardware or studying a company’s software tools. Installation and training of the research personnel by the provider are fundamental.

6. Post-sale support (long-term support)

Large investments often imply long-term maintenance. Because Nutaq specializes in providing large systems, such as the massive MIMO testbed or platforms with applications in nuclear physics, we offer many options to ensure our customers receive long-term support if the system shows signs of normal wear. We keep an inventory of key parts and will continue supporting a technology as long as it remains relevant to our customers.