Massive MIMO is a cornerstone technology in reaching the 5G target of a thousandfold capacity increase by 2020. The paradigm is based on the fact that if there are enough antennas at the base station (several hundred to a thousand or more), the so-called massive effect is observed, whereby simple linear processing (eigenbeamforming and maximal-ratio combining) becomes asymptotically optimal. This is attractive because such processing is not only extremely simple but also scales linearly with the size of the array and requires very little inter-processor communication.

However, while antennas are not in themselves costly, very large arrays with hundreds or thousands of RF front-ends and A/D and D/A converters are cumbersome and energy-hungry, to say the least. It is therefore of interest to explore the quasi-massive case, where the number of antennas is not sufficient to achieve the massive effect, yet is still large enough to make full-fledged interference-nulling processing, such as minimum mean-square error (MMSE) and zero-forcing (ZF) detection, undesirable because its complexity scales polynomially, as the cube of the number of antennas.

We show here that applying MMSE or ZF on subsets of antennas and combining the resulting outputs in a second layer of processing constitutes an attractive approach to achieving good performance with a reduced number of antennas, while limiting complexity. Furthermore, this approach maps extremely well onto the TitanMIMO modular architecture, in which remote radio head (RRH) units of 8 antennas, each equipped with local baseband processing capability, are aggregated to form massive MIMO prototyping platforms of various sizes.
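To make the two-layer idea concrete, here is a minimal numpy sketch of uplink detection: MMSE is applied independently on each 8-antenna subset (layer 1), and the per-subset symbol estimates are then combined (layer 2). The array size, user count, channel model, and the simple averaging used in the second layer are illustrative assumptions for this sketch, not the exact scheme of the white paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, S = 64, 4, 8   # total antennas, users, antennas per subset (illustrative sizes)
snr = 10.0           # linear per-antenna SNR (assumption)

# Rayleigh-fading channel, unit-power BPSK symbols, complex Gaussian noise
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = (2 * rng.integers(0, 2, K) - 1).astype(complex)
n = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2 * snr)
y = H @ x + n

# Layer 1: MMSE on each antenna subset. Each subset only inverts a small
# K-by-K matrix, so cost stays bounded instead of growing with the cube of
# the full array size.
num_subsets = M // S
est = np.zeros((num_subsets, K), dtype=complex)
for i in range(num_subsets):
    Hs = H[i * S:(i + 1) * S, :]
    ys = y[i * S:(i + 1) * S]
    # MMSE filter: (Hs^H Hs + (1/snr) I)^-1 Hs^H
    est[i] = np.linalg.solve(
        Hs.conj().T @ Hs + (1.0 / snr) * np.eye(K),
        Hs.conj().T @ ys,
    )

# Layer 2: combine the per-subset estimates. A plain average is used here;
# SNR-weighted combining would perform better in practice.
x_hat = est.mean(axis=0)
bits = np.sign(x_hat.real)
```

Only the K-dimensional soft estimates need to cross subset boundaries, which is what makes the scheme map naturally onto distributed 8-antenna RRH units with local baseband processing.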
The presented findings include:
- Use of MMSE and ZF in a two-layer processing scheme to reduce Massive MIMO system cost
- Performance comparisons of this approach
- Prototyping on a real-time Massive MIMO testbed
To access the full white paper, please fill out this form: