Bit-wise Autoencoder for Multiple Antenna Systems
- Authors
- Sebastian Dorner, Marc Gauger, Stephan ten Brink, and Sarah Rottacker
- Subjects
- Computer science, MIMO, Coding and information theory, Vector notation, Antenna (radio), Communications system, Algorithm, Autoencoder, Bitwise operation, Information theory, Complex normal distribution, Channel use
- Abstract
We propose an end-to-end learned bit-wise autoencoder neural network (NN) for open-loop multiple-input multiple-output (MIMO) antenna-based communication systems. The optimized transmit vector constellations are learned based on the number of transmit and receive antennas, the number of bits conveyed per channel use, and the signal-to-noise ratio. By training through an i.i.d. complex Gaussian (i.e., Rayleigh ergodic) matrix channel, the system implicitly picks up constellation shaping gains "along the way". We evaluate and analyze these gains in comparison to the single-input single-output (SISO) results shown in [1] for different symmetric and asymmetric MIMO configurations. We first show that the NN-based receiver is able to compete with a posteriori probability (APP) receiver performance up to certain configuration settings. Then we perform an end-to-end optimization of the transmit vector constellations using the conventional APP receiver as the decoder part. Thereby, we also investigate an edge case where the number of bits per vector symbol is not a multiple of the number of transmit antennas. Finally, we examine the performance of the system when a non-optimal zero-forcing (ZF) receiver is used for inference, while the vector constellation was optimized for the APP receiver during training. We show that constellations optimized for the APP receiver are not necessarily optimal for such sub-optimal receivers that operate with reduced complexity. To address this, we propose a detector-aware training scheme that learns constellations optimized for a specific sub-optimal receiver and thus achieves higher performance during inference.
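The channel model and the ZF receiver mentioned in the abstract can be illustrated with a short sketch. The snippet below is not from the paper; it is a minimal NumPy simulation of one channel use through an i.i.d. complex Gaussian (Rayleigh) MIMO channel, followed by zero-forcing detection via the pseudo-inverse. The antenna counts, SNR, and the QPSK mapping are illustrative assumptions; the paper's learned constellations would replace the fixed mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
n_tx, n_rx = 2, 4      # transmit / receive antennas
snr_db = 20.0          # signal-to-noise ratio

# i.i.d. complex Gaussian (Rayleigh ergodic) channel matrix H, unit variance per entry.
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Fixed unit-power QPSK mapping as a stand-in for a learned constellation:
# two bits per transmit antenna, Gray-mapped onto real/imaginary parts.
bits = rng.integers(0, 2, size=2 * n_tx)
x = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Additive complex Gaussian noise at the chosen SNR.
noise_var = 10 ** (-snr_db / 10)
n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx)
                              + 1j * rng.standard_normal(n_rx))
y = H @ x + n

# Zero-forcing (ZF) receiver: equalize with the pseudo-inverse of H,
# then slice each equalized symbol back to bits per antenna.
x_zf = np.linalg.pinv(H) @ y
bits_hat = np.empty(2 * n_tx, dtype=int)
bits_hat[0::2] = (x_zf.real < 0).astype(int)
bits_hat[1::2] = (x_zf.imag < 0).astype(int)
```

In an end-to-end autoencoder setup, the fixed QPSK mapper above would be replaced by a trainable NN encoder, and the slicer by an NN or APP detector; the detector-aware training the abstract describes would backpropagate through the specific detector actually used at inference time.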
- Published
- 2021