Since the beginning of the 21st century, terrestrial broadcasting systems have been criticized for an inefficient use of the allocated spectrum. To increase the spectral efficiency, digital television Standards Developing Organizations (SDOs) set out to develop the technical evolution of the first-generation Digital Terrestrial Television (DTT) systems. Among others, a primary goal of next-generation DTT systems (European DVB-T2 and U.S. ATSC 3.0) is to simultaneously provide TV services to mobile and fixed devices. The major drawback of this simultaneous delivery is the different requirements of each reception condition. To address these constraints, different multiplexing techniques have been considered. While DVB-T2 fulfilled the simultaneous delivery of the two services by Time Division Multiplexing (TDM), ATSC 3.0 adopted the Layered Division Multiplexing (LDM) technology. LDM can outperform TDM and Frequency Division Multiplexing (FDM) by taking advantage of the Unequal Error Protection (UEP) ratio, as both services, namely layers, utilize all the frequency and time resources with different power levels. At the receiver side, two implementations are distinguished, according to the intended layer. Mobile receivers are intended to obtain only the upper layer, known as the Core Layer (CL). In order not to increase their complexity compared to single-layer receivers, the lower layer, known as the Enhanced Layer (EL), is treated as additional noise in the CL decoding. Fixed receivers increase their complexity, as they must perform a Successive Interference Cancellation (SIC) process on the CL in order to obtain the EL. To limit the additional complexity of fixed receivers, the LDM layers in ATSC 3.0 are configured with different error correction capabilities, but share the rest of the physical layer parameters, including the Time Interleaving.
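The following is a minimal, illustrative sketch of the LDM superposition and the two receiver behaviors described above, written in Python/NumPy under simplifying assumptions: uncoded QPSK on both layers, an AWGN channel, and a hypothetical 5 dB injection level and 20 dB SNR. It is not the ATSC 3.0 implementation, which additionally uses LDPC/BCH coding, interleaving, and OFDM framing.

```python
# Illustrative LDM superposition and SIC reception (assumed parameters, uncoded QPSK).
import numpy as np

rng = np.random.default_rng(0)

def qpsk_mod(bits):
    # Gray-mapped QPSK with unit average power
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

def qpsk_demod(symbols):
    bits = np.empty(2 * symbols.size, dtype=int)
    bits[0::2] = (symbols.real < 0).astype(int)
    bits[1::2] = (symbols.imag < 0).astype(int)
    return bits

n_bits = 20000
injection_level_db = 5.0   # EL power below the CL (hypothetical value)
snr_db = 20.0

cl_bits = rng.integers(0, 2, n_bits)   # Core Layer: robust, mobile service
el_bits = rng.integers(0, 2, n_bits)   # Enhanced Layer: high-capacity, fixed service

# Transmitter: both layers occupy all time/frequency resources;
# the EL is injected below the CL by the injection level.
g_el = 10 ** (-injection_level_db / 20)
tx = qpsk_mod(cl_bits) + g_el * qpsk_mod(el_bits)

# AWGN channel
noise_power = (1 + g_el ** 2) / (10 ** (snr_db / 10))
rx = tx + np.sqrt(noise_power / 2) * (rng.standard_normal(tx.shape)
                                      + 1j * rng.standard_normal(tx.shape))

# Mobile receiver: decodes only the CL, treating the EL as additional noise.
cl_hat = qpsk_demod(rx)
print("CL BER (mobile receiver):", np.mean(cl_hat != cl_bits))

# Fixed receiver: SIC -> re-modulate the decoded CL, cancel it, then decode the EL.
residual = rx - qpsk_mod(cl_hat)
el_hat = qpsk_demod(residual / g_el)
print("EL BER (fixed receiver): ", np.mean(el_hat != el_bits))
```

In this toy setup the mobile receiver needs no extra hardware beyond a single-layer demodulator, while the fixed receiver pays the cost of one additional remodulation and cancellation step, mirroring the complexity trade-off stated above.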