Orthogonal frequency division multiplexing (OFDM) is a leading technology for high-speed data transmission in wireline and wireless communication systems. OFDM has many advantages over other techniques, such as its high capacity and immunity against multipath fading channels. However, one of the main drawbacks of the OFDM system is its high peak-to-average power ratio (PAPR), which causes in-band distortion and out-of-band radiation owing to the non-linearity of high-power amplifiers. Numerous techniques have therefore been proposed to overcome the PAPR problem, such as selective mapping, partial transmit sequence (PTS), clipping, and nonlinear companding. In this paper, the PTS technique is analytically reviewed as one of the important methods for reducing high PAPR. The PAPR performance and the computational complexity are discussed for modifications of the PTS technique in the frequency domain, the time domain, and the modulation stage (the inverse fast Fourier transform block). Moreover, a numerical statistical comparison of the current modified-PTS methods is introduced, and criteria for selecting a suitable modified-PTS method for an OFDM system are given. The simulation and numerical results show that the rows exchange-interleaving PTS scheme is the best method for reducing the PAPR with low complexity among the frequency-domain methods, the cooperative PTS method is the best among the modulation-stage methods, and the cyclic shift sequence PTS method achieves superior PAPR reduction and computational complexity among the time-domain methods.
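To make the PTS idea concrete, the following is a minimal Python sketch of the conventional PTS transmitter that the reviewed variants build on: it partitions the subcarriers into adjacent subblocks, applies one IFFT per subblock, and exhaustively searches the phase-factor combinations for the candidate signal with the lowest PAPR. The parameters (64 QPSK subcarriers, V = 4 subblocks, phase set {±1, ±j}) are illustrative assumptions, not values from the reviewed paper.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_ofdm(X, V=4, phases=(1, -1, 1j, -1j)):
    """Conventional PTS: partition the N subcarriers into V adjacent
    subblocks, IFFT each subblock, then exhaustively search the phase
    factors for the candidate with the lowest PAPR."""
    N = len(X)
    sub = np.zeros((V, N), dtype=complex)
    for v in range(V):                       # adjacent partitioning
        sub[v, v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
    parts = np.fft.ifft(sub, axis=1)         # one IFFT per subblock
    best, best_papr = None, np.inf
    for b in product(phases, repeat=V - 1):  # fix b_0 = 1, search the rest
        cand = parts[0] + sum(bv * pv for bv, pv in zip(b, parts[1:]))
        p = papr_db(cand)
        if p < best_papr:
            best, best_papr = cand, p
    return best, best_papr

# Example: one 64-subcarrier QPSK OFDM symbol
rng = np.random.default_rng(0)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64)
plain = np.fft.ifft(X)
reduced, papr = pts_ofdm(X)
print(f"original PAPR: {papr_db(plain):.2f} dB, after PTS: {papr:.2f} dB")
```

The exhaustive search costs W^(V-1) candidate evaluations (64 here), which is exactly the complexity burden the modified-PTS methods surveyed in the paper aim to reduce.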
The sixth generation (6G) wireless communication network is a promising technology for providing a fully data-driven network that evaluates and optimizes end-to-end behavior over large volumes of real-time network data at Tb/s data rates. In addition, 6G is expected to support an average of more than 1,000 connections per person within a decade (by 2030), virtually instantaneously. The data-driven network is a novel service paradigm that offers new applications for future 6G wireless communication and network architectures. It enables ultra-reliable and low-latency communication (URLLC), raising information transmission to around a 1 Tb/s data rate while achieving a 0.1 millisecond transmission latency. The main limitation of this approach is the computational power required to handle big data and elaborately designed artificial neural networks. The work carried out in this paper aims to improve the multi-level architecture by enabling artificial intelligence (AI) in URLLC, providing a new approach to designing wireless networks. This is done by applying learning, prediction, and decision-making to manage individual users' data streams, with models trained on big data. The secondary aim of this paper is to improve a multi-level architecture that provides device intelligence at the user level, edge intelligence at the cell level, and cloud intelligence for URLLC. The improvement mainly depends on using unsupervised learning in the training process to develop data-driven resource management. In addition, improving a multi-level architecture for URLLC through deep learning (DL) would facilitate the creation of a data-driven AI system, 6G networks for intelligent devices, and technologies based on effective learning capability. These investigational problems are essential to addressing the requirements for creating future smart networks. Moreover, this work highlights several research gaps between DL and 6G that remain open to date.

INDEX TERMS: Artificial neural networks, artificial intelligence, Internet of Things, sixth-generation wireless communication and network architecture, URLLC.
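As a toy illustration of the unsupervised, data-driven resource management the paper advocates, the sketch below clusters hypothetical per-device traffic measurements with k-means and derives a simple allocation policy from the discovered service classes. The features, cluster count, and resource shares are all invented for illustration and stand in for whatever measurements a real 6G network would collect.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-device measurements: [traffic load (Mb/s), latency need (ms)]
rng = np.random.default_rng(1)
features = np.vstack([
    rng.normal([0.5, 50.0], [0.2, 10.0], (100, 2)),   # background IoT traffic
    rng.normal([20.0, 5.0], [5.0, 1.0], (100, 2)),    # broadband users
    rng.normal([5.0, 0.1], [1.0, 0.05], (100, 2)),    # URLLC-like devices
])

# Unsupervised step: discover service classes without any labels
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Toy data-driven policy: give the lowest-latency cluster the largest
# share of radio resources (e.g., fraction of available resource blocks)
order = np.argsort(km.cluster_centers_[:, 1])          # sort by latency need
shares = {int(c): s for c, s in zip(order, (0.5, 0.3, 0.2))}
print(shares)
```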
There is a growing demand for 5G applications in all fields of knowledge. Current applications, such as the Internet of Things, smart homes, and clean energy, require sophisticated forms of 5G waveforms. Researchers and developers are investigating the requirements of 5G networks for better waveform types, which will result in high spectrum efficiency and lower latency with less complexity in systems. This paper presents an assessment of various 5G waveform candidates [filtered orthogonal frequency-division multiplexing (OFDM), universal filtered multicarrier (UFMC), filter bank multicarrier (FBMC), and generalized frequency-division multiplexing (GFDM)] against the key performance indicators (KPIs). This paper assesses the main KPI factors (computational complexity, peak-to-average power ratio, spectral efficiency, filter length, and latency), and compares and evaluates all KPI factors across the various 5G waveforms. Finally, this paper highlights the strengths and weaknesses of each waveform candidate based on the KPI factors for better outcomes in industry. In conclusion, this review suggests the use of the optimized waveforms (FBMC and UFMC) for better flexibility to overcome the drawbacks encountered by previous works. Regarding coexistence, FBMC and UFMC showed better coexistence with CP-OFDM in 4G networks over a new radio spectrum. This combination of waveforms has been called green coexistence, because it mixes one waveform from 4G networks (CP-OFDM) with two waveforms from 5G networks based on subcarrier and subband shaping (FBMC and UFMC).
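As an example of the subband shaping that distinguishes UFMC from plain CP-OFDM, here is a minimal Python sketch of a UFMC transmitter: each subband is OFDM-modulated separately and then filtered with a frequency-shifted Dolph-Chebyshev FIR filter before summation. The parameters (256-point FFT, 8 subbands of 16 subcarriers, length-43 filter with 40 dB sidelobe attenuation) are common textbook choices assumed for illustration, not values taken from the paper.

```python
import numpy as np
from scipy.signal import windows

def ufmc_tx(syms_per_band, n_fft=256, band_size=16, n_bands=8, l_filt=43):
    """Minimal UFMC transmitter sketch: map QPSK symbols onto per-subband
    subcarriers, IFFT, then filter each subband with a Dolph-Chebyshev
    FIR filter before summing. The subband filtering suppresses
    out-of-band emission relative to plain CP-OFDM."""
    proto = windows.chebwin(l_filt, at=40)             # prototype filter taps
    tx = np.zeros(n_fft + l_filt - 1, dtype=complex)
    for b in range(n_bands):
        X = np.zeros(n_fft, dtype=complex)
        start = b * band_size
        X[start:start + band_size] = syms_per_band[b]  # symbols for band b
        x = np.fft.ifft(X)
        # shift the prototype filter to the subband's centre frequency
        k0 = start + band_size / 2
        taps = proto * np.exp(2j * np.pi * k0 * np.arange(l_filt) / n_fft)
        tx += np.convolve(x, taps)
    return tx

rng = np.random.default_rng(2)
qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], (8, 16)) / np.sqrt(2)
signal = ufmc_tx(qpsk)
print(len(signal))  # N + L - 1 = 298 samples per UFMC symbol
```

The filter-length KPI discussed above shows up directly here: each UFMC symbol is extended by L - 1 samples, a much shorter overhead than FBMC's long prototype filters, which is one reason the two waveforms trade off latency against spectral containment.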
Purpose: Several countries have been using Internet of Things (IoT) devices in the healthcare sector to combat COVID-19. This study therefore examines doctors' intentions to use IoT healthcare devices in Iraq during the COVID-19 pandemic.
Design/methodology/approach: This study proposed a model based on the integration of the innovation diffusion theory (IDT) factors (compatibility, trialability and image) and a set of exogenous factors (computer self-efficacy, privacy and cost) into the technology acceptance model (perceived ease of use, perceived usefulness, attitude and behavioral intention to use).
Findings: The findings revealed that the IDT factors compatibility and image have a significant impact on perceived ease of use, perceived usefulness and behavioral intention, whereas trialability has a significant impact on perceived ease of use and perceived usefulness but an insignificant impact on behavioral intention. Additionally, the external factors privacy and cost significantly impacted doctors' behavioral intention to use. Moreover, doctors' computer self-efficacy significantly influenced perceived ease of use, perceived usefulness and behavioral intention to use. Furthermore, perceived ease of use has a significant impact on perceived usefulness and attitude, and perceived usefulness has a significant impact on attitude, which, in turn, significantly impacts doctors' behavioral intention to use.
Research limitations/implications: The limitations of the present study are the restricted number of participants and the lack of qualitative methods.
Originality/value: The findings of this study could benefit researchers, doctors and policymakers in the adoption of IoT technologies in the health sector, especially in developing countries.
Massive multi-input multi-output (MIMO) systems are crucial to maximizing energy efficiency (EE) and battery-saving technology. Achieving EE without sacrificing quality of service (QoS) is increasingly important for mobile devices. We first derive the data rate for three linear precoding schemes: maximum ratio transmission (MRT), zero forcing (ZF), and minimum mean square error (MMSE). Maximum EE can be achieved when all available antennas are used and when the circuit power consumption, often neglected as small relative to the transmit power, is taken into account. The aim of this work is to show how to obtain maximum EE while minimizing the power consumed, achieving a high data rate by deriving the optimal number of antennas in the downlink massive MIMO system. The power model includes not only the transmitted power but also the fundamental circuit power consumed at the transmitter. The maximized EE depends on the optimal number of antennas and determines the number of active users that should be scheduled in each cell. We conclude that the MMSE linear precoding technique achieves a higher maximum EE than ZF and MRT, because MMSE makes the massive MIMO system less sensitive to SNR as the number of antennas increases.
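A minimal NumPy sketch of this comparison, assuming a toy downlink with K = 4 single-antenna users, M = 32 base-station antennas, and a fixed per-antenna circuit-power term (all values invented for illustration): it builds the MRT, ZF, and MMSE precoders from their standard closed forms and evaluates EE as sum rate divided by transmit-plus-circuit power.

```python
import numpy as np

def precoders(H, sigma2):
    """MRT, ZF and MMSE linear precoders for a K-user downlink channel
    H of shape (K, M); each user's beam (column) is normalized to unit power."""
    K = H.shape[0]
    W_mrt = H.conj().T                                      # matched filter
    W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)       # channel inverse
    W_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(K))
    unit = lambda W: W / np.linalg.norm(W, axis=0, keepdims=True)
    return {"MRT": unit(W_mrt), "ZF": unit(W_zf), "MMSE": unit(W_mmse)}

def energy_efficiency(H, W, p_tx=1.0, p_circ_per_ant=0.1, sigma2=0.1):
    """Sum rate (bit/s/Hz) divided by total power, counting a per-antenna
    circuit-power term alongside the transmit power."""
    K, n_ant = H.shape
    G = H @ W * np.sqrt(p_tx / K)                   # effective channel gains
    sig = np.abs(np.diag(G)) ** 2                   # desired-signal powers
    intf = np.sum(np.abs(G) ** 2, axis=1) - sig     # inter-user interference
    rate = np.sum(np.log2(1 + sig / (intf + sigma2)))
    return rate / (p_tx + p_circ_per_ant * n_ant)

rng = np.random.default_rng(3)
H = (rng.normal(size=(4, 32)) + 1j * rng.normal(size=(4, 32))) / np.sqrt(2)
for name, W in precoders(H, sigma2=0.1).items():
    print(name, round(energy_efficiency(H, W), 2))
```

In this toy setup MMSE typically matches or exceeds ZF and MRT, in line with the abstract's conclusion, though the exact numbers depend entirely on the assumed channel, noise, and power figures.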