Abstract: In this paper, we introduce a low-overhead scheme for uplink channel allocation within a single cell of a Cognitive Radio Wireless Mesh Network (CR-WMN). The scheme does not rely on a Common Control Channel (CCC). The mechanism is based on Physical-layer Network Coding (PNC), in which two Secondary Users (SUs) are allowed to transmit synchronously, and without prior coordination, over a channel selected at random from a set of available channels for the purpose of requesting channels. Owing to the use of PNC, the Mesh Router (MR) can detect up to two requests on the same channel and replies with a control packet containing information about the assigned channel. We propose two PNC modulation schemes, PNC-1 and PNC-2, one of which the SUs initially choose to employ throughout network operation. Decoding the received signals in PNC-1 and PNC-2 depends on their received energy and phase shifts, respectively. Simulation results show that, in terms of channel allocation time, the proposed mechanism significantly outperforms traditional schemes that rely on a single CCC or that do not use PNC.
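The energy-based detection used in PNC-1 can be illustrated with a minimal sketch. The baseband model, unit-energy signals, and midpoint thresholds below are illustrative assumptions, not the paper's actual modulation scheme: the MR estimates the average received energy on a channel and classifies it as carrying zero, one, or two synchronous requests.

```python
import numpy as np

def detect_requests_pnc1(received, single_energy=1.0):
    """Hypothetical energy-based detector in the spirit of PNC-1.

    Assumes each requesting SU transmits a unit-energy baseband
    signal; with random per-sample phases, two superposed requests
    average roughly twice the single-transmitter energy. Thresholds
    are placed midway between the expected levels for 0, 1, and 2
    simultaneous requests.
    """
    energy = np.mean(np.abs(received) ** 2)
    if energy < 0.5 * single_energy:
        return 0  # noise only: no request on this channel
    if energy < 1.5 * single_energy:
        return 1  # one SU requesting
    return 2      # two SUs transmitted synchronously
```

Under this toy model the MR needs only an energy estimate per channel; the real scheme must also recover the request contents from the superposed signal.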
Over the previous decades, a need has emerged to empower human-machine communication systems, which are essential not only for performing actions but also for obtaining information, especially in educational applications. Moreover, any communication system has to offer an efficient and easy means of interaction with the lowest possible error rate. The keyboard, mouse, trackball, touch-screen, and joystick are all examples of tools built to provide mechanical human-to-machine interaction. However, a system able to use oral speech, the natural form of communication between humans, instead of mechanical input can be more practical for typical students and even a necessity for arm-disabled students who cannot use their arms to handle traditional educational tools like pens and notebooks. In this paper, we present a speech recognition system that allows arm-disabled students to control computers by voice as an assistive tool in the educational process. When a student speaks through a microphone, the speech is divided into isolated words, which are compared against a predefined database containing a large number of spoken words to find a match. Each recognized word is then translated into its associated task, which the computer performs, such as opening a teaching application or renaming a file. The speech recognition process discussed in this paper involves two separate approaches: the first is based on double-threshold voice activity detection and improved Mel-frequency cepstral coefficients (MFCC), while the second is based on the discrete wavelet transform along with a modified MFCC algorithm. Using the best values for all parameters in the aforementioned techniques, our proposed system achieved a recognition rate of 98.7% with the first approach and 98.86% with the second, which is more accurate than the first but slower in processing, a critical concern for a real-time system.
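The double-threshold voice activity detection step can be sketched as follows. This is a minimal, assumption-laden illustration: the frame length, the two threshold values, and the energy-only criterion are hypothetical choices (classic double-threshold detectors also use the zero-crossing rate, and the paper's variant refines this further).

```python
import numpy as np

def double_threshold_vad(signal, frame_len=160, high=0.1, low=0.02):
    """Hypothetical double-threshold endpoint detector.

    Frames whose short-time energy exceeds `high` seed a word
    region, which is then extended outward while the energy stays
    above the lower threshold `low`. Returns the (start, end) frame
    indices of the detected word, or None if no frame crosses the
    high threshold.
    """
    usable = len(signal) // frame_len * frame_len
    frames = signal[:usable].reshape(-1, frame_len)
    energy = np.mean(frames ** 2, axis=1)  # short-time energy
    seeds = np.flatnonzero(energy > high)
    if seeds.size == 0:
        return None  # no speech detected
    start = seeds[0]
    while start > 0 and energy[start - 1] > low:
        start -= 1   # extend left over low-energy word tails
    end = seeds[-1]
    while end + 1 < len(energy) and energy[end + 1] > low:
        end += 1     # extend right likewise
    return start, end
```

The two thresholds let quiet word onsets and tails (above `low` but below `high`) be kept with the word instead of being clipped off as silence.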
Both proposed approaches were compared with other relevant approaches and achieved noticeably higher recognition rates.
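The matching of an isolated word against the predefined database can be sketched as a nearest-template search. The fixed-length feature vectors, the Euclidean distance, and the word labels below are illustrative assumptions; the paper's approaches derive their features from improved or wavelet-modified MFCCs rather than the placeholder vectors used here.

```python
import numpy as np

def recognize_word(features, templates):
    """Hypothetical nearest-template matcher.

    `features` is a fixed-length feature vector (e.g. time-averaged
    MFCCs) extracted from one isolated word; `templates` maps word
    labels to reference vectors from the predefined database.
    Returns the label of the closest template in Euclidean distance.
    """
    best_label, best_dist = None, np.inf
    for label, ref in templates.items():
        dist = np.linalg.norm(features - ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

In the described system, the returned label would then be mapped to its associated computer task, such as opening a teaching application or renaming a file.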