Light-fidelity (LiFi) is a networked optical wireless communication (OWC) solution that provides high-speed indoor connectivity for fixed and mobile users. Unlike conventional radio frequency wireless systems, the OWC channel is not isotropic: the device orientation affects the channel gain significantly, particularly for mobile users. However, due to the lack of a proper model for device orientation, many studies have assumed that the receiver is fixed and pointed vertically upward. In this paper, a novel model for device orientation based on experimental measurements from 40 participants is proposed. It is shown that the probability density function (PDF) of the polar angle can be modeled by a Laplace distribution for static users and by a Gaussian distribution for mobile users. In addition, a closed-form expression is obtained for the PDF of the cosine of the incidence angle, on which the line-of-sight (LOS) channel gain of OWC channels depends. An approximation of this PDF based on a truncated Laplace distribution is proposed, and the accuracy of the approximation is confirmed using the Kolmogorov-Smirnov distance. Moreover, the statistics of the LOS channel gain are calculated, and the random orientation of a user equipment (UE) is modeled as a random process. The influence of the random orientation on the signal-to-noise ratio (SNR) performance of OWC systems is evaluated. Finally, an orientation-based random waypoint (ORWP) mobility model is proposed that accounts for the random orientation of the UE during the user's movement. The performance of ORWP is assessed in terms of the handover rate, and the results show that it is important to take the random orientation into account.
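As a rough, non-authoritative illustration of the orientation statistics described above, the sketch below samples the UE polar angle from a Gaussian (mobile user) or a Laplace (static user) distribution and evaluates a standard Lambertian LOS channel gain under a toy geometry with the access point directly overhead. All numeric parameters (angle mean and scale, transmitter-receiver distance, detector area, field of view) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal Monte Carlo sketch (not the paper's code): sample the UE polar
# angle and evaluate a standard Lambertian LOS channel gain.
rng = np.random.default_rng(0)

def sample_polar_angle(n, mobile=True, mu_deg=41.0, scale_deg=7.0):
    """Polar angle samples in radians; mu/scale values are placeholders."""
    if mobile:
        theta = rng.normal(mu_deg, scale_deg, n)    # Gaussian for mobile users
    else:
        theta = rng.laplace(mu_deg, scale_deg, n)   # Laplace for static users
    return np.radians(np.clip(theta, 0.0, 90.0))

def los_channel_gain(cos_psi, d=2.0, area=1e-4, m=1.0, psi_c_deg=60.0):
    """Lambertian LOS gain H = (m+1)A/(2*pi*d^2) * cos^m(phi) * cos(psi),
    zero outside the receiver field of view; cos(phi)=1 is assumed here
    (LED pointing straight down at a UE directly below)."""
    gain = (m + 1) * area / (2 * np.pi * d**2) * cos_psi
    gain[cos_psi < np.cos(np.radians(psi_c_deg))] = 0.0  # outside FOV
    return gain

theta = sample_polar_angle(10_000, mobile=True)
# Toy geometry: the incidence angle equals the polar angle when the AP is overhead.
cos_psi = np.cos(theta)
H = los_channel_gain(cos_psi)
print(f"mean LOS gain: {H.mean():.3e}, out-of-FOV fraction: {(H == 0).mean():.2%}")
```

The out-of-FOV fraction printed at the end hints at why random orientation matters: samples whose incidence angle falls outside the receiver field of view contribute zero LOS gain, which directly degrades the SNR and affects handover behavior in the ORWP model.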
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions, despite the fact that deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behavior have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis, including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next, we examine available approaches for machine understanding of human affective behavior and discuss important issues such as the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.