Post-load insulin resistance (OGIS <9.8 mg/kg/min) is associated with severe hepatic fibrosis in both NAFLD and CHC patients, and may help identify subjects at risk of progressive disease.
Deception detection is a relevant ability in high-stakes situations such as police interrogations or court trials, where the outcome is strongly influenced by the interviewed person's behavior. With specific devices, e.g., a polygraph or magnetic resonance imaging, the subject is aware of being monitored and can alter their behavior, thus compromising the result of the interrogation. For this reason, video analysis-based methods for automatic deception detection are receiving ever-increasing interest. In this paper, a deception detection approach based on RGB videos, leveraging both facial features and a stacked generalization ensemble, is proposed. First, the face, which is well known to present several meaningful cues for deception detection, is identified, aligned, and masked to build video signatures. These signatures are constructed from five different descriptors, which allow the system to capture both static and dynamic facial characteristics. Then, the video signatures are given as input to four base-level algorithms, which are subsequently fused by applying the stacked generalization technique, resulting in a more robust meta-level classifier used to predict deception. By exploiting relevant cues via specific features, the proposed system achieves improved performance on a public dataset of famous court trials with respect to other state-of-the-art methods based on facial features, highlighting its effectiveness.
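As a minimal sketch of the stacked generalization step described above, assuming scikit-learn as the library: the four base learners and the logistic-regression meta-classifier below are illustrative placeholders, not the exact models used in the paper, and the video signatures are assumed to be fixed-length feature vectors.

```python
# Sketch of stacked generalization: four base-level classifiers whose
# out-of-fold probability estimates train a meta-level classifier.
# All model choices here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

base_learners = [
    ("svm", SVC(probability=True)),              # probability=True enables predict_proba
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("dt", DecisionTreeClassifier(max_depth=10)),
]

# The meta-level classifier is trained on out-of-fold predictions of the
# base learners (cv=5), which is the core idea of stacked generalization.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
    stack_method="predict_proba",
)

# X: (n_videos, n_features) video signatures; y: 0 = truthful, 1 = deceptive.
# Random data stands in for real signatures here.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))
y = rng.integers(0, 2, size=120)
stack.fit(X, y)
print(stack.predict(X[:5]))
```

Training the meta-classifier on out-of-fold predictions (rather than on the base learners' training-set outputs) is what keeps the meta level from simply memorizing the base learners' overfit behavior.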
In recent years, the spread of video sensor networks in both public and private areas has grown considerably. Smart algorithms for video semantic content understanding are increasingly being developed to support human operators in monitoring different activities, by recognizing events that occur in the observed scene. By the term event, we refer to one or more actions performed by one or more subjects (e.g., people or vehicles) acting within the same observed area. When these actions are performed by subjects that do not interact with each other, the events are usually classified as simple. Conversely, when any kind of interaction occurs among subjects, the involved events are typically classified as complex. This survey starts by providing formal definitions of both scene and event, together with the logical architecture of a generic event recognition system. Subsequently, it presents two taxonomies, based on features and machine learning algorithms respectively, which are used to describe the different approaches for recognizing events within a video sequence. The paper also discusses key works in the current state of the art of event recognition, and lists the datasets used to evaluate the performance of the reported methods for video content understanding.
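To make the simple/complex distinction concrete, here is a minimal sketch of the classification rule implied by the definitions above, assuming a detector stage that yields bounding boxes for subjects in a frame; every class and function name is a hypothetical placeholder, and the bounding-box-overlap interaction test is a toy criterion, not one prescribed by the survey.

```python
# Sketch of the simple-vs-complex event distinction: an event is "simple"
# when the detected subjects do not interact, "complex" otherwise.
# detect_subjects() and the IoU-based interaction test are assumptions.
from dataclasses import dataclass

@dataclass
class Subject:
    subject_id: int
    bbox: tuple  # (x, y, w, h) in frame coordinates

def detect_subjects(frame) -> list[Subject]:
    """Hypothetical detector stage (e.g., people or vehicles)."""
    raise NotImplementedError

def interacting(a: Subject, b: Subject, iou_threshold: float = 0.1) -> bool:
    """Toy interaction test: bounding-box overlap (IoU) above a threshold."""
    ax, ay, aw, ah = a.bbox
    bx, by, bw, bh = b.bbox
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return union > 0 and inter / union > iou_threshold

def classify_event(subjects: list[Subject]) -> str:
    """Label the event by checking every pair of subjects for interaction."""
    pairs = ((a, b) for i, a in enumerate(subjects) for b in subjects[i + 1:])
    return "complex" if any(interacting(a, b) for a, b in pairs) else "simple"
```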