The latency of saccadic eye movements evoked by the presentation of auditory and visual targets was studied while the starting eye position was 0 deg, 20 deg right, or 20 deg left. The results show that, for any starting position, the latency of visually elicited saccades increases with target eccentricity with respect to the eyes, whereas the latency of auditorily elicited saccades decreases with target eccentricity with respect to the eyes. Thus, as with visual targets, auditory saccade latency depends on the retinotopic motor error (target eccentricity with respect to the eyes), although the direction of the dependence is opposite.
Continuous monitoring of frail individuals for detecting dangerous situations during their daily living at home can be a powerful tool toward their social inclusion, allowing them to live independently yet safely. To this goal we developed a pose recognition system tailored to disabled students living in college dorms, based on skeleton tracking through four Kinect One devices independently recording the inhabitant from different viewpoints while preserving the individual's privacy. The system is intended to classify each data frame and provide the classification result to a further decision-making algorithm, which may trigger an alarm based on the classified pose and the location of the subject with respect to the furniture in the room. An extensive dataset was recorded on 12 individuals moving in a mockup room and undertaking four poses to be recognized: standing, sitting, lying down, and "dangerous sitting." The latter consists of the subject slumped in a chair with his/her head lying forward or backward as if unconscious. Each skeleton frame was labeled and represented using 10 discriminative features: three skeletal joint vertical coordinates and seven relative and absolute angles describing articular joint positions and body segment orientations. To classify the pose of the subject in each skeleton frame, we built a multi-layer perceptron neural network with two hidden layers and a softmax output layer, which we trained on the data from 10 of the 12 subjects (495,728 frames), with the data from the two remaining subjects forming the test set (106,802 frames). The system achieved very promising results, with an average accuracy of 83.9% (ranging from 82.7% to 94.3% across the four classes). Our work proves the usefulness of human pose recognition based on machine learning in the field of safety monitoring in assisted living conditions.
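The per-frame classification step can be sketched as a forward pass through a two-hidden-layer perceptron with a softmax output over the four pose classes. This is a minimal illustration only: the layer sizes, the randomly initialised weights, and the feature values below are placeholders, not the trained network or data from the study.

```python
import math
import random

random.seed(0)

N_FEATURES = 10          # 3 joint vertical coordinates + 7 angles (as in the study)
HIDDEN = (32, 16)        # illustrative layer sizes, not the paper's
CLASSES = ["standing", "sitting", "lying down", "dangerous sitting"]

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def dense(x, w, b):
    # w: one weight row per output unit; returns w @ x + b
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def rand_layer(n_in, n_out):
    # Random weights stand in for the trained parameters.
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

w1, b1 = rand_layer(N_FEATURES, HIDDEN[0])
w2, b2 = rand_layer(HIDDEN[0], HIDDEN[1])
w3, b3 = rand_layer(HIDDEN[1], len(CLASSES))

def classify(frame_features):
    h1 = relu(dense(frame_features, w1, b1))
    h2 = relu(dense(h1, w2, b2))
    probs = softmax(dense(h2, w3, b3))
    return CLASSES[probs.index(max(probs))], probs

# One hypothetical skeleton frame, already reduced to its 10 features.
frame = [random.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]
label, probs = classify(frame)
```

In the deployed system each frame's predicted label would then be passed, together with the subject's location, to the downstream decision-making algorithm described above.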
Eye–head coordination during gaze orientation toward auditory targets in total darkness has been examined in human subjects. The findings have been compared, for the same subjects, with those obtained using visual targets. The use of auditory targets when investigating eye–head coordination has some advantages over the more common use of visual targets: (i) more eccentric target positions can be presented to the subject; (ii) visual feedback is excluded during the execution of gaze displacement; (iii) complex patterns of saccadic responses can be elicited. This last aspect is particularly interesting for examining the coupling between eye and head displacements. The experimental findings indicate that during gaze orientation toward a visual or an auditory target the central nervous system adopts the same strategy of using both the saccadic mechanism and the head motor plant. In spite of this common strategy, qualitative and quantitative parameters of the resulting eye–head coordination differ slightly, depending on the nature of the target. The observed patterns of eye–head coordination seem to indicate a dissociation between the eyes and the head, which receive different motor commands independently generated from the gaze error signal. The experimental findings reported in this paper have been summarized in a model of the gaze control system that relies on a gaze feedback hypothesis through the central reconstruction of eye and head positions.
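The gaze-feedback idea can be illustrated with a toy discrete-time loop in which the eye and the head each receive an independent command derived from the same centrally reconstructed gaze error (gaze = eye + head). All gains, limits, and the step count below are invented for illustration and do not reproduce the paper's model parameters or real oculomotor dynamics.

```python
# Toy gaze-feedback simulation: eye and head receive independent commands
# computed from the same gaze error signal. All values are illustrative.
EYE_GAIN = 0.6      # fraction of gaze error corrected by the eye per step
HEAD_GAIN = 0.2     # fraction of gaze error corrected by the head per step
EYE_LIMIT = 35.0    # illustrative oculomotor range, deg

def orient_gaze(target_deg, steps=100):
    eye, head = 0.0, 0.0
    for _ in range(steps):
        gaze = eye + head
        error = target_deg - gaze            # centrally reconstructed gaze error
        # Independent commands driven by the same error signal:
        eye = max(-EYE_LIMIT, min(EYE_LIMIT, eye + EYE_GAIN * error))
        head += HEAD_GAIN * error
    return eye, head, eye + head

# An eccentric target (60 deg) exceeds the oculomotor range, so the head
# contribution is required for gaze to reach it.
eye, head, gaze = orient_gaze(60.0)
```

The point of the sketch is structural: because both effectors are driven by the gaze error rather than by each other, the eye saturates at its mechanical limit while the head keeps reducing the residual error until gaze lands on the target.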
Human Action Recognition (HAR) is a rapidly evolving field impacting numerous domains, among which is Ambient Assisted Living (AAL). In such a context, HAR aims to meet the needs of frail individuals, whether elderly and/or disabled, and to promote autonomous, safe and secure living. To this goal, we propose a monitoring system that detects dangerous situations by classifying human postures through Artificial Intelligence (AI) solutions. The developed algorithm works on a set of features computed from the skeleton data provided by four Kinect One systems simultaneously recording the scene from different angles, identifying the posture of the subject in an ecological context within each recorded frame. Here, we compare the recognition abilities of Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) sequence networks. Starting from the set of previously selected features, we performed a further feature selection based on an SVM algorithm for the optimization of the MLP network, and used a genetic algorithm for selecting the features for the LSTM sequence model. We then optimized the architecture and hyperparameters of both models before comparing their performances. The best MLP model (3 hidden layers and a softmax output layer) achieved 78.4% accuracy, while the best LSTM (2 bidirectional LSTM layers, 2 dropout layers and a fully connected layer) reached 85.7%. The analysis of the performances on individual classes highlights the better suitability of the LSTM approach.
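The genetic-algorithm feature-selection step can be sketched as evolving a population of binary masks over the 10 candidate features. The fitness function below is a stand-in that rewards a hypothetical set of "informative" features; in the actual system the fitness would be the validation performance of the LSTM trained on the selected subset, which is far too costly to reproduce here.

```python
import random

random.seed(1)

N_FEATURES = 10           # candidate features, as in the skeleton feature set
POP, GENS, MUT = 20, 30, 0.1

# Stand-in fitness: reward masks that pick the (hypothetical) informative
# features 0-4 and penalise extra ones. The real fitness would be the
# validation accuracy of the LSTM trained on the selected features.
INFORMATIVE = set(range(5))

def fitness(mask):
    picked = {i for i, bit in enumerate(mask) if bit}
    return len(picked & INFORMATIVE) - 0.5 * len(picked - INFORMATIVE)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask):
    # Flip each bit with probability MUT.
    return [bit ^ (random.random() < MUT) for bit in mask]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(POP)]

for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]          # truncation selection keeps the best
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

Because the top half of each generation is carried over unchanged, the best mask found so far is never lost, and the population converges toward feature subsets with high fitness.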
Continuous theta-burst stimulation (cTBS) applied over the cerebellum exerts long-lasting effects by modulating long-term synaptic plasticity, which is thought to be the basis of learning and behavioral adaptation. To investigate the impact of cTBS over the cerebellum on short-term sensory-motor memory, we recorded, in two groups of eight healthy subjects each, the visually guided saccades (VGSs), the memory-guided saccades (MGSs), and the multiple memory-guided saccades (MMGSs), before and after cTBS (cTBS group) or simulated cTBS (control group). In the cTBS group, cTBS induced hypometria of contralateral centrifugal VGSs and worsened the accuracy of MMGSs bilaterally. In the control group, no significant differences were found between the two recording sessions. These results indicate that cTBS over the cerebellum causes eye movement effects that outlast the stimulus duration. The contralateral VGS hypometria suggests that we may have inhibited the fastigial nucleus on the stimulated side. MMGSs in normal subjects have a better final accuracy than MGSs. Such improvement is due to the availability in MMGSs of the efference copy of the initial reflexive saccade directed toward the same peripheral target, which provides sensory-motor information that is memorized and then used to improve the accuracy of the subsequent volitional memory-guided saccade. Thus, we hypothesize that cTBS disrupted the capability of the cerebellum to make an internal representation of the memorized sensory-motor information to be used after a short interval for forward control of saccades.