The traditional method of estimating an Event-Related Potential (ERP) is to average signal epochs time-locked to a set of similar experimental events. This averaging method is useful as long as the experimental procedure can sufficiently isolate the brain or non-brain process of interest. However, if responses from multiple cognitive processes, time-locked to multiple classes of closely spaced events, overlap in time with varying inter-event intervals, averaging will most likely fail to recover the individual response time courses. For this situation, we study joint estimation of the responses to all recorded events in an experiment with a single model fit by standard linear regression (the rERP technique). Applied to data collected during a Rapid Serial Visual Presentation (RSVP) task, our analysis shows that: (1) the rERP technique accounts for more variance in the data than averaging when individual event responses overlap heavily; and (2) the variance accounted for by the estimates is concentrated into fewer ICA components than it is across raw EEG channel signals.
In this work, we detail a methodology based on Convolutional Neural Networks (CNNs) to detect falls using non-invasive thermal vision sensors. First, we describe an agile data-collection and image-labelling process used to create a dataset covering several single- and multiple-occupancy cases, including standing inhabitants and target situations with a fallen inhabitant. Second, we apply data augmentation techniques to improve the classifier's learning capability and reduce configuration time. Third, we define three CNN architectures to evaluate the impact that the number of layers and the kernel size have on the performance of the methodology. The results show encouraging performance in single-occupancy contexts, with up to 92% accuracy, but a 10% drop in accuracy in multiple-occupancy contexts. The learning capability of CNNs is notable given the challenging images obtained from the low-cost device, which exhibit strong noise as well as uncertain and blurred regions. The results also show that the 3-layer CNN maintains stable performance and learns quickly.
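The augmentation step can be illustrated with a minimal sketch: a few label-preserving transforms expand one low-resolution thermal frame into several training variants. The specific transforms, the noise level, and the 8x8 frame size are assumptions for illustration; the abstract does not enumerate the augmentations used.

```python
import numpy as np

def augment(frame, rng):
    """Generate simple label-preserving variants of one thermal frame.

    Flips, 90-degree rotations, and Gaussian noise (mimicking sensor
    jitter) are illustrative choices, not the paper's exact pipeline.
    """
    out = [frame, np.fliplr(frame), np.flipud(frame)]
    out += [np.rot90(frame, k) for k in (1, 2, 3)]
    noisy = frame + rng.normal(0.0, 0.05, frame.shape)
    out.append(np.clip(noisy, 0.0, 1.0))  # keep values in the sensor's range
    return out

rng = np.random.default_rng(0)
frame = rng.random((8, 8))  # a hypothetical 8x8 thermopile frame in [0, 1]
variants = augment(frame, rng)  # 7 training examples from 1 labelled frame
```

Because a fall looks like a fall regardless of orientation in an overhead thermal view, geometric transforms of this kind multiply the labelled data without new collection effort.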
The desire of those in need of 24/7 care to remain living in their own home rather than a care home requires a level of understanding of the actions of an environment's inhabitants. This can potentially be accomplished through the ability to recognise Activities of Daily Living (ADLs); this research, however, focuses first on producing an unobtrusive solution for pose recognition in which the preservation of privacy is a primary aim. With an accurate means of predicting an inhabitant's poses, their interactions with objects within the environment, and therefore the activities they are performing, can begin to be understood. This research implements a Convolutional Neural Network (CNN), designed with an original architecture derived from the popular AlexNet, to predict poses from thermal imagery that has been captured using thermopile infrared sensors (TISs). Five TISs have been deployed within the smart kitchen at Ulster University, where each provides input to a correspondingly trained CNN. The approach is evaluated on an original dataset, and an F1-score of 0.9920 was achieved with all five TISs. The limitations of a ceiling-mounted TIS are investigated, and every permutation of the corner-mounted TISs is evaluated to satisfy a trade-off between the number of TISs, total sensor cost, and performance. These tests are also promising, as F1-scores of 0.9266, 0.9149, and 0.8468 were achieved with the isolated use of four, three, and two corner TISs, respectively.
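One simple way to combine the outputs of several per-TIS CNNs is a majority vote over their predicted pose labels. The abstract does not state the fusion rule actually used, so the sketch below is an illustrative assumption, with hypothetical pose labels.

```python
from collections import Counter

def fuse_predictions(per_sensor_labels):
    """Majority vote over the pose labels predicted per sensor.

    The paper trains one CNN per thermopile infrared sensor (TIS);
    this fusion rule is an illustrative assumption, not taken from
    the paper. Ties resolve to the label encountered first.
    """
    return Counter(per_sensor_labels).most_common(1)[0][0]

# Hypothetical per-sensor outputs for one frame from five TISs.
votes = ["standing", "standing", "sitting", "standing", "standing"]
pose = fuse_predictions(votes)  # "standing"
```

A voting scheme of this shape also makes it easy to evaluate subsets of sensors, as the abstract does for the corner-TIS permutations: simply pass fewer labels.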
To provide accurate activity recognition within a smart environment, visible-spectrum cameras can be used as data-capture devices. Privacy, however, is a significant concern when monitoring a smart environment, particularly with visible-spectrum cameras, so their use may not be ideal. Accurate activity recognition is nevertheless still required, and so this research addresses an unobtrusive approach that uses a thermopile infrared sensor as the sole means of data collection. Image frames of the monitored scene are acquired from a thermopile infrared sensor, which highlights only sources of heat, for example a person. The recorded frames feature no discernible characteristics of individual people, so privacy concerns can be alleviated. To demonstrate how thermopile infrared sensors can be used for this task, an experiment was conducted to capture almost 600 thermal frames of a person performing four single-component activities. The person's position within the room, along with the action being performed, is used to predict the activity. The results demonstrate that high accuracy (91.47%) for activity recognition can be obtained using only thermopile infrared sensors.
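The position-plus-action scheme can be sketched in two small steps: locate the person as the centroid of hot pixels in a thermal frame, then map a (zone, action) pair to an activity via a lookup. The threshold, grid size, zone names, and activity labels below are hypothetical placeholders; the abstract does not specify them.

```python
import numpy as np

def person_position(frame, threshold=0.6):
    """Estimate the person's location as the centroid of hot pixels.

    The threshold and the 8x8 grid used below are illustrative; the
    paper does not detail its localisation procedure here.
    """
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None  # no heat source detected
    return (float(ys.mean()), float(xs.mean()))

# Hypothetical mapping from (room zone, action) to activity. The four
# single-component activities in the study are not enumerated in the
# abstract, so these labels are placeholders.
RULES = {
    ("sink", "standing"): "washing dishes",
    ("table", "sitting"): "eating",
}

def classify_activity(zone, action):
    return RULES.get((zone, action), "unknown")

frame = np.zeros((8, 8))
frame[2:4, 5:7] = 0.9           # a warm region standing in for a person
centroid = person_position(frame)  # (2.5, 5.5)
```

The centroid would then be binned into a named zone before the rule lookup; the key point is that no visually identifiable imagery is needed at any stage.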