Abstract-We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13,000 object interactions and 14,000 environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization, and curation. We report on the challenges and outline lessons learned and best practices for similar large-scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object-integrated wireless sensors; after tuning, packet loss is below 2.5%. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. This dataset will eventually be made public.
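A minimal sketch of how a packet-loss figure like the one above could be estimated, assuming each wireless node stamps its packets with a monotonically increasing sequence counter; the node IDs, field names, and helper function here are illustrative assumptions, not part of the deployment described in the abstract.

```python
from collections import defaultdict

def packet_loss_rate(records):
    """records: iterable of (node_id, seq_no) tuples in arrival order."""
    received = defaultdict(int)
    span = {}  # node_id -> [first_seq_seen, last_seq_seen]
    for node_id, seq_no in records:
        received[node_id] += 1
        if node_id not in span:
            span[node_id] = [seq_no, seq_no]
        else:
            span[node_id][1] = seq_no
    rates = {}
    for node_id, (first, last) in span.items():
        expected = last - first + 1          # packets the node should have delivered
        rates[node_id] = 1.0 - received[node_id] / expected
    return rates

# Example: node 3 sent sequence numbers 10-14 but 12 was never received,
# so 4 of 5 expected packets arrived -> 20% loss.
print(packet_loss_rate([(3, 10), (3, 11), (3, 13), (3, 14)]))
```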
There is growing interest in using ambient and wearable sensors for human activity recognition, fostered by several application domains and the wider availability of sensing technologies. This has triggered increasing attention on the development of robust machine learning techniques that exploit multimodal sensor setups. However, unlike in other fields, there are no established benchmarking problems for activity recognition: methods are usually tested on custom datasets acquired in very specific experimental setups, and data is seldom shared between research groups. Our goal is to address this issue by introducing a versatile human activity dataset recorded in a sensor-rich environment. This database was the basis of an open challenge on activity recognition. We report here the outcome of this challenge, as well as baseline performance using different classification techniques. We expect this benchmarking database will motivate other researchers to replicate and outperform the presented results, thus contributing to further advances in the state of the art of activity recognition methods.
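A minimal baseline sketch of the kind of classification pipeline such a benchmark can be used with, not the challenge's official evaluation code; the window length, step, feature choice, and classifier are illustrative assumptions, and the train/test variables are placeholders.

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def sliding_window_features(data, labels, win=64, step=32):
    """data: (samples x channels) array; labels: per-sample activity labels."""
    X, y = [], []
    for start in range(0, len(data) - win + 1, step):
        seg = data[start:start + win]
        # Per-channel mean and standard deviation as simple features.
        X.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
        # Majority label within the window.
        y.append(Counter(list(labels[start:start + win])).most_common(1)[0][0])
    return np.asarray(X), np.asarray(y)

# train_data/train_labels and test_data/test_labels stand for a split of the
# recorded sensor streams (placeholders, not actual dataset identifiers).
# X_tr, y_tr = sliding_window_features(train_data, train_labels)
# X_te, y_te = sliding_window_features(test_data, test_labels)
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
# print("baseline accuracy:", clf.score(X_te, y_te))
```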
Abstract-We describe error-related potentials generated while a human user monitors the performance of an external agent and discuss their use for a new type of brain-computer interaction. In this approach, single-trial detection of error-related electroencephalography (EEG) potentials is used to infer the optimal agent behavior by decreasing the probability of agent decisions that elicited such potentials. In contrast with traditional approaches, the user acts as a critic of an external autonomous system instead of continuously generating control commands. This sets up a cognitive monitoring loop in which the human directly provides information about the overall system performance that, in turn, can be used for its improvement. We show that it is possible to recognize erroneous and correct agent decisions from EEG (average recognition rates of 75.8% and 63.2%, respectively), and that the elicited signals are stable over long periods of time (from 50 to 600 days). Moreover, these performances make it possible to infer the optimal behavior of a simple agent in a brain-computer interaction paradigm after a few trials.
Index Terms-Brain-computer interface, electroencephalography (EEG), error-related potentials, reinforcement learning.
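A minimal sketch of the idea of using detected ErrPs as a critic signal: the probability of the agent decision that elicited an error potential is decreased and the distribution renormalized. The update rule, learning rate, and function name are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def update_action_probs(probs, action, errp_detected, lr=0.2):
    """probs: 1-D array of action probabilities for the current state."""
    probs = probs.copy()
    if errp_detected:
        probs[action] *= (1.0 - lr)   # penalize the decision that elicited an ErrP
    else:
        probs[action] *= (1.0 + lr)   # mildly reinforce decisions judged correct
    return probs / probs.sum()        # renormalize to a valid distribution

# Example: three possible moves, an ErrP is detected after the agent picks action 1.
p = np.array([1 / 3, 1 / 3, 1 / 3])
p = update_action_probs(p, action=1, errp_detected=True)
print(p)  # probability mass shifts away from the erroneous action
```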
The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices and in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. These include, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches.
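A sketch of the time-locked decoding the review refers to: EEG epochs are extracted around feedback events and classified single-trial with a linear classifier. The epoch window, sampling rate, variable names, and classifier settings are illustrative assumptions rather than any specific study's setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def epochs_around_events(eeg, event_samples, fs=256, tmin=0.0, tmax=0.6):
    """eeg: (samples x channels) array; returns one flattened epoch per event."""
    lo, hi = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[s + lo:s + hi].ravel() for s in event_samples])

# X = epochs_around_events(eeg, feedback_events)   # one row per feedback event
# y = error_vs_correct_labels                      # 1 = error, 0 = correct
# clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
# prediction = clf.predict(epochs_around_events(eeg, new_events))
```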
Future neuroprosthetic devices, in particular upper limb prostheses, will require decoding and executing not only the user's intended movement type, but also when the user intends to execute the movement. This work investigates the potential use of non-invasively recorded brain signals for detecting the time before a self-paced reaching movement is initiated, which could contribute to the design of practical upper limb neuroprosthetics. In particular, we show the detection of self-paced reaching movement intention in single trials using the readiness potential, an electroencephalography (EEG) slow cortical potential (SCP) computed in a narrow frequency range (0.1–1 Hz). Our experiments with 12 human volunteers, two of them stroke subjects, yield high detection rates prior to movement onset and low detection rates during the non-movement intention period. With the proposed approach, movement intention was detected around 500 ms before actual onset, which clearly matches previous literature on readiness potentials. Interestingly, the results obtained with one of the stroke subjects are coherent with those achieved in healthy subjects, with single-trial performance of up to 92% for the paretic arm. These results suggest that, apart from contributing to our understanding of voluntary motor control for designing more advanced neuroprostheses, our work could also have a direct impact on advancing robot-assisted neurorehabilitation.
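A minimal sketch of the described detection idea: band-pass the EEG in the narrow slow-cortical-potential range (0.1–1 Hz) and flag movement intention when the slow negative drift of the readiness potential crosses a threshold. The filter order, electrode, threshold value, and variable names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_movement_intention(eeg_channel, fs=256, threshold=-5e-6):
    """eeg_channel: 1-D array (volts) from a central electrode, e.g. Cz."""
    # Narrow band-pass in the slow-cortical-potential range described above.
    b, a = butter(2, [0.1, 1.0], btype="bandpass", fs=fs)
    scp = filtfilt(b, a, eeg_channel)
    # Flag samples where the slow negative drift crosses the threshold.
    return scp < threshold

# intention_mask = detect_movement_intention(eeg_cz, fs=512)
# first_detection = np.flatnonzero(intention_mask)[0] if intention_mask.any() else None
```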