In brain-computer interfaces (BCIs) based on electroencephalography (EEG), two distinct types of movement-related EEG patterns have been used to detect the brain's preparation for voluntary movements: a) event-related patterns in the time domain, named movement-related cortical potentials (MRCPs), and b) patterns in the frequency domain, named event-related desynchronization/synchronization (ERD/ERS). The applicability of these patterns in BCIs is often evaluated by classification performance. To this end, the known spatio-temporal differences in EEG activity are of interest, since they might influence the classification performance of the two patterns. In this paper, we compared classification performance based on ERD/ERS and MRCP while varying the time point of prediction as well as the electrode sites used. Empirical results were obtained from eight subjects performing voluntary right-arm movements. Results show that: a) classification based on MRCP is superior to ERD/ERS close to the movement onset, whereas the opposite holds farther away from the movement onset, and b) the performance maximum of MRCP is located at central electrodes, whereas for ERD/ERS it is at fronto-central electrodes. In summary, the results contribute to a better insight into the spatial and temporal differences between ERD/ERS and MRCP in terms of prediction performance. Seeland A., Manca L., Kirchner F. and Kirchner E. Spatio-temporal Comparison between ERD/ERS and MRCP-based Movement Prediction.
A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (e.g., presence of other road users, straight road segments) at a greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 different countries participated over 1.5 days. Validity and reliability were examined, both with and without embedding researcher-generated control questions via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, in a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy on items (i.e., relative to the ratings of a confederate researcher) of 91% was found with GTQs, compared to 78% without. A difference in bias was also found: without GTQs, external workers returned more false positives than with GTQs. In a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. Because checking the accuracy of every categorization at this scale would have been infeasible (and self-defeating), a random subset of 1,012 categorizations was validated and returned a similar level of accuracy (95%). In the small-scale Job, where full video segments were repeated in triplicate, the percentage of unanimous agreement on the items was significantly higher when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained with validated-as-correct ratings and 91% with non-validated ratings.
Because the video segments overlapped in full for the small-scale Job and only in part for the larger-scale Job, it should be noted that the reliability figures reported here may not be directly comparable. Nonetheless, both results are indicative of high levels of rating reliability. Overall, our results provide compelling evidence that CrowdFlower, via the use of GTQs, can yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene contents than when used without such a control mechanism. The ability to obtain such annotations in short periods of time presents a potentially powerful resource for driving research and driving automation development.