An audio-visual corpus has been collected to provide common material for speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as "place green at B 4 now". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and at low levels of stationary noise. The annotated corpus is available on the web for research use.
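As an illustration of the fixed sentence frame, the sketch below enumerates phrases of the form command-color-preposition-letter-digit-adverb implied by the example sentence above. The word lists themselves are assumptions for illustration, not the corpus's documented vocabularies.

```python
# Sketch of the six-slot sentence frame suggested by the example
# "place green at B 4 now". All word lists below are illustrative
# assumptions; only the frame itself is taken from the abstract.
import itertools

COMMANDS     = ["bin", "lay", "place", "set"]
COLORS       = ["blue", "green", "red", "white"]
PREPOSITIONS = ["at", "by", "in", "with"]
LETTERS      = list("ABCDEFGHIJKLMNOPQRSTUVXYZ")   # e.g. omitting W
DIGITS       = [str(d) for d in range(10)]
ADVERBS      = ["again", "now", "please", "soon"]

def sentences():
    """Yield every sentence in the syntactically identical frame."""
    for words in itertools.product(COMMANDS, COLORS, PREPOSITIONS,
                                   LETTERS, DIGITS, ADVERBS):
        yield " ".join(words)

# e.g. next(sentences()) -> "bin blue at A 0 again"
```

With word lists of this size the frame yields tens of thousands of distinct sentences, from which a 1000-sentence set per talker could be drawn.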
Although robot technology has been successfully used to help people with motor disabilities interact with their physical environment, it remains challenging for individuals with severe motor impairment, who lack the motor control needed to operate robots or prosthetic devices manually. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented using a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to control the robotic arm directly, without system calibration. Online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, yielding a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) successfully completed the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
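For readers unfamiliar with FBCCA, the following is a minimal sketch of the method in its standard form: decompose the EEG epoch into sub-bands, correlate each sub-band with sinusoidal references at each stimulus frequency and its harmonics via canonical correlation analysis (CCA), and combine the squared correlations with decaying weights. The sampling rate, band edges, harmonic count, and weight parameters are common choices from the SSVEP literature and are assumptions here, not the authors' exact settings.

```python
# Minimal FBCCA sketch for SSVEP frequency detection (parameters are
# assumed, not the authors' exact implementation).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250            # sampling rate in Hz (assumed)
N_HARMONICS = 5     # harmonics in the sinusoidal reference signals
N_SUBBANDS = 5      # number of filter-bank sub-bands
A, B = 1.25, 0.25   # sub-band weights w(n) = n**-A + B (common choice)

def reference_signals(freq, n_samples):
    """Sine/cosine references at freq and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, N_HARMONICS + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs).T            # (n_samples, 2*N_HARMONICS)

def subband(x, n):
    """n-th sub-band: band-pass from n*8 Hz to 88 Hz (assumed edges)."""
    b, a = butter(4, [8.0 * n, 88.0], btype="band", fs=FS)
    return filtfilt(b, a, x, axis=0)

def fbcca_score(eeg, freq):
    """Weighted sum of squared CCA correlations across sub-bands."""
    y = reference_signals(freq, eeg.shape[0])
    score = 0.0
    for n in range(1, N_SUBBANDS + 1):
        u, v = CCA(n_components=1).fit_transform(subband(eeg, n), y)
        r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        score += (n ** -A + B) * r ** 2
    return score

def classify(eeg, stim_freqs):
    """Pick the stimulus frequency whose references correlate best."""
    return max(stim_freqs, key=lambda f: fbcca_score(eeg, f))
```

Classification then reduces to scoring one 2 s EEG epoch against each of the 15 stimulus frequencies and taking the maximum, which is why no per-user calibration data are required.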
Automatic extraction of road curbs from uneven, unorganized, noisy, and massive 3D point clouds is a challenging task. Existing methods often project 3D point clouds onto 2D planes to extract curbs. However, the projection loses 3D information, which degrades detection performance. This paper presents a robust, accurate, and efficient method to extract road curbs from 3D mobile LiDAR point clouds. Our method consists of two steps: 1) extracting candidate curb points based on a novel energy function and 2) refining the candidate points using a least-cost-path model. We evaluated our method on large-scale mobile LiDAR point clouds of a residential area (16.7 GB, 300 million points) and an urban area (1.07 GB, 20 million points). Results indicate that the proposed method outperforms state-of-the-art methods in robustness, accuracy, and efficiency, achieving a completeness of 78.62% and a correctness of 83.29%. These experiments demonstrate that the proposed method is a promising solution for extracting road curbs from mobile LiDAR point clouds.
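The abstract does not give the energy terms or the path-cost definition, so the following is only a generic illustration of the second step: a least-cost (Dijkstra) path through a neighborhood graph of candidate curb points, with a hypothetical edge cost that penalizes planar distance and elevation jumps.

```python
# Illustrative least-cost-path refinement over curb candidate points.
# The edge cost (planar distance + weighted height jump) is a
# hypothetical stand-in for the paper's unspecified energy terms.
import heapq
import numpy as np

def least_cost_path(points, start, goal, k=8, w_elev=5.0):
    """Dijkstra over a k-nearest-neighbor graph of candidate points.

    points : (N, 3) array of candidate curb points (x, y, z)
    start, goal : indices into points
    """
    # Brute-force k-NN; a KD-tree would be needed at the paper's scales.
    d2 = ((points[:, None, :2] - points[None, :, :2]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]

    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        for v in nbrs[u]:
            # Hypothetical cost: planar distance plus weighted height jump.
            step = np.hypot(*(points[v, :2] - points[u, :2]))
            step += w_elev * abs(points[v, 2] - points[u, 2])
            nd = d + step
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))

    # Walk back from goal to start to recover the refined curb polyline.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The recovered path acts as a smoothed curb polyline, suppressing isolated false candidates that lie off the main curb line.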