While brain-computer interfaces (BCIs) can provide communication to people who are locked in, they suffer from a very low information transfer rate. Further, using a BCI requires sustained concentration, and continuous use can be tiring. The brain-controlled wheelchair (BCW) described in this paper aims to provide mobility to BCI users despite these limitations, in a safe and efficient way. Using a slow but reliable P300-based BCI, the user selects a destination from a list of predefined locations. While the wheelchair moves along virtual guiding paths that ensure smooth, safe, and predictable trajectories, the user can stop the wheelchair using a faster BCI. Experiments with nondisabled subjects demonstrated the efficiency of this strategy. Brain control was not affected while the wheelchair was in motion, and the BCW enabled users to reach various locations in less time and with significantly less control effort than other control strategies proposed in the literature.
Point clouds are often sparse and incomplete. Existing shape completion methods are incapable of generating fine object details or learning complex point distributions. To this end, we propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize detailed object shapes. By considering the local details of the partial input jointly with the global shape information, we can preserve the existing details in the incomplete point set and generate the missing parts with high fidelity. We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, in order to learn the complicated point distribution. Quantitative and qualitative experiments on different datasets show that our method achieves superior results compared to existing state-of-the-art approaches on the 3D point cloud completion task. Our source code is available at https://github.com/xiaogangw/cascaded-point-completion.git.
In this paper, we present a multivehicle cooperative driving system architecture based on cooperative perception, along with its experimental validation. Toward this goal, we first propose a multimodal cooperative perception system that provides see-through, lifted-seat, satellite, and all-around views to drivers. Using the extended range information from this system, we then realize cooperative driving through a see-through forward collision warning, overtaking/lane-changing assistance, and automated hidden obstacle avoidance. We demonstrate the capabilities and features of our system through real-world experiments using four vehicles on the road.
Autonomous driving requires 3D perception of vehicles and other objects in the environment. Most current methods support only 2D vehicle detection. This paper proposes a flexible pipeline that adopts any 2D detection network and fuses it with a 3D point cloud to generate 3D information, with minimal changes to the 2D detection network. To identify the 3D box, an effective model-fitting algorithm is developed based on generalised car models and score maps. A two-stage convolutional neural network (CNN) is proposed to refine the detected 3D box. This pipeline is tested on the KITTI dataset using two different 2D detection networks. The 3D detection results based on these two networks are similar, demonstrating the flexibility of the proposed pipeline. The results rank second among 3D detection algorithms, indicating its competence in 3D detection.