This paper presents a Brain-Computer Interface (BCI) controller for a semi-autonomous three-wheeled omnidirectional robot capable of processing commands in real time. The kinematic model of the omnidirectional robot and the software architecture of the overall hybrid system, including the motion-control algorithm, are presented. The main focus is the system design, acquisition of the electroencephalography (EEG) signal, recognition processing, and implementation. Signals are recorded and processed with OpenViBE. The preprocessed signals are cleaned with EEGLAB and used to train OpenViBE classifiers to accurately identify the expected signals produced by the users. Once a signal is identified, the controller converts it into one of the input commands {forward, left, right, rotate, stop}, which are issued in Python and delivered to the robot system. The robot has three degrees of freedom (DoF), allowing it to traverse its environment in any direction and orientation. The sensor system provides feedback that enables the semi-autonomous controller to avoid obstacles. Overall, this paper demonstrates the architecture of a hybrid control system for an omnidirectional robot using a BCI. The developed system integrates the EEG signal to control the motion of the robot, and the experimental results show the system's performance and its effectiveness in processing the user's EEG signals.
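A minimal sketch of the classifier-to-command step described above: the classifier's output label is mapped to one of the five Python commands delivered to the robot. All names here (COMMANDS, RobotLink, dispatch) are illustrative assumptions, not the authors' actual code.

```python
# Map an OpenViBE-style classifier label to one of the five motion commands.
# The command set {forward, left, right, rotate, stop} comes from the paper;
# everything else is an assumed, simplified interface.

COMMANDS = ("forward", "left", "right", "rotate", "stop")

class RobotLink:
    """Stand-in for the channel that delivers commands to the robot base."""
    def send(self, command: str) -> None:
        print(f"sending command: {command}")

def dispatch(label: int, link: RobotLink) -> str:
    """Translate a classifier label into a command; fall back to stop on anything unknown."""
    command = COMMANDS[label] if 0 <= label < len(COMMANDS) else "stop"
    link.send(command)
    return command

if __name__ == "__main__":
    link = RobotLink()
    for label in (0, 2, 4):  # e.g. forward, right, stop
        dispatch(label, link)
```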
The proposed assistive hybrid brain-computer interface (BCI) semi-autonomous mobile robotic arm demonstrates a design that is (1) adaptable, observing environmental changes with sensors and deploying alternate solutions, and (2) versatile, receiving commands from the user's brainwave signals through a noninvasive electroencephalogram cap. Composed of three integrated subsystems (a hybrid BCI controller, an omnidirectional mobile base, and a robotic arm), the proposed robot maps commands to the user's brainwaves associated with a set of specific physical or mental tasks. Sensors and camera systems enable both the mobile base and the arm to operate semi-autonomously. The mobile base's SLAM algorithm provides obstacle avoidance and path planning to help the robot maneuver safely. The robotic arm computes and executes the joint movements needed to pick up or drop off an object selected by the user via a brainwave-controlled cursor on a camera feed. Validation, testing, and implementation of the subsystems were conducted in Gazebo. Communication between the BCI controller and each subsystem was tested independently: a loop of prerecorded brainwave data associated with each specific task is used to verify that the mobile-base command is executed, and the same prerecorded file is used to move the robot-arm cursor and initiate a pick-up or drop-off action. A final system test was conducted in which the BCI controller input moves the cursor and selects a goal point. Successful virtual demonstrations of the assistive robotic arm show the feasibility of restoring movement capability and autonomy for a disabled user.
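A minimal sketch of the subsystem communication test described above, replaying prerecorded brainwave-derived labels to the mobile base and the arm as ROS messages. The topic names, label strings, and routing rule are illustrative assumptions; the abstract does not specify the authors' actual interfaces.

```python
# Replay a loop of prerecorded task labels and publish each one as a command,
# routing arm-related labels to the arm and everything else to the base.
import rospy
from std_msgs.msg import String

PRERECORDED_LABELS = ["forward", "forward", "cursor_left", "pick_up", "stop"]  # assumed labels
ARM_LABELS = ("cursor_left", "cursor_right", "pick_up", "drop_off")

def replay():
    rospy.init_node("bci_replay_test")
    base_pub = rospy.Publisher("/base/command", String, queue_size=10)  # assumed topic
    arm_pub = rospy.Publisher("/arm/command", String, queue_size=10)    # assumed topic
    rate = rospy.Rate(1)  # one command per second
    for label in PRERECORDED_LABELS:
        pub = arm_pub if label in ARM_LABELS else base_pub
        pub.publish(String(data=label))
        rate.sleep()

if __name__ == "__main__":
    replay()
```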
In the field of mobile robotics, Simultaneous Localization and Mapping (SLAM) is an algorithmic approach to the computational problem of building and updating a map of an environment while simultaneously keeping track of where the robot is within that environment. SLAM algorithms are important for autonomous mobile systems that must traverse an environment while avoiding obstacles and accurately reaching designated goal destinations. This paper presents the design of a SLAM-driven controller for a semi-autonomous omnidirectional mobile robot. Input to the system comes from a Brain-Computer Interface (BCI) in the form of simple driving commands or a goal destination chosen by the user. Because of the latency of reacting and responding in real time, the system must navigate safely under the last given command until it runs out of free space, reaches a goal destination, or receives a new command. The robotic system uses a three-wheeled robot kit with an upgraded sensor suite: an Intel RealSense Depth Camera D435 and two lidar sensors together provide a full 360° field of view. The SLAM algorithm and system controllers are developed using the Robot Operating System (ROS) and are tested in Gazebo, a physics simulation engine used for rapid prototyping. Testing validated controller performance under varying commands as well as long-distance path planning and obstacle avoidance. The system typically reached its goal destinations with a small error of around 3% or less, though the error increased with the number of commands the system processed.
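A minimal sketch of sending a BCI-selected goal destination to a ROS navigation stack, assuming the common move_base action interface; the abstract does not state which planner the authors use, so this interface is an assumption.

```python
# Send a goal pose in the map frame to move_base and wait for the outcome.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x: float, y: float):
    rospy.init_node("bci_goal_client")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    send_goal(2.0, 1.5)  # example goal point selected by the user
```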