Autonomous navigation in complex environments is a crucial task in time-sensitive scenarios such as disaster response or search and rescue. However, complex environments are difficult for autonomous platforms to navigate due to constrained narrow passages, unstable pathways with debris and obstacles, irregular geological structures, and poor lighting conditions. In this work, we propose a multimodal fusion approach to address the problem of autonomous navigation in complex environments such as collapsed cities or natural caves. We first simulate the complex environments in a physics-based simulation engine and collect a large-scale dataset for training. We then propose a Navigation Multimodal Fusion Network (NMFNet) with three branches that effectively handle three visual modalities: laser scans, RGB images, and point cloud data. Extensive experimental results show that NMFNet outperforms recent state-of-the-art methods by a fair margin while achieving real-time performance. We further show that the use of multiple modalities is essential for autonomous navigation in complex environments. Finally, we successfully deploy our network to both simulated and real mobile robots.

I. INTRODUCTION

Autonomous navigation is a long-standing field of robotics research that provides an essential capability for mobile robots to execute a series of tasks in the same environments where humans perform them every day. In general, the task of autonomous navigation is to control a robot to navigate around its environment without colliding with obstacles. Navigation is thus an elementary skill for intelligent agents, one that requires decision-making across a diverse range of scales in time and space. In practice, autonomous navigation is not a trivial task, since the robot must close the perception-control loop under uncertainty in order to achieve autonomy.

Recently, learning-based approaches (e.g., deep learning models) have demonstrated the ability to directly derive end-to-end policies that map raw sensor data to control commands [1], [2]. This end-to-end approach also reduces implementation complexity and effectively utilizes input data from different sensors (e.g., depth camera, laser), thereby reducing cost, power consumption, and computation time. A further advantage is that the end-to-end relationship between input data and control outputs can be captured by an arbitrarily complex nonlinear model (i.e., sensor to actuation), which has yielded surprisingly encouraging results in control problems such as lane following [3], autonomous driving [4], and Unmanned Aerial Vehicle (UAV) control [5]. However,
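The abstract above describes NMFNet only at a high level. As a rough illustration of a three-branch fusion policy of this kind, the following PyTorch sketch maps the three modalities (planar laser scan, RGB image, point cloud) to a single control output. All layer sizes, the late-fusion-by-concatenation scheme, and the single steering output are assumptions for illustration, not the authors' architecture.

# Minimal sketch of a three-branch multimodal fusion policy network.
# Layer sizes, fusion scheme, and the steering-only output are assumed
# for illustration; this is not the NMFNet architecture from the paper.
import torch
import torch.nn as nn

class ThreeBranchFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: 1D convolutions over a planar laser scan (assumed 360 ranges).
        self.laser_branch = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Branch 2: small 2D CNN over RGB images (assumed 3x96x96 input).
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch 3: PointNet-style shared MLP plus max pooling over the point cloud.
        self.point_branch = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        # Late fusion by concatenation, then regress a steering command.
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + 128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, laser, rgb, points):
        # laser: (B, 1, 360); rgb: (B, 3, H, W); points: (B, 3, N)
        feats = torch.cat(
            [self.laser_branch(laser),
             self.rgb_branch(rgb),
             self.point_branch(points)],
            dim=1,
        )
        return self.head(feats)

net = ThreeBranchFusionNet()
steer = net(torch.randn(2, 1, 360), torch.randn(2, 3, 96, 96), torch.randn(2, 3, 1024))
print(steer.shape)  # torch.Size([2, 1])

Training such a model end-to-end on recorded sensor-to-command pairs is one common way to close the perception-control loop described above; the design choice sketched here (independent per-modality encoders with late fusion) keeps each branch free to use an architecture suited to its input geometry.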
Cardiovascular diseases are the most common cause of death globally. Endovascular interventions, in combination with advanced imaging technologies, are promising approaches for minimally invasive diagnosis and therapy. More recently, teleoperated robotic platforms have targeted improved manipulation accuracy, stabilisation of instruments in the vasculature, and reduction of patient recovery times. However, the benefits of recent platforms are undermined by a lack of haptics and residual patient exposure to ionising radiation. The purpose of this research was to design, implement, and evaluate a novel endovascular robotic platform that accommodates emerging non-ionising magnetic resonance imaging (MRI). Methods: We proposed a pneumatically actuated MR-safe teleoperation platform to manipulate endovascular instrumentation remotely and to provide operators with haptic feedback for endovascular tasks. The platform's task performance was evaluated in an ex vivo cannulation study with clinical experts (N = 7) under fluoroscopic guidance and haptic assistance on abdominal and thoracic phantoms. Results: The study demonstrated that the robotic dexterity afforded by the pneumatic actuation concept enabled successful remote cannulation of different vascular anatomies, with success rates of 90%–100%. Compared to manual cannulation, slightly lower interaction forces between instrumentation and phantoms were measured for specific tasks. The maximum robotic interaction forces did not exceed 3 N. Conclusion: This research demonstrates a promising versatile robotic technology for remote manipulation of endovascular instrumentation in MR environments. Significance: The results pave the way for clinical translation, with device deployment to endovascular interventions using non-ionising real-time 3D MR guidance.
The increasing popularity of virtual reality (VR) in a wide spectrum of applications has generated sensitive personal data such as medical records and credit card information. While protecting such data from unauthorized access is critical, directly applying traditional authentication methods (e.g., PIN) through new VR input modalities such as remote controllers and head navigation raises security issues: the authentication action can be purposefully observed by attackers to infer the authentication input. Unlike other mobile devices, VR presents an immersive experience via a head-mounted display (HMD) that fully covers the users' eye area without public exposure. Leveraging this feature, we explore the human visual system (HVS) as a novel biometric authentication tailored for VR platforms. While previous works used eye globe movement (gaze) to authenticate smartphones or PCs, they suffer from a high error rate and low stability, since eye gaze is highly dependent on cognitive state. In this paper, we explore the HVS as a whole, considering not just eye globe movement but also the eyelid, extraocular muscles, cells, and surrounding nerves in the HVS. Exploiting the HVS biostructure and unique HVS features triggered by immersive VR content can enhance authentication stability. To this end, we present OcuLock, an HVS-based system for reliable and unobservable VR HMD authentication. OcuLock is empowered by an electrooculography (EOG) based HVS sensing framework and a record-comparison driven authentication scheme. Experiments with 70 subjects show that OcuLock is resistant to common types of attacks, such as impersonation attacks and statistical attacks, with Equal Error Rates as low as 3.55% and 4.97%, respectively. More importantly, OcuLock maintains stable performance over a 2-month period and is preferred by users when compared to other potential approaches.
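For context on the reported error rates: the Equal Error Rate (EER) is the operating point at which the false accept rate (FAR) equals the false reject rate (FRR). The sketch below shows one standard way to estimate the EER from genuine and impostor match scores; the score distributions here are synthetic placeholders, not OcuLock data.

# Minimal sketch of Equal Error Rate (EER) estimation from match scores.
# The score arrays below are synthetic placeholders, not OcuLock data.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return (eer, threshold) at the point where FAR and FRR are closest.

    Higher scores are assumed to mean a better match.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, thresholds[0], 1.0)  # (|FAR - FRR|, threshold, eer)
    for thr in thresholds:
        far = np.mean(impostor >= thr)  # false accept rate at this threshold
        frr = np.mean(genuine < thr)    # false reject rate at this threshold
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, thr, (far + frr) / 2)
    return best[2], best[1]

rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.3, 500)   # synthetic "same user" scores
impostor = rng.normal(0.0, 0.3, 500)  # synthetic "different user" scores
eer, thr = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.2%} at threshold {thr:.2f}")

A lower EER means the two score distributions overlap less; an EER of 3.55% as reported above would indicate that, at the balanced operating point, both attackers accepted and legitimate users rejected occur at roughly that rate.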