People who suffer from hearing impairment caused by illness, age or extremely noisy environments are constantly in danger of being hit or knocked down by fast-moving objects approaching from behind when they have no companion or augmented sensory system to warn them. In this paper, we propose the General Moving Object Alarm System (GMOAS), a system focused on aiding the safe mobility of people under these circumstances. The GMOAS is a wearable haptic device that consists of two main subsystems: (i) a moving object monitoring subsystem that uses laser range data to detect and track approaching objects, and (ii) an alarm subsystem that warns the user of potentially dangerous approaching objects by triggering tactile vibrations on an "alarm necklace". For moving object monitoring, we propose a simple yet efficient solution to monitor the approaching behavior of objects. In contrast to previous work on motion detection and tracking, we are not interested in specific objects but in any type of approaching object that might harm the user. To this end, we define a boundary in the laser range data within which objects are monitored. Within this boundary, a fan-shaped grid is constructed to obtain an evenly distributed spatial partitioning of the data. These partitions are efficiently clustered into continuous objects, which are then tracked through time using an object association algorithm based on updating a deviation matrix that represents the angle, distance and size variations of the objects. The speed of the tracked objects is monitored throughout the algorithm. When the speed of an approaching object surpasses the safety threshold, the alarm necklace is triggered, indicating the approaching direction of the fast-moving object. The alarm necklace is equipped with three motors that can indicate five directions with respect to the user: left, back, right, left-back and right-back. We performed three types of outdoor experiments (object passing, approaching and crossing) that empirically verified the effectiveness of the proposed algorithm. Furthermore, we analyzed the time and direction response to the neck vibrations. The statistical analysis (including hypothesis testing) suggests that the chosen alarm necklace can provide a rapid indication that enables a quick human response.
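The monitoring pipeline lends itself to a compact implementation. The sketch below is only an illustration of the steps described above; the grid resolution, boundary, speed threshold and the nearest-neighbour association rule are assumed placeholders, not the authors' code or parameters.

```python
# Minimal sketch of the monitoring pipeline: bin laser returns into a fan-shaped
# grid, cluster occupied cells into objects, and raise an alarm direction when an
# associated object approaches faster than a safety threshold. All constants and
# the association rule are assumptions, not the paper's values.
import math
from dataclasses import dataclass

N_ANGLE_BINS = 36          # angular resolution of the fan-shaped grid (assumed)
N_RANGE_BINS = 10          # radial resolution inside the monitored boundary (assumed)
BOUNDARY_M = 5.0           # monitoring boundary in metres (assumed)
SPEED_THRESHOLD_MPS = 1.5  # safety threshold on approach speed (assumed)

@dataclass
class TrackedObject:
    angle: float     # mean bearing of the cluster (rad)
    distance: float  # mean range of the cluster (m)
    size: int        # number of laser returns in the cluster

def fan_grid(points):
    """Partition (angle, range) laser returns into an evenly distributed fan-shaped grid."""
    grid = {}
    for ang, rng in points:
        if rng > BOUNDARY_M:
            continue  # outside the monitored boundary
        a = int((ang + math.pi) / (2 * math.pi) * N_ANGLE_BINS) % N_ANGLE_BINS
        r = min(int(rng / BOUNDARY_M * N_RANGE_BINS), N_RANGE_BINS - 1)
        grid.setdefault((a, r), []).append((ang, rng))
    return grid

def cluster(grid):
    """Merge occupied cells in adjacent angular sectors into continuous objects."""
    objects, current = [], []
    for a in range(N_ANGLE_BINS):
        occupied = [p for r in range(N_RANGE_BINS) for p in grid.get((a, r), [])]
        if occupied:
            current.extend(occupied)
        elif current:
            objects.append(current)
            current = []
    if current:
        objects.append(current)
    return [TrackedObject(angle=sum(p[0] for p in o) / len(o),
                          distance=sum(p[1] for p in o) / len(o),
                          size=len(o)) for o in objects]

def check_alarm(prev, curr, dt):
    """Associate objects between frames and return the bearing of a fast approacher."""
    for c in curr:
        # Nearest object by angle/distance/size deviation (a simple stand-in for
        # the deviation-matrix association described in the paper).
        match = min(prev, key=lambda p: abs(p.angle - c.angle)
                    + abs(p.distance - c.distance) + abs(p.size - c.size), default=None)
        if match is None:
            continue
        approach_speed = (match.distance - c.distance) / dt
        if approach_speed > SPEED_THRESHOLD_MPS:
            return c.angle  # direction to be signalled on the alarm necklace
    return None
```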
Coordination is essential in the design of dynamic control strategies for multi-arm robotic systems. Given the complexity of the task and the dexterity of the system, coordination constraints can emerge from different levels of planning and control. Primarily, one must consider task-space coordination, where the robots must coordinate with each other, with an object or with a target of interest. Coordination is also necessary in joint space, as the robots should avoid self-collisions at all times. We provide such joint-space coordination by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic program and solved in real time. The space of free motion is modeled through a sparse non-linear kernel classification method in a data-driven learning approach. Moreover, we provide multi-arm task-space coordination for both synchronous and asynchronous behaviors. We define a synchronous behavior as one in which the robot arms must coordinate with each other and with a moving object such that they reach for it in synchrony. In contrast, an asynchronous behavior allows each robot to perform independent point-to-point reaching motions. To transition smoothly from asynchronous to synchronous behaviors and vice versa, we introduce the notion of synchronization allocation. We show how this allocation can be controlled through an external variable, such as the location of the object to be manipulated. Both behaviors and their synchronization allocation are encoded in a single dynamical system. We validate our framework on a dual-arm robotic system and demonstrate that the robots can re-synchronize and adapt the motion of each arm while avoiding self-collisions within milliseconds. This speed of control is exploited to intercept fast-moving objects whose motion cannot be predicted accurately.
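A rough sense of the centralized IK step can be given as a quadratic program. The sketch below is an assumption-laden illustration, not the paper's exact formulation: the cvxpy/OSQP setup, the barrier-style inequality on the learned free-space classifier Gamma(q), the damping weight and the velocity limits are all choices made here.

```python
# Hedged sketch of a centralized multi-arm IK step as a quadratic program with a
# learned self-collision-avoidance constraint. Names, constraint form and solver
# are assumptions for illustration only.
import numpy as np
import cvxpy as cp

def centralized_ik_step(J, xdot_des, gamma_val, gamma_grad, qdot_max, damping=1e-3):
    """
    J          : stacked task Jacobian of all arms, shape (m, n)
    xdot_des   : stacked desired task-space velocities, shape (m,)
    gamma_val  : value of the learned free-space classifier Gamma(q) (> 0 means collision-free)
    gamma_grad : gradient dGamma/dq at the current joints, shape (n,)
    qdot_max   : joint-velocity limits, shape (n,)
    Returns joint velocities for the whole multi-arm system.
    """
    n = J.shape[1]
    qdot = cp.Variable(n)
    # Track the desired task-space velocities with a small damping term on qdot.
    objective = cp.Minimize(cp.sum_squares(J @ qdot - xdot_des)
                            + damping * cp.sum_squares(qdot))
    constraints = [
        # Keep Gamma(q) from decreasing too fast, i.e. stay inside the learned
        # collision-free region (a control-barrier-style stand-in constraint).
        gamma_grad @ qdot >= -gamma_val,
        qdot <= qdot_max,
        qdot >= -qdot_max,
    ]
    cp.Problem(objective, constraints).solve(solver=cp.OSQP)
    return qdot.value
```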
This paper introduces a hierarchical framework that is capable of learning complex sequential tasks from human demonstrations through kinesthetic teaching, with minimal human intervention. Via an automatic task segmentation and action primitive discovery algorithm, we are able to learn both the high-level task decomposition (into action primitives) and the low-level motion parameterizations for each action, in a fully integrated framework. In order to reach the desired task goal, we encode a task metric based on the evolution of the manipulated object during demonstration, and use it to sequence and parameterize each action primitive. We illustrate this framework with a pizza dough rolling task and show how the learned hierarchical knowledge is directly used for autonomous robot execution.
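As a rough illustration of how a task metric can drive the sequencing and parameterization of action primitives, the sketch below uses assumed primitive names, learned mean effects and a toy object-feature vector (e.g. dough area and elongation); it is not the paper's algorithm, only the general idea of selecting the primitive that best reduces the distance to the task goal.

```python
# Illustrative sketch of metric-driven primitive sequencing; primitive names,
# effects and features are assumptions, not the paper's learned model.
import numpy as np

PRIMITIVE_EFFECTS = {
    # assumed mean effect of each primitive on the object features [area, elongation]
    "reach":   np.array([0.00,  0.00]),
    "roll":    np.array([0.08,  0.05]),
    "reshape": np.array([0.01, -0.06]),
}

def task_metric(obj_state, goal_state):
    """Distance of the manipulated object's features from the task goal."""
    return np.linalg.norm(goal_state - obj_state)

def next_primitive(obj_state, goal_state, tol=0.02):
    """Pick the primitive whose predicted effect most reduces the task metric."""
    if task_metric(obj_state, goal_state) < tol:
        return None  # task goal reached
    return min(PRIMITIVE_EFFECTS,
               key=lambda p: task_metric(obj_state + PRIMITIVE_EFFECTS[p], goal_state))
```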
With the introduction of new depth-sensing technologies, interactive hand-gesture devices (such as smart televisions and displays) have been rapidly emerging. However, given the lack of a common vocabulary, most hand-gesture control commands are device-specific, burdening the user with learning a different vocabulary for each device. In order for hand gestures to become a natural means of communication between users and interactive devices, a standardized interactive hand-gesture vocabulary is necessary. Recently, researchers have approached this issue by conducting studies that elicit gesture vocabularies based on users' preferences. Nonetheless, a universal vocabulary has yet to be proposed. In this paper, a thorough design methodology for achieving such a universal hand-gesture vocabulary is presented. The methodology is derived from the work of Wobbrock et al. and includes four steps: 1) a preliminary survey eliciting users' attitudes; 2) a broader user survey to construct the universal vocabulary from the results of the preliminary survey; 3) an evaluation test to study the implementation of the vocabulary; and 4) a memory test to analyze the memorability of the vocabulary. The vocabulary that emerged from this methodology achieves an agreement score exceeding those of existing studies. Moreover, the results of the memory test show that, within a 15-min training session, the average accuracy of the proposed vocabulary is 90.71%. Although the proposed gesture vocabulary is smaller than those of similar works, it provides the same functionality, is easier to remember, and can be integrated with smart TVs, interactive digital displays, and so on.
Index Terms: Hand-gesture interaction, gesture elicitation study, preferences and attitudes, gesture set, human-computer interaction.
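For context, agreement scores in gesture elicitation studies are typically computed following Wobbrock et al.; the sketch below shows one common variant (the exact formula used in this paper is not stated here), where the proposals for each referent are grouped by identical gestures and the squared group proportions are summed, then averaged over referents.

```python
# Sketch of a Wobbrock-style agreement-score computation; the variant actually
# used in the paper may differ. Referent and gesture names are toy examples.
from collections import Counter

def agreement_score(proposals_per_referent):
    """proposals_per_referent maps each referent (command) to the gestures proposed for it."""
    per_referent = []
    for gestures in proposals_per_referent.values():
        total = len(gestures)
        groups = Counter(gestures).values()
        per_referent.append(sum((size / total) ** 2 for size in groups))
    return sum(per_referent) / len(per_referent)

# Toy example with three referents and four participants each
print(agreement_score({
    "volume_up":    ["swipe_up", "swipe_up", "swipe_up", "point_up"],
    "mute":         ["fist", "palm", "fist", "fist"],
    "next_channel": ["swipe_right"] * 4,
}))
```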