From the coffee mug we raise to our mouths to our fingers finding the correct keys on a keyboard, hand movements are ever-present. Using data gloves, optical tracking, or accelerometers, much of the information in these habitual interactions can be captured, showing that they not only dominate everyday life but can also be quite complex [4]. Designing more interactive objects, with hand gestures as one of the main controls, is an area of increasing research and commercial interest. While the degrees of freedom (DOF) of the human hand allow the "flexibility to perform skilled finger movements" [5] in real life, they pose technological challenges for digital interfaces, as most typical interaction design processes are not yet well adapted to high-dimensional input [6]. The goal when prototyping such controls is to find gestures that are easy for the user to recall and perform reliably, and easy for the computer to distinguish from one another. However, gestures are complex, and human movement exhibits variability [7] that must be understood and incorporated into the design process. Confronting designers with raw high-dimensional data and asking them to design robust, usable interactive systems has proven challenging. This has limited the impact of novel sensing systems that capture more of the complexity of real-life interactions, since the datasets such systems produce possess higher dimensionalities, making them harder for designers to understand and utilize.

Lu et al., for example, are developing a system that captures the full 25 dimensions of hand movement with the help of multiple inertial sensors [8]. Further, large amounts of high-dimensional data can be generated by deploying multiple sensors across physical objects, as in the WebBike project, where the researchers equipped e-bikes with sensors [9].

The highly complex datasets created by these novel sensing systems require a layer of simplification before researchers and designers can comprehend them and apply design processes appropriately. We explore the role of low-dimensional embeddings in conceptualizing and visualizing high-dimensional movement data by answering the following research question: Can autoencoder-based dimensionality reduction simplify the data-driven design process?

Our findings suggest that a low-dimensional representation of complex movement data is more convenient to work with than the high-dimensional data itself, which builds a basis for analyzing and visualizing hand movements and interactions among team members during the design process. Further, we define
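To make the research question concrete, the following is a minimal sketch of the kind of autoencoder-based dimensionality reduction it refers to: compressing 25-dimensional hand-movement frames (the dimensionality cited for Lu et al.'s system [8]) into a two-dimensional embedding. This is not the authors' implementation; the layer sizes, the 2-D bottleneck, the optimizer settings, and the synthetic training data are all illustrative assumptions.

```python
# Minimal sketch of autoencoder-based dimensionality reduction for
# hand movement data. Architecture and data are illustrative assumptions,
# not the implementation described in the paper.
import torch
import torch.nn as nn

class HandPoseAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 25, latent_dim: int = 2):
        super().__init__()
        # Encoder compresses a 25-DOF hand pose into a low-D embedding.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 16), nn.ReLU(),
            nn.Linear(16, latent_dim),
        )
        # Decoder reconstructs the original pose from the embedding.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.ReLU(),
            nn.Linear(16, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Placeholder data: 1,000 frames of 25-dimensional hand movement. In
# practice these would come from data gloves, optical tracking, or
# inertial sensors.
poses = torch.randn(1000, 25)

model = HandPoseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train the autoencoder to reconstruct its input; the bottleneck forces
# it to learn a compact representation of the movement data.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(poses), poses)
    loss.backward()
    optimizer.step()

# The learned 2-D embedding of every frame, ready for plotting.
with torch.no_grad():
    embedding = model.encoder(poses)  # shape: (1000, 2)
```

In a design workflow of the kind described above, such a 2-D embedding could be plotted so that designers can inspect gesture clusters and the variability of human movement at a glance, rather than working with the raw high-dimensional sensor streams directly.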