MelodicBrush is a novel system that connects two ancient art forms: Chinese ink-brush calligraphy and Chinese music. Our system uses vision-based techniques to create a digitized ink-brush calligraphic writing surface with enhanced interaction functionalities. The music generation combines cross-modal stroke-note mapping and statistical language modeling techniques into a hybrid model that generates music as real-time auditory feedback on the user's calligraphic strokes. Our system is, in effect, a new cross-modal musical system that endows the ancient art of calligraphy writing with a novel auditory representation, providing users with a natural and novel artistic experience. Experimental evaluations with real users suggest that MelodicBrush is intuitive and realistic, and can also be easily used to exercise creativity and support art generation.
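As one way to picture the hybrid stroke-note mapping described above, the following Python sketch maps coarse stroke features (direction and speed) to candidate pitches on a pentatonic scale and weights the choice with a toy bigram model. Everything here — the feature set, the scale, the bigram counts, and the function names such as `stroke_to_candidates` and `choose_note` — is a hypothetical illustration, not the MelodicBrush implementation.

```python
# Illustrative sketch only: the paper's actual feature set, scale, and
# language model are not specified here; all names below are hypothetical.
import math
import random

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, MIDI pitches (assumption)

# Toy bigram counts standing in for a statistical model trained on a music corpus.
BIGRAM_COUNTS = {
    (60, 62): 4, (62, 64): 3, (64, 67): 5, (67, 69): 2, (69, 60): 1,
}

def stroke_to_candidates(dx, dy, speed):
    """Map coarse stroke features to candidate pitches.

    Upward strokes favour higher pitches, downward strokes lower ones;
    speed is reused later as a loudness proxy.
    """
    angle = math.atan2(-dy, dx)  # screen y grows downward
    idx = int((angle + math.pi) / (2 * math.pi) * len(PENTATONIC)) % len(PENTATONIC)
    return [PENTATONIC[idx], PENTATONIC[(idx + 1) % len(PENTATONIC)]]

def choose_note(prev_note, candidates):
    """Hybrid choice: weight stroke-derived candidates by bigram plausibility."""
    weights = [1 + BIGRAM_COUNTS.get((prev_note, c), 0) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

prev = 60
for dx, dy, speed in [(10, -4, 0.8), (-3, 12, 0.3), (7, 7, 0.5)]:
    note = choose_note(prev, stroke_to_candidates(dx, dy, speed))
    print(f"stroke -> MIDI note {note}, velocity {int(40 + 80 * speed)}")
    prev = note
```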
This paper presents KID, an interactive app for smart devices designed to help young children learn and practice writing Chinese characters. It relies on pen dynamics to extract strokes and match the written character to the intended one. Stroke orientation is also analyzed for ordering and spatial-alignment features that pinpoint common errors. Pictorial visual feedback is then provided to motivate children and arouse their interest. We iterated the prototype design and implementation based on feedback collected from focus-group interviews, in which the system was greeted with positive comments.
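The following minimal sketch shows one way a system could check stroke order from pen traces, in the spirit of the ordering analysis mentioned above. The stroke classes, the reference-order table, and the helpers `classify_stroke` and `check_order` are assumptions for illustration, not KID's actual method.

```python
# Minimal sketch, not the KID implementation: stroke classes and the reference
# order for the character are illustrative assumptions.
from difflib import SequenceMatcher

REFERENCE_ORDER = {
    "十": ["horizontal", "vertical"],  # héng then shù (assumed encoding)
}

def classify_stroke(points):
    """Crudely classify a pen trace by its dominant direction."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return "horizontal" if abs(x1 - x0) >= abs(y1 - y0) else "vertical"

def check_order(char, traces):
    """Compare the written stroke sequence against the reference order."""
    written = [classify_stroke(t) for t in traces]
    expected = REFERENCE_ORDER[char]
    ratio = SequenceMatcher(None, written, expected).ratio()
    errors = [i for i, (w, e) in enumerate(zip(written, expected)) if w != e]
    return ratio, errors

# A child writes the vertical stroke before the horizontal one.
traces = [[(5, 0), (5, 10)], [(0, 5), (10, 5)]]
similarity, wrong_positions = check_order("十", traces)
print(f"similarity={similarity:.2f}, misordered strokes at {wrong_positions}")
```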
MelodicBrush is a novel cross-modal musical system that connects two ancient art forms: Chinese ink-brush calligraphy and Chinese music. Our system endows the process of calligraphy writing with a novel auditory representation in a natural and intuitive manner, creating a new artistic experience. The writing effect is simulated as though the user were writing on an infinitely large piece of paper viewed through a viewport. The real-time music generation is motivated by principles of metaphoric congruence and statistical music modeling.
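The "infinitely large paper seen through a viewport" idea can be illustrated with a small coordinate-translation sketch. The `Viewport` class and its coordinate conventions below are hypothetical, not the paper's implementation.

```python
# Minimal sketch of the "infinite paper seen through a viewport" idea; the
# coordinate conventions are assumptions, not the paper's implementation.

class Viewport:
    """Maps screen coordinates onto an unbounded virtual canvas by tracking
    how far the viewport has been panned."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.offset_x, self.offset_y = 0.0, 0.0  # canvas position of the top-left corner

    def pan(self, dx, dy):
        """Scroll the paper underneath the fixed viewport."""
        self.offset_x += dx
        self.offset_y += dy

    def to_canvas(self, sx, sy):
        """Convert a screen-space pen position into canvas space, so strokes
        persist even after the paper is scrolled."""
        return self.offset_x + sx, self.offset_y + sy

vp = Viewport(800, 600)
print(vp.to_canvas(100, 100))  # (100.0, 100.0)
vp.pan(500, 0)                 # scroll the paper
print(vp.to_canvas(100, 100))  # (600.0, 100.0): same screen spot, new canvas spot
```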
We present i*Chameleon, a configurable and extensible multimodal platform for developing highly interactive applications. The platform leverages a principled and comprehensive development cycle to systematically capture multimodal interaction artifacts. Importantly, by introducing the MVC architectural pattern, it enforces separation of concerns, enabling collaboration among device engineers, programmers, modality designers, and interaction designers who work on different aspects of human-computer interaction and programming. Their development efforts are combined, integrated, and compiled by the i*Chameleon kernel to derive the multimodal interactive application. The i*Chameleon platform sets itself apart from previous work by advocating a software development approach that leverages the MVC architectural pattern to ease development through a division of responsibilities among engineers and HCI designers. To validate the usefulness of i*Chameleon, we describe several application case studies that demonstrate the ease of developing multimodal applications through systematic integration of the design models described above.
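To make the separation-of-concerns argument concrete, here is a hedged Python sketch of how an MVC split can isolate the interaction model from modality-specific controllers and views. The class names (`InteractionModel`, `ConsoleView`, `GestureController`) and the pinch-to-zoom example are assumptions for illustration; they are not the i*Chameleon API.

```python
# Hedged sketch of the separation-of-concerns idea, not the i*Chameleon API:
# class names and the event format are assumptions for illustration.

class InteractionModel:
    """Model: canonical interaction state, independent of any device."""
    def __init__(self):
        self.zoom = 1.0
        self.observers = []

    def set_zoom(self, value):
        self.zoom = max(0.1, value)
        for view in self.observers:
            view.render(self)

class ConsoleView:
    """View: one possible presentation; others can be swapped in freely."""
    def render(self, model):
        print(f"[view] zoom level is now {model.zoom:.2f}")

class GestureController:
    """Controller: translates a modality-specific event into model updates.
    A device engineer can add a VoiceController or WiimoteController without
    touching the model or view code."""
    def __init__(self, model):
        self.model = model

    def on_pinch(self, scale_delta):
        self.model.set_zoom(self.model.zoom * scale_delta)

model = InteractionModel()
model.observers.append(ConsoleView())
GestureController(model).on_pinch(1.5)  # e.g. a two-finger pinch-out gesture
```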
This paper introduces the Multi-modal Interface Framework (MIF), a system that allows developers to easily integrate interface devices of multiple modalities, such as voice, hand and finger gestures, and tangible devices such as game controllers, into a multi-modal input system. The integrated devices can then be used to control practically any computer application. The advantages offered by MIF are ease of use, flexibility, and support for collaboration. Its design has been validated by using it to integrate finger gestures, voice, a Wii Remote, and an iPhone to control applications such as Google Earth and Windows Media Player.
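One plausible shape for such device integration is an event hub that normalizes heterogeneous inputs into a shared action vocabulary and dispatches them to application commands, sketched below. The event format and class names (`InputEvent`, `MultimodalHub`) are assumptions, not MIF's actual protocol.

```python
# Illustrative sketch only: MIF's real wire protocol and device adapters are
# not given in the abstract, so the event format here is an assumption.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InputEvent:
    modality: str  # e.g. "voice", "gesture", "wiimote", "iphone"
    action: str    # normalized action name, e.g. "zoom_in", "play"

class MultimodalHub:
    """Collects events from heterogeneous devices and maps them to app commands."""
    def __init__(self):
        self.bindings: Dict[str, List[Callable[[], None]]] = {}

    def bind(self, action: str, command: Callable[[], None]) -> None:
        self.bindings.setdefault(action, []).append(command)

    def dispatch(self, event: InputEvent) -> None:
        for command in self.bindings.get(event.action, []):
            command()

hub = MultimodalHub()
hub.bind("zoom_in", lambda: print("Google Earth: zooming in"))
hub.bind("play", lambda: print("Media Player: playing"))

# Different modalities normalize to the same action vocabulary.
hub.dispatch(InputEvent("voice", "zoom_in"))
hub.dispatch(InputEvent("wiimote", "play"))
```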