The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy for humans to train. We built an autonomous animated dog that can be trained with a technique used to train real dogs called "clicker training". Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space. A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution for each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.
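The clicker-training idea described above can be illustrated with a minimal tabular reinforcement-learning sketch. This is not the paper's actual architecture (which integrates state, action, and state-action space discovery); it is only a toy, assuming made-up cue and action names, in which the trainer's "click" acts as an immediate reward marking the desired cue-action pairing:

```python
import random

def train(episodes=500, alpha=0.5, epsilon=0.1):
    """Toy single-step Q-learning loop: the character sees an acoustic
    cue, tries an action, and receives a 'click' (reward 1) only when
    the action matches the cue.  Over many episodes the clicked
    cue-action pairs accumulate the highest Q-values."""
    states, actions = ["cue_sit", "cue_down"], ["sit", "down"]
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(s, x)])
        r = 1.0 if s == "cue_" + a else 0.0   # the trainer's "click"
        q[(s, a)] += alpha * (r - q[(s, a)])  # single-step update
    return q
```

After training, the Q-value of the clicked pairing (e.g., "sit" in response to the sit cue) dominates the alternatives, which is the essence of how a clicker shapes behavior.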
We have developed a very simple retrofit to a large display surface that enables knocks or taps to be located and characterized (e.g., determining the type of hit, such as a metallic tap, knuckle tap, or bash, and its intensity) in real time. We do this by analyzing the waveforms captured by four piezoelectric transducers (one mounted in each corner of the surface) and a dynamic microphone (mounted anywhere on the glass) in a digital signal processor. Differential timing yields the position, frequency content infers the kind of hit, and peak amplitude reflects the intensity. This technique was first explored in a collaboration between Paradiso and Ishii [Ishii et al. 1999] to make an interactive ping-pong table. Moving to glass display surfaces introduced significant problems, however: knuckle taps are low-frequency impulses that vary considerably from hit to hit, and the bending waves propagating through the glass are highly dispersive. A heuristically guided cross-correlation algorithm [Paradiso et al. 2002] was developed to counteract these effects and provide spatial measurements that can resolve knuckle impacts to within σ = 2-4 cm (depending on the material thickness) across a 2-meter sheet of glass. As the requisite hardware is minimal, and everything is mounted on the inside sheet of glass, this is a very simple retrofit to, for example, store window displays, ushering in an entirely new concept of interactive window browsing, where passers-by can interact with information on the store's products by simply knocking.
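The differential-timing step can be sketched as a standard cross-correlation time-delay estimate between a pair of sensor waveforms. This is only a minimal illustration, assuming clean synthetic signals and a hypothetical sample rate; the system described above additionally uses heuristics to cope with dispersion and hit-to-hit variability:

```python
import numpy as np

def estimate_delay(ref, sig, fs):
    """Estimate the arrival-time difference (seconds) between two sensor
    waveforms.  The lag that maximizes the cross-correlation is taken
    as the differential delay of `sig` relative to `ref`."""
    corr = np.correlate(sig, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / fs

# Toy example: the same decaying impulse arrives 5 samples later at
# the second sensor.  fs is an illustrative sample rate, not a spec.
fs = 48_000
impulse = np.exp(-np.arange(64) / 8.0)
a = np.concatenate([np.zeros(10), impulse, np.zeros(30)])
b = np.concatenate([np.zeros(15), impulse, np.zeros(25)])
delay = estimate_delay(a, b, fs)  # ≈ 5 / 48000 s
```

With four corner-mounted transducers, the pairwise delays constrain the impact position, which can then be solved for by multilateration.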
We have explored this concept in retail, where one of our trackers was installed on the main display window of an American Greetings store near Rockefeller Center in Manhattan for this year's Christmas-Valentine's Day season (right figure), and in museums (e.g., left figure, which shows the system running at the Ars Electronica Center in Linz, Austria). We plan to augment the contact interaction of the knocking system with a noncontact system that detects the presence and activity of participants in front of the screen. Although we have used motion radar systems before for measuring user activity (e.g., [Paradiso et al. 1997]), we are now developing a small, low-power ranging radar system that can determine the distance to users as they approach the screen. Unlike IR, vision, or sonar systems, this technique isn't sensitive to lighting, clothing, or clutter. It can also sense through opaque, nonconductive material, such as plastic, wood, or wallboard. We plan to place 2-3 radar antennae behind the frosted glass display and outside of the light cone of the projector. This will allow the system to zone the users as they approach, adjusting the audiovisual content accordingly. The participants' activity will result in accompanying music and graphics. Free gesture will produce gentle sounds and amorphous cloud-like graphics that become more frenetic with decreasing range. Contact interaction will materialize discrete audiovisual events. In particular, a non-representational, nonphotorealistic rendering style will be used that co...
The lack of development environments for interdisciplinary research conducted on large-scale datasets hampers research at every stage. Projects incur large startup costs as disparate infrastructure is assembled; experimentation slows when software components and environment are mismatched for specific research tasks; and findings are disseminated in forms that are hard to examine, learn from, and reuse. Behind these problems is a common cause: the lack of good tools. When large, heterogeneous, and distributed data is added to the equation, further frustration ensues. As a result, using existing platforms, the programmers of 21st-century interactive visualizations are reduced to working in the same fashion, with the same tools, as 20th-century database programmers. Our contribution is to bring the tools of digital artists to bear on the aforementioned data analysis and visualization challenges. Here we report on the current state of progress in adapting Field for large-scale, web-based scientific data analysis and visualization, with an emphasis on Linked Open Data [1] and especially the current data hosted by RPI [2].