The ability of an agent to adapt online in order to interact better with another agent is a difficult and important problem. The problem becomes even harder when the other agent is a human, because humans learn quickly and behave nondeterministically. In this paper, we present a novel method by which an agent can incrementally learn to predict the actions of another agent (even a human), and thereby learn to interact with that agent more effectively. We take a case‐based approach, in which the behavior of the other agent is learned in the form of state–action pairs. We generalize these cases through either continuous k‐nearest neighbor or a modified bounded minimax search. In our case studies, the technique is empirically shown to require little storage, learn very quickly, and be fast and robust in practice; it can accurately predict actions several steps into the future. The case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships.
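The case-based prediction idea above can be sketched in a few lines. This is a minimal illustration, not the authors' exact formulation: the inverse-distance weighting scheme, the function names, and the toy data are all assumptions introduced here to show how continuous k-nearest neighbor can blend stored state–action cases into a predicted action.

```python
import numpy as np

def predict_action(cases, state, k=3):
    """Predict the other agent's action via distance-weighted k-NN.

    cases: list of (state_vector, action_vector) pairs observed so far.
    """
    states = np.array([s for s, _ in cases])
    actions = np.array([a for _, a in cases])
    dists = np.linalg.norm(states - state, axis=1)
    idx = np.argsort(dists)[:k]        # the k closest stored states
    w = 1.0 / (dists[idx] + 1e-9)      # inverse-distance weights
    w /= w.sum()
    return w @ actions[idx]            # weighted blend of their actions

# Toy case base: 2-D states mapped to 1-D action values.
cases = [(np.array([0.0, 0.0]), np.array([1.0])),
         (np.array([1.0, 0.0]), np.array([0.0])),
         (np.array([0.0, 1.0]), np.array([0.5]))]
pred = predict_action(cases, np.array([0.1, 0.0]), k=2)
```

Because the query state is much closer to the first case, the prediction leans heavily toward that case's action; new observations are incorporated simply by appending to the case list, which matches the incremental flavor of the approach.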
We present a particle-based algorithm for modeling highly viscous liquids. Using a numerical time-integration of particle acceleration and velocity, we apply external forces to particles and use a convenient organization, the adhesion matrix, to represent forces between different types of liquids and objects. Viscosity is handled by performing a momentum exchange between particle pairs such that momentum is conserved. Volume is maintained by iteratively adjusting particle positions after each time step. We use a two-tiered approach to time stepping that allows particle positions to be updated many times per frame while expensive operations, such as calculating viscosity and adhesion, are done only a few times per frame. The liquid is rendered using an implicit surface polygonization algorithm, and we present an implicit function that convolves the liquid surface with a Gaussian function, yielding a smooth liquid skin.
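The pairwise momentum exchange described above can be illustrated with a toy step. This is a sketch under stated assumptions (equal particle masses, a single mixing coefficient `alpha`, and a precomputed neighbor-pair list), not the paper's full algorithm:

```python
import numpy as np

def viscosity_exchange(velocities, pairs, alpha=0.1):
    """One viscosity step: each particle pair trades a fraction of its
    relative momentum. Equal and opposite updates keep total momentum
    exactly conserved."""
    v = velocities.copy()
    for i, j in pairs:
        dv = v[j] - v[i]        # relative velocity of the pair
        v[i] += alpha * dv      # pull particle i toward particle j...
        v[j] -= alpha * dv      # ...and j toward i by the same amount
    return v

# Two particles in 2-D: one moving, one at rest.
v = np.array([[1.0, 0.0],
              [0.0, 0.0]])
v2 = viscosity_exchange(v, pairs=[(0, 1)], alpha=0.25)
```

Repeating the exchange drives paired velocities together (the viscous effect) while the summed momentum of the system never changes, which is why the operation is safe to apply only a few times per frame in the two-tiered time-stepping scheme.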
Adaptation (online learning) by autonomous virtual characters, driven by interaction with a human user in a virtual environment, is a difficult and important problem in computer animation. In this article we present a novel multi-level technique for fast character adaptation. We specifically target environments in which there is a cooperative or competitive relationship between the character and the human who interacts with it. In our technique, a distinct learning method is applied to each layer of the character's behavioral or cognitive model. This allows us to efficiently leverage the character's observations and experiences in each layer, and it provides a convenient temporal distinction between which observations and experiences offer pertinent lessons for each layer. The character can thus quickly and robustly learn how to better interact with any given human user, relying only on observations and natural performance feedback from the environment (no explicit feedback from the human). Our technique is designed to be general, can be easily integrated into most existing behavioral animation systems, and is fast and memory efficient.
Rule-based deduplication utilizes expert domain knowledge to identify and remove duplicate data records. Achieving high accuracy in a rule-based system requires the creation of rules containing a good combination of discriminatory clues. Unfortunately, accurate rule-based deduplication often requires significant manual tuning of both the rules and the corresponding thresholds. This need for manual tuning reduces the efficacy of rule-based deduplication and its applicability to real-world data sets. No adequate solution exists for this problem. We propose a novel technique for rule-based deduplication. We apply individual deduplication rules, and combine the resultant match scores via learning-based information fusion. We show empirically that our fused deduplication technique achieves higher average accuracy than traditional rule-based deduplication. Further, our technique alleviates the need for manual tuning of the deduplication rules and corresponding thresholds.
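The fusion step described above can be sketched with a small learned combiner. This is an illustrative stand-in, assuming logistic regression as the fusion model and toy rule scores; the actual rules, data, and learning method are placeholders introduced here:

```python
import numpy as np

def train_fusion(scores, labels, lr=0.5, epochs=2000):
    """Learn weights that fuse per-rule match scores into one decision.

    scores: (n_pairs, n_rules) array, each entry a rule's match score in [0, 1].
    labels: 1 for duplicate pairs, 0 for non-duplicates.
    """
    X = np.asarray(scores, float)
    y = np.asarray(labels, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # fused match probability
        grad = p - y                            # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def fuse(scores, w, b):
    return 1.0 / (1.0 + np.exp(-(np.asarray(scores, float) @ w + b)))

# Toy training pairs: rule 1 is discriminative, rule 2 is mostly noise.
X = [[0.9, 0.2], [0.8, 0.7], [0.1, 0.6], [0.2, 0.3]]
y = [1, 1, 0, 0]
w, b = train_fusion(X, y)
```

The learned weights implicitly set the decision threshold, which is the point of the approach: per-rule thresholds no longer need to be hand-tuned, because the fusion model learns how much each rule's score should count.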
Behavioral and cognitive modeling for virtual characters is a promising field. It significantly reduces the workload on the animator, allowing characters to act autonomously in a believable fashion, and it makes interactivity between humans and virtual characters more practical than ever before. In this paper we present a novel technique in which an artificial neural network is used to approximate a cognitive model. This allows us to execute the model much more quickly, making cognitively empowered characters more practical for interactive applications; through this approach, we can animate several thousand intelligent characters in real time on a PC. We also present a novel technique whereby a virtual character, instead of using an explicit model supplied by the user, can automatically learn an unknown behavioral/cognitive model by itself through reinforcement learning. The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community, as it can further reduce the workload on the animator. Further, it provides solutions for problems that cannot easily be solved through explicit modeling.

Introduction

Virtual characters are an important part of computer graphics. These characters have taken forms such as synthetic humans, animals, mythological creatures, and non-organic objects that exhibit lifelike properties (walking lamps, etc.). Their uses include entertainment, training, and simulation. As computing and rendering power continue to increase, virtual characters will only become more commonplace and important.

One of the fundamental challenges involved in using virtual characters is animating them. It can often be difficult and time consuming to explicitly define all aspects of the behavior and animation of a complex virtual character. Further, the desired behavior may be impossible to define ahead of time if the character's virtual world changes in unexpected or diverse ways.
For these reasons, it is desirable to make virtual characters as autonomous and intelligent as possible while still maintaining animator control over their high-level goals. This can be accomplished with a behavioral model: an executable model defining how the character should react to stimuli from its environment. Alternatively, we can use a cognitive model: an executable model of the character's thought process. A behavioral model is reactive (i.e., it seeks to fulfill immediate goals), whereas a cognitive model seeks to accomplish long-term goals through planning: a search for which actions should be performed, and in what order, to reach a goal state. A cognitive model is therefore generally considered more powerful than a behavioral one, but can require significantly more processing power. Behavioral and cognitive modeling thus have unique strengths and weaknesses, and each has proven very useful for virtual character animation. However, despite the success of these techniques in certain domains, some important arguments have been brought against curren...
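The reactive-versus-deliberative distinction above can be made concrete with a toy grid world. The world, the action set, and the obstacle are hypothetical details invented for this sketch; the point is only the structural difference between a stimulus-response rule and a planning search.

```python
from collections import deque

def behavioral_step(pos, goal):
    """Reactive behavioral model: greedily close the larger coordinate
    gap, with no lookahead (and so no way to route around obstacles)."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    if abs(dx) >= abs(dy):
        return (1, 0) if dx > 0 else (-1, 0)
    return (0, 1) if dy > 0 else (0, -1)

def cognitive_plan(start, goal, blocked):
    """Cognitive model: breadth-first search over world states for a
    full action sequence that reaches the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        pos, plan = frontier.popleft()
        if pos == goal:
            return plan
        for step in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (pos[0] + step[0], pos[1] + step[1])
            if (nxt not in seen and nxt not in blocked
                    and 0 <= nxt[0] <= 4 and 0 <= nxt[1] <= 4):
                seen.add(nxt)
                frontier.append((nxt, plan + [step]))
    return None  # no path exists

# The cell directly between start and goal is blocked.
plan = cognitive_plan((0, 0), (2, 0), blocked={(1, 0)})
```

Here the reactive rule would step straight into the blocked cell, while the planner finds a four-step detour, illustrating both the extra power of the cognitive model and its extra cost: it searches many states per decision instead of evaluating one rule.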