Serious games (SGs) allow us to learn even while we relax. These games are called “serious” because they train us at the level of domain-specific knowledge, which is the main reason SGs have attracted ever-increasing research interest in recent years. In contrast with traditional, purely entertainment games, the architectures and design principles of SGs are under active investigation by researchers. Recent work in the field attempts to define how SGs are structured, built, used and extended. However, there is still considerable debate about which design techniques are adequate and which can be borrowed from other fields, such as computer science or mainstream entertainment games. The objective of our research is three-fold: to investigate and analyze current architectural approaches; to summarize the key characteristics of a modern serious game; and to propose an architecture that is coherent with current approaches. Following these principles, we determine that the prevailing view in the SG area is that serious games should be distributed and modular, service-based and easily extensible. Building on top of that, we introduce a novel concept for creating serious games that are independent of their input devices and propose two ways in which that independence can be achieved. We briefly discuss the possible integration of third-party services using message queue brokers in a publish/subscribe manner. Finally, we summarize our findings and propose different methods for extending the proposed approach.
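To illustrate the publish/subscribe integration mentioned above, the following is a minimal, broker-agnostic sketch in Python. The in-memory Broker class, the topic name and the analytics_service subscriber are illustrative assumptions only; in a real deployment a message queue broker (for example RabbitMQ or Kafka) would take the broker's place and third-party services would subscribe over the network.

```python
# Minimal publish/subscribe sketch (assumption: illustrative only; a real
# message queue broker such as RabbitMQ or Kafka would replace this class).
from collections import defaultdict
from typing import Callable, Dict, List


class Broker:
    """In-memory stand-in for a message queue broker."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


# Hypothetical third-party service that listens for game events.
def analytics_service(message: dict) -> None:
    print(f"[analytics] player {message['player']} scored {message['score']}")


if __name__ == "__main__":
    broker = Broker()
    # The serious game publishes events; external services subscribe to them.
    broker.subscribe("sg.events.score", analytics_service)
    broker.publish("sg.events.score", {"player": "p1", "score": 42})
```

In this pattern the game core never references the external services directly, which is what keeps the proposed architecture modular and easy to extend with additional subscribers.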
Education and self-improvement are key features of human behavior. However, learning in the physical world is not always desirable or achievable, which is why simulators came to be. In some domains, purely virtual simulators can be built instead of physical ones. In this research we present a novel environment for learning that uses a natural user interface. Humans are not designed to operate and manipulate objects via a keyboard, mouse or controller; our natural way of interacting and communicating relies on our actuators (hands and feet) and our sensors (hearing, vision, touch, smell and taste). It therefore makes more sense to use sensors that can track skeletal movements, estimate pose and interpret gestures. After acquiring and processing this natural input, a system can analyze the recognized gestures and translate them into movement signals.
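As a rough illustration of the gesture-to-movement translation described above, the sketch below maps a tracked wrist trajectory to a simple movement signal. The joint choice, threshold value and MovementSignal labels are assumptions made for illustration; a real system would rely on a skeletal-tracking sensor and a trained gesture recognizer.

```python
# Sketch of translating tracked skeletal input into movement signals.
# All thresholds, joint names and signal labels are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List


class MovementSignal(Enum):
    MOVE_LEFT = auto()
    MOVE_RIGHT = auto()
    IDLE = auto()


@dataclass
class JointSample:
    """Horizontal position of one tracked joint (e.g. the right wrist) at a time step."""
    x: float


def translate_gesture(wrist_track: List[JointSample],
                      swipe_threshold: float = 0.3) -> MovementSignal:
    """Interpret a horizontal wrist swipe as a movement signal."""
    if len(wrist_track) < 2:
        return MovementSignal.IDLE
    displacement = wrist_track[-1].x - wrist_track[0].x
    if displacement > swipe_threshold:
        return MovementSignal.MOVE_RIGHT
    if displacement < -swipe_threshold:
        return MovementSignal.MOVE_LEFT
    return MovementSignal.IDLE


if __name__ == "__main__":
    # A right-to-left swipe of the wrist yields a MOVE_LEFT signal.
    track = [JointSample(x) for x in (0.5, 0.3, 0.1)]
    print(translate_gesture(track))  # MovementSignal.MOVE_LEFT
```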
In this paper, we trace the evolution of usability in human-computer interaction (HCI). We group the various user interfaces used for HCI into two generations with three classes each. Most user interfaces are assigned to a class based on scores against objective criteria, such as the required level of technical literacy, the degree of natural interaction, the user learning curve and the UI’s ability to adapt. In addition, current tendencies in HCI are presented and future perspectives are discussed. Finally, we summarize the results and draw conclusions.
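The sketch below shows one way such criteria scores could be aggregated and mapped onto a generation and class. The 0-to-1 score scale, the equal weighting and the thresholds are assumptions for illustration only, not the scoring scheme used in the paper.

```python
# Sketch of assigning a user interface to a generation/class from criteria
# scores. The criteria scale (0-1), weights and thresholds are illustrative
# assumptions, not the paper's actual scoring scheme.
from dataclasses import dataclass


@dataclass
class UICriteria:
    technical_literacy: float   # technical literacy the UI demands (0 = none)
    natural_interaction: float  # how natural the interaction feels (1 = fully natural)
    learning_curve: float       # how steep the learning curve is (0 = flat)
    adaptability: float         # the UI's ability to adapt to the user (1 = adaptive)


def classify(ui: UICriteria) -> str:
    """Aggregate the criteria and map the score to one of six hypothetical classes."""
    score = (ui.natural_interaction + ui.adaptability
             + (1.0 - ui.technical_literacy) + (1.0 - ui.learning_curve)) / 4.0
    generation = 1 if score < 0.5 else 2
    # Split each generation into three classes of equal width.
    class_index = min(int((score % 0.5) / (0.5 / 3)) + 1, 3)
    return f"generation {generation}, class {class_index}"


if __name__ == "__main__":
    command_line_ui = UICriteria(technical_literacy=0.9, natural_interaction=0.1,
                                 learning_curve=0.8, adaptability=0.1)
    gesture_ui = UICriteria(technical_literacy=0.1, natural_interaction=0.9,
                            learning_curve=0.2, adaptability=0.7)
    print(classify(command_line_ui))  # generation 1, class 1
    print(classify(gesture_ui))       # generation 2, class 2
```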