In recent years, tremendous effort has been devoted to modeling the cognition of the human brain, particularly the hypothesis generation process. Most existing hypothesis generation models are probability-based. However, computation in the human brain is neuron-based rather than explicitly probabilistic. As an attempt to bridge this gap, in this paper we propose a novel neuron-based hypothesis generation model, called the hypothesis generation net, to model human cognition, including how decisions are made and how actions are performed. The proposed model consists of two parts: a hypothesis model and an evaluation model. Through the interaction of these two models, the system is able to generate hypotheses for solving complex tasks based on historical experience. To validate the feasibility of the proposed model, we show that a virtual robot equipped with this cognitive system can learn to write Chinese calligraphy in a simulation environment, where an image-to-action translation via a cognitive framework is proposed to learn the patterns of Chinese characters. Based on the proposed deep thinking and learning mechanism, the virtual robot learns to write Chinese calligraphy well, a difficult task requiring extremely complicated motions, by thinking and practicing according to a human writing sample.
INDEX TERMS Hypothesis generation model, deep neural networks, Chinese calligraphy, image-to-action translation.
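The abstract describes a hypothesis model and an evaluation model that interact to generate hypotheses from accumulated experience. A minimal sketch of such a think-evaluate-practice loop, with entirely illustrative function names, action representation, and scoring (none of these come from the paper), might look like:

```python
import random

# Hypothetical sketch: a hypothesis model proposes candidate actions by
# perturbing the best past action, and an evaluation model scores each
# candidate. All names, ranges, and the target are illustrative.

def hypothesis_model(history, n_candidates=8, rng=None):
    """Propose candidate actions around the best action in past experience."""
    rng = rng or random.Random(0)
    base = max(history, key=lambda h: h[1])[0] if history else 0.5
    return [min(1.0, max(0.0, base + rng.uniform(-0.2, 0.2)))
            for _ in range(n_candidates)]

def evaluation_model(action, target=0.8):
    """Score an action; here, simply its closeness to a fixed target."""
    return 1.0 - abs(action - target)

def generate_hypothesis(history):
    """One think-evaluate cycle: return the best-scoring candidate."""
    candidates = hypothesis_model(history)
    return max(candidates, key=evaluation_model)

history = []
for _ in range(20):                       # practice loop
    action = generate_hypothesis(history)
    history.append((action, evaluation_model(action)))

print(round(history[-1][1], 2))           # score improves over practice
```

The point of the sketch is the division of labor: the hypothesis model only proposes, the evaluation model only scores, and experience (`history`) couples the two across practice cycles.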
This paper uses a delta robot's structure and reliable coordinates to develop a self-learning Chinese calligraphy-writing system that requires precise control. Ideally, to achieve human-like behavior, a delta robot should learn stroke trajectories autonomously and reproduce the stroke beauty of calligraphy characters. Unfortunately, state-of-the-art approaches have not yet considered the stroke beauty resulting from the rotation and tilt angles of the brush. This paper presents an integrated system consisting of a stroke processing module, a hypothesis generation net (HGN) learning model, a delta robot, and an image capture module. Our approach combines the stroke trajectories from the stroke processing module with the angle information from the HGN learning model to automatically produce five-degree-of-freedom action instructions. Based on these instructions, the delta robot performs the calligraphy writing. The image capture module then provides feedback to the writing system for error calculation and coordinate correction. We use the mean absolute percentage error to verify the performance of the writing results, and a correction algorithm with linear regression to improve the error correction (to less than 2% error). After several cycles, the written results converge toward the target sample. The written results produced by the delta robot thus demonstrate that the proposed system is capable of self-learning and self-correction.
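The feedback step described here, measuring the writing error with the mean absolute percentage error (MAPE) and correcting coordinates with linear regression, can be sketched as follows. The coordinate data and the 2% threshold below are illustrative stand-ins, not the paper's measurements:

```python
# Hedged sketch of the error-calculation and coordinate-correction step:
# compare written coordinates with the target sample via MAPE, then fit a
# simple 1-D least-squares correction target ~ a*written + b.

def mape(target, written):
    """Mean absolute percentage error between target and written coords."""
    return 100.0 * sum(abs(t - w) / abs(t)
                       for t, w in zip(target, written)) / len(target)

def fit_linear_correction(written, target):
    """Least-squares fit of target ~ a*written + b."""
    n = len(written)
    mx = sum(written) / n
    my = sum(target) / n
    a = sum((x - mx) * (y - my) for x, y in zip(written, target)) / \
        sum((x - mx) ** 2 for x in written)
    b = my - a * mx
    return a, b

target  = [10.0, 20.0, 30.0, 40.0]        # illustrative sample coordinates
written = [11.0, 21.5, 31.0, 42.0]        # systematically offset writing

a, b = fit_linear_correction(written, target)
corrected = [a * w + b for w in written]
print(round(mape(target, written), 2), round(mape(target, corrected), 2))
```

Because the illustrative error is roughly linear in the coordinate, a single regression pass removes most of it; in the abstract's setting this correction is applied over several write-capture-correct cycles.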
In this paper, we propose a robotic cognitive system that can teach itself to perform a specific task by accumulating experiences through bottom-up thinking and then making decisions by itself through top-down thinking based on those experiences. That is, the cognitive system has a self-learning ability: accumulating experiences makes it smarter. In essence, the cognitive system consists of a perception model, a memory model, and a hypothesis model. The perception model converts image information into perception codes. The memory model stores past and present experiences and provides them to the perception model and the hypothesis model. The hypothesis model, which generates the next decision according to the experiences retrieved from the memory model, is the most important part of the proposed cognitive system. To validate the performance of the proposed system, we use Chinese calligraphy writing tasks performed by a virtual robot in simulation to evaluate the abilities of the cognitive system. To generate the coordinates of the writing brush, we had the virtual robot practice Chinese calligraphy through bottom-up thinking to construct the writing patterns. The illustrative examples in this paper show that the virtual robot can learn to write Chinese calligraphy by top-down thinking according to its own experiences.
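The three-part architecture described above (perception model producing codes, memory model storing and serving experiences, hypothesis model deciding from them) can be sketched with toy components. Every name and encoding below is an assumption for illustration, not the paper's API:

```python
# Illustrative sketch of the perception -> memory -> hypothesis pipeline.

def perceive(image_row):
    """Toy perception model: encode an 'image' as a binary perception code."""
    return tuple(1 if px > 0.5 else 0 for px in image_row)

class Memory:
    """Toy memory model: stores (code, action, reward) experiences."""
    def __init__(self):
        self.experiences = []

    def store(self, code, action, reward):
        self.experiences.append((code, action, reward))

    def recall(self, code):
        """Return the experiences filed under the most similar stored code."""
        if not self.experiences:
            return []
        def dist(c):
            return sum(a != b for a, b in zip(c, code))
        best = min(self.experiences, key=lambda e: dist(e[0]))[0]
        return [e for e in self.experiences if e[0] == best]

def hypothesize(memory, code, default_action=0.0):
    """Toy hypothesis model: pick the best-rewarded recalled action."""
    recalled = memory.recall(code)
    if not recalled:
        return default_action          # no experience yet: fall back
    return max(recalled, key=lambda e: e[2])[1]

mem = Memory()
code = perceive([0.9, 0.1, 0.7])                     # -> (1, 0, 1)
mem.store(code, action=0.3, reward=0.4)
mem.store(code, action=0.6, reward=0.9)
print(hypothesize(mem, perceive([0.8, 0.2, 0.6])))   # prints 0.6
```

The design point the sketch captures is that the hypothesis model never sees raw images; it decides only from perception codes and the experiences the memory model recalls for them.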