Limited training data is a common problem in offline signature verification. To increase the number of available samples, synthetic signatures can be used. This work presents a new method for generating synthetic offline signatures from dynamic and static (real) ones. Synthesis is framed here as a supervised learning problem: the model is trained to perform online-to-offline signature conversion. The approach is based on a deep convolutional neural network, and its main goal is to enlarge the offline training dataset in order to improve the performance of offline signature verification systems. To this end, a machine-oriented evaluation on the BiosecurID signature dataset is carried out. When synthetic samples generated with the proposed method are used to train a state-of-the-art classification system, performance is similar to that obtained with real signatures; moreover, combining real and synthetic signatures in the training set also improves the equal error rate.
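The abstract does not specify the network architecture or loss. As an illustration only, the sketch below shows one plausible way to set up such a supervised online-to-offline conversion in PyTorch: the Online2OfflineNet encoder-decoder, the rasterized input channels, and the pixel-wise L1 loss are all assumptions, not the paper's actual method.

```python
# Hypothetical sketch of an online-to-offline signature converter.
# Assumes the dynamic signature has been rasterized into a multi-channel
# map (e.g. trajectory, pressure, speed); these channels are illustrative.
import torch
import torch.nn as nn

class Online2OfflineNet(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Encoder: compress the rasterized dynamic channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct a single-channel static (offline) image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Supervised training step: the target is the real offline scan paired
# with the dynamic acquisition (BiosecurID records both modalities).
model = Online2OfflineNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise reconstruction loss (an assumption)

dynamic = torch.rand(8, 3, 128, 256)   # rasterized online signatures
offline = torch.rand(8, 1, 128, 256)   # paired real offline scans
optimizer.zero_grad()
loss = loss_fn(model(dynamic), offline)
loss.backward()
optimizer.step()
```

The synthetic images produced by such a converter would then be mixed with real scans when training the downstream verification system, which is the use case the abstract evaluates.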
Robotics is a field of research that has undergone several changes in recent years. Robots are now commonly used in many applications, such as pump deactivation and mobile robotic manipulation. However, most robots today are programmed to follow a predefined path, which is sufficient when the robot operates in a controlled environment; many tasks, however, require autonomous robots. In this respect, NAO humanoid robots constitute an active research platform within the robotics community. In this article, we present a vision system for the NAO robot that allows it to detect and recognize visible text on objects in natural-scene images and to use that knowledge to interpret the content of a given scene. The proposed vision system is based on deep learning methods and consists of five stages: 1) capturing the image; 2) object detection and classification with the YOLOv3 algorithm; 3) selection of the objects of interest; 4) text detection and recognition based on the OctShuffleMLT approach; and 5) speech synthesis of the recognized text. These models were chosen because of their strong results on the COCO dataset (for objects) and the ICDAR 2015 dataset (for text), both of which are very similar to the scenes encountered by the NAO robot. Experimental results show that detection and recognition rates for text in images captured in the wild by the NAO robot's camera are similar to those of models pre-trained on natural-scene databases.
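To make the five-stage pipeline concrete, the sketch below shows one possible way to wire the stages together. The wrapper callables (run_yolov3, run_octshuffle_mlt, nao_say), the Detection record, and the OBJECTS_OF_INTEREST set are placeholders assumed for illustration, not real library APIs or the authors' implementation.

```python
# Hypothetical sketch of the five-stage scene-to-speech pipeline.
# Stage 1 (image capture on the NAO) is assumed to happen before this
# function is called; the image is passed in as a pixel array.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # COCO class name from the object detector
    box: tuple          # (x1, y1, x2, y2) in pixels
    confidence: float

OBJECTS_OF_INTEREST = {"book", "bottle", "stop sign"}  # illustrative set

def scene_to_speech(image, run_yolov3, run_octshuffle_mlt, nao_say):
    # Stage 2: detect and classify objects (YOLOv3-based wrapper).
    detections = run_yolov3(image)

    # Stage 3: keep only object classes worth reading text from.
    targets = [d for d in detections if d.label in OBJECTS_OF_INTEREST]

    # Stage 4: detect and recognize text inside each object crop
    # (OctShuffleMLT-based wrapper).
    utterances = []
    for det in targets:
        x1, y1, x2, y2 = det.box
        crop = image[y1:y2, x1:x2]
        for text in run_octshuffle_mlt(crop):
            utterances.append(f"{det.label}: {text}")

    # Stage 5: synthesize the recognized text as speech on the robot.
    for phrase in utterances:
        nao_say(phrase)
    return utterances
```

Passing the detector, recognizer, and speech backend in as callables keeps the pipeline logic independent of any particular model weights or robot middleware.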
The simultaneous surges in research on socially assistive robotics and on computer vision reflect the shifting and growing needs of the global population, especially in social care, as the population in need of assistance expands. The merging of these fields creates demand for more complex and autonomous solutions, which often struggle with hardware limitations and with the lack of contextual understanding of tasks that semantic analysis can provide. Solving these issues can provide more comfortable and safer environments for the individuals most in need. This work aims to map the current state of research at the intersection of computer vision and semantic analysis in lightweight models for robotic assistance. We therefore present a systematic review of visual-semantics work concerned with assistive robotics, and we discuss trends and possible research gaps in these fields. We detail our research protocol, present the state of the art and future trends, and answer five pertinent research questions. Out of 459 articles, 22 works matching the defined scope were selected, rated against 8 quality criteria relevant to our search, and discussed in depth. Our results point to an emerging field of research with challenging gaps to be explored by the academic community. Data on study collection by database and year of publication are presented, along with a discussion of methods and datasets. We observe two main trends in current visual semantic analysis methods: first, the abstraction of contextual data to enable automated understanding of tasks; second, a clearer formalization of model-compaction metrics.