The automatic classification of ships from aerial images is a considerable challenge. Previous works have usually applied image processing and computer vision techniques to extract meaningful features from visible-spectrum images and use them as input for traditional supervised classifiers. We present a method for determining whether a visible-spectrum aerial image contains a ship. The proposed architecture is based on Convolutional Neural Networks (CNN), and it combines neural codes extracted from a CNN with a k-Nearest Neighbor (kNN) method to improve performance. The kNN results are compared to those obtained with the CNN Softmax output. Several CNN models were configured and evaluated to find the best hyperparameters, and the most suitable setting for this task was found by using transfer learning at different levels. A new dataset (named MASATI), composed of more than 6000 aerial images, was also created to train and evaluate our architecture. Experiments show a success rate of over 99% for our approach, in contrast with the 79% obtained with traditional ship classification methods, also outperforming other CNN-based methods. A dataset of images (NWPU VHR-10) used in previous works was additionally used to evaluate the proposed approach. Our best setup achieves a success rate of 86% with these data, significantly outperforming previous state-of-the-art ship classification methods.
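The neural-code/kNN combination described in this abstract can be sketched as follows. Here random vectors stand in for the CNN activations ("neural codes"); the 512-dimensional feature size, the cluster statistics and k=5 are illustrative assumptions, not values taken from the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic 512-D "neural codes": in the real pipeline these would be
# activations from an intermediate CNN layer for each training image.
ship = rng.normal(loc=1.0, scale=0.5, size=(100, 512))      # ship images
no_ship = rng.normal(loc=-1.0, scale=0.5, size=(100, 512))  # non-ship images
X = np.vstack([ship, no_ship])
y = np.array([1] * 100 + [0] * 100)  # 1 = ship, 0 = no ship

# Replace the CNN's Softmax layer with a kNN classifier over the codes.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

# Classify the neural code of a new image (drawn from the ship cluster).
query = rng.normal(loc=1.0, scale=0.5, size=(1, 512))
pred = knn.predict(query)[0]  # → 1 (ship)
```

The design choice mirrored here is that distances between high-level CNN features can separate classes better than the network's own Softmax output, so the final decision is delegated to kNN.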
Orientation discrimination, the capacity to recognize an orientation difference between two lines presented at different times, probably involves cortical processes such as encoding the stimuli, holding them in memory, comparing them, and then deciding. To correlate discrimination with neural activity in combined psychophysical and electrophysiological experiments, precise knowledge of the strategies followed to complete the behavioral task is necessary. To address this issue, we measured the capacity of human and nonhuman primates to discriminate the orientation of lines in a fixed and in a continuously variable task. Subjects had to indicate whether a line (test) was oriented to one side or the other of a previously presented line (reference). When the orientation of the reference line did not change across trials (fixed discrimination task), subjects could complete the task either by categorizing the test line, thus ignoring the reference, or by discriminating between the two. This ambiguity was avoided when the reference stimulus changed randomly from trial to trial (continuous discrimination task), forcing humans and monkeys to discriminate by paying continuous attention to the reference and test stimuli. Both humans and monkeys discriminated accurately with stimulus durations as short as 150 ms. Effective interstimulus intervals were 2.5 s for monkeys but much longer (>6 s) in humans. These results indicate that the fixed and continuous discrimination tasks are different, and accordingly humans and monkeys use different behavioral strategies to complete each task. Because the two tasks might involve different neural processes, these findings have important implications for studying the neural mechanisms underlying visual discrimination.
Robotic manipulators must constantly deal with the complex task of detecting whether a grasp is stable or, in contrast, whether the grasped object is slipping. Recognising the type of slippage (translational or rotational) and its direction is more challenging than detecting stability alone, but is also of greater use for correcting grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from a single tactile sensor. Tactile readings are therefore pre-processed and fed to a ConvLSTM that learns to detect these directions from just 50 ms of data. We have extensively evaluated the performance of the system, achieving high accuracy (82.56%) in detecting the direction of slip on unseen objects with familiar properties.
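The pre-processing step implied by this abstract, slicing a tactile stream into 50 ms spatio-temporal windows for a ConvLSTM, can be sketched as below. The 4x4 taxel grid and 1 kHz sampling rate are illustrative assumptions, not sensor details from the paper:

```python
import numpy as np

# Assumed setup: a 4x4 taxel array sampled at 1 kHz (illustrative values).
SAMPLE_HZ = 1000
WINDOW_MS = 50
frames_per_window = SAMPLE_HZ * WINDOW_MS // 1000  # 50 frames per window

# 1 s of synthetic tactile frames, shape (time, height, width).
stream = np.random.rand(1000, 4, 4)

def windows(stream, size, stride):
    """Slice the frame stream into overlapping spatio-temporal windows of
    shape (size, H, W), ready to feed a ConvLSTM-style model."""
    return np.stack([stream[i:i + size]
                     for i in range(0, len(stream) - size + 1, stride)])

batch = windows(stream, frames_per_window, stride=10)
print(batch.shape)  # (96, 50, 4, 4): 96 windows of 50 frames of 4x4 taxels
```

Each window is a short tactile "video clip"; the ConvLSTM's convolutions capture the spatial pressure pattern while its recurrence captures how that pattern drifts, which is what distinguishes the seven slip directions.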
Flexible multisensor systems are increasingly important in industry when disassembly and recycling tasks must be performed. These tasks can be carried out by a human operator or by a robotic system. This paper presents a robotic system to perform such tasks. The system takes into account the distribution of the tasks necessary to disassemble a component using several robots working in parallel or cooperatively. The proposed algorithm for distributing tasks among robots considers the characteristics of each task and the sequence that must be followed to disassemble the product. Furthermore, this paper presents a disassembly system based on a sensorized cooperative-robot interaction framework for planning movements and detecting objects during disassembly. To determine the disassembly sequence of some products, a new strategy for distributing a set of tasks among robots is presented. Subsequently, the visual detection system used for detecting targets and characteristics is described. To carry out this detection process, different well-known strategies, such as template matching, polygonal approximation and edge detection, are applied. Finally, a visual-force control system has been implemented to track disassembly trajectories. An important aspect of this system is the processing of the sensory information to guarantee coherence, which allows the visual and force sensors to be applied to disassembly tasks in a coordinated way. The proposed system is validated by experiments using several types of components, such as battery covers and electronic circuits from toys, and drives and screws from PCs.
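Of the detection strategies this abstract names, template matching is the simplest to illustrate. Below is a minimal sketch of normalized cross-correlation in plain NumPy; the function, the toy "screw head" pattern and its placement are all hypothetical, not taken from the paper:

```python
import numpy as np

def match_template(image, template):
    """Naive normalized cross-correlation: return the (row, col) of the
    best match of `template` inside `image`."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# A synthetic cross-shaped 'screw head' embedded in an otherwise flat image.
tmpl = np.array([[0., 1., 0.],
                 [1., 1., 1.],
                 [0., 1., 0.]])
img = np.zeros((40, 40))
img[12:15, 20:23] = tmpl
print(match_template(img, tmpl))  # (12, 20)
```

In practice an optimized routine (e.g. FFT-based correlation) would replace the double loop, but the scoring idea, mean-removed correlation normalized by patch energy, is the same.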