Locomotion is a prime example of adaptive behavior in animals, and biological control principles have inspired control architectures for legged robots. While machine learning has been successfully applied to many tasks in recent years, Deep Reinforcement Learning approaches still appear to struggle with real-world robots in continuous control tasks and, in particular, do not yet offer robust solutions that handle uncertainties well. Therefore, there is renewed interest in incorporating biological principles into such learning architectures. While inducing a hierarchical organization as found in motor control has already shown some success, we here propose a decentralized organization, as found in insect motor control, for the coordination of different legs. A decentralized and distributed architecture is introduced on a simulated hexapod robot, and the details of the controller are learned through Deep Reinforcement Learning. We first show that such a concurrent local structure is able to learn better walking behavior. Secondly, we show that the simpler organization is learned faster than with holistic approaches.
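The decentralized organization described above can be sketched as follows: each leg gets its own small policy that sees only local state, instead of one holistic policy mapping the full robot state to all joint targets at once. All names, observation contents, and dimensions here are illustrative assumptions, not the paper's actual architecture; the random linear policies stand in for small learned networks.

```python
import random

N_LEGS = 6
LOCAL_OBS = 4    # e.g. two joint angles, ground contact, neighbor phase
N_ACTIONS = 2    # e.g. two joint targets per leg

def make_policy(n_in, n_out):
    """A random linear map standing in for a small trained network."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)]
               for _ in range(n_out)]
    def policy(obs):
        return [sum(w * o for w, o in zip(row, obs)) for row in weights]
    return policy

# Decentralized: six concurrent local controllers, one per leg.
leg_policies = [make_policy(LOCAL_OBS, N_ACTIONS) for _ in range(N_LEGS)]

# One control step: each leg acts on its own local observation only.
local_observations = [[0.1 * i] * LOCAL_OBS for i in range(N_LEGS)]
actions = [pi(obs) for pi, obs in zip(leg_policies, local_observations)]
```

Because each controller only conditions on a low-dimensional local observation, its search space is much smaller than that of a single policy over the full robot state, which is one intuition for the faster learning reported above.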
In the present study, precise, animal-based biometric data on the space needed for the body dimensions of individual pigs (static space) were collected. Per batch, two groups of eight piglets each were formed after weaning (35 days old). Using three-dimensional cameras that recorded the piglets' pen from above and newly developed software, the static space of individuals was determined over 6 weeks. The area covered by an individual increased almost linearly with increasing body weight (R² = 0.97). At the end of rearing (25 kg body weight), an individual covered 1704 cm² in a standing position, 1687 cm² in a sitting posture and 1798 cm² in a recumbent position. According to the allometric equation Space = k × body weight^0.667, k values for the static space in a standing position (k = 0.021), in a recumbent position in general (k = 0.022) and in a lateral recumbent posture (k = 0.027) were calculated. Compared with the spatial requirements set in different countries, the static-space results obtained in the present study revealed that pigs weighing 25 kg are provided with 0.09–0.18 m² of free space per pig that is not covered by the pig's body. This free space can be used as dynamic space needed for body movements or social interactions. The present study was not intended to revise space recommendations in pig farming, but to demonstrate the amount of free space in a pig pen. It was shown that innovative technologies based on image analysis offer completely new possibilities for assessing the spatial requirements of pigs.
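The allometric equation above can be evaluated directly with the reported k values. A minimal worked example, assuming k is expressed in m²/kg^0.667 so that the result comes out in m²; at 25 kg the formula yields values on the order of the measured areas (1704–1798 cm²):

```python
# Space = k * body_weight ** 0.667, with the k values from the study.
K_STANDING = 0.021
K_RECUMBENT = 0.022          # recumbent position in general
K_LATERAL = 0.027            # lateral recumbent posture

def static_space(k, body_weight_kg):
    """Static space in m^2 for a pig of the given body weight in kg."""
    return k * body_weight_kg ** 0.667

# At the end of rearing (25 kg body weight):
for name, k in [("standing", K_STANDING),
                ("recumbent", K_RECUMBENT),
                ("lateral recumbent", K_LATERAL)]:
    print(f"{name}: {static_space(k, 25.0):.3f} m^2")
```

For the standing posture this gives roughly 0.18 m² (about 1800 cm²), in the same range as the measured 1704 cm², which is consistent with k having been fitted to the camera data.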
Machine learning-based models for object detection rely on large datasets of labeled images, such as COCO or ImageNet. When models trained on these datasets are applied to aerial images recorded by Unmanned Aerial Vehicles (UAVs), the problem arises that the conditions under which the training images were created (for example, lighting, altitude, or angle) may differ from the environment in which the UAVs are put into practice, leading to failed detections. This problem becomes even more pressing in safety-critical applications, where failures can have huge negative impacts, and it also constitutes an obstacle to the certification of cognitive components in UAVs. In a case study on car detection in low-altitude aerial imagery, we show that using both artificial and real images for model training has a positive effect on the performance of object detection algorithms when the trained model is applied to images from another domain. Since simulated images are easy to create and object labels are inherently given, the presented approach shows a promising direction for scenarios where adequate datasets are difficult to obtain, as well as for the targeted exploration of weak points of object detection algorithms.
Machine learning-based models for object detection rely heavily on large datasets of labeled images. When models trained on these datasets are applied to Unmanned Aerial Vehicle (UAV) imagery, the problem arises that the conditions under which the training images were created (lighting, altitude, angle) may differ from the conditions in which the UAVs are deployed, leading to misclassifications. This problem becomes even more pressing in safety-critical applications, where failures can have huge negative impacts and constitute obstacles to the certification of cognitive UAV components. In a case study on car detection in low-altitude aerial imagery, we show that using both artificial and real images for model training has a positive effect on the performance of object detection algorithms when the trained model is applied to images from another domain. Additionally, we show that weak points of object detection neural networks trained on real-world images transfer to synthetic images, and that synthetic data can be used to evaluate neural networks trained on real-world data. Since simulated images are easy to create and object labels are inherently given, the presented approaches show a promising direction for scenarios where adequate datasets are difficult to obtain, as well as for the targeted exploration of weak points of object detection algorithms.
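The mixed-domain training setup described in these abstracts can be sketched as a dataset-composition step: labeled real-world aerial images are combined with simulated images (whose labels come for free from the renderer), while part of the real data is held out to measure cross-domain performance. The file names, counts, and the hold-out split below are illustrative assumptions, not the papers' actual setup.

```python
import random

# Hypothetical file lists standing in for the two image domains.
real_images = [f"real/car_{i:04d}.png" for i in range(100)]
synthetic_images = [f"sim/car_{i:04d}.png" for i in range(100)]

# Hold out part of the real data to evaluate how well a model trained
# on the mixed set transfers to the real-world target domain.
random.seed(0)
random.shuffle(real_images)
held_out_real = real_images[:20]
training_set = real_images[20:] + synthetic_images
random.shuffle(training_set)

print(len(training_set), "training images,", len(held_out_real), "held out")
```

The same split logic also supports the evaluation direction mentioned above: a model trained purely on the real images can be run on the synthetic set to probe whether its weak points reappear there.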