One Sentence Summary: A modular platform for synthesis is demonstrated that makes purified organic compounds autonomously, without physical reconfiguration, driven by a chemical programming language.

Abstract: The synthesis of complex organic compounds is largely a manual process that is often incompletely documented. To address these shortcomings, we developed an abstraction that maps commonly reported methodological instructions into discrete steps amenable to automation. These unit operations were implemented in a modular robotic platform using a chemical programming language that formalizes and controls the assembly of the molecules. We validated the concept by directing the automated system to synthesize three pharmaceutical compounds, Nytol, Rufinamide, and Sildenafil, without any human intervention. Yields and purities of products and intermediates were comparable to or better than those achieved manually. The syntheses are captured as digital code that can be published, versioned, and transferred flexibly between platforms with no modification, thereby greatly enhancing reproducibility and reliable access to complex molecules.

The automation of chemical synthesis is currently expanding, driven by the availability of digital labware. The field ranges over areas as diverse as the design of new reactions (1), chemistry in reactionware (2), reaction monitoring and optimization (3,4), and flow chemistry (5) for reaction optimization and scale-up, to full automation of the synthesis
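The abstraction described above, mapping a written procedure into discrete unit operations executed by a robotic platform, can be sketched in miniature. The operation names and parameters below are illustrative assumptions, not the platform's actual chemical programming language:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One discrete unit operation in a synthesis procedure."""
    op: str
    params: dict = field(default_factory=dict)

# Hypothetical encoding of a synthesis as a sequence of unit operations.
procedure = [
    Step("Add", {"reagent": "amine", "volume_ml": 10}),
    Step("Stir", {"time_min": 30}),
    Step("HeatChill", {"temp_c": 80, "time_min": 120}),
    Step("Filter"),
    Step("Evaporate", {"pressure_mbar": 50}),
]

def run(procedure, execute):
    # Dispatch each unit operation to a hardware back end; because the
    # procedure is plain data, the same code could be versioned,
    # published, and replayed on another platform.
    for step in procedure:
        execute(step)

log = []
run(procedure, lambda s: log.append(s.op))
```

Here `execute` stands in for the hardware driver; swapping it out is what makes the digital recipe transferable between platforms.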
Abstract-We present a visually guided, dual-arm, industrial robot system that is capable of autonomously flattening garments by means of a novel visual perception pipeline that fully interprets high-quality RGB-D images of the clothing scene based on an active stereo robot head. A segmented clothing range map is B-Spline smoothed prior to being parsed, by means of shape and topology, into 'wrinkle' structures. The wrinkle length, width and height are used to quantify the topology of wrinkles and thereby rank their size, such that a greedy algorithm can identify the largest wrinkle present. A flattening plan optimised for this specific wrinkle is then formulated based on dual-arm manipulation. Validation of the reported autonomous flattening behaviour has been undertaken and has demonstrated that dual-arm flattening requires significantly fewer manipulation iterations than single-arm flattening. The experimental results also reveal that the flattening process is heavily influenced by the quality of the RGB-D sensor, with the high-resolution stereo-based sensor system outperforming a commercial low-resolution Kinect-like camera in terms of the number of flattening iterations required.
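The greedy wrinkle-selection step described above can be sketched as follows. The wrinkle descriptors and the scalar size measure (approximate volume from length, width and height) are illustrative assumptions; the paper derives these quantities from a B-Spline smoothed range map, which is not reproduced here:

```python
# Hypothetical wrinkle descriptors (length, width, height in mm).
wrinkles = [
    {"id": 0, "length": 120.0, "width": 8.0, "height": 4.0},
    {"id": 1, "length": 60.0, "width": 12.0, "height": 6.0},
    {"id": 2, "length": 200.0, "width": 5.0, "height": 3.0},
]

def wrinkle_size(w):
    # One plausible scalar ranking: approximate wrinkle volume.
    return w["length"] * w["width"] * w["height"]

def largest_wrinkle(wrinkles):
    # Greedy selection: always target the biggest wrinkle first,
    # then re-perceive the scene and repeat until flat.
    return max(wrinkles, key=wrinkle_size)

target = largest_wrinkle(wrinkles)
```

In the full pipeline this selection would sit inside a perception-manipulation loop: flatten the chosen wrinkle with both arms, re-acquire the range map, and repeat.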
Abstract-This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features, BSP (B-Spline Patch) and TSD (Topology Spatial Distances), for this task. The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrated the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of our proposed method, we built a high-resolution RGB-D clothing dataset of 50 clothing items of 5 categories sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach is able to reach 83.2% accuracy while classifying clothing items which were previously unseen during training. This advances beyond the previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception.
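LLC, the encoding applied to the local BSP features above, represents each descriptor as a sparse, locality-weighted combination of its nearest codebook bases. A minimal sketch of the standard approximate LLC solution is shown below; the codebook, descriptor dimensionality, and regularisation constant are assumptions for illustration, not values from the paper:

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Locality-constrained Linear Coding (approximate solution):
    encode descriptor x over its k nearest codebook bases."""
    d2 = np.sum((codebook - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]               # k nearest bases
    z = codebook[idx] - x                  # shift bases to the descriptor
    C = z @ z.T                            # local covariance
    C += beta * np.trace(C) * np.eye(k)    # regularise for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w                          # sparse code over the full codebook
    return code
```

The resulting sparse codes would then be pooled over the image and fused with the global features before classification.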
The development of the internet of things has led to an explosion in the number of networked devices capable of control and computing. However, whilst commonplace in remote sensing, these approaches have not impacted chemistry, owing to the difficulty of developing systems flexible enough for experimental data collection. Herein we present a simple and affordable (<$500) chemistry-capable robot, built with a standard set of hardware and software protocols, that can be networked to coordinate many chemical experiments in real time. We demonstrate how two internet-connected robots can collaboratively carry out multiple processes, exploring a set of azo-coupling reactions in a fraction of the time needed by a single robot, as well as encoding and decoding information into a network of oscillating reactions. The system can also be used to assess the reproducibility of chemical reactions and to discover new reaction outcomes using game playing to explore a chemical space.
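The collaborative exploration described above can be sketched as networked workers draining a shared pool of experiments. The reagent grid and robot names below are hypothetical stand-ins; the actual platform coordinates real hardware over network protocols:

```python
import queue
import threading

# Hypothetical 4x4 grid of azo-coupling experiments (diazonium x coupler)
# placed in a shared, thread-safe work queue.
experiments = queue.Queue()
for diazonium in range(4):
    for coupler in range(4):
        experiments.put((diazonium, coupler))

results = []
lock = threading.Lock()

def robot(name):
    # Each networked robot repeatedly claims the next unexplored
    # experiment until the space is exhausted.
    while True:
        try:
            exp = experiments.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((name, exp))  # stand-in for running the reaction

workers = [threading.Thread(target=robot, args=(n,))
           for n in ("robot-A", "robot-B")]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Because each robot claims work from the same queue, the reaction space is covered exactly once, in roughly half the wall-clock time of a single robot.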
Abstract-In this paper, we propose a Gaussian Process-based interactive perception approach for recognising highly wrinkled clothes. We have integrated this recognition method within a clothes sorting pipeline for the pre-washing stage of an autonomous laundering process. Our approach differs from reported clothing manipulation approaches by allowing the robot to update its perception confidence via numerous interactions with the garments. The classifiers predominantly reported in clothing perception studies (e.g. SVM, Random Forest) do not provide true classification probabilities, due to their inherent structure. In contrast, probabilistic classifiers (of which the Gaussian Process is a popular example) are able to provide predictive probabilities. In our approach, we employ multi-class Gaussian Process classification using the Laplace approximation for posterior inference, optimising hyper-parameters via marginal likelihood maximisation. Our experimental results show that our approach is able to recognise unknown garments in difficult configurations using limited visual perception and demonstrates a substantial improvement over non-interactive perception approaches.
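The inference scheme described above, Laplace-approximated multi-class Gaussian Process classification with hyper-parameters fitted by marginal likelihood maximisation, is implemented in scikit-learn's `GaussianProcessClassifier` (one-vs-rest for multi-class). A minimal sketch follows; the synthetic features are a toy stand-in, since the paper's 2.5D garment features are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy 4-dimensional features for three garment classes (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 4)) for i in range(3)])
y = np.repeat([0, 1, 2], 20)

# GaussianProcessClassifier uses the Laplace approximation for posterior
# inference and maximises the log marginal likelihood to fit the kernel
# hyper-parameters, matching the inference scheme described above.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X, y)

# Unlike SVM decision values, these are calibrated predictive
# probabilities, which is what lets the robot accumulate confidence
# across repeated interactions with a garment.
probs = gpc.predict_proba(X[:1])
```

In an interactive setting, the predictive probabilities from successive views could be combined (e.g. averaged) until the confidence in one category crosses a decision threshold.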