Robot locomotion is typically generated by coordinated integration of single-purpose components, such as actuators, sensors, body segments, and limbs. We posit that certain future robots could self-propel using systems in which a delineation of components and their interactions is not so clear, becoming robust and flexible entities composed of functional components that are redundant and generic and can interact stochastically. Control of such a collective becomes a challenge because synthesis techniques typically assume known input-output relationships. To discover principles by which such future robots can be built and controlled, we study a model robophysical system: planar ensembles of periodically deforming smart, active particles (smarticles). When enclosed, these individually immotile robots could collectively diffuse via stochastic mechanical interactions. We show experimentally and theoretically that directed drift of such a supersmarticle could be achieved via inactivation of individual smarticles, and we used this phenomenon to generate endogenous phototaxis. By numerically modeling the relationship between smarticle activity and transport, we elucidated the role of smarticle deactivation in supersmarticle dynamics from little data: a single experimental trial. From this mapping, we demonstrate that the supersmarticle could be exogenously steered anywhere in the plane, expanding supersmarticle capabilities while simultaneously enabling decentralized closed-loop control. We suggest that the smarticle model system may aid discovery of principles by which a class of future “stochastic” robots can rely on collective internal mechanical interactions to perform tasks.
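As a rough illustration of the drift-under-inactivation mechanism described above, the sketch below models the supersmarticle's center of mass as a biased random walk: isotropic noise stands in for stochastic mechanical interactions, and a small deterministic bias stands in for the effect of deactivating one smarticle on the far side of the enclosure. This is a hedged caricature, not the authors' model; the functions `step` and `phototaxis` and all gains are hypothetical choices.

```python
# Minimal sketch (not the authors' code): a biased-random-walk caricature
# of supersmarticle drift. Assumption: the active collective produces
# isotropic diffusion, and inactivating one smarticle biases displacements
# toward a chosen direction.
import numpy as np

rng = np.random.default_rng(0)

def step(pos, inactive_dir, drift_gain=0.05, noise=0.01):
    """One displacement of the supersmarticle center of mass.

    pos          -- current (x, y) position
    inactive_dir -- hypothetical unit drift direction induced by
                    deactivating one smarticle, or None (pure diffusion)
    """
    dp = rng.normal(0.0, noise, size=2)          # stochastic collisions
    if inactive_dir is not None:
        dp += drift_gain * inactive_dir          # bias from inactivation
    return pos + dp

def phototaxis(pos, light, n_steps=500):
    """Toy closed-loop controller: always deactivate the smarticle whose
    inactivation drives the ensemble toward the light source."""
    traj = [pos]
    for _ in range(n_steps):
        to_light = light - pos
        to_light = to_light / np.linalg.norm(to_light)
        pos = step(pos, inactive_dir=to_light)
        traj.append(pos)
    return np.array(traj)

traj = phototaxis(np.zeros(2), light=np.array([1.0, 0.5]))
print("final position:", traj[-1])
```

Under these assumptions the walk converges on the light source, mirroring the endogenous phototaxis reported in the abstract; the same loop could target any planar point, which is the exogenous-steering idea.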
Self-organization is frequently observed in active collectives as varied as ant rafts and molecular motor assemblies. General principles describing self-organization away from equilibrium have been challenging to identify. We offer a unifying framework that models the behavior of complex systems as largely random while capturing their configuration-dependent response to external forcing. This allows derivation of a Boltzmann-like principle for understanding and manipulating driven self-organization. We validate our predictions experimentally using shape-changing robotic active matter and outline a methodology for controlling collective behavior. Our findings highlight how emergent order depends sensitively on the matching between external patterns of forcing and internal dynamical response properties, pointing toward future approaches for the design and control of active particle mixtures and metamaterials.
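A compact way to state the Boltzmann-like principle mentioned above is the "low rattling" form associated with this line of work; the notation below is ours and is a hedged sketch, not copied from the paper.

```latex
% Hedged sketch of the Boltzmann-like principle (notation assumed):
% the steady-state probability of a configuration q decays exponentially
% with its "rattling" R(q), a measure of how noisily q responds to the drive.
\[
  p_{\mathrm{ss}}(q) \;\propto\; e^{-\gamma R(q)},
  \qquad
  R(q) \;=\; \tfrac{1}{2}\,\ln \det C(q),
\]
% where C(q) is the covariance matrix of short-time configuration-velocity
% fluctuations at q, and \gamma > 0 is a fitted constant. Configurations
% whose response to the external forcing is least noisy (low rattling) are
% exponentially favored, which is the sensitivity to drive/response
% matching highlighted in the abstract.
```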
Micrometer-scale robots capable of navigating enclosed spaces and remote locations are approaching reality. However, true autonomy remains an open challenge despite substantial progress made with externally supervised and manipulated systems. To accelerate the development of autonomous microrobots, alternatives to conventional top-down lithography are sought. Additive technologies such as printing, coating, and colloidal self-assembly allow for rapid prototyping and access to novel materials, including polymers, biomaterials, and nanomaterials. On the basis of recent experimental findings that memristive networks can be rapidly printed and lifted off as electronic microparticles, an alternative design paradigm is introduced based on arrays of two-terminal memristive elements, which enables real-time use of memory, sensing, and actuation in microrobots. Several memristor-based designs are validated, each representing a key building block toward robotic autonomy: tracking elapsed time, timestamping a rare event, continuously cataloguing time-indexed data, and accessing the collected information for a feedback-controlled response, as in a robotic glucose-responsive insulin delivery system. The computational results establish an actionable framework for microrobotic design: tasks normally requiring complex circuits can now be achieved with self-assembled and printed memristor arrays within microparticles.
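To make the "tracking elapsed time" building block concrete, here is a hedged sketch of an idealized voltage-controlled memristor with linear state drift (in the spirit of common textbook memristor models, not the paper's devices): under a constant bias the internal state integrates time, so a resistance readout recovers duration. The constants `R_ON`, `R_OFF`, and `K` are illustrative assumptions.

```python
# Minimal sketch (assumed constants and model): an idealized memristor
# whose internal state x in [0, 1] drifts linearly with applied voltage,
# used as an elapsed-time tracker.
import numpy as np

R_ON, R_OFF = 1e3, 1e5     # bounding resistances (ohms), assumed values
K = 0.005                  # state-drift rate per volt-second, assumed

def simulate(v, dt, x0=0.0):
    """Integrate the state x; return state and resistance traces."""
    x, xs, rs = x0, [], []
    for vk in v:
        x = np.clip(x + K * vk * dt, 0.0, 1.0)   # linear drift, hard bounds
        xs.append(x)
        rs.append(R_ON * x + R_OFF * (1.0 - x))  # mixed-resistance readout
    return np.array(xs), np.array(rs)

# Constant 1 V bias for 100 s: the state grows linearly with elapsed time,
# so inverting the readout recovers the duration the device was powered.
t = np.arange(0, 100, 0.1)
xs, rs = simulate(np.ones_like(t), dt=0.1)
elapsed = xs[-1] / K          # invert x = K * V * t with V = 1
print(f"recovered elapsed time: {elapsed:.1f} s")
```

The other building blocks in the abstract (timestamping, time-indexed cataloguing, feedback control) would, on this picture, correspond to gating when and which array elements receive the bias.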
Motions carry information about the underlying task being executed. Previous work in human motion analysis suggests that complex motions may result from the composition of fundamental submovements called movemes. The existence of finite structure in motion motivates information-theoretic approaches to motion analysis and robotic assistance. We define task embodiment as the amount of task information encoded in an agent's motions. By decoding task-specific information embedded in motion, we can use task embodiment to create detailed performance assessments. We extract an alphabet of behaviors comprising a motion without a priori knowledge using a novel algorithm, which we call dynamical system segmentation. For a given task, we specify an optimal agent and compute an alphabet of behaviors representative of the task. We identify these behaviors in data from agent executions and compare their relative frequencies against those of the optimal agent using the Kullback-Leibler divergence. We validate this approach using a dataset of human subjects (n = 53) performing a dynamic task, and under this measure we find that individuals receiving assistance better embody the task. Moreover, we find that task embodiment is a better predictor of assistance than integrated mean-squared error.
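The embodiment measure lends itself to a short sketch: given behavior labels produced by a segmentation step (such as the paper's dynamical system segmentation), compare a subject's empirical behavior frequencies against the optimal agent's with the Kullback-Leibler divergence. The helper names and toy label sequences below are hypothetical, not the paper's implementation.

```python
# Minimal sketch (hypothetical labels): score task embodiment by comparing
# an agent's empirical distribution over behavior labels against an optimal
# agent's distribution via the Kullback-Leibler divergence.
# Lower divergence = the agent better embodies the task.
import numpy as np

def behavior_frequencies(labels, n_behaviors, eps=1e-9):
    """Relative frequency of each behavior label, smoothed to avoid log(0)."""
    counts = np.bincount(labels, minlength=n_behaviors).astype(float) + eps
    return counts / counts.sum()

def kl_divergence(p, q):
    """D_KL(p || q) in nats."""
    return float(np.sum(p * np.log(p / q)))

# Toy example with a 4-behavior alphabet, as if produced by a segmentation
# step over motion data:
optimal = behavior_frequencies(np.array([0, 1, 1, 2, 2, 2, 3]), 4)
subject = behavior_frequencies(np.array([0, 0, 0, 1, 2, 3, 3]), 4)
print(f"task-embodiment score (D_KL): {kl_divergence(subject, optimal):.3f}")
```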