Soft robotics has yielded numerous examples of soft grippers that utilize compliance to achieve impressive grasping performance with great simplicity, adaptability, and robustness. Designing soft grippers with substantial grasping strength while remaining compliant and gentle is one of the most important challenges in this field. In this paper, we present a lightweight, vacuum-driven soft robotic gripper made of an origami "magic-ball" and a flexible thin membrane. We also describe the design and fabrication method to rapidly manufacture the gripper with different combinations of low-cost materials for diverse applications. Grasping experiments demonstrate that our gripper can lift a large variety of objects, including delicate foods, heavy bottles, and other miscellaneous items. The grasp force on 3D-printed objects is also characterized through mechanical load tests. The results reveal that our soft gripper can produce significant grasp force on various shapes using negative pneumatic pressure (vacuum). This new gripper holds promise for many practical applications that require safe, strong, and simple grasping.
The COVID-19 pandemic created significant interest and demand for infection detection and monitoring solutions. In this paper, we propose a machine learning method to quickly detect COVID-19 from audio recordings made on consumer devices. The approach combines signal processing and noise-removal methods with an ensemble of fine-tuned deep learning networks and enables COVID-19 detection from cough sounds. We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection. The application showed robust performance on both openly sourced datasets and the noisy data collected from end users during beta testing.
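The abstract names a front end of signal processing and noise removal ahead of the classifier but gives no implementation details. As a hedged illustration only (not the authors' pipeline), the sketch below computes a Hann-windowed log-magnitude spectrogram and applies simple spectral subtraction, a common noise-removal step, using only NumPy; all function names and parameter values are assumptions.

```python
import numpy as np

def log_spectrogram(signal, frame_len=512, hop=256):
    """Short-time log-magnitude spectrogram via a Hann-windowed FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)  # shape: (n_frames, frame_len // 2 + 1)

def spectral_subtraction(spec, noise_frames=5, floor=0.0):
    """Subtract an average noise profile estimated from the first frames,
    clipping at `floor` so no bin goes negative."""
    noise = spec[:noise_frames].mean(axis=0)
    return np.maximum(spec - noise, floor)

# Toy usage: a noisy sine burst standing in for a one-second cough recording.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=sr)
features = spectral_subtraction(log_spectrogram(audio))
```

In a real pipeline, features like these would feed the ensemble of fine-tuned networks; the subtraction here merely demonstrates the kind of noise suppression the abstract alludes to.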
Interpretation of non-coding genomic variants is one of the most important challenges in human genetics. Machine learning methods have emerged recently as a powerful tool to solve this problem. State-of-the-art approaches allow prediction of transcriptional and epigenetic effects caused by non-coding mutations. However, these approaches require specific experimental data for training and cannot generalize across cell types where required features were not experimentally measured. We show here that available epigenetic characteristics of human cell types are extremely sparse, limiting those approaches that rely on specific epigenetic input. We propose a new neural network architecture, DeepCT, which can learn complex interconnections of epigenetic features and infer unmeasured data from any available input. Furthermore, we show that DeepCT can learn cell type-specific properties, build biologically meaningful vector representations of cell types, and utilize these representations to generate cell type-specific predictions of the effects of non-coding variations in the human genome.
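The abstract describes learned cell-type vector representations combined with sequence-derived features, but no architectural specifics. The toy forward pass below is a hypothetical illustration of that core idea: a shared encoder plus a per-cell-type embedding, concatenated before the output head. Every dimension, name, and weight here is an assumption for illustration, not DeepCT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
n_cell_types, d_embed = 100, 32            # learned cell-type vectors
d_seq, d_hidden, n_features = 128, 64, 10  # sequence encoding -> epigenetic features

# "Trainable" parameters, randomly initialized for this sketch.
cell_embeddings = rng.normal(size=(n_cell_types, d_embed))
W_seq = rng.normal(size=(d_seq, d_hidden)) * 0.1
W_out = rng.normal(size=(d_hidden + d_embed, n_features)) * 0.1

def predict(seq_encoding, cell_type):
    """Combine a shared sequence representation with a cell-type embedding
    to produce epigenetic-feature predictions for that (sequence, cell type) pair."""
    h = np.tanh(seq_encoding @ W_seq)               # shared sequence encoder
    z = np.concatenate([h, cell_embeddings[cell_type]])
    return z @ W_out                                # per-feature predictions

seq = rng.normal(size=d_seq)
preds = predict(seq, cell_type=7)  # shape (n_features,)
```

The point of the design is that the same sequence yields different predictions in different cell types purely through the embedding, which is what makes the learned cell-type vectors biologically interpretable.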
The spatially-varying field of the human visual system has recently received a resurgence of interest with the development of virtual reality (VR) and neural networks. The computational demands of the high-resolution rendering desired for VR can be offset by savings in the periphery [16], while neural networks trained with foveated input have shown perceptual gains in i.i.d. and o.o.d. generalization [27, 6]. In this paper, we present a technique that exploits the CUDA GPU architecture to efficiently generate Gaussian-based foveated images at high definition (1920 px × 1080 px) in real time (165 Hz), with several orders of magnitude more pooling regions than previous Gaussian-based foveation algorithms [10, 27], producing a smoothly foveated image that requires no further blending or stitching and that can be fitted to any contrast sensitivity function. The approach can be adapted from Gaussian blurring to any eccentricity-dependent image processing, and our algorithm meets the demand for experiments evaluating the role of spatially-varying processing across biological and artificial agents, since foveation can be added easily on top of existing systems rather than forcing their redesign ("emulated foveated renderer" [24]). Altogether, this paper demonstrates how a GPU with a CUDA block-wise architecture can be employed for radially-variant rendering, with opportunities for more complex post-processing to ensure a metameric foveation scheme [35]. Our code can be found here [21].
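The paper's implementation targets CUDA at 1080p and 165 Hz; as an illustration of the underlying idea only, the sketch below shows eccentricity-dependent Gaussian blurring in NumPy: a small stack of blur levels is blended per pixel so blur grows with distance from fixation. The function names, blur levels, and linear blending scheme are assumptions, not the paper's algorithm.

```python
import numpy as np

def blur1d(a, k):
    return np.convolve(a, k, mode="same")

def gaussian_blur(img, sigma):
    """Separable Gaussian blur applied along rows then columns."""
    if sigma == 0:
        return img
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(blur1d, 0, img, k)
    return np.apply_along_axis(blur1d, 1, out, k)

def foveate(img, fovea, sigmas=(0, 1, 2, 4)):
    """Blend a stack of blur levels so blur grows with eccentricity,
    i.e. distance from the fixation point `fovea` (row, col)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - fovea[0], xx - fovea[1])
    ecc = ecc / ecc.max()                       # 0 at fixation, 1 at farthest point
    levels = np.stack([gaussian_blur(img, s) for s in sigmas])
    f = ecc * (len(sigmas) - 1)                 # fractional blur level per pixel
    lo = np.floor(f).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    t = f - lo
    return (1 - t) * levels[lo, yy, xx] + t * levels[hi, yy, xx]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = foveate(img, fovea=(32, 32))
```

A GPU version would compute the per-block kernel on the fly instead of precomputing a full blur stack, which is what makes the paper's block-wise CUDA formulation fast enough for real-time use.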