Convolutional Neural Networks (CNNs) have become indispensable for solving machine learning tasks in speech recognition, computer vision, and other areas that involve high-dimensional data. A CNN filters the input features using a network containing spatial convolution operators with compactly supported stencils. In practice, the input data and the hidden features consist of a large number of channels, which in most CNNs are fully coupled by the convolution operators. This coupling leads to immense computational cost in the training and prediction phases. In this paper, we introduce LeanConvNets, which are derived by sparsifying the fully-coupled operators in existing CNNs. Our goal is to improve the efficiency of CNNs by reducing the number of weights, floating-point operations, and latency times, with minimal loss of accuracy. Our lean convolution operators involve tuning parameters that control the trade-off between the network's accuracy and computational cost. These convolutions can be used in a wide range of existing networks, and we exemplify their use in residual networks (ResNets) and U-Nets. Using a range of benchmark problems from image classification and semantic segmentation, we demonstrate that the resulting LeanConvNets' accuracy is close to that of state-of-the-art networks while being computationally less expensive. In our tests, the lean versions of ResNet and U-Net slightly outperform comparable reduced architectures such as MobileNets and ShuffleNets.
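The abstract does not specify the exact form of the lean operators, but one common way to see why sparsifying channel coupling saves weights is to count the parameters of a grouped 3x3 convolution, as used in reduced architectures like MobileNets and ShuffleNets. The sketch below is illustrative only; the function name `conv_params` and the `groups` parameterization are our own, not the paper's notation.

```python
def conv_params(c_in, c_out, k=3, groups=1):
    """Weight count of a k x k convolution mapping c_in to c_out channels.

    With `groups` channel groups, each output filter only sees
    c_in // groups input channels, so the channel coupling is sparsified.
    """
    assert c_in % groups == 0
    return c_out * (c_in // groups) * k * k

full      = conv_params(256, 256)              # fully coupled: 589,824 weights
grouped   = conv_params(256, 256, groups=32)   # 32 groups:      18,432 weights
depthwise = conv_params(256, 256, groups=256)  # depthwise:       2,304 weights
```

Increasing `groups` plays the role of a tuning knob: more groups means fewer weights and floating-point operations, at the cost of weaker channel mixing and, potentially, accuracy.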
Recent advancements in machine learning techniques for protein structure prediction motivate better results in its inverse problem, protein design. In this work we introduce a new graph mimetic neural network, MimNet, and show that it is possible to build a reversible architecture that solves the structure and design problems in tandem, allowing protein backbone design to improve as the structure is better estimated. Using the ProteinNet data set, we show that state-of-the-art results in protein design can be met and even improved upon, given recent architectures for protein folding.
Recently, the concept of unsupervised learning for superpixel segmentation via CNNs has been studied. Essentially, such methods generate superpixels by applying a convolutional neural network (CNN) to a single image, where the CNN is trained without any labels or further information. Such approaches therefore rely on the incorporation of priors, typically by designing an objective function that guides the solution towards a meaningful superpixel segmentation. In this paper we propose three key elements to improve the efficacy of such networks: (i) the similarity of the soft superpixelated image to the input image, (ii) the enhancement and consideration of object edges and boundaries, and (iii) a modified architecture based on atrous convolution, which allows for a wider field of view, functioning as a multi-scale component in our network. Experimenting with the BSDS500 dataset, we find evidence for the significance of our proposal, both qualitatively and quantitatively.
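The abstract's third element, atrous (dilated) convolution, widens the field of view by spacing the kernel taps apart without adding weights. The 1D sketch below is our own minimal illustration of that mechanism, not the paper's architecture; the function name and the valid-mode boundary handling are assumptions made for brevity.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1D cross-correlation with kernel taps spaced `dilation` apart.

    A k-tap kernel covers an effective receptive field of
    (k - 1) * dilation + 1 input samples, so larger dilation rates
    see a wider context with the same number of weights.
    """
    k = len(w)
    span = (k - 1) * dilation + 1  # effective receptive field
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

# A 3-tap kernel with dilation 2 spans 5 samples instead of 3.
out = dilated_conv1d(np.arange(10.0), [1.0, 1.0, 1.0], dilation=2)
```

Stacking such layers with increasing dilation rates is the standard way to obtain a multi-scale view of the image while keeping the parameter count fixed.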