We propose an end-to-end deep learning architecture that produces a 3D triangular mesh from a single color image. Limited by the nature of deep neural networks, previous methods usually represent a 3D shape as a volume or a point cloud, and it is non-trivial to convert these representations to the more ready-to-use mesh model. Unlike existing methods, our network represents the 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to keep the whole deformation procedure stable, and define various mesh-related losses that capture properties at different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy than the state-of-the-art.
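As a sketch of the core idea, the block below shows one hypothetical graph-convolution deformation step: each vertex mixes its own coordinates and pooled perceptual features with those of its mesh neighbors, then predicts a coordinate offset. The class, layer sizes, and the row-normalized adjacency matrix `adj` are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of one graph-convolution deformation step (assumed
# names and shapes; not the authors' code). `adj` is a fixed, row-normalized
# (V x V) vertex adjacency matrix of the mesh.
import torch
import torch.nn as nn

class GraphConvDeform(nn.Module):
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.w_self = nn.Linear(feat_dim + 3, hidden_dim)   # vertex's own signal
        self.w_neigh = nn.Linear(feat_dim + 3, hidden_dim)  # aggregated neighbors
        self.to_offset = nn.Linear(hidden_dim, 3)           # predicted coordinate offset

    def forward(self, verts, perceptual, adj):
        # verts: (V, 3) current mesh coordinates; perceptual: (V, F) image features
        x = torch.cat([verts, perceptual], dim=-1)
        h = torch.relu(self.w_self(x) + self.w_neigh(adj @ x))
        return verts + self.to_offset(h)  # deform the ellipsoid toward the target shape
```

Stacking several such steps in a coarse-to-fine cascade, with the mesh upsampled between stages, matches the progressive deformation strategy the abstract describes.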
Existing methods usually utilize pre-defined criteria, such as the ℓp-norm, to prune unimportant filters. There are two major limitations in these methods. First, the relations among filters are largely ignored: filters usually work jointly, in a collaborative way, to make an accurate prediction, so similar filters have equivalent effects on the network prediction and the redundant ones can be further pruned. Second, the pruning criterion remains unchanged during training. As the network is updated at each iteration, the filter distribution also changes continuously, so the pruning criterion should be adaptively switched as well.

In this paper, we propose Meta Filter Pruning (MFP) to solve the above problems. First, as a complement to the existing ℓp-norm criterion, we introduce a new pruning criterion that considers filter relations via filter distances. Additionally, we build a meta pruning framework for filter pruning, so that our method can adaptively select the most appropriate pruning criterion as the filter distribution changes. Experiments validate our approach on two image classification benchmarks. Notably, on ILSVRC-2012, our MFP reduces more than 50% of FLOPs on ResNet-50 with only a 0.44% top-5 accuracy loss.
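To make the filter-distance idea concrete, here is a minimal, hypothetical sketch of a distance-based redundancy criterion: a filter whose nearest neighbor lies very close to it carries near-duplicate information and becomes a pruning candidate. The function name and the choice of Euclidean distance are assumptions; the paper's actual MFP framework additionally selects among criteria adaptively during training.

```python
# Illustrative sketch of distance-based redundancy pruning (assumed names;
# not the paper's exact MFP procedure).
import torch

def redundant_filter_indices(weight, k):
    """weight: (out_channels, in_channels, kH, kW) conv weight tensor;
    returns indices of the k filters closest to some other filter."""
    flat = weight.flatten(start_dim=1)            # one row per filter
    dist = torch.cdist(flat, flat, p=2)           # pairwise Euclidean distances
    dist.fill_diagonal_(float('inf'))             # ignore self-distance
    nearest = dist.min(dim=1).values              # distance to the closest other filter
    return torch.argsort(nearest)[:k]             # smallest distance = most redundant
```

Contrast this with an ℓp-norm criterion, which ranks each filter in isolation by its own magnitude and thus cannot detect two near-identical filters.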
Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.
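As a rough illustration of label propagation in an embedded space, the sketch below implements a standard single-graph propagation scheme (in the style of Zhou et al.) rather than the paper's heterogeneous multi-view hypergraph formulation; the kNN graph construction and all parameter names are assumptions.

```python
# Minimal, illustrative label propagation on a kNN affinity graph
# (a simplification of the paper's hypergraph method; assumed names).
import numpy as np

def propagate_labels(X, Y0, n_iters=20, alpha=0.99, k=10):
    """X: (n, d) embedded samples; Y0: (n, c) one-hot seeds (zero rows for unlabeled)."""
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)     # pairwise squared distances
    W = np.exp(-d2 / d2.mean())                       # Gaussian affinities
    np.fill_diagonal(W, 0)
    # keep only the k largest affinities per row to sparsify the graph
    drop = np.argsort(W, axis=1)[:, :-k]
    np.put_along_axis(W, drop, 0.0, axis=1)
    W = (W + W.T) / 2                                 # symmetrize
    D = np.diag(1.0 / np.sqrt(W.sum(1) + 1e-12))
    S = D @ W @ D                                     # normalized affinity matrix
    Y = Y0.copy()
    for _ in range(n_iters):
        Y = alpha * S @ Y + (1 - alpha) * Y0          # diffuse labels, anchored to seeds
    return Y.argmax(1)
```

In the paper's setting the seeds come from class prototypes embedded alongside the target samples, and the single graph above is replaced by a hypergraph that fuses several semantic views.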
Background
MicroRNAs (miRNAs) can function as either oncogenes or tumor suppressor genes via regulation of cell proliferation and/or apoptosis. MiR-221 and miR-222 were found to induce cell growth and cell cycle progression via direct targeting of p27 and p57 in various human malignancies. However, the roles of miR-221 and miR-222 have not been reported in human gastric cancer. In this study, we examined the impact of miR-221 and miR-222 on human gastric cancer cells, and identified target genes for miR-221 and miR-222 that might mediate their biology.

Methods
The human gastric cancer cell line SGC7901 was transfected with AS-miR-221/222 or transduced with pMSCV-miR-221/222 to knock down or restore expression of miR-221 and miR-222, respectively. The effects of miR-221 and miR-222 were then assessed by cell viability, cell cycle analysis, apoptosis, transwell, and clonogenic assays. Potential target genes were identified by Western blot and luciferase reporter assay.

Results
Upregulation of miR-221 and miR-222 induced the malignant phenotype of SGC7901 cells, whereas knockdown of miR-221 and miR-222 reversed this phenotype via induction of PTEN expression. In addition, knockdown of miR-221 and miR-222 inhibited cell growth and invasion and increased the radiosensitivity of SGC7901 cells. Notably, the seed sequence of miR-221 and miR-222 matched the 3'UTR of PTEN, and introducing a PTEN cDNA without the 3'UTR into SGC7901 cells abrogated the miR-221/222-induced malignant phenotype. A PTEN 3'UTR luciferase reporter assay confirmed PTEN as a direct target of miR-221 and miR-222.

Conclusion
These results demonstrate that miR-221 and miR-222 regulate radiosensitivity, cell growth, and invasion of SGC7901 cells, possibly via direct modulation of PTEN expression. Our study suggests that inhibition of miR-221 and miR-222 might be a novel therapeutic strategy for human gastric cancer.