We propose a novel regularization algorithm for training deep neural networks when the training data is severely biased. Because a neural network efficiently learns the data distribution, it is likely to exploit the bias information when categorizing input data, which leads to poor performance at test time if the bias is in fact irrelevant to the categorization. In this paper, we formulate a regularization loss based on the mutual information between the feature embedding and the bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train it adversarially against the feature embedding network. At the end of training, the bias prediction network is unable to predict the bias, not because it is poorly trained, but because the feature embedding network has successfully unlearned the bias information. We also present quantitative and qualitative experimental results showing that our algorithm effectively removes the bias information from the feature embedding.
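The adversarial unlearning scheme described above can be illustrated with a minimal sketch (not the authors' implementation): a gradient-reversal layer stands in for the min-max game between the feature embedding network and the bias prediction network, and the network sizes, data shapes, and the weighting factor `lambd` are assumptions made for the example.

```python
# Minimal sketch of adversarial bias unlearning via gradient reversal (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DebiasedModel(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10, n_bias=2):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
        self.class_head = nn.Linear(feat_dim, n_classes)  # predicts the target label
        self.bias_head = nn.Linear(feat_dim, n_bias)      # tries to predict the bias

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        logits = self.class_head(z)
        # The bias head receives reversed gradients, so the feature extractor is
        # pushed to remove bias information (a proxy for minimizing the mutual
        # information between the embedding and the bias).
        bias_logits = self.bias_head(grad_reverse(z, lambd))
        return logits, bias_logits

model = DebiasedModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 1, 28, 28)    # dummy images
y = torch.randint(0, 10, (32,))   # target labels
b = torch.randint(0, 2, (32,))    # bias labels (e.g., color or texture)

logits, bias_logits = model(x)
loss = F.cross_entropy(logits, y) + F.cross_entropy(bias_logits, b)
opt.zero_grad()
loss.backward()
opt.step()
```

The same effect can also be obtained by alternating updates of the bias predictor and the feature extractor, as in the iterative algorithm described in the abstract; the reversal layer simply folds both steps into a single backward pass.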
Training of deep neural networks depends heavily on the data distribution. In particular, networks suffer from class imbalance: a trained network recognizes the frequent classes better than the infrequent ones. To resolve this problem, existing approaches typically propose novel loss functions to obtain better feature embeddings. In this paper, we argue that drawing a better decision boundary is as important as learning better features. Motivated by empirical observations, we investigate how class imbalance affects the decision boundary and deteriorates performance, and we examine the discrepancy between the feature distributions at training and test time. Based on this analysis, we propose a novel yet simple method for class-imbalanced learning. Despite its simplicity, our method shows outstanding performance. In particular, the experimental results show that the network can be significantly improved simply by scaling the classifier weight vectors, even without any additional training.
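The post-hoc boundary adjustment can be sketched as follows. This is an illustrative sketch rather than the paper's exact procedure: the exponent `gamma`, the helper name, and the use of per-class training counts as the scaling signal are assumptions made for the example.

```python
# Minimal sketch of post-hoc classifier weight-vector scaling for class-imbalanced
# learning: the decision boundary is shifted toward infrequent classes without any
# additional training (assumes a trained linear head and known per-class counts).
import torch
import torch.nn as nn

def rescale_classifier_weights(fc: nn.Linear, class_counts, gamma: float = 0.5):
    """Scale each class's weight vector by (n_max / n_k) ** gamma in place."""
    counts = torch.as_tensor(class_counts, dtype=torch.float32)
    scale = (counts.max() / counts) ** gamma   # larger scale for rarer classes
    with torch.no_grad():
        fc.weight.mul_(scale.unsqueeze(1))     # fc.weight shape: (n_classes, feat_dim)
    return fc

# Usage: given a trained model whose last layer is `model.fc`,
# rescale_classifier_weights(model.fc, class_counts=[5000, 500, 50])
```

Enlarging the weight vectors of rare classes increases their logits for any fixed feature, which moves the decision boundary away from those classes and counteracts the imbalance-induced shift.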
Nonsense-mediated mRNA decay (NMD) typifies an mRNA surveillance pathway. Because NMD requires a translation event to recognize a premature termination codon on mRNAs, truncated misfolded polypeptides (NMD-polypeptides) can be generated from NMD substrates as byproducts. Here, we show that when the ubiquitin–proteasome system is overwhelmed, various misfolded polypeptides, including NMD-polypeptides, accumulate in the aggresome: a perinuclear nonmembranous compartment that is eventually cleared by autophagy. Hyperphosphorylation of the key NMD factor UPF1 is required for selective targeting of the misfolded polypeptide aggregates toward the aggresome via the CTIF–eEF1A1–DCTN1 complex, the aggresome-targeting cellular machinery. Visualization at the single-particle level reveals that UPF1 increases the frequency and fidelity of movement of CTIF aggregates toward the aggresome. Furthermore, apoptosis induced by proteotoxic stresses is suppressed by UPF1 hyperphosphorylation. Altogether, our data provide evidence that UPF1 functions in protein surveillance as well as mRNA quality control.
Deep learning is considered a breakthrough in computer vision, as it has broken most of the records on recognition tasks. In this paper, we apply deep learning techniques to recognizing facial expressions that represent human emotions. Our facial expression recognition system proceeds as follows. First, a face is detected in the input image using Haar-like features. Second, a deep network recognizes the facial expression from the detected face; for this step, two kinds of deep networks can be used, a deep neural network and a convolutional neural network. We compared the two types of networks experimentally, and the convolutional neural network outperformed the deep neural network.
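A minimal sketch of the two-stage pipeline (Haar-cascade face detection followed by a convolutional network) is given below. The CNN architecture, the 48x48 input size, and the seven emotion classes are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch: Haar-cascade face detection (OpenCV) + a small CNN emotion classifier.
import cv2
import torch
import torch.nn as nn

# Stage 1: detect faces with OpenCV's pretrained Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_image):
    """Return bounding boxes (x, y, w, h) of detected faces in a grayscale image."""
    return cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)

# Stage 2: classify each cropped face with a small CNN (illustrative architecture).
class EmotionCNN(nn.Module):
    def __init__(self, n_emotions=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 12 * 12, n_emotions))

    def forward(self, x):  # x: (batch, 1, 48, 48) grayscale face crops
        return self.net(x)

def classify_faces(gray_image, model):
    """Detect faces, crop and resize them, and return a predicted class per face."""
    preds = []
    with torch.no_grad():
        for (x, y, w, h) in detect_faces(gray_image):
            crop = cv2.resize(gray_image[y:y + h, x:x + w], (48, 48))
            tensor = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
            preds.append(model(tensor).argmax(dim=1).item())
    return preds
```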
In computer vision, monocular depth estimation is the problem of obtaining a high-quality depth map from a single two-dimensional image. This map provides information on three-dimensional scene geometry, which is necessary for various applications in academia and industry, such as robotics and autonomous driving. Recent studies based on convolutional neural networks have achieved impressive results for this task. However, most previous studies did not consider the relationships between neighboring pixels in a local area of the scene. To overcome this drawback, we propose a patch-wise attention method that focuses on each local area. After extracting patches from an input feature map, our module generates an attention map for each local patch, using two attention modules per patch along the channel and spatial dimensions. Subsequently, the attention maps are returned to their initial positions and merged into one attention feature. Our method is straightforward but effective. Experimental results on two challenging datasets, KITTI and NYU Depth V2, demonstrate that the proposed method achieves strong performance, and it outperforms other state-of-the-art methods on the KITTI depth estimation benchmark.
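The following is a minimal sketch of such a patch-wise attention module, assuming non-overlapping patches and simple channel and spatial gating; the patch size, reduction ratio, and the exact gating layers are assumptions for illustration and not the paper's exact design.

```python
# Minimal sketch of patch-wise attention: split the feature map into non-overlapping
# patches, apply channel and spatial attention to each patch, and place the attended
# patches back in their original positions.
import torch
import torch.nn as nn

class PatchWiseAttention(nn.Module):
    def __init__(self, channels, patch_size=8, reduction=4):
        super().__init__()
        self.p = patch_size
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):  # x: (B, C, H, W) with H and W divisible by the patch size
        b, c, h, w = x.shape
        p = self.p
        # Split the feature map into (B * num_patches, C, p, p) patches.
        patches = (x.reshape(b, c, h // p, p, w // p, p)
                     .permute(0, 2, 4, 1, 3, 5)
                     .reshape(-1, c, p, p))
        # Channel attention per patch: squeeze spatially, gate each channel.
        ca = self.channel_mlp(patches.mean(dim=(2, 3))).view(-1, c, 1, 1)
        patches = patches * ca
        # Spatial attention per patch: squeeze channels, gate each location.
        sa = self.spatial_conv(patches.mean(dim=1, keepdim=True))
        patches = patches * sa
        # Return the attended patches to their original positions.
        out = (patches.reshape(b, h // p, w // p, c, p, p)
                      .permute(0, 3, 1, 4, 2, 5)
                      .reshape(b, c, h, w))
        return out

# Usage: attn = PatchWiseAttention(channels=64); y = attn(torch.randn(2, 64, 32, 32))
```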