This paper investigates the colorization problem: converting a grayscale image into a colorful version. This is a difficult problem that normally requires manual intervention to achieve artifact-free quality, such as human-labeled color scribbles on the grayscale target image or a careful selection of colorful reference images (e.g., images capturing the same scene as the grayscale target). Unlike previous methods, this paper aims at a high-quality, fully automatic colorization method. Under the assumption of a perfect patch-matching technique, an extremely large-scale reference database (containing sufficient color images) would be the most reliable solution to the colorization problem. In practice, however, patch-matching noise increases with the size of the reference database. Inspired by the recent success of deep learning techniques in modeling large-scale data, this paper reformulates the colorization problem so that deep learning techniques can be directly employed. To ensure artifact-free quality, a post-processing step based on joint bilateral filtering is proposed. We further develop an adaptive image clustering technique to incorporate global image information. Numerous experiments demonstrate that our method outperforms state-of-the-art algorithms in both quality and speed.
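The joint bilateral filtering mentioned above can be sketched as follows. This is a minimal, illustrative implementation, not the paper's exact post-processing pipeline: it smooths a predicted chrominance channel while using the grayscale input as the edge-preserving guide, so that colors do not bleed across luminance edges. The function name and parameter defaults are assumptions for illustration.

```python
import numpy as np

def joint_bilateral_filter(chroma, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth a chrominance channel while respecting edges in the
    grayscale guide image. Both inputs are 2-D float arrays in [0, 1]."""
    h, w = guide.shape
    pad = radius
    chroma_p = np.pad(chroma, pad, mode="edge")
    guide_p = np.pad(guide, pad, mode="edge")
    out = np.zeros_like(chroma)
    # Precompute the spatial Gaussian kernel over the window offsets.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            g_patch = guide_p[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            c_patch = chroma_p[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            # Range kernel is measured on the guide, not on the chroma:
            # this is what makes the filter "joint" (cross) bilateral.
            rng = np.exp(-(g_patch - guide[y, x])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = (weights * c_patch).sum() / weights.sum()
    return out
```

The key design point is that the range kernel comes from the guide image: chroma values are averaged only across pixels whose grayscale intensities are similar, which suppresses matching noise without blurring object boundaries.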
Although the pharmacological effects of fibroblast growth factor 21 (FGF21) are well-documented, uncertainty about its role in regulating excessive energy intake remains. Here, we show that FGF21 improves systemic insulin sensitivity by promoting the healthy expansion of subcutaneous adipose tissue (SAT). Serum FGF21 levels positively correlate with the SAT area in insulin-sensitive obese individuals. FGF21 knockout mice (FGF21KO) show less SAT mass and are more insulin-resistant when fed a high-fat diet. Replenishment of recombinant FGF21 to a level equivalent to that in obesity restores SAT mass and reverses insulin resistance in FGF21KO, but not in adipose-specific βklotho knockout mice. Moreover, transplantation of SAT from wild-type to FGF21KO mice improves insulin sensitivity in the recipients. Mechanistically, circulating FGF21 upregulates adiponectin in SAT, accompanied by an increase of M2 macrophage polarization. We propose that elevated levels of endogenous FGF21 in obesity serve as a defense mechanism to protect against systemic insulin resistance.
Pulmonary cancer is one of the leading causes of death worldwide. Computer-assisted diagnosis (CADx) systems have been designed for the detection of lung cancer. The Internet of Things (IoT) has enabled ubiquitous access to biomedical datasets and techniques; as a result, progress in CADx has been significant. Unlike conventional CADx, deep learning techniques offer automatic feature exploitation, as they can learn mid- and high-level image representations. We propose a computer-assisted decision support system for pulmonary cancer that uses a novel deep learning based model together with metastasis information obtained from a Medical Body Area Network (MBAN). The proposed model, DFCNet, is based on a deep fully convolutional neural network (FCNN) and classifies each detected pulmonary nodule into one of four lung cancer stages. The performance of the proposed approach is evaluated on different datasets with varying scan conditions, and the proposed classifier is compared with existing CNN techniques. The overall accuracy of the CNN baseline and DFCNet was 77.6% and 84.58%, respectively. Experimental results illustrate the effectiveness of the proposed method for the detection and classification of lung cancer nodules, and demonstrate its potential to help radiologists improve nodule detection accuracy and efficiency.
In this paper, we present a method for human action recognition from depth images and posture data using convolutional neural networks (CNN). Two input descriptors are used for action representation. The first input is a depth motion image (DMI) that accumulates consecutive depth images of a human action, whilst the second input is a proposed moving joints descriptor (MJD) which represents the motion of body joints over time. In order to maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs. The first channel is trained with depth motion images, the second channel is trained with both depth motion images and moving joints descriptors together, and the third channel is trained with moving joints descriptors only. The action predictions from the three CNN channels are fused together for the final action classification. We propose several fusion score operations to maximize the score of the correct action. The experiments show that fusing the outputs of all three channels gives better results than using one channel or fusing only two channels. Our proposed method was evaluated on three public datasets: MSRAction3D, UTD-MHAD, and the MAD dataset. The testing results indicate that the proposed approach outperforms most existing state-of-the-art methods, such as HON4D and Actionlet, on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared to existing RGB-D action datasets, our method achieves an accuracy of 91.86%.
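The multi-channel score fusion described above can be sketched as follows. This is a simplified illustration, not the paper's exact fusion operations: each CNN channel produces a per-class score vector, and a fusion rule (sum, max, or product here, chosen as plausible stand-ins) combines them before the final argmax.

```python
import numpy as np

def fuse_scores(score_lists, op="sum"):
    """Fuse per-class score vectors from several CNN channels into a
    single predicted class index. `op` selects the fusion rule."""
    scores = np.stack(score_lists)  # shape: (num_channels, num_classes)
    if op == "sum":
        fused = scores.sum(axis=0)      # average voting (up to scale)
    elif op == "max":
        fused = scores.max(axis=0)      # most confident channel wins per class
    elif op == "prod":
        fused = scores.prod(axis=0)     # channels must agree to score high
    else:
        raise ValueError(f"unknown fusion op: {op}")
    return int(np.argmax(fused))
```

A usage example: if all three channels rank the same class highest, every rule returns that class; the rules differ mainly when channels disagree, which is where trying several fusion operations pays off.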
Image decolorization is a fundamental problem for many real-world applications, including monochrome printing and photograph rendering. In this paper, we propose a new color-to-gray conversion method that is based on a region-based saliency model. First, we construct a parametric color-to-gray mapping function based on global color information as well as local contrast. Second, we propose a region-based saliency model that computes visual contrast among pixel regions. Third, we minimize the salience difference between the original color image and the output grayscale image in order to preserve contrast discrimination. To evaluate the performance of the proposed method in preserving contrast in complex scenarios, we have constructed a new decolorization data set with 22 images, each of which contains abundant colors and patterns. Extensive experimental evaluations on the existing and the new data sets show that the proposed method outperforms the state-of-the-art methods quantitatively and qualitatively.
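The idea of a parametric color-to-gray mapping that preserves contrast can be sketched as follows. This is a deliberately simplified version: instead of the paper's region-based saliency model, it searches a small grid of global channel weights and keeps the grayscale whose pairwise pixel contrasts best match those of the color image. The function name, grid step, and pair-sampling count are assumptions for illustration.

```python
import numpy as np
from itertools import product

def decolorize(img):
    """Map an (H, W, 3) float image in [0, 1] to grayscale by choosing
    channel weights (w_r + w_g + w_b = 1) that best preserve the
    contrast between randomly sampled pixel pairs."""
    pixels = img.reshape(-1, 3)
    rng = np.random.default_rng(0)
    # Sample pixel pairs and record their contrast in color space.
    idx = rng.integers(0, len(pixels), size=(500, 2))
    color_d = np.linalg.norm(pixels[idx[:, 0]] - pixels[idx[:, 1]], axis=1)
    best_w, best_err = None, np.inf
    grid = np.linspace(0.0, 1.0, 11)  # weight candidates in steps of 0.1
    for wr, wg in product(grid, repeat=2):
        wb = 1.0 - wr - wg
        if wb < -1e-9:
            continue  # weights must be a convex combination
        w = np.array([wr, wg, max(wb, 0.0)])
        gray = pixels @ w
        # Penalize grayscale contrasts that deviate from color contrasts.
        gray_d = np.abs(gray[idx[:, 0]] - gray[idx[:, 1]])
        err = np.mean((gray_d - color_d) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return (img @ best_w).clip(0.0, 1.0)
```

The design choice mirrors the abstract's third step: the mapping is selected by minimizing a contrast (here, pairwise-distance) discrepancy between the color input and the grayscale output, rather than by a fixed luminance formula.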