A total of 144 weaned piglets were used to evaluate the effects of essential oil (EO) supplementation of a low-energy diet on performance, apparent nutrient digestibility, small intestinal morphology, intestinal microflora, immune properties and antioxidant activities in weaned pigs. Pigs received a low-energy diet (negative control, NC, digestible energy = 3250 kcal/kg), NC plus 0.025% EO, or a positive control diet (PC, digestible energy = 3400 kcal/kg) for 28 days. Growth performance was similar between the EO and PC groups. However, EO supplementation increased (P < 0.05) average daily gain and the apparent digestibility of dry matter, crude protein and energy compared with pigs fed the NC diet. Greater (P < 0.05) villus height and lower (P < 0.05) counts of Escherichia coli and total anaerobes in the rectum were observed in the EO group compared with the NC and PC groups. Pigs fed the EO diet had higher (P < 0.05) concentrations of albumin, immunoglobulin A (IgA), IgG and total antioxidant capacity, and a lower fecal score, than pigs fed the PC and NC diets. Overall, this study indicates that supplementing a low-energy pig diet with EO is beneficial and yields performance similar to that of the normal-energy (PC) diet.
Pulmonary cancer is one of the leading causes of death worldwide. Computer-assisted diagnosis (CADx) systems have been designed to aid the detection of lung cancer. The Internet of Things (IoT) has enabled ubiquitous access to biomedical datasets and techniques; as a result, progress in CADx has been significant. Unlike conventional CADx, deep learning techniques have the fundamental advantage of automatic feature extraction, as they can learn mid- and high-level image representations. We propose a computer-assisted decision support system for pulmonary cancer that uses a novel deep learning based model together with metastasis information obtained from a Medical Body Area Network (MBAN). The proposed model, DFCNet, is based on a deep fully convolutional neural network (FCNN) and classifies each detected pulmonary nodule into one of four lung cancer stages. The performance of the proposed work is evaluated on different datasets with varying scan conditions, and the proposed classifier is compared with existing CNN techniques. The overall accuracy of CNN and DFCNet was 77.6% and 84.58%, respectively. Experimental results illustrate the effectiveness of the proposed method for the detection and classification of lung cancer nodules, and demonstrate its potential to help radiologists improve nodule detection accuracy and efficiency.
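The fully convolutional pipeline the abstract describes (convolutional features pooled and mapped to the four cancer stages) can be sketched in miniature as follows. This is a toy NumPy illustration with random weights and a hypothetical nodule patch, not DFCNet itself; the filter counts, patch size, and weight shapes are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Naive valid 2-D convolution: x (H, W), kernels (K, kh, kw) -> (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def fcn_forward(patch, kernels, class_weights):
    """Minimal fully convolutional forward pass: conv -> ReLU ->
    global average pooling -> linear -> softmax over 4 stages."""
    feat = np.maximum(conv2d(patch, kernels), 0)   # (K, H', W'), ReLU
    pooled = feat.mean(axis=(1, 2))                # global average pool -> (K,)
    logits = class_weights @ pooled                # (4,) stage logits
    exp = np.exp(logits - logits.max())            # numerically stable softmax
    return exp / exp.sum()                         # stage probabilities

patch = rng.standard_normal((8, 8))          # hypothetical nodule patch
kernels = rng.standard_normal((6, 3, 3))     # 6 random 3x3 filters (untrained)
class_weights = rng.standard_normal((4, 6))  # maps 6 features to 4 stages
probs = fcn_forward(patch, kernels, class_weights)
print(probs.argmax())  # index of the predicted stage
```

A real FCNN would stack many such convolutional layers and learn the weights by backpropagation; the sketch only shows the forward data flow from image patch to stage probabilities.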
In this paper, we present a method for human action recognition from depth images and posture data using convolutional neural networks (CNNs). Two input descriptors are used for action representation. The first is a depth motion image (DMI), which accumulates consecutive depth images of a human action; the second is a proposed moving joints descriptor (MJD), which represents the motion of body joints over time. To maximize feature extraction for accurate action classification, three CNN channels are trained with different inputs: the first with depth motion images, the second with both depth motion images and moving joints descriptors, and the third with moving joints descriptors only. The action predictions from the three CNN channels are fused for the final action classification, and we propose several fusion score operations to maximize the score of the correct action. Experiments show that fusing the outputs of all three channels yields better results than using one channel or fusing only two. Our proposed method was evaluated on three public datasets: MSRAction3D, UTD-MHAD, and the MAD dataset. The results indicate that the proposed approach outperforms most existing state-of-the-art methods, such as HON4D and Actionlet, on MSRAction3D. Although the MAD dataset contains a large number of actions (35) compared with existing RGB-D action datasets, our method achieves 91.86% accuracy.
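The score-fusion step the abstract describes can be sketched as a simple late fusion of per-channel class probabilities. This is a minimal illustration under assumed conventions (softmax outputs per channel; mean/max/product as the fusion operations), not the paper's actual fusion rules:

```python
import numpy as np

def fuse_scores(scores, method="mean"):
    """Fuse per-channel class-probability vectors via a score-level rule.

    scores: list of 1-D arrays, one softmax output per CNN channel.
    method: 'mean', 'max', or 'product' -- typical late-fusion operations.
    Returns the index of the predicted action class.
    """
    stacked = np.stack(scores)            # shape: (channels, num_classes)
    if method == "mean":
        fused = stacked.mean(axis=0)
    elif method == "max":
        fused = stacked.max(axis=0)
    elif method == "product":
        fused = stacked.prod(axis=0)
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return int(np.argmax(fused))

# Hypothetical softmax outputs from the three channels (DMI, DMI+MJD, MJD)
ch1 = np.array([0.6, 0.3, 0.1])
ch2 = np.array([0.5, 0.4, 0.1])
ch3 = np.array([0.2, 0.7, 0.1])
print(fuse_scores([ch1, ch2, ch3], "mean"))  # -> 1 (class 1 wins on average)
```

Note how fusion can overturn a single channel: channels 1 and 2 individually favor class 0, but averaging with channel 3's confident vote yields class 1.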
Given a financial time series such as the S&P 500, or any historical stock market data, how can we extract useful information from recent transaction data to predict the ups and downs at the next moment? Recent work on this issue shows initial evidence that machine learning techniques can identify (non-linear) dependencies in stock market price sequences. However, due to the high volatility and non-stationary nature of the stock market, forecasting the trend of a financial time series remains a big challenge. In this paper, we introduce a new method that simplifies noise-filled financial time series via sequence reconstruction by leveraging motifs (frequent patterns), and then uses a convolutional neural network to capture the spatial structure of the time series. The experimental results show the efficiency of our proposed method in feature learning, with a 4%-7% accuracy improvement in stock trend prediction over traditional signal processing methods and a frequent trading patterns modeling approach with deep learning. Index terms: trend prediction, convolutional neural network, financial time series, motif extraction.
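The motif-extraction idea can be sketched by discretizing a price sequence into up/down moves and finding its most frequent subsequence. This is a simplified illustration only; the window length, the up/down discretization (ties counted as down), and the toy data are all assumptions, not the paper's method:

```python
from collections import Counter

def extract_motif(series, window=3):
    """Find the most frequent up/down pattern (motif) of a given length.

    The series is discretized into a string of 'u' (up) / 'd' (down)
    moves between consecutive points, then the most common length-
    `window` substring and its occurrence count are returned.
    """
    moves = "".join("u" if b > a else "d" for a, b in zip(series, series[1:]))
    counts = Counter(moves[i:i + window] for i in range(len(moves) - window + 1))
    motif, freq = counts.most_common(1)[0]
    return motif, freq

# Toy price sequence dominated by a repeated up-up-down pattern
prices = [1, 2, 3, 2, 3, 4, 3, 4, 5, 4]
print(extract_motif(prices))  # -> ('uud', 3)
```

In the paper's pipeline such frequent patterns would guide sequence reconstruction (keeping motif-bearing segments, discarding noise) before the reconstructed series is fed to the CNN.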