The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth. This explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, images and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information consists of details that lie beyond the limits of perceptual significance (i.e., psychovisual redundancy). Useful information is the part that is neither redundant nor irrelevant. Decompressed images are usually viewed by humans, so their fidelity is subject to the capabilities and limitations of the Human Visual System.
This paper considers a novel image compression technique called Hybrid Predictive Wavelet coding. The proposed technique combines the properties of predictive coding and discrete wavelet coding. In contrast to JPEG2000, the image data values are pre-processed using predictive coding to remove inter-pixel redundancy. The error values, which are the differences between the original and the predicted values, are then transformed using the discrete wavelet transform. A nonlinear neural network predictor is utilised in the predictive coding stage. The simulation results indicated that the proposed technique can achieve good compressed images at high decomposition levels in comparison to JPEG2000.

The authors would like to thank the reviewers for their constructive comments, which have certainly improved the quality of the paper significantly.

Reviewer #1: The paper proposes an image compression system based on neural networks. I found the paper well written, technically sound and with a clear focus. The authors have provided a good study of related techniques, and the proposed approach is well motivated. I have two minor comments to improve the experimental part of the paper further.

Response: The authors would like to thank the reviewer for his/her encouraging comment.

Comment: The first comment concerns the quantitative results. I suggest that the authors study the statistical significance of the results as compared to other approaches. It would be helpful to test whether the difference between the proposed approach and comparable approaches is statistically significant.

Response: To check the statistical significance between the proposed HPNNWA and JPEG2000 techniques, the authors performed a paired t-test based on the absolute values of the error image. The calculated t-values showed that the proposed technique outperforms JPEG2000 at the α = 5% significance level for a one-tailed test at decomposition levels 4, 5 and 6. The t-test indicated that there is no significant difference between the two image compression techniques at decomposition level 3. As can be noted from Figure 8, at decomposition levels 1, 2 and 3 the visual quality of the reconstructed images for both the proposed HPNNWA and JPEG2000 is very good, and it is not easy to notice the difference between the original image and the reconstructed image for either system.

Comment: The comparison is mainly based on the PSNR, which is the most widely used approach. Are there other measures that can be used to emphasize further the advantages of the proposed approach?

Response: For all the experiments, the authors have added the mean absolute value of the error as another quality measure.

Comment: Finally, the conclusion could be improved further by adding more analysis and discussion devoted to explaining the main problems that could be related to the application of the proposed approach.

Response: The conclusion section was expanded and the problems of applying the proposed system were mentioned, as requested by the reviewer.

Reviewer #2: The quality of written presentation demonstrates a good standard ...
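The encoder structure described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a simple causal average predictor stands in for the paper's neural network predictor, and PyWavelets supplies the discrete wavelet transform of the prediction residual. The predictor, the db4 wavelet and the level-4 decomposition are assumptions chosen only for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def predict_image(img):
    """Causal predictor: estimate each pixel from its left, top and
    top-left neighbours (a stand-in for the paper's neural predictor)."""
    pred = np.zeros_like(img, dtype=float)
    pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:] + img[:-1, :-1]) / 3.0
    return pred

# Hypothetical 8-bit grayscale image.
img = np.random.randint(0, 256, size=(256, 256)).astype(float)

# Step 1: predictive coding removes inter-pixel redundancy; only the
# prediction error (residual) is passed on.
residual = img - predict_image(img)

# Step 2: the residual, not the raw image, is wavelet-transformed;
# the coefficients would then be quantized and entropy-coded.
coeffs = pywt.wavedec2(residual, 'db4', level=4)

# Decoder side: the inverse DWT recovers the residual, after which the
# image is rebuilt by re-running the predictor and adding the residual.
residual_rec = pywt.waverec2(coeffs, 'db4')
```

The paired t-test mentioned in the response to Reviewer #1 could be reproduced with `scipy.stats.ttest_rel`, applied to the per-image mean absolute errors of the two codecs; the exact test data used by the authors is not given here.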
Abstract-This paper presents a novel approach based on the analysis of genetic variants from publicly available genetic profiles and the manually curated database, the National Human Genome Research Institute Catalog. Using data science techniques, genetic variants are identified in the collected participant profiles and then indexed as risk variants in the National Human Genome Research Institute Catalog. Indexed genetic variants, or Single Nucleotide Polymorphisms, are used as inputs to various machine learning algorithms for the prediction of obesity. The body mass index status of participants is divided into two classes, Normal Class and Risk Class. Dimensionality reduction is performed to generate a set of principal variables (13 SNPs) for the application of various machine learning methods. The models are evaluated using receiver operating characteristic curves and the area under the curve. Machine learning techniques including gradient boosting, the generalized linear model, classification and regression trees, k-nearest neighbours, support vector machines, random forest and the multilayer perceptron neural network are comparatively assessed in terms of their ability to identify the most important factors among the initial 6622 variables describing genetic variants, age and gender, and to classify a subject into one of the body mass index related classes defined in this study. Our simulation results indicated that the support vector machine generated the highest area under the curve value, 90.5%.
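As a rough illustration of the evaluation pipeline described above, the sketch below trains a support vector machine on a reduced SNP feature matrix and reports the area under the ROC curve with scikit-learn. The synthetic data, the RBF kernel and all variable names are placeholder assumptions; the abstract does not specify the exact preprocessing or model settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Placeholder data: rows are participants, columns are genotype dosages
# (0/1/2) for the 13 selected SNPs plus age and gender (15 features).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 15)).astype(float)
y = rng.integers(0, 2, size=500)  # 0 = Normal Class, 1 = Risk Class

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# SVM with probability estimates enabled so that AUC can be computed
# from predicted class probabilities.
clf = SVC(kernel='rbf', probability=True, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

The same train/evaluate loop would be repeated for each of the other classifiers named in the abstract to produce the comparative AUC figures.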
Abstract-This paper presents a novel type of recurrent neural network, the regularized dynamic self-organized neural network inspired by the immune algorithm. The regularization technique is applied to the dynamic self-organized multilayer perceptrons network inspired by the immune algorithm, with the aim of improving generalization and addressing the over-fitting problem. In this work, the average values of 30 simulations generated from 10 financial time series are examined. The results of the proposed network were compared with the standard dynamic self-organized multilayer perceptrons network inspired by the immune algorithm, the regularized multilayer neural network and the regularized self-organized neural network inspired by the immune algorithm. The simulation results indicated that the proposed network showed average improvements in the annualized return for all signals of 0.491, 8.1899 and 1.0072 in comparison to the benchmark networks, respectively.
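In the simplest reading, the regularization referred to above amounts to a weight-decay penalty added to the training loss, and the annualized return is the compounded per-period return scaled to a year. The sketch below shows both in generic NumPy; it is not the authors' immune-algorithm-based architecture, and the penalty coefficient and trading calendar are assumed values.

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=1e-3):
    """Mean squared error plus an L2 (weight-decay) penalty.

    Penalizing large weights discourages over-fitting and improves
    generalization, which is the role regularization plays in the
    network described above. `lam` is a hypothetical setting.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + penalty

def annualized_return(period_returns, periods_per_year=252):
    """Annualized return of a trading signal's per-period P&L series,
    assuming a 252-day trading year."""
    total = np.prod(1.0 + np.asarray(period_returns))
    return total ** (periods_per_year / len(period_returns)) - 1.0
```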