The training of a multilayer perceptron neural network (MLPNN) involves selecting its architecture and connection weights by minimizing both the training error and a penalty term. Different penalty terms have been proposed to control the smoothness of the MLPNN for better generalization capability. However, controlling smoothness via, for instance, the norm of the weights or the Vapnik-Chervonenkis dimension cannot distinguish individual MLPNNs with the same number of free parameters or the same norm. In this paper, to enhance generalization capability, we propose a stochastic sensitivity measure (ST-SM) that realizes a new penalty term for MLPNN training. For a given MLPNN, the ST-SM is the expectation of the squared output differences between the training samples and unseen samples located within their Q-neighborhoods. It provides a direct measurement of the MLPNN's output fluctuations, i.e., its smoothness. We adopt a two-phase Pareto-based multiobjective training algorithm that treats the training error and the ST-SM as the two objectives to be minimized. Experiments on 20 UCI data sets show that MLPNNs trained by the proposed algorithm yield higher testing accuracies than several recent and classical MLPNN training methods.
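As an illustration only (not the authors' implementation), a penalty of this kind could be approximated by Monte Carlo sampling of perturbations within each training sample's Q-neighborhood; the names `mlp_forward`, the radius `q`, and the number of perturbations below are assumptions made for this sketch.

```python
# Illustrative sketch, assuming a generic MLP forward function and a
# Monte Carlo estimate of E[(f(x + dx) - f(x))^2] over perturbations dx
# drawn from the Q-neighborhood [-q, q]^d of each training sample.
import numpy as np

def stochastic_sensitivity(mlp_forward, X, q=0.1, n_perturbations=20, rng=None):
    """Approximate the expected squared output difference between training
    samples X and unseen samples perturbed within their Q-neighborhoods."""
    rng = np.random.default_rng(rng)
    base = mlp_forward(X)                      # outputs on the training samples
    total = 0.0
    for _ in range(n_perturbations):
        dx = rng.uniform(-q, q, size=X.shape)  # unseen samples near X
        total += np.mean((mlp_forward(X + dx) - base) ** 2)
    return total / n_perturbations             # averaged squared output difference
```

In a biobjective setting, this quantity would be evaluated alongside the training error rather than folded into a single weighted loss.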
In advanced IC manufacturing, as the gap between the lithography optical wavelength and the feature size widens, it becomes challenging to detect problematic layout patterns known as lithography hotspots. In this paper, we propose a novel fuzzy matching model that extracts appropriate feature vectors of hotspot and nonhotspot patterns. Our model can dynamically tune fuzzy regions around known hotspots. Based on this model, we develop a fast algorithm for lithography hotspot detection with high detection accuracy and low false-alarm counts. In addition, since higher-dimensional feature vectors yield better accuracy but require longer run time, we propose a grid reduction technique that significantly reduces CPU run time with only a minor impact on the benefits of the higher-dimensional feature space. Our results are encouraging, with an average accuracy of 94.5% and low false-alarm counts on a set of test benchmarks.
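For intuition, a minimal sketch of fuzzy matching on grid-density features is given below; the specific features, the per-hotspot radii, and the `reduce` factor standing in for grid reduction are assumptions for this example, not the paper's actual formulation.

```python
# Illustrative sketch, assuming grid-density features and per-hotspot fuzzy
# radii; `reduce` merges grid cells as a stand-in for the grid reduction idea.
import numpy as np

def grid_features(layout, grid=8, reduce=1):
    """Pattern density per grid cell of a binary layout clip; a larger
    `reduce` shrinks the feature dimension at some cost in resolution."""
    g = grid // reduce
    h, w = layout.shape
    cells = layout.reshape(g, h // g, g, w // g).mean(axis=(1, 3))
    return cells.ravel()

def is_hotspot(feature, hotspot_features, radii):
    """Flag a pattern if it falls inside the fuzzy region (a ball of tuned
    radius) around any known hotspot feature vector."""
    dists = np.linalg.norm(hotspot_features - feature, axis=1)
    return bool(np.any(dists <= radii))
```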
The traditional ℓ1-regularized compressed sensing magnetic resonance imaging (CS-MRI) model tends to underestimate the fine textures and edges of the MR image, which play important roles in clinical diagnosis. In contrast, the convex nonconvex (CNC) strategy allows the use of nonconvex regularization while maintaining the convexity of the total objective function. The plug-and-play (PnP) framework is a powerful approach to sparse regularization problems, in which an advanced denoiser is plugged into a traditional proximal algorithm. In this paper, we propose a PnP-ADMM algorithm for CS-MRI reconstruction with CNC sparse regularization. We first derive the proximal operator for CNC sparse regularization. We then obtain the PnP-ADMM algorithm by replacing the proximal operator in ADMM with properly pretrained denoisers. Furthermore, we conduct comparison experiments using various denoisers under different sampling templates and on different images. The experimental results verify the effectiveness of the proposed algorithm in terms of both numerical criteria and visual quality.
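A minimal sketch of a PnP-ADMM loop for CS-MRI follows, with the regularization proximal step replaced by a denoiser; the data-fidelity update, the penalty parameter `rho`, and the iteration count are illustrative assumptions rather than the authors' exact algorithm.

```python
# Illustrative sketch, assuming undersampled Cartesian k-space data `y`, a
# binary sampling mask, and a callable `denoise` acting as the plug-in
# proximal operator in place of the (CNC) regularizer's proximal step.
import numpy as np

def pnp_admm_csmri(y, mask, denoise, rho=1.0, n_iter=50):
    x = np.real(np.fft.ifft2(y, norm='ortho'))      # zero-filled initialization
    z = x.copy()
    u = np.zeros_like(x)                            # scaled dual variable
    for _ in range(n_iter):
        # x-update: least-squares data fidelity, solved elementwise in k-space
        rhs = np.fft.fft2(z - u, norm='ortho')
        x_k = (mask * y + rho * rhs) / (mask + rho)
        x = np.real(np.fft.ifft2(x_k, norm='ortho'))
        # z-update: pretrained denoiser replaces the regularizer's proximal map
        z = denoise(x + u)
        # dual update
        u = u + x - z
    return x
```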