We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.
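The permutation invariance claimed above can be illustrated with a toy construction. The sketch below is our own assumption, not the paper's exact transform: it builds dyadic diffusion wavelets Ψ_j = P^(2^j) − P^(2^(j+1)) from a lazy random-walk matrix P and aggregates first-order scattering moduli over nodes, so relabeling the nodes leaves the features unchanged.

```python
import numpy as np

def graph_scattering(A, x, J=3):
    """First-order graph scattering features (illustrative sketch).
    Wavelets: Psi_j = P^(2^j) - P^(2^(j+1)) for a lazy random walk P."""
    n = len(A)
    d = A.sum(axis=1)
    P = 0.5 * (np.eye(n) + A / d[:, None])       # lazy random-walk matrix
    feats = [x.mean()]                            # zeroth-order feature
    Pk = P                                        # holds P^(2^j)
    for _ in range(J):
        P2k = Pk @ Pk                             # P^(2^(j+1)) by squaring
        feats.append(np.abs((Pk - P2k) @ x).mean())  # modulus + node average
        Pk = P2k
    return np.array(feats)

# Cycle graph on 6 nodes; relabeling the nodes leaves the features unchanged
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
rng = np.random.default_rng(0)
x = rng.standard_normal(6)
perm = np.array([2, 0, 5, 1, 4, 3])
feats1 = graph_scattering(A, x)
feats2 = graph_scattering(A[np.ix_(perm, perm)], x[perm])
assert np.allclose(feats1, feats2)                # permutation invariance
```

The invariance is exact here because relabeling the nodes conjugates P by a permutation matrix, and the node average is unaffected by relabeling.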
In this paper we discuss the stability properties of convolutional neural networks, which are widely used in machine learning, mainly as feature extractors in classification. Ideally, we expect similar features when the inputs come from the same class; that is, we hope to see only a small change in the feature vector in response to a deformation of the input signal. This can be established mathematically, and the key step is to derive the Lipschitz properties. Further, we establish that the stability results extend to more general networks. We give a formula for computing the Lipschitz bound and compare it with other methods to show it is closer to the optimal value.

Recently, convolutional neural networks have enjoyed tremendous success in many applications in image and signal processing. According to [5], a general convolutional network contains three types of layers: convolution layers, detection layers, and pooling layers. In [7], Mallat proposes the scattering network, a tree-structured convolutional neural network whose convolution-layer filters are wavelets. Mallat proves that the scattering network satisfies two important properties: (approximate) invariance to translation and stability to deformation. However, for these properties to hold, the wavelets must satisfy an admissibility condition, which restricts the adaptability of the theory. The authors in [11, 12] use a slightly different setting to relax the conditions: they consider sets of filters that form semi-discrete frames with upper frame bound equal to one, and prove that deformation stability holds for signals satisfying certain conditions.
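The role of the upper frame bound can be checked numerically. The sketch below is our own illustration with an arbitrary Gaussian filter bank, not the filters from [11, 12]: the filters are normalized so that the Littlewood-Paley sum Σ_k |ĝ_k(ω)|² is at most one (upper frame bound B ≤ 1), and a convolution-plus-modulus layer built from them is then verified to be nonexpansive.

```python
import numpy as np

# Hypothetical Gaussian filter bank on the frequency grid, normalized so
# that sum_k |g_hat_k(w)|^2 <= 1 (semi-discrete frame, upper bound B <= 1)
n = 256
w = np.fft.fftfreq(n, d=1.0 / n)                  # frequency grid
g_hats = [np.exp(-0.5 * ((np.abs(w) - c) / 8.0) ** 2) for c in (0, 32, 64, 96)]
lp = sum(np.abs(g) ** 2 for g in g_hats)          # Littlewood-Paley sum
g_hats = [g / np.sqrt(lp.max()) for g in g_hats]  # enforce B <= 1

def layer(x):
    # One convolution-plus-modulus layer, filters applied in Fourier domain
    X = np.fft.fft(x)
    return np.stack([np.abs(np.fft.ifft(X * g)) for g in g_hats])

rng = np.random.default_rng(1)
x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = np.linalg.norm(layer(x) - layer(y))
rhs = np.linalg.norm(x - y)
assert lhs <= rhs + 1e-9     # nonexpansive: Lipschitz constant <= 1
```

Since the pointwise modulus is 1-Lipschitz and the filter bank has operator norm at most one by Parseval, the layer's Lipschitz constant is at most one; stacking such layers therefore cannot amplify perturbations.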
Generative networks have made it possible to generate meaningful signals, such as images and text, from simple noise. Recently, generative methods based on GANs and VAEs were developed for graphs and graph signals. However, the mathematical properties of these methods are unclear, and training good generative models is difficult. This work proposes a graph generation model that uses a recent adaptation of Mallat's scattering transform to graphs. The proposed model is naturally composed of an encoder and a decoder. The encoder is a Gaussianized graph scattering transform, which is robust to signal and graph manipulation. The decoder is a simple fully connected network adapted to specific tasks, such as link prediction, signal generation on graphs, and full graph-and-signal generation. Training the proposed system is efficient, since only the decoder is trained and the hardware requirements are moderate. Numerical results demonstrate state-of-the-art performance of the proposed system for both link prediction and graph and signal generation.
In this paper we prove two results regarding reconstruction from magnitudes of frame coefficients (the so-called "phase retrieval problem"). First, we show that phase retrievability as an algebraic property implies that the nonlinear maps are bi-Lipschitz with respect to appropriate metrics on the quotient space. Second, we prove that reconstruction can be performed using Lipschitz continuous maps. Specifically, we show that when the nonlinear analysis maps α, β : Ĥ → R^m, with α(x) = (|⟨x, f_k⟩|)_{1≤k≤m} and β(x) = (|⟨x, f_k⟩|²)_{1≤k≤m}, are injective, where {f_1, . . . , f_m} is a frame for a Hilbert space H and Ĥ = H/T^1, then α is bi-Lipschitz with respect to the class of "natural metrics" D_p(x, y) = min_ϕ ‖x − e^{iϕ} y‖_p, whereas β is bi-Lipschitz with respect to the class of matrix-norm induced metrics d_p(x, y) = ‖xx* − yy*‖_p. Furthermore, there exist left inverse maps ω, ψ : R^m → Ĥ of α and β, respectively, that are Lipschitz continuous with respect to the appropriate metric. Additionally, we obtain the Lipschitz constants of these inverse maps in terms of the lower Lipschitz constants of α and β. Surprisingly, the increase in Lipschitz constant is a relatively small factor, independent of the space dimension and the frame redundancy.
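For p = 2 both classes of metrics are easy to compute: the optimal phase in D_2 has the closed form e^{iϕ} = ⟨x, y⟩/|⟨x, y⟩|, and for the matrix metric we take the Frobenius norm of xx* − yy* (our choice of norm for illustration). A small sketch:

```python
import numpy as np

def D2(x, y):
    """Natural metric D_2(x, y) = min over phi of ||x - e^{i phi} y||_2.
    For p = 2 the minimizer is e^{i phi} = <x, y> / |<x, y>|."""
    ip = np.vdot(y, x)                     # <x, y> = sum_k x_k conj(y_k)
    phase = ip / abs(ip) if abs(ip) > 0 else 1.0
    return np.linalg.norm(x - phase * y)

def d2(x, y):
    """Matrix metric d_2(x, y) = ||x x* - y y*|| (Frobenius norm here)."""
    return np.linalg.norm(np.outer(x, x.conj()) - np.outer(y, y.conj()))

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = np.exp(1j * 0.7) * x                   # same point in H-hat = H / T^1
assert D2(x, z) < 1e-9 and d2(x, z) < 1e-9
```

Both quantities vanish exactly when x and y differ only by a global phase, which is why they are genuine metrics on the quotient Ĥ = H/T^1 rather than on H itself.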
Many convolutional neural networks (CNNs) have a feed-forward structure. In this paper, a linear program that estimates the Lipschitz bound of such CNNs is proposed. Several CNNs, including the scattering networks, AlexNet, and GoogLeNet, are studied numerically and compared to the theoretical bounds. Next, concentration inequalities for the output distribution, with respect to a stationary random input signal, are established in terms of the Lipschitz bound. The Lipschitz bound is further used to establish a nonlinear discriminant analysis designed to measure the separation between features of different classes.
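As a point of reference for such estimates, the classical upper bound for a feed-forward network with 1-Lipschitz activations (e.g. ReLU) is the product of the layers' spectral norms; bounds of this kind are what a tighter program would refine. A minimal baseline sketch, our own illustration rather than the paper's linear program:

```python
import numpy as np

# Toy two-layer feed-forward network with ReLU activations
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 16)), rng.standard_normal((4, 8))]

def net(x):
    for W in Ws:
        x = np.maximum(W @ x, 0.0)            # ReLU layer
    return x

# Product of spectral norms: a guaranteed (if crude) Lipschitz upper bound,
# since ReLU is 1-Lipschitz and ||W u - W v|| <= ||W||_2 ||u - v||
bound = np.prod([np.linalg.norm(W, 2) for W in Ws])

# Empirical check: the observed ratio never exceeds the bound
x, y = rng.standard_normal(16), rng.standard_normal(16)
ratio = np.linalg.norm(net(x) - net(y)) / np.linalg.norm(x - y)
assert ratio <= bound
```

The gap between such empirical ratios and the product bound is typically large, which motivates sharper estimates of the kind described above.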