Starting from a dynamical system (Ω, G), with G a generic topological group, we devise algorithms that generate families of patterns in Euclidean space which densely embed G and on which G acts continuously by rigid shifts. We refer to such patterns as dynamically generated. For G = Z^d, we adopt Bellissard's C*-algebraic formalism to analyze the dynamics of coupled resonators arranged in dynamically generated point patterns. We then use the standard connecting maps of K-theory to derive precise conditions that ensure the existence of topological boundary modes when a sample is halved. We supply four examples for which the calculations can be carried out explicitly. The predictions are supported by extensive numerical experiments.
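As an illustrative aside (our sketch, not the paper's construction): the simplest setting in which topological boundary modes of coupled resonators can be observed numerically is the Su-Schrieffer-Heeger (SSH) chain with alternating couplings t1 < t2. When such a chain is cut open, the topological phase hosts two eigenvalues exponentially close to zero, pinned to the two ends, a toy instance of the bulk-boundary correspondence that the K-theoretic connecting maps make precise.

```python
import numpy as np

# Illustrative SSH chain (an assumption-laden toy, not the paper's model):
# a 1D array of resonators with alternating couplings t1 (intracell) and
# t2 (intercell), open boundary conditions.
def ssh_hamiltonian(n_cells, t1=0.5, t2=1.0):
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2
        h[i, i + 1] = h[i + 1, i] = t
    return h

h = ssh_hamiltonian(40)
energies = np.linalg.eigvalsh(h)
# The bulk gap is |t2 - t1|; eigenvalues far inside it signal boundary modes.
near_zero = int(np.sum(np.abs(energies) < 1e-3))
print(near_zero)  # prints 2 in the topological phase (t1 < t2)
```

Repeating the computation with t1 > t2 (the trivial phase) yields no near-zero eigenvalues, illustrating how the count of boundary modes changes with the bulk invariant.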
We present a combinatorial approach to rigorously establish the existence of fixed points, periodic orbits, and symbolic dynamics in discrete-time dynamical systems, as well as to find numerical approximations of such objects. Our approach relies on the method of 'correctly aligned windows'. We subdivide the 'windows' into cubical complexes and assign to the vertices of the cubes labels determined by the dynamics. In this way, we encode the information on the dynamics into a combinatorial structure. We use a version of Sperner's Lemma to infer that, if the labeling satisfies certain conditions, then there exist fixed points, periodic orbits, or orbits with prescribed itineraries. The method developed here does not require the computation of algebraic topology-type invariants, as only combinatorial information is needed; our arguments are elementary.
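To convey the flavor of the labeling idea, here is a minimal one-dimensional sketch (our illustration; the paper works with cubical complexes in arbitrary dimension). Grid vertices on [0, 1] are labeled by the sign of f(x) − x; a cell whose endpoints carry opposite labels is, in the spirit of a 1D Sperner argument, guaranteed to contain a fixed point of a continuous self-map f, which bisection then refines.

```python
import numpy as np

# 1D sketch (assumption: f is continuous and maps [0, 1] into itself).
# Label each grid vertex by sign(f(x) - x); a sign change across a cell
# forces a fixed point inside it, which we refine by bisection.
def find_fixed_point(f, n=1000, tol=1e-10):
    xs = np.linspace(0.0, 1.0, n + 1)
    labels = np.sign(f(xs) - xs)
    for i in range(n):
        if labels[i] == 0:          # a vertex is itself a fixed point
            return xs[i]
        if labels[i] * labels[i + 1] <= 0:  # opposite labels: aligned cell
            a, b = xs[i], xs[i + 1]
            while b - a > tol:
                m = 0.5 * (a + b)
                if (f(a) - a) * (f(m) - m) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None

x_star = find_fixed_point(np.cos)  # fixed point of cos, near 0.7390851
```

The higher-dimensional method replaces the two labels by d + 1 direction labels on cube vertices, but the logic is the same: a combinatorially "complete" cell certifies the object's existence.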
Kakutani's fixed point theorem is a generalization of Brouwer's fixed point theorem to upper semicontinuous multivalued maps and is used extensively in game theory and other areas of economics. Earlier works have shown that Sperner's lemma implies Brouwer's theorem. In this paper, a new combinatorial labeling lemma, generalizing Sperner's original lemma, is given and is used to derive a simple proof of Kakutani's fixed point theorem. The proof is constructive and can easily be applied to numerically approximate the location of fixed points. The main method of the proof is also used to obtain a generalization of Kakutani's theorem for discontinuous maps that are locally gross direction preserving.
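As a hedged illustration of the game-theoretic use of Kakutani's theorem (our example, not from the paper): Nash's existence proof applies the theorem to the best-response correspondence, which is multivalued exactly where a player is indifferent. For matching pennies the resulting fixed point is the mixed equilibrium (1/2, 1/2), and the empirical frequencies of fictitious play, which by Robinson's theorem converge in zero-sum games, approximate it numerically.

```python
import numpy as np

# Fictitious play on matching pennies (illustrative; convergence of the
# empirical frequencies in zero-sum games is Robinson's theorem).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix
counts_row = np.array([1.0, 0.0])  # arbitrary initial action counts
counts_col = np.array([0.0, 1.0])
for _ in range(20000):
    # Each player best-responds to the opponent's empirical mixture.
    br_row = int(np.argmax(A @ (counts_col / counts_col.sum())))
    br_col = int(np.argmin((counts_row / counts_row.sum()) @ A))
    counts_row[br_row] += 1
    counts_col[br_col] += 1
freq_row = counts_row / counts_row.sum()
freq_col = counts_col / counts_col.sum()
# Both frequency vectors drift toward the Kakutani fixed point (1/2, 1/2).
```

The actual play cycles forever; it is only the time averages that converge, which is why the equilibrium must be located as a fixed point of a correspondence rather than of a continuous map.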
We study the stability of accuracy during the training of deep neural networks (DNNs). Here the training of a DNN is performed via the minimization of a cross-entropy loss function, and the performance metric is the accuracy (the proportion of objects classified correctly). While training decreases the loss, the accuracy does not necessarily increase during training. A recent result by Berlyand, Jabin, and Safsten introduces a doubling condition on the training data which ensures the stability of accuracy during training for DNNs with the absolute value activation function. For training data in R^n, this doubling condition is formulated using slabs in R^n and depends on the choice of the slabs. The goal of this paper is twofold. First, to make the doubling condition uniform, that is, independent of the choice of slabs, leading to sufficient conditions for stability in terms of the training data only. Second, to extend the original stability results from the absolute value activation function to a broader class of piecewise linear activation functions with finitely many critical points, such as the popular Leaky ReLU.
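A toy computation (ours, not the paper's experiment) makes the loss/accuracy decoupling concrete: with one mislabeled outlier, gradient descent on the cross-entropy of a one-dimensional logistic model strictly decreases the loss while the accuracy drops from 2/3 to 1/3.

```python
import numpy as np

# Three 1D points; the outlier (x = 2, y = 0) contradicts the two
# correctly labeled positives near the origin.
x = np.array([2.0, 0.1, 0.2])
y = np.array([0.0, 1.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_acc(w):
    p = sigmoid(w * x)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy
    acc = np.mean((p > 0.5) == (y > 0.5))                    # accuracy
    return ce, acc

w = 1.0
loss0, acc0 = loss_and_acc(w)        # initially 2 of 3 points are correct
for _ in range(200):                  # plain gradient descent on the loss
    grad = np.mean((sigmoid(w * x) - y) * x)
    w -= 0.5 * grad
loss1, acc1 = loss_and_acc(w)        # loss1 < loss0, yet only 1 of 3 correct
```

The optimizer trades the two small-margin positives for the high-confidence outlier because the cross-entropy penalizes confident mistakes far more than marginal ones, precisely the behavior a doubling condition on the data is designed to exclude.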