We examine the stability of the loss-minimizing training processes used for deep neural networks (DNNs) and other classifiers. While a classifier is optimized during training through a so-called loss function, its performance is usually evaluated by some measure of accuracy, such as the overall accuracy, which quantifies the proportion of objects that are correctly classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy? We formalize the notion of stability and provide examples of instability. Our main result consists of two novel conditions on the classifier, either of which, if satisfied, ensures stability of training; that is, we derive tight bounds on accuracy as loss decreases. We also derive a sufficient condition for stability on the training set alone, identifying flat portions of the data manifold as potential sources of instability. The latter condition is explicitly verifiable on the training dataset. Our results do not depend on the algorithm used for training, as long as loss decreases with training.
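The guiding question above can be made concrete with a toy sketch (not from the paper): under cross-entropy loss with a 0.5 decision threshold, loss can decrease while 0-1 accuracy also decreases, because one very confident correct prediction can outweigh a newly misclassified point.

```python
import math

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true binary label
    return -sum(math.log(p if y == 1 else 1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    # fraction of points where the 0.5-thresholded prediction matches the label
    return sum((p > 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

labels = [1, 0]
before = [0.51, 0.49]   # both points barely correct: accuracy 1.0
after  = [0.999, 0.60]  # one very confident, one now misclassified: accuracy 0.5

assert cross_entropy(after, labels) < cross_entropy(before, labels)  # loss decreased
assert accuracy(after, labels) < accuracy(before, labels)            # accuracy decreased
```

This is exactly the kind of instability the abstract refers to; the paper's conditions rule such behavior out.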
Kaleidocycles are continuously rotating n-jointed linkages. We consider a certain class of six-jointed kaleidocycles that have a spring at each joint. For this class of kaleidocycles, the stored energy varies throughout the rotation in a nonconstant, cyclic pattern. The purpose of this paper is to model and analyze the stored energy of a kaleidocycle throughout its motion. In particular, we solve analytically for the number of stable equilibrium states of any kaleidocycle in this class.
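The paper solves for the equilibrium count analytically; as a generic numerical sketch of the same idea (the energy profile below is a stand-in, not the paper's model), stable equilibria of a cyclic motion correspond to local minima of a periodic stored-energy function:

```python
import math

def count_local_minima_periodic(f, n=2000):
    # count strict local minima of a 2*pi-periodic function on a uniform grid
    xs = [2 * math.pi * k / n for k in range(n)]
    vals = [f(x) for x in xs]
    return sum(vals[i] < vals[i - 1] and vals[i] < vals[(i + 1) % n]
               for i in range(n))

# hypothetical stored-energy profile over the rotation parameter t;
# the real profile would come from the linkage geometry and spring constants
energy = lambda t: math.cos(3 * t)
print(count_local_minima_periodic(energy))  # → 3 for this stand-in profile
```

A stable equilibrium is a rotation angle the kaleidocycle settles into when released nearby, i.e. a local minimum of stored energy.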
We examine the stability of the loss-minimizing training processes used for deep neural networks (DNNs) and other classifiers. While a classifier is optimized during training through a so-called loss function, its performance is usually evaluated by some measure of accuracy, such as the overall accuracy, which quantifies the proportion of objects that are correctly classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy? We formalize the notion of stability and provide examples of instability. Our main result is two novel conditions on the classifier, either of which, if satisfied, ensures stability of training; that is, we derive tight bounds on accuracy as loss decreases. These conditions are explicitly verifiable in practice on a given dataset. Our results do not depend on the algorithm used for training, as long as loss decreases with training.
We demonstrate analytically that it is possible to construct a developable mechanism on a cone that exhibits rigid motion. We solve for the paths of rigid motion and analyze their properties. In particular, we provide an analytical method for predicting the behavior of the mechanism with respect to the conical surface. Moreover, we observe that the conical developable mechanisms specified in this paper have motion paths that necessarily contain bifurcation points, which lead to an unbounded array of motion paths in the parameterization plane.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.