We consider the problem of finding a subgraph of a given graph minimizing the sum of given functions at the vertices, each evaluated at the degree of the vertex in the subgraph. While the problem is NP-hard already for bipartite graphs when the functions are convex on one side and concave on the other, we show that when all functions are convex, the problem can be solved in polynomial time for any graph. We also give polynomial-time solutions for bipartite graphs with one side fixed under arbitrary functions, and for arbitrary graphs when all but a fixed number of functions are either nondecreasing or nonincreasing. We note that the general factor problem and the (l,u)-factor problem over a graph are special cases of our problem, as is the intriguing exact matching problem. The complexity of the problem remains wide open, particularly for arbitrary functions over complete graphs.
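The objective above is simply to pick an edge subset S minimizing the sum over vertices v of f_v(deg_S(v)). A minimal brute-force baseline makes the objective concrete; it is exponential in the number of edges and is our own illustrative sketch, not an algorithm from the paper (the function name and signature are assumptions):

```python
from itertools import combinations

def min_degree_cost(vertices, edges, f):
    """Brute force: score every edge subset S by
    sum_v f[v](deg_S(v)) and return the best subset.

    vertices: iterable of vertex labels
    edges:    list of (u, w) pairs
    f:        dict mapping each vertex to a function of its degree
    """
    best_cost, best_subset = float("inf"), None
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            deg = {v: 0 for v in vertices}
            for u, w in subset:
                deg[u] += 1
                deg[w] += 1
            cost = sum(f[v](deg[v]) for v in vertices)
            if cost < best_cost:
                best_cost, best_subset = cost, subset
    return best_cost, best_subset
```

For instance, on a 4-cycle with the convex cost f(d) = (d - 1)^2 at every vertex, the minimizer is a perfect matching (every subgraph degree equals 1, total cost 0).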
Hardness and exact matching

We begin by showing that the optimization problem over degree sequences is generally hard.

Proposition 1.1. Deciding whether the optimal value in our problem is zero is NP-complete already:

1. when f_1 = · · · = f_n = f are identical, with f(0) = f(3) = 0 and f(i) = 1 for i ≠ 0, 3;

2. when H = (I, J, E) is bipartite and f_i is convex for all i ∈ I and concave for all i ∈ J.
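The zero-optimum question in case 1 is a general factor instance: each vertex v carries a set B_v of allowed degrees (here {0, 3}), and one asks whether some subgraph realizes all of them. A brute-force membership check sketches the decision problem; the function name and interface are our own assumptions, and the search is exponential:

```python
from itertools import combinations

def has_factor(vertices, edges, allowed):
    """General factor check by brute force: return True iff some
    edge subset gives every vertex v a degree in allowed[v].

    allowed: dict mapping each vertex to a set of permitted degrees
    """
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            deg = {v: 0 for v in vertices}
            for u, w in subset:
                deg[u] += 1
                deg[w] += 1
            if all(deg[v] in allowed[v] for v in vertices):
                return True
    return False
```

With allowed[v] = {1} for all v this is the perfect matching problem: a 4-cycle has one, a triangle does not.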
Machine unlearning is the process through which a deployed machine learning model forgets about one of its training data points. While naively retraining the model from scratch is an option, it is almost always associated with a large computational effort for deep learning models. Thus, several approaches to approximately unlearn have been proposed, along with corresponding metrics that formalize what it means for a model to forget about a data point. In this work, we first taxonomize approaches and metrics of approximate unlearning. As a result, we identify verification error, i.e., the ℓ2 difference between the weights of an approximately unlearned and a naively retrained model, as a metric approximate unlearning should optimize for, as it implies a large class of other metrics. We theoretically analyze the canonical stochastic gradient descent (SGD) training algorithm to surface the variables that are relevant to reducing the verification error of approximate unlearning for SGD. From this analysis, we first derive an easy-to-compute proxy for verification error (termed unlearning error). The analysis also informs the design of a new training objective penalty that limits the overall change in weights during SGD and as a result facilitates approximate unlearning with lower verification error. We validate our theoretical work through an empirical evaluation on CIFAR-10, CIFAR-100, and IMDB sentiment analysis.
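Verification error as defined above is just the ℓ2 distance between the two models' weight vectors. A minimal sketch, assuming the weights are available as lists of arrays (the function name and representation are our assumptions, not the paper's API):

```python
import numpy as np

def verification_error(w_approx, w_retrained):
    """l2 distance between the flattened weights of an approximately
    unlearned model and a naively retrained model (the metric the
    abstract calls verification error).

    w_approx, w_retrained: lists of numpy arrays (one per layer)
    """
    wa = np.concatenate([np.ravel(w) for w in w_approx])
    wr = np.concatenate([np.ravel(w) for w in w_retrained])
    return np.linalg.norm(wa - wr)
```

In practice one would extract the layer weights from both trained models (e.g., after approximate unlearning and after retraining from scratch without the forgotten point) and compare them with this distance.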