Given a symmetric nonnegative matrix A, symmetric nonnegative matrix factorization (symNMF) is the problem of finding a nonnegative matrix H, usually with far fewer columns than A, such that A ≈ HH^T. SymNMF can be used for data analysis and, in particular, for various clustering tasks. In this paper, we propose simple and very efficient coordinate descent schemes that solve this problem and can handle large, sparse input matrices. The effectiveness of our methods is illustrated on synthetic and real-world data sets, and we show that they perform favorably compared to recent state-of-the-art methods.
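As an illustration of the kind of scheme this abstract refers to, below is a minimal, unoptimized sketch of exact entry-wise coordinate descent for min_{H ≥ 0} ||A − HH^T||_F^2. It is not the authors' implementation; the function name, initialization, and dense-matrix assumption are illustrative only. Restricting the objective to a single entry H[i, l] gives a quartic in that entry, so each update reduces to picking the best nonnegative root of a cubic.

```python
import numpy as np

def symnmf_cd(A, k, n_sweeps=50, seed=0):
    """Exact entry-wise coordinate descent for min_{H >= 0} ||A - H H^T||_F^2.

    A is a symmetric nonnegative (n x n) NumPy array; returns a nonnegative
    (n x k) array H. Each update minimizes the one-dimensional quartic in
    H[i, l] exactly via the roots of a cubic.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k)) * np.sqrt(max(A.mean(), 1e-12) / k)

    for _ in range(n_sweeps):
        # Recomputed each sweep to limit floating-point drift; kept in sync
        # with H by incremental updates inside the sweep.
        HtH = H.T @ H          # k x k Gram matrix
        AH = A @ H             # n x k product
        for i in range(n):
            for l in range(k):
                x0 = H[i, l]
                row_sq = H[i] @ H[i]
                # Up to a constant, the restricted objective is
                # g(x) = x^4 + 2 p x^2 + 4 q x, so g'(x)/4 = x^3 + p x + q.
                p = HtH[l, l] + row_sq - 2.0 * x0 * x0 - A[i, i]
                s = (AH[i, l] - x0 * A[i, i]
                     - H[i] @ HtH[:, l] + x0 * row_sq
                     + x0 * HtH[l, l] - x0 ** 3)
                q = -s
                # Candidate minimizers: 0 and the nonnegative real cubic roots.
                roots = np.roots([1.0, 0.0, p, q])
                cands = [0.0] + [r.real for r in roots
                                 if abs(r.imag) < 1e-10 and r.real > 0]
                g = lambda x: x ** 4 + 2 * p * x ** 2 + 4 * q * x
                x_new = min(cands, key=g)

                if x_new != x0:
                    delta = x_new - x0
                    old_row = H[i].copy()
                    H[i, l] = x_new
                    # Keep HtH and AH consistent with the updated row of H.
                    HtH += np.outer(H[i], H[i]) - np.outer(old_row, old_row)
                    AH[:, l] += delta * A[:, i]
    return H
```

For a large sparse A, the products A @ H and A[:, i] would be replaced by their sparse counterparts; the rest of the per-entry update only needs O(k) vector work plus one cubic root solve.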
This paper studies few-shot learning via representation learning, where one uses T source tasks with n_1 data per task to learn a representation in order to reduce the sample complexity of a target task for which there are only n_2 (≪ n_1) data. Specifically, we focus on the setting where there exists a good common representation between the source and target tasks, and our goal is to understand how much of a sample size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a fast rate of O(C(Φ)/(n_1 T) + k/n_2); here, Φ is the representation function class, C(Φ) is its complexity measure, and k is the dimension of the representation. When specialized to linear representation functions, this rate becomes O(dk/(n_1 T) + k/n_2), where d (≫ k) is the ambient input dimension, which is a substantial improvement over the rate without using representation learning, i.e., over the rate of O(d/n_2). Second, we consider the setting where the common representation may be high-dimensional but is capacity-constrained (say, in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural network learning. Our results demonstrate that representation learning can fully utilize all n_1 T samples from the source tasks.
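In display form, the three rates above read as follows. This is a hedged restatement of the abstract; in particular, the dk/(n_1 T) term for the linear case is inferred from C(Φ) scaling like dk for linear maps from R^d to R^k, rather than quoted from the paper.

```latex
\[
\underbrace{O\!\left(\frac{C(\Phi)}{n_1 T} + \frac{k}{n_2}\right)}_{\text{general class } \Phi}
\qquad
\underbrace{O\!\left(\frac{dk}{n_1 T} + \frac{k}{n_2}\right)}_{\text{linear representations}}
\qquad\text{vs.}\qquad
\underbrace{O\!\left(\frac{d}{n_2}\right)}_{\text{no representation learning}}
\]
```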
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) that do not require labeled data in order to learn semantic representations. These pretext tasks are created solely from the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words; yet predicting this known information helps in learning representations that are effective for downstream prediction tasks. This paper posits a mechanism based on conditional independence to formalize how solving certain pretext tasks can learn representations that provably decrease the sample complexity of downstream supervised tasks. Formally, we quantify how approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task with drastically reduced sample complexity by just training a linear layer on top of the learned representation.
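One way to write down the conditional-independence mechanism sketched above (the notation here is my own hedged reading of the abstract, not taken verbatim from the paper): split the input into pretext components X = (X_1, X_2), where the pretext task predicts X_2 from X_1 and the downstream task predicts a label Y.

```latex
% Hedged formalization; Z denotes additional latent variables.
\[
X_1 \;\perp\; X_2 \;\big|\; (Y, Z)
\]
% Under such (approximate) conditional independence, a pretext-task solution
% roughly of the form \psi(x_1) \approx \mathbb{E}[X_2 \mid X_1 = x_1] serves
% as a representation on top of which a linear predictor of Y suffices,
% requiring far fewer labeled downstream samples.
```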
Machine and automated guided vehicle (AGV) scheduling are usually studied together. However, previous studies have often used a fixed number of AGVs or ignored routing problems and transportation time. This paper focuses on the machine and AGV scheduling problem in a flexible manufacturing system, simultaneously considering the optimal number of AGVs, the shortest transportation time, a path planning problem, and a conflict-free routing problem (CFRP). To address these problems jointly, we propose a genetic algorithm combined with a time-window-based Dijkstra algorithm. A tri-string chromosome coding method is designed to ensure that solutions remain feasible after the genetic operators have been applied. Global, local, and random searches are adopted in reasonable proportions to improve the quality and diversity of the initial population. The time-window-based Dijkstra algorithm is embedded into the genetic algorithm to search for the shortest route, detect collisions among multiple vehicles simultaneously, and thus solve the CFRP. The objective is to minimize the makespan while accounting for the influence of the number of AGVs. Increasing the number of AGVs has a significant impact on the makespan in the initial stage; however, the makespan tends to stabilize once the number of AGVs reaches some threshold. To balance the minimum makespan against the number of AGVs, we set a minimum decrease rate of 5% when determining the minimum makespan and take the corresponding number of AGVs as the optimal value (see the sketch below). To verify the effectiveness of our approach, we conduct two sets of computational experiments. The first set of results shows that the proposed algorithm is as efficient and effective at solving the scheduling problem as the benchmark approaches. The second set indicates that the proposed approach is applicable to integrated scheduling problems in flexible manufacturing systems.

INDEX TERMS Conflict-free routing problem, genetic algorithm, Dijkstra algorithm, optimal number of AGVs, time window.
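The 5% minimum-decrease-rate rule for picking the fleet size can be read as follows; this is one plausible interpretation, and the helper select_agv_count and the example makespan values are hypothetical rather than taken from the paper: keep adding AGVs while one extra vehicle still cuts the makespan by at least 5%, and stop at the first count where the relative improvement drops below that threshold.

```python
def select_agv_count(makespans, min_decrease_rate=0.05):
    """Pick the number of AGVs using a minimum-decrease-rate rule.

    `makespans` maps a candidate AGV count to the best makespan found by the
    scheduler for that fleet size (e.g., by the GA + time-window Dijkstra
    search). Starting from the smallest fleet, keep adding AGVs while one
    extra AGV still reduces the makespan by at least `min_decrease_rate`;
    stop at the first count where the relative improvement falls below it.
    """
    counts = sorted(makespans)
    best = counts[0]
    for prev, curr in zip(counts, counts[1:]):
        decrease = (makespans[prev] - makespans[curr]) / makespans[prev]
        if decrease < min_decrease_rate:
            break
        best = curr
    return best, makespans[best]


if __name__ == "__main__":
    # Made-up makespan values (minutes) for fleets of 1..6 AGVs.
    ms = {1: 240.0, 2: 180.0, 3: 150.0, 4: 143.0, 5: 141.0, 6: 140.5}
    print(select_agv_count(ms))  # -> (3, 150.0): going from 3 to 4 AGVs saves < 5%
```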