Neural-based multi-task learning has been successfully used in many real-world large-scale applications such as recommendation systems. For example, in movie recommendations, beyond providing users with movies they are likely to purchase and watch, the system might also optimize for users liking the movies afterwards. With multi-task learning, we aim to build a single model that learns these multiple goals and tasks simultaneously. However, the prediction quality of commonly used multi-task models is often sensitive to the relationships between tasks. It is therefore important to study the modeling tradeoffs between task-specific objectives and inter-task relationships. In this work, we propose a novel multi-task learning approach, Multi-gate Mixture-of-Experts (MMoE), which explicitly learns to model task relationships from data. We adapt the Mixture-of-Experts (MoE) structure to multi-task learning by sharing the expert submodels across all tasks, while also having a gating network trained to optimize each task. To validate our approach on data with different levels of task relatedness, we first apply it to a synthetic dataset where we control the task relatedness. We show that the proposed approach performs better than baseline methods when the tasks are less related. We also show that the MMoE structure yields an additional trainability benefit, depending on different levels of randomness in the training data and model initialization. Furthermore, we demonstrate the performance improvements of MMoE on real tasks, including a binary classification benchmark and a large-scale content recommendation system at Google.
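To make the shared-experts, per-task-gates structure concrete, here is a minimal PyTorch sketch of an MMoE layer. It assumes single-hidden-layer ReLU experts, linear softmax gates, and one linear tower per task; these layer shapes and tower heads are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MMoE(nn.Module):
    """Sketch of Multi-gate Mixture-of-Experts: expert submodels are
    shared across tasks; each task has its own softmax gate over them."""

    def __init__(self, input_dim, expert_dim, num_experts, num_tasks):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, expert_dim), nn.ReLU())
             for _ in range(num_experts)]
        )
        # One gating network per task, producing mixture weights.
        self.gates = nn.ModuleList(
            [nn.Linear(input_dim, num_experts) for _ in range(num_tasks)]
        )
        # Task-specific towers (a single logit each, for illustration).
        self.towers = nn.ModuleList(
            [nn.Linear(expert_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x):
        # (batch, num_experts, expert_dim)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            w = torch.softmax(gate(x), dim=-1)                 # (batch, num_experts)
            mixed = (w.unsqueeze(-1) * expert_out).sum(dim=1)  # (batch, expert_dim)
            outputs.append(tower(mixed))
        return outputs  # one prediction per task
```

Because each task learns its own gate, loosely related tasks can put their weight on disjoint experts while closely related tasks can share them, which is the mechanism behind the claimed robustness to low task relatedness.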
Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such, it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular, we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer that encourages improving this metric during model training and thus improves fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
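As a toy illustration of the core quantity behind these metrics, intra-group pairwise ranking accuracy, the sketch below compares how often each group's engaged-with items are scored above its non-engaged items. The paper's actual metrics are built from pairwise comparisons in randomized experiments and condition on engagement levels, which this simplified version omits; the function names here are ours.

```python
import numpy as np

def pairwise_accuracy(scores, labels):
    """Fraction of (positive, negative) pairs in which the model
    scores the positive (e.g., clicked) item above the negative one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")
    # Compare every positive score against every negative score.
    return float(np.mean(pos[:, None] > neg[None, :]))

def pairwise_fairness_gap(scores, labels, group):
    """Absolute gap in pairwise accuracy between two groups (0/1).
    A large gap means one group's relevant items are mis-ranked
    below irrelevant items more often than the other's."""
    acc_a = pairwise_accuracy(scores[group == 0], labels[group == 0])
    acc_b = pairwise_accuracy(scores[group == 1], labels[group == 1])
    return abs(acc_a - acc_b)
```

A regularizer in the spirit of the paper would penalize this gap during training (for instance via a differentiable surrogate), trading a small amount of overall accuracy for more uniform pairwise ranking quality across groups.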
Mutations generated by CRISPR/Cas9 in Arabidopsis (Arabidopsis thaliana) are often somatic and are rarely heritable. Isolating mutations in Cas9-free Arabidopsis plants ensures stable transmission of the identified mutations to subsequent generations, but the process is laborious and inefficient. Here, we present a simple visual screen for Cas9-free T2 seeds, allowing us to quickly obtain Cas9-free Arabidopsis mutants in the T2 generation. As a proof of principle, we targeted two sites in the AUXIN-BINDING PROTEIN1 (ABP1) gene, whose function as a membrane-associated auxin receptor has recently been challenged. We obtained many T1 plants with detectable mutations near the target sites, but only a small fraction of T1 plants yielded Cas9-free abp1 mutations in the T2 generation. Moreover, the mutations did not segregate in Mendelian fashion in the T2 generation. However, mutations identified in the Cas9-free T2 plants were stably transmitted to the T3 generation following Mendelian genetics. To further simplify the screening procedure, we simultaneously targeted two sites in ABP1 to generate large deletions, which can be easily identified by PCR. We successfully generated two abp1 alleles containing 1,141- and 711-bp deletions in the ABP1 gene. All of the Cas9-free abp1 alleles we generated were stable and heritable. The method described here enables the efficient isolation of Cas9-free heritable CRISPR mutants in Arabidopsis.
Wikipedia's brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that an overwhelming majority of the viewed words were written by frequent editors and that this majority is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings.
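To pin down the impact measure, here is a toy computation: the impact of an edit is the number of article views that occur while that edited version is the live revision. The function name and timestamp-based input format are assumptions for illustration; the paper's analysis additionally attributes impact at the level of individual words surviving across revisions.

```python
import bisect

def edit_impact(revision_times, view_times):
    """Views of each edited version: for revision i, count the views
    falling between revision i and revision i+1 (or the end of the
    log for the latest revision). Both lists are sorted timestamps."""
    impact = []
    for i, start in enumerate(revision_times):
        end = revision_times[i + 1] if i + 1 < len(revision_times) else float("inf")
        lo = bisect.bisect_left(view_times, start)
        hi = bisect.bisect_left(view_times, end)
        impact.append(hi - lo)
    return impact
```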