In this paper we study resource allocation problems that involve multiple self-interested parties, or players, and a central decision maker. We introduce and study the price of fairness, the relative loss in system efficiency under a "fair" allocation, where a fully efficient allocation is one that maximizes the sum of player utilities. We focus on two well-accepted, axiomatically justified notions of fairness, viz. proportional fairness and max-min fairness. For these notions we provide a tight characterization of the price of fairness for a broad family of problems.
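To make the notion concrete, one natural way to formalize the price of fairness consistent with this description (the notation FAIR(U, α) and SYSTEM(U) follows the extended discussion further below; the paper's own presentation may differ) is

\[
  \mathrm{POF}(U;\alpha) \;=\; \frac{\mathrm{SYSTEM}(U)-\mathrm{FAIR}(U,\alpha)}{\mathrm{SYSTEM}(U)},
  \qquad \mathrm{SYSTEM}(U) \;=\; \max_{u \in U} \sum_{j=1}^{n} u_j ,
\]

so a value of 0 means the fair allocation loses no total utility, while values near 1 mean nearly all of the achievable total utility is sacrificed.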
Large-scale, unbiased proteomics studies are constrained by the complexity of the plasma proteome. Here we report a highly parallel protein quantitation platform that integrates nanoparticle (NP) protein coronas with liquid chromatography-mass spectrometry for efficient proteomic profiling. A protein corona is the protein layer adsorbed onto NPs upon contact with biofluids. Varying the physicochemical properties of engineered NPs yields distinct protein corona patterns, enabling differential and reproducible interrogation of biological samples, including deep sampling of the plasma proteome. Spike experiments confirmed a linear signal response, and the median coefficient of variation was 22%. We screened 43 NPs and selected a panel of 5, which detected more than 2,000 proteins from 141 plasma samples using a 96-well automated workflow in a pilot non-small cell lung cancer classification study. Our streamlined workflow combines depth of coverage and throughput with precise quantification based on the unique interactions between proteins and engineered NPs, enabling deep and scalable quantitative proteomic studies.
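For readers unfamiliar with the precision metric quoted above, the sketch below shows one common way a median coefficient of variation (CV) across proteins is computed from replicate quantitation of the same sample; the data are synthetic and purely illustrative, not the study's measurements or its exact procedure:

import numpy as np

# Hypothetical quantitation matrix: 500 proteins measured in 3 replicate runs.
rng = np.random.default_rng(1)
true_abundance = rng.lognormal(mean=10, sigma=2, size=500)
replicates = true_abundance[:, None] * rng.normal(1.0, 0.2, size=(500, 3))

# Per-protein CV = sample standard deviation / mean across replicates,
# summarized by the median over all proteins.
cv_per_protein = replicates.std(axis=1, ddof=1) / replicates.mean(axis=1)
print(f"median CV = {np.median(cv_per_protein):.1%}")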
A central push in operations models over the last decade has been the incorporation of models of customer choice. Real-world implementations of many of these models face the formidable stumbling block of simply identifying the 'right' model of choice to use. Thus motivated, we consider the following problem: given a 'generic' model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal information about these distributions), how may one predict revenues from offering a particular assortment of choices? We present a framework for answering such questions and design a number of algorithms that are tractable from both a data and a computational standpoint. This paper thus takes a significant step towards 'automating' the crucial task of choice model selection in the context of operational decision problems.
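As a minimal illustration of the 'generic' choice model referenced here, the sketch below represents consumers as a distribution over preference lists and computes the expected revenue of an assortment by letting each preference list purchase its most-preferred offered option. The product set, prices, and distribution are hypothetical, and the paper's methods for working from limited marginal data are not shown:

from itertools import permutations
import numpy as np

products = [1, 2, 3]                       # product ids; 0 denotes "no purchase"
prices   = {1: 8.0, 2: 5.0, 3: 3.0, 0: 0.0}

# A toy distribution over preference lists (rankings) of {0, 1, 2, 3}.
rng = np.random.default_rng(0)
rankings = list(permutations([0] + products))
lam = rng.dirichlet(np.ones(len(rankings)))   # probability of each ranking

def expected_revenue(assortment, rankings, lam, prices):
    """Each consumer type (a preference list) buys its most-preferred option
    among the offered assortment plus the no-purchase option."""
    rev = 0.0
    for sigma, p in zip(rankings, lam):
        choice = next(x for x in sigma if x == 0 or x in assortment)
        rev += p * prices[choice]
    return rev

for assortment in [{1}, {1, 2}, {1, 2, 3}]:
    print(assortment, round(expected_revenue(assortment, rankings, lam, prices), 3))

The question posed in the abstract is then: with only partial (marginal) information about lam, what can be said about such revenues?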
A central decision maker allocates resources to each of n players, who derive utility from their allocation. Consequently, her actions may be viewed as choosing an allocation of utilities u = (u_1, u_2, ..., u_n) from some set of feasible allocations U ⊆ R^n_+. Consider a decision maker that chooses an allocation u ∈ U to maximize

    W_α(u) = Σ_{j=1}^{n} u_j^{1−α} / (1 − α)   for α ≥ 0, α ≠ 1,        W_1(u) = Σ_{j=1}^{n} log u_j   for α = 1.

The above family of objective functions, proposed originally by Atkinson and parameterized by α ∈ R_+, permits the decision maker to trade off efficiency (by lowering α) for fairness (by increasing α). The family of Atkinson utility functions is canonical in that it captures the efficient or "utilitarian" allocation (α = 0), the "max-min" fair allocation (α → ∞), and the proportionally fair (or Nash bargaining) allocation (α → 1). This paper characterizes the tradeoff between efficiency and fairness in this general setting. In particular, we demonstrate that under reasonable assumptions on U, the total utility to players under a fair allocation, FAIR(U, α), and the total utility to players under a perfectly efficient allocation, SYSTEM(U) = FAIR(U, 0), must satisfy an explicit lower bound on the ratio FAIR(U, α)/SYSTEM(U), and moreover, that this bound is essentially tight.
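As an illustration of the efficiency-fairness tradeoff this family induces (an example of ours, not taken from the paper), consider the simple feasible set U = {u ≥ 0 : Σ_j c_j u_j ≤ B}, a single linear budget constraint with per-player costs c_j. The first-order conditions of maximizing W_α over this set give the α-fair allocation in closed form, and sweeping α shows the total utility shrinking as the allocation becomes more equal:

import numpy as np

def alpha_fair_allocation(c, B, alpha):
    """Closed-form alpha-fair allocation for the illustrative feasible set
    U = {u >= 0 : sum_j c_j * u_j <= B}.

    For alpha > 0, the first-order conditions give u_j proportional to
    c_j**(-1/alpha), scaled so the budget binds; alpha = 1 recovers the
    proportionally fair (equal-spend) solution, and large alpha approaches
    the max-min (equal-utility) solution.
    """
    c = np.asarray(c, dtype=float)
    if alpha == 0.0:
        # Utilitarian: spend the entire budget on the cheapest-to-serve player.
        u = np.zeros_like(c)
        u[np.argmin(c)] = B / c.min()
        return u
    w = c ** (-1.0 / alpha)           # unnormalized allocation
    return B * w / np.dot(c, w)       # rescale so sum_j c_j * u_j = B

c, B = [1.0, 2.0, 4.0], 12.0
for alpha in [0.0, 1.0, 8.0]:
    u = alpha_fair_allocation(c, B, alpha)
    print(f"alpha={alpha:>4}: u={np.round(u, 3)}, total utility={u.sum():.3f}")

Running this toy example shows total utility falling from 12 (utilitarian, all to the cheapest player) to 7 (proportional fairness) toward roughly 5.1 (max-min, nearly equal utilities), which is exactly the kind of loss the bound above quantifies in general.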
Significance: This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the United States. Results show high variation in accuracy between and within stand-alone models and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public-health action.
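As a minimal sketch of what combining forecasts from multiple models can look like in practice, the code below forms an ensemble by taking the median across models at each quantile level of hypothetical death forecasts; this is illustrative only and not necessarily the exact procedure used by the ensemble evaluated in the paper:

import numpy as np

quantile_levels = [0.025, 0.25, 0.5, 0.75, 0.975]

# Hypothetical 1-week-ahead death forecasts from three models
# (rows = models, columns = quantile levels).
model_forecasts = np.array([
    [ 90, 110, 120, 135, 160],
    [ 80, 100, 115, 130, 170],
    [100, 120, 140, 155, 190],
])

# Quantile-wise median ensemble: one combined forecast per quantile level.
ensemble = np.median(model_forecasts, axis=0)
print(dict(zip(quantile_levels, ensemble)))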