Abstract--Human-machine teaming has, for decades, been conceptualized as a function allocation (FA) or levels of autonomy (LOA) process: the human is suited to some tasks, the machine to others, and as machines improve they take over duties previously assigned to humans. A wide variety of methods--including adaptive, adjustable, blended, supervisory, and mixed-initiative control, implemented discretely or continuously, as potential fields, as virtual fixture interfaces, or as haptic interfaces--are derivatives of FA/LOA. We formalize FA/LOA (and all their derivatives) under a single mathematical formulation called classical shared control (CSC). Despite the widespread adoption of CSC, we prove that it fails to optimize human and robot agreement and intent whenever either the human or the robot model displays "intention ambiguity" (e.g., the human's intended goal is unclear, or the robot finds multiple viable solutions). Practically, this suboptimality can manifest as unresolvable disagreement, i.e., an unnecessary deadlock. For instance, if the robot chooses to go left around an obstacle and the human chooses to go right, CSC provides only two outcomes: freeze in place or collide with the obstacle (we provide a wide variety of failure examples in [52], https://arxiv.org/abs/1611.09490). We find that CSC's suboptimality stems from arbitrating over samples of the agent models rather than over the models themselves. Our key insight is thus to arbitrate over the human and robot distributions; we prove that this method optimizes human and robot agreement and intent and resolves deadlock. Our key contribution is computationally efficient distribution arbitration: if the human and robot models carry $N_h$ and $N_r$ modes, our joint has fewer modes than either individual agent model. We call our approach $N_{\min}$-sparse generalized shared control.
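To make the sample-versus-distribution distinction concrete, the following is a minimal numerical sketch of the obstacle example above; it is not the paper's implementation. The 1-D decision variable, the Gaussian-mixture agent models, their parameters, and the simple pointwise product used to combine the densities are all illustrative assumptions introduced here.

```python
# Minimal sketch (illustrative assumptions, not the paper's formulation):
# a 1-D decision variable u encodes which side of an obstacle (at u = 0) to
# pass on; each agent is modeled as a Gaussian mixture over u.
import numpy as np

def mixture_pdf(u, weights, means, std=0.3):
    """Density of a 1-D Gaussian mixture evaluated at the points in u."""
    u = np.atleast_1d(u)[:, None]
    comps = np.exp(-0.5 * ((u - np.asarray(means)) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))
    return comps @ np.asarray(weights)

# Human slightly prefers passing right (+1) but left (-1) remains plausible
# (intention ambiguity); the robot planner slightly prefers passing left.
human_w, human_mu = [0.6, 0.4], [+1.0, -1.0]
robot_w, robot_mu = [0.55, 0.45], [-1.0, +1.0]

u_grid = np.linspace(-2.0, 2.0, 401)

# Classical shared control: each agent commits to a single sample (its MAP
# command), then a fixed-weight blend arbitrates between the two samples.
u_h = u_grid[np.argmax(mixture_pdf(u_grid, human_w, human_mu))]   # ~ +1 (right)
u_r = u_grid[np.argmax(mixture_pdf(u_grid, robot_w, robot_mu))]   # ~ -1 (left)
alpha = 0.5
u_csc = alpha * u_h + (1.0 - alpha) * u_r                         # ~ 0: straight at the obstacle

# Arbitrating over distributions: combine the two densities (here a simple
# pointwise product, one possible joint) and act on a mode of the result.
joint = mixture_pdf(u_grid, human_w, human_mu) * mixture_pdf(u_grid, robot_w, robot_mu)
u_dist = u_grid[np.argmax(joint)]                                 # ~ +1: a side both agents support

print(f"CSC blend of samples:      u = {u_csc:+.2f}  (deadlock/collision)")
print(f"Distribution arbitration:  u = {u_dist:+.2f}  (agreed pass side)")
```

Under these assumed models the blended command aims straight at the obstacle, while the combined density peaks at a pass side to which both agents assign probability mass, illustrating why arbitrating over distributions rather than over samples can avoid the deadlock described above.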