Shareable interfaces, those that several users can interact with simultaneously, are a common tool both in CSCW research and in real-world applications. However, they tend to lack a capability that has traditionally been central to the usefulness of computing systems: multitasking. In this paper we explain why combining the multiuser features of shareable interfaces with the multitasking capabilities of general-purpose computing could be relevant for building useful systems, and why this combination is absent from most current prototypes and systems. We also discuss possible approaches to the problems that prevent shareable interfaces from fully supporting multitasking, and we present a novel approach based on distributed, application-centered, content-based gesture disambiguation. We describe how an existing framework, GestureAgents, implements this approach, expanding the description of the elements relevant to this problem, and we conclude with example applications and a discussion.