Touchscreen interaction currently relies on a limited set of multitouch gestures and on a wide range of graphical widgets that are often difficult to manipulate and that consume considerable screen real estate. Many tasks remain tedious to perform on touchscreens: selecting text across multiple views, manipulating the different degrees of freedom of a graphical object, or invoking a command and then setting its parameter values in sequence. We propose a design space of simple multitouch gestures that user interface designers can systematically explore to offer more gestures to users. We further consider a set of 32 gestures for tablet-sized devices: we propose an incremental recognition engine that works with current hardware technology, and we empirically test the usability of these gestures. In our experiment, individual gestures are recognized with an average accuracy of ∼90%, and users successfully perform some transitions between gestures without the use of explicit delimiters. Our contribution aims to help designers exploit the rich multitouch input channel for activating discrete and continuous controls, and to enable fluid transitions between controls.
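To make the notion of incremental recognition concrete, the sketch below illustrates the general idea rather than the paper's actual engine: candidate gestures are pruned as touch events stream in, so a gesture can be identified, and transitions handled, before the fingers lift. All names (`TouchEvent`, `IncrementalRecognizer`, the toy gesture predicates) are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of incremental gesture recognition: each incoming
# touch event prunes the set of candidate gestures, instead of matching
# the full event trace from scratch after every event.

@dataclass
class TouchEvent:
    finger_id: int
    x: float
    y: float
    kind: str  # "down", "move", or "up"

class IncrementalRecognizer:
    """Keeps a shrinking set of candidate gestures; each new event
    eliminates candidates whose prefix constraints it violates."""

    def __init__(self, gestures: dict[str, Callable[[list[TouchEvent]], bool]]):
        self.gestures = gestures
        self.events: list[TouchEvent] = []
        self.candidates = set(gestures)

    def feed(self, ev: TouchEvent) -> Optional[str]:
        self.events.append(ev)
        # Pruning is monotone: a gesture ruled out once never comes back,
        # so predicates must accept any valid *prefix* of their gesture.
        self.candidates = {
            g for g in self.candidates if self.gestures[g](self.events)
        }
        # The gesture is recognized as soon as exactly one candidate remains.
        if len(self.candidates) == 1:
            return next(iter(self.candidates))
        return None

# Toy prefix predicates (placeholders, not the paper's 32 gestures):
gestures = {
    "one_finger_drag":  lambda evs: len({e.finger_id for e in evs}) <= 1,
    "two_finger_pinch": lambda evs: len({e.finger_id for e in evs}) <= 2,
}

rec = IncrementalRecognizer(gestures)
rec.feed(TouchEvent(0, 10, 10, "down"))            # ambiguous -> None
print(rec.feed(TouchEvent(1, 50, 50, "down")))     # -> "two_finger_pinch"
```

Expressing each gesture as a prefix predicate is what makes the recognizer incremental: while several candidates remain the input is ambiguous, and the moment the set collapses to one, the gesture's continuous parameters can start driving a control without waiting for an explicit delimiter.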