Introduction

One of the main motivations of logic programming is the idea of using a high-level, logical specification of an algorithm, which abstracts away from many details related to its execution. As Miller pointed out, logical operators can be interpreted as high-level search instructions, and the sequent calculus can be used to give a very clear and simple account of logic programming [13]. In traditional logic programming, one is mainly interested in the result of a computation, and computing is essentially the exploration of a search space. Recently, Miller's methods have been extended to so-called resource-conscious logics, like linear logic [4,12], and researchers have designed several languages based on them [2,10,12]. These logics allow one to deal directly with notions of resources, messages, processes, and so on; in other words, it is possible to give a proof-theoretical account of concurrent computations, in the logic programming spirit.

A concurrent computation is not so much about getting a result as it is about establishing certain communication patterns, protocols, and the like. Hence we might wonder to what extent logic can be useful in the specification of concurrent programs. Stated differently, if concurrent programs are essentially protocols, subject mainly to an operational view of computation, can logic contribute to their design? We are not concerned here with the use of logics to prove properties of programs, as with, say, Hennessy-Milner logic for CCS. We want to use logic in the design of languages for concurrent computation, in order to obtain some useful inherent properties, at the object level, so to speak.

In this paper I will present a very simple process algebra and I will argue for its proof-theoretical understanding in terms of proof search. We will work within the calculus of structures [7], which is a recent generalisation of the sequent