The goal-oriented manipulation of articulated objects plays an important role in real-world robot tasks. Current approaches typically rely on a number of simplifying assumptions to reason about how to reach an articulated object's goal configuration, and exploit ad hoc algorithms. The consequences are twofold: first, it is difficult to generalise the obtained solutions (in terms of actions a robot can execute) to different target configurations of the object and, more broadly, to objects with different physical characteristics; second, the representation and reasoning layers are tightly coupled and interdependent. In this paper we investigate the use of automated planning techniques to deal with articulated object manipulation tasks. Such techniques allow for a clear separation between knowledge and reasoning, as advocated in Knowledge Engineering. We introduce two PDDL formulations of the task, which rely on conceptually different representations of the orientation of the objects. Experiments involving several planners and objects of increasing size demonstrate the effectiveness of the proposed models and confirm their exploitability when embedded in a real-world robot software architecture.
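To give a flavour of what a PDDL encoding of this task can look like, the sketch below shows a minimal, purely illustrative domain fragment; it is not one of the two formulations introduced in the paper. It assumes that link orientations are discretised into a finite set of angles and that a single link can be rotated by one discretisation step at a time; the names articulated-object, rotate-link-clockwise, orientation, connected, and next are hypothetical.

    (define (domain articulated-object)
      (:requirements :strips :typing)
      (:types link angle)
      (:predicates
        (connected ?l1 ?l2 - link)          ; ?l1 and ?l2 are adjacent links of the object
        (orientation ?l - link ?a - angle)  ; current (discretised) absolute orientation of a link
        (next ?a1 ?a2 - angle))             ; successor relation over the discretised angles
      ;; Rotate a single link by one discretisation step.
      (:action rotate-link-clockwise
        :parameters (?l - link ?from ?to - angle)
        :precondition (and (orientation ?l ?from) (next ?from ?to))
        :effect (and (not (orientation ?l ?from)) (orientation ?l ?to))))

A goal then simply lists the desired orientation of each link, and any PDDL-compliant planner can search for the sequence of rotation actions that achieves it, keeping the object representation separate from the reasoning engine.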