This paper introduces the DIP (Domain Interaction Patterns) language, which provides a new way of integrating task and data parallelism. Coordination skeletons, or patterns, are used to express task parallelism among a collection of data-parallel HPF tasks. Patterns specify the interaction among the domains involved in the application, along with the processor and data layouts. The use of domains, i.e. regions together with some interaction information such as borders, improves pattern reusability. The knowledge, at the coordination level, of the data distribution belonging to the different HPF tasks is the key to an efficient implementation of the communication among them. Moreover, our system requires no change to the runtime support of the HPF compiler used.

INTRODUCTION

High Performance Fortran (HPF) [4] has emerged as a standard high-level data-parallel programming language for parallel computing. However, a disadvantage of using a language like HPF is that the user is constrained by the model of parallelism supported by the language. It is widely accepted that many important applications cannot be implemented efficiently following a pure data-parallel paradigm: pipelines of data-parallel tasks, a common computation structure in image processing, signal processing and computer vision; multi-block codes containing irregularly structured regular meshes; and multidisciplinary optimization problems such as aircraft design. For these applications, rather than having a single data-parallel program, it is more appropriate to subdivide the whole computation into several data-parallel pieces that run concurrently and co-operate, thus exploiting task parallelism.

Integration of task and data parallelism is currently an active area of research, and several approaches have been proposed [1]. Integrating the two forms of parallelism cleanly, within a coherent programming model, is difficult.
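To make the target structure concrete, the following is a minimal sketch (in Python, not DIP or HPF) of a pipeline of data-parallel tasks of the kind described above: each stage applies a whole-array operation to a block of data, and the stages run concurrently, handing blocks downstream. All names (the stage functions, the queue wiring) are illustrative assumptions, not part of the DIP system.

```python
# Illustrative sketch only: a two-stage pipeline where each stage performs a
# whole-block ("data-parallel") operation; stages run concurrently as threads
# and co-operate by passing blocks through queues.
import threading
import queue

def stage(fn, inq, outq):
    """Generic pipeline stage: apply fn to each incoming block."""
    while True:
        block = inq.get()
        if block is None:        # sentinel: propagate shutdown downstream
            outq.put(None)
            break
        outq.put(fn(block))      # data-parallel step over the whole block

def smooth(block):
    """Stage 1: 3-point moving average (edges replicated)."""
    n = len(block)
    return [(block[max(i - 1, 0)] + block[i] + block[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def threshold(block):
    """Stage 2: clamp values below 0.5 to zero."""
    return [x if x >= 0.5 else 0.0 for x in block]

def run_pipeline(blocks):
    q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=stage, args=(smooth, q0, q1)),
               threading.Thread(target=stage, args=(threshold, q1, q2))]
    for t in threads:
        t.start()
    for b in blocks:             # feed input blocks, then the sentinel
        q0.put(b)
    q0.put(None)
    out = []
    while (b := q2.get()) is not None:
        out.append(b)
    for t in threads:
        t.join()
    return out

print(run_pipeline([[0.0, 1.0, 1.0, 0.0]]))
```

In a coordination-language setting such as DIP, this wiring between stages is what the skeleton expresses at a high level, while each stage body would itself be a data-parallel (HPF) program.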
In general, compiler-based approaches are limited in the forms of task-parallel structure they can support, while runtime solutions require the programmer to manage task parallelism at a lower level than data parallelism. The use of coordination models and languages [2] to integrate task and data parallelism is proving to be a good alternative [3][5][6], providing a high-level mechanism and supporting different forms of task-parallel structure in a clear and elegant way.

In this paper we present DIP, a high-level coordination language to express task parallelism among a collection of data-parallel HPF tasks which interact according to static and predictable patterns. DIP allows an application to be organized as a combination of common skeletons, such as multi-blocking or pipelining. Skeletons specify the interaction among the domains involved in the application, along with the mapping of processors and the distribution of data. On the one hand, the use of domains, which are regions together with some interaction information such as borders, makes the language suitable for the solution of numerical problem...