Many programming languages support either task parallelism or data parallelism, but few provide a uniform framework for writing applications that need both. We present a programming language and system that integrates task and data parallelism using shared objects. A shared object may be stored on one processor or replicated; it may also be partitioned and distributed over several processors. Task parallelism is achieved by forking processes remotely and having them communicate and synchronize through objects. Data parallelism is achieved by executing operations on partitioned objects in parallel. Writing task- and data-parallel applications with shared objects has several advantages. Programmers use the objects as if they were stored in a memory common to all processors. On distributed-memory machines, if objects are remote, replicated, or partitioned, the system takes care of many low-level details such as data transfers and consistency semantics. In this article, we show how to write task- and data-parallel programs with our shared object model. We also describe a portable implementation of the model. To assess the performance of the system, we wrote several applications that use task and data parallelism and executed them on a collection of Pentium Pros connected by Myrinet. The performance of these applications is also discussed in this article.

easier to write in a data-parallel style and sometimes also obtain better performance. Our programming model also allows a single program to use both task and data parallelism, which, as we will see, is useful for several applications.

The programming model also has clean semantics. All ADT operations on shared objects are executed atomically, whether the objects are stored on a single node, replicated, or partitioned.
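To make the model concrete, the following is a minimal sketch, in Python rather than Orca, of the analogous idea: a shared object whose operations execute atomically, through which forked tasks communicate and synchronize. The class and method names here are illustrative assumptions, not the paper's API; a lock stands in for the consistency machinery that the Orca system provides transparently.

```python
# Sketch (not Orca syntax): a shared object whose operations run atomically,
# used by forked tasks for communication and synchronization.
# All names here are illustrative, not the paper's API.
import threading

class SharedCounter:
    """A shared object: every ADT operation executes atomically."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def inc(self):
        with self._lock:   # the whole operation is one atomic step
            self._value += 1

    def read(self):
        with self._lock:
            return self._value

counter = SharedCounter()

def task():
    # Each forked task performs operations on the shared object.
    for _ in range(1000):
        counter.inc()

# "Fork" four tasks that synchronize only through the shared object.
workers = [threading.Thread(target=task) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(counter.read())  # 4000: atomicity prevents lost updates
```

In Orca, the programmer writes only the ADT operations; whether the object is local, replicated, or partitioned, the runtime system guarantees the same atomic behavior that the explicit lock provides in this sketch.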
Within a single data-parallel operation, the language uses owner-computes semantics, which implies that processors do not observe updates of remote elements until the operation has completed.

This article describes a programming language based on the model. Our language is based on the Orca task-parallel language [Bal et al. 1992]. We have extended Orca with constructs for data parallelism (i.e., partitioned objects and data-parallel operations). Likewise, the implementation of the extended language is based on an implementation of the original Orca language. For completeness, we briefly describe the original task-parallel constructs and their implementation in this article; for more detailed descriptions we refer to Bal et al. [1992] and Rühl et al. [1996].

The outline of the rest of the article is as follows. In Section 2, we describe our programming model and the language we use for writing mixed task- and data-parallel applications. In Section 3, we present an implementation of the model on distributed-memory machines. This implementation consists of three layers: the compiler, the runtime system (RTS), and a portability layer that facilitates porting the system to various platforms. In Section 4, we discuss the execution of parall...