The array is a fundamental data structure that provides an efficient way to store and retrieve non-sparse data contiguously in memory. Arrays are important for the performance of many memory-intensive applications due to the design of modern memory hierarchies: contiguous storage facilitates spatial locality and predictable access patterns, which enables prefetching.

Operations on large arrays often lend themselves well to parallelisation, such as a fork-join style divide-and-conquer algorithm for sorting. For parallel operations on arrays to be deterministic, data-race freedom must be guaranteed. For operations on arrays of primitive data, data-race freedom is obtained by coordinating accesses so that no two threads operate on the same array indices. This is however not enough for arrays of non-primitives, due to aliasing: accesses of separate array elements may return pointers to the same object, or to overlapping structures.

Reference capabilities have been used successfully in the past to statically guarantee the absence of data races in object-oriented programs by combining concepts such as uniqueness, read-only access, immutability, and borrowing. This paper presents the first extension of reference capabilities, called array capabilities, that supports concurrent and parallel operations on arrays of both primitive and non-primitive values.

We define a small language of operators on array capabilities. In addition to element access, these operations support the abstract manipulation of arrays: logical splitting of arrays into subarrays; merging subarrays; and in-place alignment of the physical elements of an array with the logical order defined through splitting and merging. The interplay of these operations with uniqueness, borrowing and read-only access allows expressing a wide range of array use cases.

By formalising our type system and the dynamic semantics of the key operations on array capabilities in a simple calculus, we give a precise description of the array capabilities. We then prove type soundness and array disjointness: two capabilities that allow mutation can never give access to overlapping sets of elements. Data-race freedom is stated as a corollary to these theorems, as data races require two array capabilities in different threads. In addition to the formal approach, we experiment with array capabilities on a small number of parallel array algorithms and report our preliminary findings.

Array capabilities extend the safety of reference capabilities to arrays in a way that builds cleanly on existing reference capability concepts and constructs. This allows programmers to express parallel array algorithms at a higher level than individual indices, with the absence of data races guaranteed statically.

ACM CCS: Computing methodologies → Parallel programming languages; Mathematics of computing → Computations on matrices; Software and its engineering → Data types and structures.

race-free parallel programming with support for arrays. We do this in the context of Arr-O-Matic, an extension of Arrgh [ ]; examine ho...
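The paper's own syntax for the split, merge, and alignment operators is not shown in this excerpt, so the following is only an illustrative sketch, in Rust, of the discipline the abstract describes: splitting an array into disjoint subarrays so that concurrent mutation of separate parts cannot race, and logically merging them again afterwards. The function name `parallel_increment_and_sum` and the chunk threshold are hypothetical; `split_at_mut` and `thread::scope` are standard Rust APIs used here only as a stand-in for split/merge on array capabilities.

```rust
use std::thread;

/// Fork-join traversal over disjoint halves of a mutable slice.
/// Splitting yields two non-overlapping subarrays, so each spawned
/// thread mutates only its own elements and no data race is possible.
fn parallel_increment_and_sum(data: &mut [i64]) -> i64 {
    if data.len() <= 1024 {
        // Sequential base case: bump each element and sum the results.
        data.iter_mut().map(|x| { *x += 1; *x }).sum()
    } else {
        let mid = data.len() / 2;
        // `split_at_mut` hands out two disjoint mutable views,
        // playing the role of the "split" operator on array capabilities.
        let (left, right) = data.split_at_mut(mid);
        thread::scope(|s| {
            let l = s.spawn(|| parallel_increment_and_sum(left));
            let r = s.spawn(|| parallel_increment_and_sum(right));
            l.join().unwrap() + r.join().unwrap()
        })
        // When the scope ends, the two halves are logically "merged"
        // back into the single mutable borrow of `data`.
    }
}

fn main() {
    let mut v: Vec<i64> = (0..10_000).collect();
    let total = parallel_increment_and_sum(&mut v);
    println!("total = {total}");
}
```

The analogy is only approximate: Rust enforces disjointness through its borrow checker, whereas the array capabilities described in the abstract make splitting, merging, and in-place alignment explicit operators with their own typing rules and dynamic semantics.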