This paper describes the design and implementation of a concurrent compacting garbage collector for languages that distinguish mutable data from immutable data (e.g., ML) as well as for languages that manipulate only immutable data (e.g., pure functional languages such as Haskell). The collector runs on shared-memory parallel computers and requires minimal mutator/collector synchronization. No special hardware or operating system support is required.
Our concurrent collector derives from sequential semi-space copying collectors. The collector requires that each heap object include a forwarding pointer in addition to its data fields. An access to an immutable object can be satisfied by either the original or the forwarded copy of the object; a mutable object is always accessed through the forwarded copy, if one exists.
Measurements of this collector in a Standard ML compiler on a shared-memory computer indicate that it eliminates perceptible garbage-collection pauses by reclaiming storage in parallel with the computation proper. All observed pause times are less than 20 milliseconds. We describe extensions for the concurrent collection of multiple mutator threads and refinements to our design that can improve its efficiency.
Dynamic granularity estimation is a new technique for automatically identifying expressions in functional languages for parallel evaluation. Expressions with little computation relative to thread-creation costs should evaluate sequentially for maximum performance. Static identification of such expressions is, however, difficult. Therefore, dynamic granularity estimation has compile-time and run-time components: abstract interpretation statically identifies functions whose complexity depends on data structure sizes, and the run-time system maintains approximations to these sizes. Compiler-inserted checks consult this size information to make thread-creation decisions dynamically.
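As a concrete illustration of this scheme, the OCaml sketch below (using the standard Thread library) pairs a list with a maintained size approximation and guards thread creation with a compiler-style check; the sized_list type, the threshold value, and expensive_sum are assumptions introduced for the example, not the paper's interface.

    (* Assumed representation: the run-time system keeps an approximate
       length alongside each list. *)
    type 'a sized_list = { approx_size : int; elems : 'a list }

    (* Assumed cut-off below which thread-creation cost outweighs the work. *)
    let threshold = 1000

    (* A function whose cost grows with the list's length; this is the kind
       of size dependence the static analysis would detect. *)
    let expensive_sum xs = List.fold_left (fun acc x -> acc + x * x) 0 xs

    (* The compiler-inserted check: consult the size estimate and only
       create a thread when the estimated work is large enough. *)
    let sum_maybe_in_parallel (xs : int sized_list) : int Lazy.t =
      if xs.approx_size < threshold then
        lazy (expensive_sum xs.elems)                 (* evaluate sequentially *)
      else begin
        let result = ref 0 in
        let t = Thread.create (fun () -> result := expensive_sum xs.elems) () in
        lazy (Thread.join t; !result)                 (* forcing waits for the thread *)
      end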
We describe dynamic granularity estimation for a list-based functional language. Extension to general recursive data structures and imperative operations is possible. Performance measurements of dynamic granularity estimation in a parallel ML implementation on a shared-memory machine demonstrate the possibility of large reductions (>20%) in execution time.