Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2015)
DOI: 10.1145/2814270.2814312

Optimizing hash-array mapped tries for fast and lean immutable JVM collections

Abstract: The data structures underpinning the collection APIs (e.g., lists, sets, maps) in the standard libraries of programming languages are used intensively in many applications. The standard libraries of recent Java Virtual Machine languages, such as Clojure or Scala, contain scalable and well-performing immutable collection data structures that are implemented as Hash-Array Mapped Tries (HAMTs). HAMTs already feature efficient lookup, insert, and delete operations; however, due to their tree-based nature, their memory footprints…
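
The abstract's key mechanism, consuming a few hash bits per trie level and compressing sparse 32-way nodes with a bitmap, can be illustrated with a small lookup routine. The following Java code is a minimal, hypothetical sketch (class and field names such as HamtNode and slots are assumptions, not the paper's implementation):

import java.util.Map;

// Minimal sketch of a bitmap-compressed HAMT node (hypothetical names, not
// the paper's code). Each trie level consumes 5 bits of the key's hash code,
// and Integer.bitCount (popcount) maps the logical 32-way slot index onto a
// compact array that stores only the occupied slots.
final class HamtNode<K, V> {
    final int bitmap;      // bit i set => logical slot i is occupied
    final Object[] slots;  // per set bit: either a Map.Entry payload or a sub-node

    HamtNode(int bitmap, Object[] slots) {
        this.bitmap = bitmap;
        this.slots = slots;
    }

    // Usage: node.lookup(key, key.hashCode(), 0)
    @SuppressWarnings("unchecked")
    V lookup(K key, int hash, int shift) {
        int mask = (hash >>> shift) & 0x1F;   // next 5 hash bits pick 1 of 32 slots
        int bit = 1 << mask;
        if ((bitmap & bit) == 0) {
            return null;                      // slot absent: key not present
        }
        int index = Integer.bitCount(bitmap & (bit - 1)); // compact index via popcount
        Object slot = slots[index];
        if (slot instanceof HamtNode) {
            return ((HamtNode<K, V>) slot).lookup(key, hash, shift + 5); // descend one level
        }
        Map.Entry<K, V> entry = (Map.Entry<K, V>) slot;
        return key.equals(entry.getKey()) ? entry.getValue() : null;
    }
}
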

Cited by 16 publications (24 citation statements). References 29 publications.
“…HAMTs constitute the basis for purely functional collections that are incrementally constructed and may refer to the unaltered parts of previous states [11,20]. In previous work we introduced the Compressed Hash-Array Mapped Prefix-tree (CHAMP) [28], a cache-oblivious and canonical HAMT variant that improves the runtime efficiency of iteration (1.3-6.7x) and equality checking (3-25.4x) over its predecessor, while at the same time reducing memory footprints.…”
Section: Related Work (mentioning)
confidence: 99%
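
The quoted statement credits CHAMP's cache-oblivious, canonical layout for its faster iteration and equality checking. A rough, hypothetical Java sketch of such a layout (names and details are assumptions, not the library's code) shows where the iteration benefit comes from: payload and sub-node references are kept in separate, contiguous regions of each node, so iteration visits all local entries before descending.

import java.util.function.BiConsumer;

// Hypothetical sketch of a CHAMP-style node layout. Two bitmaps separate
// local key/value payload ('datamap') from sub-nodes ('nodemap'); payload is
// grouped at the front of the content array and sub-nodes at the back.
final class ChampNode<K, V> {
    final int datamap;       // bit i set => logical slot i holds a key/value pair
    final int nodemap;       // bit i set => logical slot i holds a sub-node
    final Object[] content;  // [k0, v0, k1, v1, ..., subNodeN, ..., subNode0]

    ChampNode(int datamap, int nodemap, Object[] content) {
        this.datamap = datamap;
        this.nodemap = nodemap;
        this.content = content;
    }

    @SuppressWarnings("unchecked")
    void forEach(BiConsumer<K, V> action) {
        int payloadCount = Integer.bitCount(datamap);
        for (int i = 0; i < payloadCount; i++) {           // local payload first: contiguous reads
            action.accept((K) content[2 * i], (V) content[2 * i + 1]);
        }
        int nodeCount = Integer.bitCount(nodemap);
        for (int i = 0; i < nodeCount; i++) {              // then recurse into sub-nodes
            ((ChampNode<K, V>) content[content.length - 1 - i]).forEach(action);
        }
    }
}

The canonical (compact-on-delete) shape mentioned in the quote additionally guarantees that structurally equal tries have identical layouts, which is what lets equality checks short-circuit on shared subtrees.
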
“…Yet, a feature model allows for customization towards specific workloads (e.g., sparse vectors). For certain efficiency trade-offs it is important to distinguish between HAMT encodings which store dataAsLeafs and encodings which allow for mixedNodes internally [28]. We currently generate unordered set, map, and multi-map data structures based on the state-of-the-art HAMT variants: HAMT [3], CHAMP [28], and HHAMT [27].…”
Section: Content: one-of(mixedNodes, dataAsLeafs) (mentioning)
confidence: 99%
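
The distinction drawn here between encodings that store dataAsLeafs and encodings that allow mixedNodes internally can be made concrete with a short, hypothetical Java sketch (type names are invented for illustration only, not taken from the cited generator):

// 'dataAsLeafs': internal nodes hold only references to children; key/value
// payload lives exclusively in dedicated leaf objects, costing one extra
// indirection per entry.
interface LeafEncodedNode<K, V> { }

final class InternalOnlyNode<K, V> implements LeafEncodedNode<K, V> {
    final int bitmap;
    final Object[] children;  // sub-nodes or leaf objects, never inline payload
    InternalOnlyNode(int bitmap, Object[] children) {
        this.bitmap = bitmap;
        this.children = children;
    }
}

final class LeafNode<K, V> implements LeafEncodedNode<K, V> {
    final K key;
    final V value;
    LeafNode(K key, V value) { this.key = key; this.value = value; }
}

// 'mixedNodes': a single node kind may hold payload and sub-nodes side by side
// (as in the CHAMP-style sketch above), removing the leaf indirection at the
// cost of a more involved node layout.
final class MixedNode<K, V> {
    final int datamap;       // slots holding key/value pairs inline
    final int nodemap;       // slots holding sub-nodes
    final Object[] content;
    MixedNode(int datamap, int nodemap, Object[] content) {
        this.datamap = datamap;
        this.nodemap = nodemap;
        this.content = content;
    }
}
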