A Lazy Concurrent List-Based Set Algorithm
2006
DOI: 10.1007/11795490_3

Cited by 161 publications (192 citation statements)
References 8 publications
“…This means that for each addition or deletion of a node in the list, a modification lock has to be obtained, whereas simple comparison operations do not require explicit locks. This idea of modification locks can be implemented using a novel concurrent data structure called the Lazy List [20]. We adjusted the basic structure for the use in skyline computation and show the resulting code in Figure 5.…”
Section: Lazy List Parallel BNL
Mentioning (confidence: 99%)
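For context, the Lazy List pattern referenced in this excerpt can be sketched as follows: add and remove lock only the pair of nodes they modify and revalidate before changing the list, remove first marks the node (logical deletion) and then unlinks it, and contains traverses with no locks at all, relying only on the marked flag. The C code below is a minimal illustrative sketch in the spirit of the cited algorithm, not the cited paper's or the citing paper's code; error handling, atomic accesses, and safe memory reclamation are omitted.

```c
#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct node {
    int key;
    struct node *next;
    bool marked;              /* set once the node is logically deleted */
    pthread_mutex_t lock;     /* per-node "modification lock"           */
} node_t;

typedef struct { node_t *head; } list_t;

static node_t *node_new(int key, node_t *next) {
    node_t *n = malloc(sizeof *n);
    n->key = key;
    n->next = next;
    n->marked = false;
    pthread_mutex_init(&n->lock, NULL);
    return n;
}

list_t *list_new(void) {
    list_t *l = malloc(sizeof *l);
    l->head = node_new(INT_MIN, node_new(INT_MAX, NULL)); /* sentinel nodes */
    return l;
}

/* Validation: both nodes are still unmarked and still adjacent. */
static bool validate(node_t *pred, node_t *curr) {
    return !pred->marked && !curr->marked && pred->next == curr;
}

/* Membership test: a plain traversal plus a mark check, no locks taken. */
bool list_contains(list_t *l, int key) {
    node_t *curr = l->head;
    while (curr->key < key)
        curr = curr->next;
    return curr->key == key && !curr->marked;
}

/* Insertion: lock pred and curr, revalidate, then splice in the new node. */
bool list_add(list_t *l, int key) {
    for (;;) {
        node_t *pred = l->head, *curr = pred->next;
        while (curr->key < key) { pred = curr; curr = curr->next; }
        pthread_mutex_lock(&pred->lock);
        pthread_mutex_lock(&curr->lock);
        if (validate(pred, curr)) {
            bool added = (curr->key != key);
            if (added)
                pred->next = node_new(key, curr);
            pthread_mutex_unlock(&curr->lock);
            pthread_mutex_unlock(&pred->lock);
            return added;
        }
        pthread_mutex_unlock(&curr->lock);   /* validation failed: retry */
        pthread_mutex_unlock(&pred->lock);
    }
}

/* Deletion: lock pred and curr, revalidate, mark curr, then unlink it. */
bool list_remove(list_t *l, int key) {
    for (;;) {
        node_t *pred = l->head, *curr = pred->next;
        while (curr->key < key) { pred = curr; curr = curr->next; }
        pthread_mutex_lock(&pred->lock);
        pthread_mutex_lock(&curr->lock);
        if (validate(pred, curr)) {
            bool removed = (curr->key == key);
            if (removed) {
                curr->marked = true;         /* logical delete */
                pred->next = curr->next;     /* physical unlink; node not freed */
            }
            pthread_mutex_unlock(&curr->lock);
            pthread_mutex_unlock(&pred->lock);
            return removed;
        }
        pthread_mutex_unlock(&curr->lock);   /* validation failed: retry */
        pthread_mutex_unlock(&pred->lock);
    }
}
```

The property the excerpt relies on is visible here: the read-only comparison path (contains) never blocks, while writers pay for locking only on the two nodes they actually touch.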
“…If this latter optik trylock version fails, the predecessor's OPTIK lock is reverted, instead of unlocked, in order to avoid false conflicts with other concurrent operations. Notice that due to OPTIK, (i) no deleted flag is required (as in [22]), and (ii) the OPTIK lock of the deleted node is never released, which prohibits updates from reusing this node. Essentially, the linearization point of a deletion is the actual write on the pred->next pointer in line 24.…”
Section: OPTIK-based Linked List
Mentioning (confidence: 99%)
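The excerpt above is easier to follow with a concrete, simplified picture of a version-based lock: acquisition succeeds only if the lock still holds the version read during the lock-free traversal, a normal unlock publishes a new version, and a revert restores the old one so that concurrent operations which validated against it do not fail spuriously. The C sketch below illustrates only that pattern; the vlock_* names, the even/odd encoding, and the delete_after helper are invented here for illustration and are not the actual OPTIK API.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

typedef _Atomic uint64_t vlock_t;        /* even value = free, odd = locked */

/* Version read during the lock-free traversal of the list. */
uint64_t vlock_version(vlock_t *l) {
    return atomic_load(l);
}

/* Acquire only if the lock still holds the version seen during traversal. */
bool vlock_trylock_version(vlock_t *l, uint64_t seen) {
    if (seen & 1)
        return false;                    /* was already locked when observed */
    uint64_t expected = seen;
    return atomic_compare_exchange_strong(l, &expected, seen + 1);
}

void vlock_unlock(vlock_t *l) { atomic_fetch_add(l, 1); } /* publish new version */
void vlock_revert(vlock_t *l) { atomic_fetch_sub(l, 1); } /* restore old version */

typedef struct node {
    int key;
    struct node *next;
    vlock_t lock;
} node_t;

/* Delete curr (the successor of pred). pred_v and curr_v are the versions
 * read with vlock_version() while traversing. curr's lock is deliberately
 * never released, so no later update can lock and reuse the node, which
 * removes the need for a separate "deleted" flag. The linearization point
 * of a successful deletion is the write to pred->next. */
bool delete_after(node_t *pred, uint64_t pred_v,
                  node_t *curr, uint64_t curr_v) {
    if (!vlock_trylock_version(&pred->lock, pred_v))
        return false;                    /* pred changed: caller re-traverses */
    if (!vlock_trylock_version(&curr->lock, curr_v)) {
        vlock_revert(&pred->lock);       /* back out without advertising a change */
        return false;
    }
    pred->next = curr->next;             /* linearization point of the deletion */
    vlock_unlock(&pred->lock);           /* pred now carries a new version */
    /* curr->lock stays locked forever: the retired node can never be reused. */
    return true;
}
```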
“…A vast amount of work has been dedicated to the development of correct and scalable CSDS algorithms [3][4][5][7][8][9][10]14,17].…”
Section: Introduction
Mentioning (confidence: 99%)
“…Our work is based on the vast amount of prior practical work that points to a single direction for achieving scalability: Strip down synchronization (i.e., every construct that induces coordination of concurrent threads), which is a major impediment to scalability. To achieve minimal synchronization, all existing patterns for designing concurrent data structures do, directly or indirectly, promote concurrent designs that are close to their sequential counterparts: concrete CSDS algorithms [10,13], RCU [17], RLU [16], OPTIK [8], ASCY [4], etc.…”
Section: Introduction
Mentioning (confidence: 99%)