Matching language and hardware for parallel computation in the Linda Machine
1988 · DOI: 10.1109/12.2244

Cited by 45 publications (11 citation statements)
References 15 publications

“…More precisely, we want to control when and for how long a failure should occur on each node. For this reason, we extended our benchmark application by adding a set of special operations for starting and ending a kernel's failure.…”
Section: Set-up of the Experiments (mentioning)
confidence: 99%
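The excerpt does not say what these special operations look like; as a rough illustration only, a failure-injection harness with hypothetical start_failure/end_failure calls per kernel might resemble this Python sketch:

```python
import threading
import time

class Kernel:
    """Hypothetical stand-in for one tuple-space kernel in the benchmark."""

    def __init__(self, node_id):
        self.node_id = node_id
        self._failed = threading.Event()

    def start_failure(self):
        # The kernel stops answering requests until end_failure() is called.
        self._failed.set()

    def end_failure(self):
        self._failed.clear()

    def serve(self, request):
        if self._failed.is_set():
            raise TimeoutError(f"node {self.node_id} is down")
        return ("ok", request)

def schedule_failure(kernel, start_s, duration_s):
    """Fail `kernel` `start_s` seconds from now, for `duration_s` seconds."""
    def run():
        time.sleep(start_s)
        kernel.start_failure()
        time.sleep(duration_s)
        kernel.end_failure()
    threading.Thread(target=run, daemon=True).start()

# Example: node 0 goes down 0.1 s into the run and recovers 0.5 s later.
k = Kernel(0)
schedule_failure(k, start_s=0.1, duration_s=0.5)
```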
“…There has been much research on how to optimize the distribution and replication of tuples for parallel applications. A generic approach that balances the broadcasting of read and write operations is described in [1] and exemplifies tuple spaces for parallel applications.…”
Section: Related Work (mentioning)
confidence: 99%
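The excerpt does not reproduce the scheme from [1], but a classic way to balance read and write broadcasts (the approach taken in the Linda Machine design itself) is to arrange nodes in a grid, replicate each tuple along one row, and search along one column: every row intersects every column, so a read always finds a tuple that has been written. A minimal Python sketch, with grid size and hashing chosen purely for illustration:

```python
import hashlib

# Nodes form an R x C grid. out() replicates a tuple to all C nodes in its
# home row; rd() searches all R nodes in one column. Write cost is O(C) and
# read cost is O(R), so R = C = sqrt(N) balances the two broadcasts.
R, C = 4, 4
grid = [[set() for _ in range(C)] for _ in range(R)]

def home_row(t):
    # Hash the tuple contents to pick the row holding its replicas.
    return hashlib.sha256(repr(t).encode()).digest()[0] % R

def out(t):
    row = home_row(t)
    for col in range(C):          # replicate across the entire row
        grid[row][col].add(t)

def rd(match, col=0):
    for row in range(R):          # search one entire column
        for t in grid[row][col]:
            if match(t):
                return t
    return None

out(("point", 3, 7))
print(rd(lambda t: t[0] == "point"))   # -> ('point', 3, 7)
```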
“…However, update propagation in such systems (typically supported either by invalidate-and-resend on access or by RMI-style mechanisms) is inefficient (re-sending a large object or a log of operations (RMI)) and often infeasible for data mining applications. Distributed shared memory systems [3,5,26,35,33,20,22,43] all support transparent sharing of data amongst remote processes, with efficient update propagation, but most require tight coupling of processes with sharing that is not address-independent. None of the above systems support flexible client-controlled coherence, client-controlled memory placement (due to their address-dependent nature), or anytime updates.…”
Section: Related Work (mentioning)
confidence: 99%
“…Linda [1], which is the most important example of this kind of language, provides the abstraction of a shared, content-addressable memory that can be accessed by any process. While Linda is architecture-independent, and thus holds those characteristics of high-levelness that facilitate the parallel programming job and the portability of programs, the tuning of each Linda application for each specific target machine is the responsibility of the programmer.…”
Section: Related Work (mentioning)
confidence: 99%
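For readers unfamiliar with the model the quote summarizes: Linda processes coordinate through out() (deposit a tuple), rd() (read a matching tuple), and in() (read and remove one), matching on content rather than on addresses. A minimal single-process Python sketch of these semantics (Linda itself is embedded in a host language such as C; in is renamed in_ below because in is a Python keyword):

```python
import threading

class TupleSpace:
    """Minimal sketch of Linda's shared, content-addressable memory.
    None fields in a template act as wildcards (formal parameters)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    @staticmethod
    def _matches(template, t):
        return len(template) == len(t) and all(
            f is None or f == v for f, v in zip(template, t))

    def out(self, t):
        # Deposit a tuple; wake any process blocked in rd()/in_().
        with self._cond:
            self._tuples.append(tuple(t))
            self._cond.notify_all()

    def rd(self, template):
        # Block until a matching tuple exists; return it, leaving it in place.
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._matches(template, t):
                        return t
                self._cond.wait()

    def in_(self, template):
        # Like rd(), but atomically removes the matched tuple.
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._matches(template, t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

ts = TupleSpace()
ts.out(("sum", 2, 3))
print(ts.in_(("sum", None, None)))   # -> ('sum', 2, 3)
```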