1996
DOI: 10.1007/3-540-61422-2_128

Using sparsification for parametric minimum spanning tree problems

Abstract: Two applications of sparsification to parametric computing are given. The first is a fast algorithm for enumerating all distinct minimum spanning trees in a graph whose edge weights vary linearly with a parameter. The second is an asymptotically optimal algorithm for the minimum ratio spanning tree problem, as well as other search problems, on dense graphs.
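The second application, the minimum ratio spanning tree problem, asks for a spanning tree minimizing (Σ a_e)/(Σ b_e) over its edges. As a rough illustration of the parametric connection (not the paper's sparsification-based, asymptotically optimal algorithm), the Python sketch below uses a Dinkelbach-style iteration: repeatedly compute an MST under the parametric weights a_e − λ·b_e and reset λ to the ratio of the tree found. The edge format, the function names, and the assumption that all b_e > 0 are illustrative choices, not taken from the paper.

def kruskal_mst(n, edges, weight):
    # Standard Kruskal MST with a small union-find; `edges` is a list of
    # (u, v, a, b) tuples and `weight` maps such a tuple to a sort key.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for e in sorted(edges, key=weight):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def min_ratio_spanning_tree(n, edges, eps=1e-12):
    # Dinkelbach-style iteration: build an MST under a_e - lam * b_e and
    # reset lam to the ratio of that tree; stop when the parametric
    # objective of the optimal tree is (numerically) zero.
    lam = 0.0
    while True:
        tree = kruskal_mst(n, edges, weight=lambda e: e[2] - lam * e[3])
        num = sum(e[2] for e in tree)
        den = sum(e[3] for e in tree)
        if abs(num - lam * den) < eps:
            return tree, num / den
        lam = num / den

# Example: edge (2, 3) is the only edge incident to vertex 3, so it lies in
# every spanning tree; the best completion gives ratio 10/7.
edges = [(0, 1, 4, 1), (1, 2, 3, 2), (0, 2, 5, 4), (2, 3, 2, 1)]
tree, ratio = min_ratio_spanning_tree(4, edges)
print(sorted((u, v) for u, v, _, _ in tree), ratio)

Because the graph has finitely many spanning trees, the iteration terminates after finitely many MST computations; each MST computation here is plain Kruskal rather than the sparsification machinery the paper uses to reach optimal asymptotic bounds on dense graphs.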

Cited by 21 publications (28 citation statements)
References 34 publications
“…Finally, our problem should not be confused with other spanning tree games found in cooperative games and mechanism design theory [8,11], with parametric spanning tree problems [7,6], or with two-stage stochastic minimum spanning tree problems [5].…”
Section: Related Work
confidence: 99%
“…The parametric minimum spanning tree problem (with linear edge weights) has polynomially many solutions that can be constructed in polynomial time [1,15,18]. In contrast, the parametric shortest path problem is not polynomial, at least if the output must be represented as an explicit list of paths: it has a number of solutions and a running time that are exponential in log² n on n-vertex graphs [10].…”
Section: Related Work
confidence: 99%
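To make the quoted fact concrete, here is a brute-force Python sketch (with an assumed (u, v, a, b) edge format; it is not the fast algorithm of the cited paper) that enumerates the distinct minimum spanning trees of a graph whose edge weights are the linear functions w_e(λ) = a_e + λ·b_e. Breakpoints can only occur where two weight lines cross, and each tree is optimal on a contiguous λ-interval, so sampling one λ between consecutive candidate breakpoints visits every distinct tree.

from itertools import combinations

def mst_edge_indices(n, edges, lam):
    # Indices of an MST of the n-vertex graph under weights a_e + lam * b_e
    # (Kruskal with a small union-find); edges are (u, v, a, b) tuples.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = set()
    for i in sorted(range(len(edges)), key=lambda k: edges[k][2] + lam * edges[k][3]):
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru != rv:
            parent[ru] = rv
            chosen.add(i)
    return frozenset(chosen)

def enumerate_parametric_msts(n, edges, lo=-1e6, hi=1e6):
    # Candidate breakpoints are the O(m^2) crossing points of pairs of
    # weight lines restricted to [lo, hi]; between consecutive candidates
    # the MST is fixed, so one sample per interval suffices.
    breakpoints = {lo, hi}
    for i, j in combinations(range(len(edges)), 2):
        ai, bi = edges[i][2], edges[i][3]
        aj, bj = edges[j][2], edges[j][3]
        if bi != bj:
            lam = (aj - ai) / (bi - bj)
            if lo < lam < hi:
                breakpoints.add(lam)
    pts = sorted(breakpoints)
    trees = []
    for left, right in zip(pts, pts[1:]):
        t = mst_edge_indices(n, edges, (left + right) / 2.0)
        if not trees or trees[-1] != t:
            trees.append(t)
    return trees

This quadratic-breakpoint preprocessing only illustrates why the output size is polynomial; the paper's contribution is a sparsification-based algorithm that enumerates the distinct trees much faster than recomputing an MST per candidate interval.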
“…where Π(δ_X = v) stands for the possibility degree that δ_X = v. Since the statement "X is optimal under w" is equivalent to the condition δ_X(w) = 0, we get the following relationships between the optimality degrees (6), (7) and the deviation (10). Using (3) we can express μ_{Δ_X} in the following way: N(X is optimal) = 1 − inf{λ : δ_X^λ = 0} (13), and N(X is optimal) = 0 if δ_X^1 > 0. Exactly the same reasoning can be applied to elements.…”
Section: The Optimality Evaluation and Fuzzy Deviation Interval
confidence: 99%
“…Exactly the same reasoning can be applied to elements. It is enough to replace X with f in formulae (10)–(13).…”
Section: The Optimality Evaluation and Fuzzy Deviation Interval
confidence: 99%