2012
DOI: 10.1007/978-3-642-31594-7_30

Parameterized Approximation via Fidelity Preserving Transformations

Abstract: We motivate and describe a new parameterized approximation paradigm which studies the interaction between performance ratio and running time for any parameterization of a given optimization problem. As a key tool, we introduce the concept of an α-shrinking transformation, for α ≥ 1. Applying such a transformation to a parameterized problem instance decreases the parameter value while preserving an approximation ratio of α (or α-fidelity). For example, it is well known that Vertex Cover cannot be approximated…

Cited by 18 publications (32 citation statements)
References 32 publications
“…The observation that a lossy pre-processing can simultaneously achieve a better size bound than normal kernelization algorithms as well as a better approximation factor than the ratio of the best approximation algorithms is not new. In particular, motivated by this observation, Fellows et al. [30] initiated the study of lossy kernelization and proposed a definition of lossy kernelization called α-fidelity kernels.…”
Section: Introduction
confidence: 99%
“…In particular, motivated by this observation, Fellows et al. [30] initiated the study of lossy kernelization and proposed a definition of lossy kernelization called α-fidelity kernels. Essentially, an α-fidelity kernel is a polynomial-time pre-processing procedure such that an optimal solution to the reduced instance translates to an α-approximate solution to the original.…”
Section: Introduction
confidence: 99%
“…Approximation algorithmics has a highly developed theory (having produced concepts such as MaxSNP-hardness and the famous PCP theory) for proving, relative to some plausible complexity-theoretic assumption, lower bounds on approximation factors [1,44,46]. We remark that there exist frameworks combining kernelization and approximation algorithms, namely α-fidelity kernelization [22] and lossy kernelization [35].…”
Section: Introduction
confidence: 99%
“…This issue has already been considered for other FPT problems, in particular for the min vertex cover problem. In [2,3,13], several parameterized approximation algorithms are given that run faster than (exact) FPT algorithms and achieve ratios better than 2 (the ratio achievable in polynomial time). Note that [3,13] pose as an open question whether similar results can be achieved for edge dominating set.…”
Section: Introduction
confidence: 99%
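For context, the ratio of 2 achievable in polynomial time for vertex cover, mentioned in the statement above, is classically obtained via a maximal matching: take both endpoints of every edge whose endpoints are both still uncovered. A minimal sketch (function name and edge-list representation are my own, for illustration):

```python
def vertex_cover_2_approx(edges):
    """Greedy 2-approximation for Vertex Cover via a maximal matching.

    For each edge with both endpoints uncovered, add both endpoints to
    the cover. The chosen edges form a maximal matching, so the result
    is a valid cover; any optimal cover must contain at least one
    endpoint of each matched edge, hence the cover is at most twice
    the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: a star with center 0. The optimum cover is {0} (size 1);
# the algorithm returns a cover of size 2, matching the ratio-2 bound.
star = [(0, 1), (0, 2), (0, 3)]
print(vertex_cover_2_approx(star))
```

The parameterized approximation results cited in [2,3,13] improve on this ratio by spending FPT (rather than polynomial) time.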