1992
DOI: 10.1007/3-540-55719-9_72
Reductions to sets of low information content

Cited by 27 publications (16 citation statements)
References 35 publications
“…The left set method turned out to be a well-suited tool to prove collapse results concerning sparse sets. Using this method, similar results were obtained for polynomial-time conjunctive reductions [3,32]. Also the proof in [3] showing that no bounded Turing hard set for NP conjunctively reduces to a sparse set unless P = NP uses the left set technique.…”
supporting
confidence: 56%
“…Using this method, similar results were obtained for polynomial-time conjunctive reductions [3,32]. Also the proof in [3] showing that no bounded Turing hard set for NP conjunctively reduces to a sparse set unless P = NP uses the left set technique. Furthermore, it makes use of the fact that the sets in R^p_T(R^p_c(SPARSE)) are monotonously reducible to a sparse set.…”
supporting
confidence: 56%
“…The consequences in Theorem 6.1 are much stronger than what is known to follow from P = NP. If P = NP, then no ≤ p btt -hard or ≤ p c -hard set is sparse [26,3], but it is not known whether hard sets under disjunctive reductions or unbounded Turing reductions can be sparse.…”
Section: Hard Sets for NP
mentioning
confidence: 99%
“…Results (1) and (2) improved previous separations due to Watanabe [18]. The classes in (2) and (3) were reduced to disjunctions, which can be learned by Littlestone's Winnow algorithm [9]. We obtain further results in this direction using more sophisticated learning algorithms and concept classes that generalize disjunctions.…”
Section: Introduction
supporting
confidence: 67%