Proceedings of the Forty-Second ACM Symposium on Theory of Computing 2010
DOI: 10.1145/1806689.1806758

Complexity theory for operators in analysis

Abstract: We propose a new framework for discussing computational complexity of problems involving uncountably many objects, such as real numbers, sets and functions, that can be represented only by approximation. The key idea is to use a certain class of string functions, which we call regular functions, as names representing these objects. These are more expressive than infinite sequences, which served as names in prior work that formulated complexity in more restricted settings. An important advantage of using regula…
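To make the abstract's idea concrete, here is a minimal, hypothetical sketch (the encoding and function names are invented, not the paper's exact conventions) of how a string function can name a real number: the name maps any string of length n to an n-bit dyadic approximation, so output length grows monotonically with input length — the kind of regularity the framework relies on.

```python
from fractions import Fraction

def make_name(x: Fraction):
    """A toy 'name' for x in [0, 1): phi maps any string s of length n
    to the n-bit binary expansion of floor(x * 2^n).  Output length is
    monotone in input length, a sketch of the regularity property."""
    def phi(s: str) -> str:
        n = len(s)
        k = int(x * 2 ** n)          # numerator of the dyadic approximation
        return format(k, 'b').zfill(n)
    return phi

# Querying the name at input length 4 yields a 2^-4 approximation of 1/3:
phi = make_name(Fraction(1, 3))
print(phi('0' * 4))  # -> 0101  (i.e. 5/16 = 0.3125)
```

Note that only the length of the input matters here; richer names can also carry information in the input string itself, which is what makes string functions more expressive than plain infinite sequences.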

Cited by 22 publications (27 citation statements)
References 31 publications
“…Indeed, points of a given size necessarily live in a compact subset, so if the size of each point is a natural number then the space must be a countable union of compact sets. Recently Kawamura and Cook [11] developed a framework applicable to the space C[0, 1] (which is not σ-compact), using higher-order complexity theory and in particular second-order polynomials. Their theory enables them to prove uniform versions of older results about the complexity of solving differential equations, as well as new results [12,13].…”
Section: Introduction
Confidence: 99%
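The second-order polynomials mentioned above can be illustrated directly: such a polynomial takes a function argument L (a size bound on a name) as well as a number n. The particular polynomial and size bound below are hypothetical examples, not taken from the paper.

```python
# A second-order polynomial is built from a number variable n, constants,
# + and *, and applications of a function variable L.  A hypothetical
# instance: P(L, n) = L(L(n*n) + n) + 3.

def P(L, n):
    return L(L(n * n) + n) + 3

# With the size bound L(m) = 2*m + 1 and n = 2:
#   L(4) = 9, then L(9 + 2) = 23, so P = 26.
print(P(lambda m: 2 * m + 1, 2))  # -> 26
```

Bounding running time by such expressions in the size function of the input name is what lets the theory cover non-σ-compact spaces like C[0, 1], where no single first-order polynomial bound exists.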
“…A typical feature of such results is that they assert the existence of computable objects, but the computational complexity of these objects is unbounded. This is also the case here: using techniques similar to those of [41], we can strengthen Theorem 5.1 (ii) to assert, for every nonempty co-semi-decidable and convex A ⊆ K, the existence of a polynomial-time computable nonexpansive f : K → K such that Fix(f ) = A, at least in the case where K is computably compact, and so in particular in the finite-dimensional case (if K is not computably compact there is no uniform majorant on the names of the points in K, so one would have to work in the framework of second-order complexity [39]). This allows us to characterise the computational complexity of fixed points of Lipschitz-continuous polynomial-time computable functions according to their Lipschitz constant.…”
Section: Characterisation of the Fixed Point Sets of Computable Nonex…
Confidence: 99%
“…This defines the class of (polynomial-time) computable functions from Reg to Reg. We can suitably define some other complexity classes related to nondeterminism or space complexity, as well as the notions of reduction and hardness [15].…”
Section: Definition
Confidence: 99%
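As a rough illustration of the class Reg referred to above, the sketch below tests the length-monotonicity condition on sample inputs; the function name and interface are invented for illustration and are not part of [15].

```python
from itertools import combinations

def is_length_monotone(phi, samples):
    """Check, on the given sample strings only, the defining condition
    of a regular function: |s| <= |t| implies |phi(s)| <= |phi(t)|."""
    for s, t in combinations(samples, 2):
        if len(s) <= len(t) and len(phi(s)) > len(phi(t)):
            return False
        if len(t) <= len(s) and len(phi(t)) > len(phi(s)):
            return False
    return True

samples = ['', '0', '00', '101']
print(is_length_monotone(lambda s: s + '1', samples))              # -> True
print(is_length_monotone(lambda s: '111' if s == '0' else s,
                         samples))                                  # -> False
```

The second function fails because the length-1 input '0' produces a longer output than the length-2 input '00', violating monotonicity.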
“…[19]). This was due to the absence of a sufficiently general theory of second-order polynomial-time computability, a gap which was filled by Cook and the first author in [15]. This theory can be considered a refinement of computability theory.…”
Section: Introduction
Confidence: 99%