2023
DOI: 10.1002/pamm.202200174
Which neural networks can be computed by an algorithm? – Generalised hardness of approximation meets Deep Learning

Abstract: Classical hardness of approximation (HA) is the phenomenon that, assuming P ≠ NP, one can easily compute an ϵ-approximation to the solution of a discrete computational problem for ϵ > ϵ0 > 0, but for ϵ < ϵ0, where ϵ0 is the approximation threshold, the problem becomes intractable. Recently, a similar yet more general phenomenon has been documented in AI: Generalised hardness of approximation (GHA). This phenomenon includes the following occurrence: for any approximation threshold ϵ1 > 0, there are AI problems for which …
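
A minimal LaTeX sketch of the threshold phenomenon described in the abstract. The problem map Ξ, the cost model, and the exact GHA statement are illustrative assumptions (the abstract is truncated), not taken from the paper's full text:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Hedged sketch of the approximation-threshold phenomenon from the abstract.
% The problem map \Xi and the cost model are illustrative assumptions;
% they are not taken from the paper's full text.

Classical hardness of approximation (assuming $\mathrm{P} \neq \mathrm{NP}$),
for a discrete problem $\Xi$ with approximation threshold $\epsilon_0 > 0$:
\[
\begin{cases}
\epsilon > \epsilon_0: & \text{an $\epsilon$-approximation to $\Xi$ is computable in polynomial time,}\\
\epsilon < \epsilon_0: & \text{computing an $\epsilon$-approximation to $\Xi$ is intractable.}
\end{cases}
\]

Generalised hardness of approximation, as far as the truncated abstract states it,
allows the threshold to be an arbitrary prescribed $\epsilon_1 > 0$:
\[
\forall\, \epsilon_1 > 0 \;\; \exists\, \text{an AI problem } \Xi
\text{ whose approximation threshold is } \epsilon_1 .
\]

\end{document}
```
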
