2021
DOI: 10.48550/arxiv.2102.12201
Preprint
On the Parameterized Complexity of Learning First-Order Logic

Abstract: We analyse the complexity of learning first-order definable concepts in a model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004). Previous research on the complexity of learning in this framework focussed on the question of when learning is possible in time sublinear in the background structure. Here we study the parameterized complexity of the learning problem. We obtain a hardness result showing that exactly learning first-order definable concepts is at least as hard as t…

Cited by 1 publication (1 citation statement)
References 11 publications
“…The presence of ontologies makes the problem different from our work; adapting our general synthesis approach to the world of description logics remains future work. There is also prior work studying the complexity of learning logical concepts by characterizing the VC-dimension of logical hypothesis classes [Grohe and Turán 2004], work on parameterized complexity for logical separation problems in the PAC model [van Bergerem et al 2021], learning MSO-definable concepts on strings [Grohe et al 2017] and concepts definable in first-order logic with counting [van Bergerem 2019], learning temporal logic formulas from examples [Neider and Gavran 2018], and learning quantified invariants for arrays.…”
Section: Related Work
confidence: 99%