2010 International Computer Symposium (ICS2010)
DOI: 10.1109/compsym.2010.5685525
Web-based interactive module with multiple representations for learning geometry theorem proving

Cited by 3 publications (5 citation statements) | References: 12 publications
“…Current research delves into exploring mathematical theorem datasets (e.g., INT [190], Feit-Thompson [64], and IsarStep [92]) to develop novel machine learning-based theorem-proving strategies. For instance, the format of IsarStep is shown in 14.…”
Section: Theorem Proving (mentioning)
Confidence: 99%
“…For instance, the format of IsarStep is shown in 14. Huang et al [64] introduced the Feit-Thompson dataset, encompassing 1,602 lemmas, expanding into 83,478 proof states for…”
Section: Theorem Proving (mentioning)
Confidence: 99%
“…But given any set of initial axioms, many problems will inevitably be out of reach of search. 15 Abstraction learning provides a means to progress much beyond the limit of search methods.…”
Section: Discussion (mentioning)
Confidence: 99%
“…In particular, deep neural networks trained via RL emerged as a tool to learn policies (distributions over actions to take given a state) and value functions (an estimate of rewards that can be obtained from a given state) from raw representations, such as strings or pixels. Both policies and value functions can be used to guide search algorithms, making deep RL suitable for learning to search in large-scale problems such as finding proofs [14][15][16]. Given the availability of proofs generated by human mathematicians in large formalization projects, researchers have explored the use of human-written proofs as supervised training data to guide proof search [14,16,17].…”
Section: Related Work (mentioning)
Confidence: 99%
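
The statement above describes, in prose, how a learned policy (a distribution over tactics given a proof state) and a learned value function (an estimate of provability) can guide proof search. As a rough illustration only, the following Python sketch shows a best-first search driven by such models; every name here (guided_best_first_search, expand, policy, value, the pruning threshold) is a hypothetical stand-in and does not reproduce the API of any system cited in the statement.

```python
import heapq
from typing import Callable, List, Tuple

# Hypothetical types: a ProofState is whatever the prover exposes as the
# current goal(s); a Tactic is an action the prover can apply.
ProofState = str
Tactic = str


def guided_best_first_search(
    initial_state: ProofState,
    is_proved: Callable[[ProofState], bool],
    expand: Callable[[ProofState, Tactic], ProofState],
    policy: Callable[[ProofState], List[Tuple[Tactic, float]]],  # (tactic, probability)
    value: Callable[[ProofState], float],                        # estimated provability
    max_expansions: int = 1000,
) -> bool:
    """Best-first proof search: a learned policy proposes tactics and a
    learned value function orders the frontier (higher value expanded first)."""
    # heapq is a min-heap, so negate the value to pop the most promising state.
    frontier = [(-value(initial_state), initial_state)]
    expansions = 0
    while frontier and expansions < max_expansions:
        _, state = heapq.heappop(frontier)
        if is_proved(state):
            return True
        expansions += 1
        # The policy supplies a ranked set of candidate tactics for this state.
        for tactic, prob in policy(state):
            if prob < 0.01:  # prune tactics the policy considers very unlikely
                continue
            child = expand(state, tactic)
            heapq.heappush(frontier, (-value(child), child))
    return False
```

Ordering the frontier by negated value is just one common way to expand the most promising state first; the works cited in the statement differ in how the policy and value networks are trained and combined with search.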
“…It is the sub-model π(g|s′) mapping from the encoded state s′ ∈ ℝ^D to a subset of goals g ⊆ s′. A majority of approaches apply Breadth First Search (Bansal et al 2019a; Huang et al 2019) or Best First Search for goal selection. TACTICZERO (Wu et al 2021a) showed an improvement by considering the likelihood of proving distinct sets of goals (fringes) equivalent to the original goal.…”
Section: Learning Approach (mentioning)
Confidence: 99%
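
The quoted passage contrasts breadth-first goal selection with best-first selection driven by a scoring sub-model, and with TacticZero's fringe-level scoring. The short sketch below is only meant to make that distinction concrete; the Goal type, the score and fringe_score callables, and the function names are assumptions for illustration, not the interfaces of the cited systems.

```python
import heapq
from typing import Callable, FrozenSet, List

Goal = str


def select_goals_best_first(
    open_goals: List[Goal],
    score: Callable[[Goal], float],  # stand-in for the sub-model pi(g | s')
    k: int = 1,
) -> List[Goal]:
    """Pick the k most promising open goals (best-first goal selection).
    A breadth-first variant would instead take goals in generation order,
    i.e. open_goals[:k]."""
    return heapq.nlargest(k, open_goals, key=score)


# TacticZero-style variant (assumed interface): score whole fringes -- sets of
# goals that are jointly equivalent to the original goal -- rather than
# individual goals, and continue work on the highest-scoring fringe.
def select_fringe(
    fringes: List[FrozenSet[Goal]],
    fringe_score: Callable[[FrozenSet[Goal]], float],
) -> FrozenSet[Goal]:
    return max(fringes, key=fringe_score)
```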