2023
DOI: 10.7557/18.6805
Learning to solve arithmetic problems with a virtual abacus

Abstract: Acquiring mathematical skills is considered a key challenge for modern Artificial Intelligence systems. Inspired by the way humans discover numerical knowledge, here we introduce a deep reinforcement learning framework that makes it possible to simulate how cognitive agents could gradually learn to solve arithmetic problems by interacting with a virtual abacus. The proposed model successfully learns to perform multi-digit additions and subtractions, achieving an error rate below 1% even when operands are much longer than …
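To make the training setup more concrete, here is a minimal sketch of what a virtual-abacus environment for such a reinforcement learning agent could look like. It is written in plain Python; the class name VirtualAbacusEnv, the column/bead encoding, the action set, and the reward scheme are illustrative assumptions for this example, not the implementation described in the paper.

```python
# Illustrative sketch only: a toy "virtual abacus" environment for multi-digit
# addition, loosely in the spirit of the paper's setup. The state encoding,
# actions, and reward are assumptions made for this example.
import random


class VirtualAbacusEnv:
    """Each column holds a digit 0-9; the agent moves one 'bead' per step."""

    def __init__(self, n_columns=4):
        self.n_columns = n_columns
        self.reset()

    def reset(self):
        # Sample two operands and ask the agent to represent their sum.
        max_operand = 10 ** self.n_columns // 2 - 1
        self.a = random.randint(0, max_operand)
        self.b = random.randint(0, max_operand)
        self.target = self.a + self.b
        self.columns = [0] * self.n_columns  # abacus starts empty
        return self._observation()

    def _observation(self):
        # The agent sees the two operands (as digits) and the abacus state.
        return {
            "operand_a": self._digits(self.a),
            "operand_b": self._digits(self.b),
            "abacus": list(self.columns),
        }

    def _digits(self, number):
        # Least-significant digit first, one entry per abacus column.
        return [(number // 10 ** i) % 10 for i in range(self.n_columns)]

    def step(self, action):
        """action = (column_index, +1 or -1): move one bead up or down."""
        col, delta = action
        self.columns[col] = (self.columns[col] + delta) % 10  # wraps like a counting wheel
        value = sum(d * 10 ** i for i, d in enumerate(self.columns))
        done = value == self.target
        reward = 1.0 if done else -0.01  # small step cost favours short solutions
        return self._observation(), reward, done


if __name__ == "__main__":
    env = VirtualAbacusEnv(n_columns=3)
    obs = env.reset()
    # A hand-coded 'policy' that sets each column to the target digit directly,
    # just to show the interaction loop an RL agent would go through.
    target_digits = [(env.target // 10 ** i) % 10 for i in range(env.n_columns)]
    done = env.target == 0
    for col, digit in enumerate(target_digits):
        for _ in range(digit):
            obs, reward, done = env.step((col, +1))
    print("solved:", done, "abacus:", obs["abacus"])
```

In an actual experiment the hand-coded loop at the bottom would be replaced by a learned policy that maps the observation to bead movements and is trained on the reward signal.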

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1

Citation Types

0
3
0

Year Published

2024
2024
2024
2024

Publication Types

Select...
1

Relationship

1
0

Authors

Journals

Cited by 1 publication (3 citation statements). References 3 publications.
“…Another recent approach proposed to extend the regular transformer architecture to promote problem decomposition into reusable building blocks, applied in sequential steps. This allowed a significant increase in performance in terms of simple arithmetic and list processing tasks [56]. A different modular neurosymbolic architecture was shown to be able to improve on out-of-distribution nested arithmetical expressions when trained only on a subset including the simplest cases, by iteratively applying a learned substitution rule on the input string [57].…”
Section: Ad Hoc Architectures (mentioning)
confidence: 99%
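As a rough illustration of the iterative-substitution idea attributed to [57], the sketch below repeatedly rewrites the innermost parenthesised sub-expression of a nested arithmetic string until nothing reducible remains. A hand-coded regex rule stands in here for the learned substitution rule, so this conveys only the control flow, not the model itself.

```python
# Illustrative sketch only: the "learned substitution rule" of [57] is replaced
# by a hand-coded regex rule, just to show the iterative rewriting loop.
# Assumes non-negative intermediate results for simplicity.
import re

# Matches an innermost parenthesised binary expression, e.g. "(3+4)".
INNERMOST = re.compile(r"\((\d+)([+\-*])(\d+)\)")


def apply_rule(expression: str) -> str:
    """Rewrite one innermost sub-expression into its value."""
    def evaluate(match: re.Match) -> str:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    return INNERMOST.sub(evaluate, expression, count=1)


def solve(expression: str) -> str:
    """Iteratively apply the substitution rule until the string is irreducible."""
    while INNERMOST.search(expression):
        expression = apply_rule(expression)
    return expression


print(solve("((2+3)*(4-1))"))  # -> "15"
```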
“…The idea of granting access to an external memory to solve complex problems is reminiscent of the notion of "material representation" introduced in anthropology, which has been recently elaborated on in the context of numerical cognition [63]. According to this view, abstract mathematical concepts would be a relatively recent cultural achievement, which have emerged thanks to the spread of numerical manipulation tools [64]. This perspective has recently been explored in computational models based on deep reinforcement learning, which can simulate the active interaction of a learning agent with external numerical representation devices [65,66].…”
Section: Generic Deep Learning Architectures (mentioning)
confidence: 99%