2019
DOI: 10.1007/978-3-030-20518-8_5
Numerosity Representation in InfoGAN: An Empirical Study

Cited by 5 publications (7 citation statements)
References 4 publications
“…Overall, these results are well-aligned with the existing empirical literature on human behavior, which suggests that numerosity estimates are distributed around the target mean and variability tends to increase with numerosity [42,44], and that numerosity estimation can be altered by confounding non-numerical magnitudes [21,25]. Notably, the synthetic images produced by the Transformer are much more precise than samples produced by other deep generative models, such as VAEs or GANs [26,34]. Moreover, unlike previous approaches, here we demonstrate that the generation process can be biased toward a specific numerosity, suggesting that attention mechanisms play a key role in allowing more precise processing of numerosity information.…”
Section: Results (supporting)
confidence: 80%
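The scalar-variability claim in the excerpt above (estimates centered on the true numerosity, with spread growing in proportion to it, so the coefficient of variation stays roughly constant) can be illustrated with a minimal sketch. The Weber fraction w below is a hypothetical value for illustration, not one reported in the cited work.

```python
# Minimal sketch of scalar variability, assuming NumPy.
# w is a hypothetical Weber fraction, not a value from the paper.
import numpy as np

rng = np.random.default_rng(0)
w = 0.15  # hypothetical Weber fraction

for n in (4, 8, 16, 32):
    # Estimates centered on the true numerosity n, SD proportional to n.
    estimates = rng.normal(loc=n, scale=w * n, size=10_000)
    # Coefficient of variation stays ~w across numerosities.
    print(n, round(estimates.std() / estimates.mean(), 3))
```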
“…Notably, the synthetic images produced by the Transformer are much more precise than samples produced by other deep generative models, such as VAEs or GANs [26,34]. Moreover, unlike previous approaches, here we demonstrate that the generation process can be biased toward a specific numerosity, suggesting that attention mechanisms play a key role in allowing more precise processing of numerosity information.…”
Section: Results (mentioning)
confidence: 99%
“…Furthermore, our exploration of different architectures and learning hyperparameters suggests that numerosity comparison is a challenging task for deep learning models, although the present results do not exclude the possibility that more advanced architectures (e.g., incorporating ad-hoc pre-processing stages or convolutional mechanisms) could achieve higher performance. In this respect, it should be noted that even state-of-the-art models, such as those based on generative adversarial networks, have proven unable to explicitly represent numerosity as a fully disentangled factor [62]. The response variability exhibited by the different deep learning architectures also suggests that this framework could be used to study the factors contributing to the emergence of individual differences in human observers, which is crucial for developing personalized computational models that may predict learning outcomes (see [63] for a recent application to learning to read and dyslexia).…”
Section: Discussion (mentioning)
confidence: 99%
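For readers unfamiliar with what representing numerosity as "a fully disentangled factor" would mean here, a minimal InfoGAN-style sketch follows, assuming PyTorch; the architecture, dimensions, and names are illustrative, not the model from the cited work. The generator receives unstructured noise z plus a structured latent code c; a latent traversal that fixes z while sweeping c tests whether c alone controls the number of items in the generated image.

```python
# Minimal, hypothetical InfoGAN-style sketch (assumption: PyTorch; toy
# architecture, not the authors' actual model).
import torch
import torch.nn as nn

Z_DIM, C_DIM = 62, 10  # hypothetical latent sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + C_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),  # toy 28x28 output
        )

    def forward(self, z, c):
        # InfoGAN input split: unstructured noise z + structured code c.
        return self.net(torch.cat([z, c], dim=1)).view(-1, 1, 28, 28)

# Latent traversal: fix z, sweep the categorical code c. If c were a fully
# disentangled numerosity factor, only the item count would change across
# the ten generated images.
G = Generator()
z = torch.randn(1, Z_DIM).repeat(C_DIM, 1)  # same noise for every code
c = torch.eye(C_DIM)                        # one-hot codes 0..9
with torch.no_grad():
    images = G(z, c)  # shape (10, 1, 28, 28): one image per code value
```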
“…Several questions remain under investigation: Is it possible to fully disentangle numerosity from continuous magnitudes by relying only on unsupervised learning (Zanetti et al., 2019)? Can generative models generalize to unseen numerosities (Zhao et al., 2018)?…”
Section: Computational Models of Basic Quantification Skills (mentioning)
confidence: 99%