2022 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2022.1155-0
How Well Do Contrastive Learning Algorithms Model Human Real-time and Life-long Learning?

Cited by 20 publications (6 citation statements). References: 0 publications.
“…Existing work validates the impact of training tasks on model behavior and representations. Even when restricted to training on ImageNet images, the training objective and/or data augmentation can affect how well models match human similarity judgments of images (Muttenthaler, Dippel, Linhardt, Vandermeulen, & Kornblith, 2023), categorization patterns, performance on real-time and life-long learning benchmarks (Zhuang et al, 2022), and feature preferences, and also how well they predict primate physiology and human fMRI (Konkle & Alvarez, 2022) data. Still, it is possible to enrich DNN training tasks much further, even for object categorization (Sun, Shrivastava, Singh, & Gupta, 2017).…”
Section: Detailed and Holistic (mentioning)
confidence: 99%
“…Indeed, this approach has been shown to improve vision models’ ability to capture primate neural responses (Mehrer et al, 2021). It will also be important to investigate the role of the learning algorithms that the models use and their training objective, as both likely affect the representations that the models learn (e.g., see Zhuang et al, 2022, for evidence from vision). Specifically, Zhuang et al (2022) showed that in an object categorization task, the negative sampling objective function, which maximizes the similarity between objects in the same category while minimizing the similarity between objects in different categories in the internal representation of the model, can alleviate model failure in capturing human visual behavior, which occurs under the standard objective function.…”
Section: Discussion (mentioning)
confidence: 99%
“…It will also be important to investigate the role of the learning algorithms that the models use and their training objective, as both likely affect the representations that the models learn (e.g., see Zhuang et al, 2022, for evidence from vision). Specifically, Zhuang et al (2022) showed that in an object categorization task, the negative sampling objective function, which maximizes the similarity between objects in the same category while minimizing the similarity between objects in different categories in the internal representation of the model, can alleviate model failure in capturing human visual behavior, which occurs under the standard objective function. This failure is due to the presence of categories that are infrequent in the training data, and this finding can be relevant for language, which also contains infrequent elements (words and constructions) amid more common ones.…”
Section: Discussion (mentioning)
confidence: 99%
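The excerpts above describe the negative sampling objective only in prose: pull together the representations of objects that share a category, push apart those that do not. The sketch below is a minimal illustration of that idea over embedding vectors; the function name, the temperature parameter, and the NumPy formulation are assumptions for illustration, not the exact objective used by Zhuang et al. (2022).

```python
# Illustrative sketch of a negative-sampling-style contrastive objective:
# maximize similarity within a category, minimize it across categories.
# Not the exact loss from Zhuang et al. (2022); names/defaults are assumptions.
import numpy as np

def negative_sampling_loss(embeddings, labels, temperature=0.1):
    """Average cross-entropy of selecting a same-category item over all others.

    embeddings : (N, D) array of L2-normalized representations
    labels     : (N,) array of integer category labels
    """
    sims = embeddings @ embeddings.T / temperature   # scaled cosine similarities
    np.fill_diagonal(sims, -np.inf)                  # exclude self-pairs

    losses = []
    for i in range(len(labels)):
        positives = labels == labels[i]
        positives[i] = False                         # positives are *other* same-category items
        if not positives.any():
            continue                                 # no positive available for this anchor
        row = sims[i]
        row_max = np.max(row[np.isfinite(row)])      # stabilize the softmax
        log_z = row_max + np.log(np.exp(row - row_max).sum())
        log_probs = row - log_z                      # log-softmax over all candidates
        losses.append(-log_probs[positives].mean())  # high within-category similarity -> low loss
    return float(np.mean(losses))

# Minimal usage example with random, L2-normalized embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(negative_sampling_loss(emb, labels))
```

In this toy formulation, an anchor whose category has no other member in the batch contributes no training signal, which loosely mirrors the infrequent-category issue raised in the excerpt above.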
“…Indeed, this approach has been shown to improve vision models’ ability to capture primate neural responses (Mehrer et al 2021). Further, it will be important to investigate the role of the learning algorithms that the models use and their training objective, as both likely affect the representations that the models learn (e.g., see Zhuang et al, 2022 for evidence from vision).…”
Section: Discussion (mentioning)
confidence: 99%