2022
DOI: 10.3389/fnbeh.2022.815461

Lateral Habenula Responses During Eye Contact in a Reward Conditioning Task

Abstract: For many animals, social interaction may have intrinsic reward value over and above its utility as a means to the desired end. Eye contact is the starting point of interactions in many social animals, including primates, and abnormal patterns of eye contact are present in many mental disorders. Whereas abundant previous studies have shown that negative emotions such as fear strongly affect eye contact behavior, modulation of eye contact by reward has received scant attention. Here we recorded eye movement patt…

Cited by 5 publications (4 citation statements); references 51 publications.
“…Each condition was specified by a combination of environment cue, action cue, and fractal object, thus giving rise to a quantifiable reward prediction (RP) that updated with the onset of each trial event. Supplementary Tables 1-2 of the previous study ( Lee and Hikosaka, 2022 ) summarize how reward (in microliters of juice) and punishment predictions (in milliseconds of airpuff) were determined according to all combinations of task conditions. Because all the stimuli and outcomes appeared pseudorandomly, we could compute the theoretical RP at each step ( Figure S1 ).…”
Section: Results
Citation type: mentioning (confidence: 99%)
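The excerpt above treats the theoretical reward prediction (RP) as an expectation that updates at each trial event because the stimuli and outcomes appear pseudorandomly. Purely as an illustration of that idea (not the authors' code), the sketch below computes an RP as the average reward over the conditions still consistent with the cues revealed so far; every cue name and reward value in it is a hypothetical placeholder rather than the actual mapping from Supplementary Tables 1-2 of Lee and Hikosaka (2022).

```python
from itertools import product

# All cue names and reward sizes below are hypothetical placeholders; the real
# condition-to-outcome mapping is in Supplementary Tables 1-2 of Lee and
# Hikosaka (2022) and is not reproduced here.
ENV_CUES = ["env_A", "env_B"]       # environment cues (hypothetical)
ACTION_CUES = ["act_1", "act_2"]    # action cues (hypothetical)
FRACTALS = ["frac_x", "frac_y"]     # fractal objects (hypothetical)

# Hypothetical reward size (microliters of juice) for each full condition.
REWARD_UL = {cond: 100 * (i + 1)
             for i, cond in enumerate(product(ENV_CUES, ACTION_CUES, FRACTALS))}

def reward_prediction(env=None, action=None, fractal=None):
    """Expected reward over conditions consistent with the cues shown so far,
    assuming each condition is equally likely (pseudorandom presentation)."""
    consistent = [c for c in REWARD_UL
                  if (env is None or c[0] == env)
                  and (action is None or c[1] == action)
                  and (fractal is None or c[2] == fractal)]
    return sum(REWARD_UL[c] for c in consistent) / len(consistent)

# The theoretical RP updates as each trial event reveals another cue.
print(reward_prediction())                                   # trial start
print(reward_prediction(env="env_A"))                        # after environment cue
print(reward_prediction(env="env_A", action="act_1"))        # after action cue
print(reward_prediction(env="env_A", action="act_1", fractal="frac_x"))  # full condition
```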
“…The environments were large (circular, diameter: 40°) grayscale landscape and face images from Google Earth ( https://www.google.com/earth ) and OpenAerialMap ( https://openaerialmap.org ), which were used in previous studies ( Kunimatsu et al., 2019 ; Maeda et al., 2018 ), and from the Face Database ( https://fei.edu.br/~cet/facedatabase.html ). The neuronal and behavioral data in the face environment were published in the related paper ( Lee and Hikosaka, 2022 ). Monkeys experienced 2 sets of stimuli (32 environments and 160 fractal objects in total) in separate blocks.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
“…30,31,32 Furthermore, a previous study showed that monkeys looked more at human faces associated with a large reward than at faces associated with a small reward, beginning 150 ms after face image presentation.40 Our recent studies showed that the visual responses in STRt neurons lead to the automatic detection of individual high-valued objects.24 The STRt neurons modulate saccadic eye movements based on long-term value memory through the direct and indirect pathways.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…48 Although humans and social animals identify individuals not only by faces but also by physical characteristics, voices, and odor cues in the natural environment,49,50,51 several studies support the idea that poor-quality face images, such as grayscale images, are sufficient for face identification and discrimination.17,40 It is known that these abstract face images are rapidly processed by the subcortical visual pathway (SC-pulvinar-amygdala) with short latency.52,53 Because the neurons in the amygdala project to the STRt,54 the signals carried by the subcortical pathway may be conveyed to the STRt for integration of the abstract face images with memory information.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)