2001
DOI: 10.1016/s0006-8993(01)02457-x
Cortical responses to object-motion and visually-induced self-motion perception

Cited by 22 publications (11 citation statements); references 17 publications.
“…Conversely, neurons in the pigeon's accessory optic system are sensitive to elements of self-motion. A similar functional segregation of motion processing has been also found between areas MT and MST in monkeys (Duffy, 1998;Tanaka et al, 1986;Tanaka & Saito, 1989) and the temporooccipital and temporoparietal cortex in humans (Wiest et al, 2001). At the behavioral level, the stimulus conditions which tend to produce the illusion of self-motion (i.e., movement of elements in peripheral vision that are perceived to be the background of the visual scene) are quite different from conditions which typically generate perceived object-motion (i.e., movement of elements in central vision that are perceived to be the foreground of the visual scene) (Brandt, Koenig, & Dichgans, 1973;Brandt, Wist, & Dichgans, 1975).…”
Section: Crosstalk Between the Processing Of Object-motion And The Pr… (supporting)
confidence: 73%
“…Human neuroimaging investigations of visual self-motion have largely focused on neural responses to optic-flow ( de Jong et al, 1994 ; Brandt et al, 1998 ; Previc et al, 2000 ; Rutschmann et al, 2000 ; Peuskens et al, 2001 ; Wiest et al, 2001 ; Kleinschmidt et al, 2002 ; Deutschländer et al, 2004 ; Kovács et al, 2008 ; Wall and Smith, 2008 ; Cardin and Smith, 2010 , 2011 ; Pitzalis et al, 2010 , 2013 ; Becker-Bense et al, 2012 ; Cardin et al, 2012 ; Arnoldussen et al, 2013 ). These studies have described optic-flow sensitivity in multiple cortical regions, including the human medial superior temporal area (hMST) in the visual motion complex hMT+ ( Peuskens et al, 2001 ; Smith et al, 2006 ), the cortical vestibular area in the parieto-insular vestibular cortex (PIVC; Cardin and Smith, 2010 ), and the ventral intraparietal area (VIP; Peuskens et al, 2001 ; Wall and Smith, 2008 ; Cardin and Smith, 2010 ), which correspond to results from several monkey studies (e.g., MST: Saito et al, 1986 ; Tanaka et al, 1986 , 1989 ; Tanaka and Saito, 1989 ; Duffy and Wurtz, 1991a , b , 1995 ; Graziano et al, 1994 ; Lagae et al, 1994 ; Page and Duffy, 2003 ; PIVC: Akbarian et al, 1988 ; VIP: Schaafsma and Duysens, 1996 ; Schaafsma et al, 1997 ; Bremmer et al, 2002 ).…”
Section: Introduction (mentioning)
confidence: 99%
“…Our TRFP encodes predicted features of an external object relevant for the robot's interaction with the world, for example the position of an object of interest. The perception of self-motion as well as the perception of object motion can each be mapped to specific cortical regions [26], [27]. The SMP and the TRFP both utilize our version of the multiple timescale recurrent neural network (MTRNN) for the learning and prediction of spatiotemporal patterns.…”
Section: Our Approach (mentioning)
confidence: 99%
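The passage above refers to a multiple timescale recurrent neural network (MTRNN), in which different groups of units update with different time constants so that fast units track immediate sensory change while slow units integrate longer-horizon context. The following is a minimal sketch of that leaky-integrator update rule, not the cited robots' implementation; the unit counts, time constants, and weight scales are illustrative assumptions.

```python
import numpy as np

# Minimal MTRNN-style update: a continuous-time RNN whose units have
# per-unit time constants tau. Units with large tau change slowly and
# act as long-horizon context; units with small tau react quickly.
# All sizes and values below are illustrative, not from the cited work.

rng = np.random.default_rng(0)

n_fast, n_slow = 8, 4                              # fast vs. slow unit counts
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),        # small tau -> fast dynamics
                      np.full(n_slow, 20.0)])      # large tau -> slow dynamics

W = rng.normal(scale=0.3, size=(n, n))             # recurrent weight matrix
b = np.zeros(n)                                    # unit biases

def mtrnn_step(u, x_in):
    """One leaky-integrator step: u is the internal state vector."""
    y = np.tanh(u)                                 # unit activations
    net = W @ y + b
    net[:n_fast] += x_in                           # external input drives fast units
    # Larger tau keeps more of the old state, so slow units integrate context.
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * net

# Roll out a short input sequence to exercise both timescales.
u = np.zeros(n)
for t in range(50):
    u = mtrnn_step(u, x_in=np.sin(0.2 * t) * np.ones(n_fast))

print(u[:n_fast].round(3))                         # fast internal states
print(u[n_fast:].round(3))                         # slow internal states
```

In the cited architecture, two such networks are trained side by side: one predicting proprioceptive/motor patterns (SMP) and one predicting task-relevant object features (TRFP), each learning spatiotemporal sequences at its own mix of timescales.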