How do dogs understand human words? At a basic level, understanding would require the discrimination of words from non-words. To determine the mechanisms of such discrimination, we trained 12 dogs to retrieve two objects based on object names, then probed the neural basis for these auditory discriminations using awake fMRI. We compared the neural response to these trained words relative to “oddball” pseudowords the dogs had not heard before. Consistent with novelty detection, we found greater activation for pseudowords relative to trained words bilaterally in the parietotemporal cortex. To probe the neural basis for representations of trained words, we used searchlight multivoxel pattern analysis (MVPA), which revealed that a subset of dogs had clusters of informative voxels that discriminated between the two trained words. These clusters included the left temporal cortex and amygdala, left caudate nucleus, and thalamus. These results demonstrate that dogs’ processing of human words utilizes basic processes like novelty detection, and for some dogs, may also include auditory and hedonic representations.
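The searchlight MVPA described above can be sketched as a classifier swept across local voxel neighborhoods, scoring each neighborhood by how well its activity pattern discriminates the two trained words. The toy below is a minimal illustration on synthetic data, assuming a 1-D strip of voxels and a leave-one-trial-out nearest-centroid classifier; it is not the authors' actual pipeline, which operates on 3-D spherical searchlights in real fMRI volumes.

```python
import random, math

random.seed(0)

def searchlight_accuracy(data, labels, radius=1):
    """Toy searchlight MVPA over a 1-D strip of voxels.
    data:   list of trials, each a list of voxel responses
    labels: 0/1 condition per trial (e.g. trained word A vs. word B)
    Returns leave-one-trial-out nearest-centroid accuracy per voxel."""
    n_trials, n_vox = len(data), len(data[0])
    acc = []
    for v in range(n_vox):
        lo, hi = max(0, v - radius), min(n_vox, v + radius + 1)
        correct = 0
        for t in range(n_trials):  # leave one trial out
            train = [i for i in range(n_trials) if i != t]
            cents = {}
            for c in (0, 1):
                rows = [data[i][lo:hi] for i in train if labels[i] == c]
                cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
            pred = 0 if math.dist(data[t][lo:hi], cents[0]) < \
                        math.dist(data[t][lo:hi], cents[1]) else 1
            correct += (pred == labels[t])
        acc.append(correct / n_trials)
    return acc

# Synthetic data: 20 trials x 30 voxels; voxels 10-14 respond
# differently to the two words (an "informative cluster").
labels = [0] * 10 + [1] * 10
data = [[random.gauss(0, 1) + (2.0 if labels[t] == 1 and 10 <= v < 15 else 0)
         for v in range(30)] for t in range(20)]
acc = searchlight_accuracy(data, labels)
```

Voxels inside the simulated cluster score well above the 50% chance level, while the surrounding noise voxels hover near it; in the real analysis, clusters of such above-chance voxels are what identify word-discriminating regions.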
Dogs may follow their nose, but they learn associations to many types of sensory stimuli. Are some modalities learned better than others? We used awake fMRI in 19 dogs over a series of three experiments to measure reward-related learning of visual, olfactory, and verbal stimuli. Neurobiological learning curves were generated for individual dogs by measuring activation over time within three regions of interest: the caudate nucleus, amygdala, and parietotemporal cortex. The learning curves showed that dogs formed stimulus-reward associations in as few as 22 trials. Consistent with neuroimaging studies of associative learning, the caudate showed a main effect for reward-related stimuli, but not a significant interaction with modality. However, there were significant differences in the time courses, suggesting that although multiple modalities are represented in the caudate, the rates of acquisition and habituation are modality-dependent and are potentially gated by their salience in the amygdala. Visual and olfactory modalities resulted in the fastest learning, while verbal stimuli were least effective, suggesting that verbal commands may be the least efficient way to train dogs.
In working and practical contexts, dogs rely upon their ability to discriminate a target odor from distracting odors and other sensory stimuli. Using awake fMRI in 18 dogs, we examined the neural mechanisms underlying odor discrimination between two odors and a mixture of the odors. Neural activation was measured during the presentation of a target odor (A) associated with a food reward, a distractor odor (B) associated with nothing, and a mixture of the two odors (A+B). Changes in neural activation during the presentations of the odor stimuli in individual dogs were measured over time within three regions known to be involved with odor processing: the caudate nucleus, the amygdala, and the olfactory bulbs. Average activation within the amygdala showed that dogs maximally differentiated between odor stimuli based on the stimulus-reward associations by the first run, while activation to the mixture (A+B) was most similar to the no-reward (B) stimulus. To clarify the neural representation of odor mixtures in the dog brain, we used a random forest classifier to compare multilabel (elemental) vs. multiclass (configural) models. The multiclass model performed much better than the multilabel model (weighted-F1 0.44 vs. 0.14), suggesting the odor mixture was processed configurally. Analysis of the subset of high-performing dogs’ brain classification metrics revealed a network of olfactory information-carrying brain regions that included the amygdala, piriform cortex, and posterior cingulate. These results add further evidence for the configural processing of odor mixtures in dogs and suggest a novel way to identify high performers based on brain classification metrics.
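The elemental-vs-configural model comparison above can be sketched in outline. The toy below uses a nearest-centroid classifier as a dependency-free stand-in for the paper's random forest, and entirely synthetic "odor-evoked patterns" in which the mixture is B-like but carries its own configural signature (echoing the finding that the mixture most resembled B); the resulting scores are illustrative only and do not reproduce the reported weighted-F1 values (0.44 vs. 0.14).

```python
import random, math

random.seed(2)

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest(x, cents):
    return min(cents, key=lambda c: math.dist(x, cents[c]))

def f1(true, pred, pos):
    """F1 score for a single class/label `pos`."""
    tp = sum(t == pos and p == pos for t, p in zip(true, pred))
    fp = sum(t != pos and p == pos for t, p in zip(true, pred))
    fn = sum(t == pos and p != pos for t, p in zip(true, pred))
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

# Class means: the mixture A+B is B-like but adds a configural
# signature on a third feature.
MEANS = {"A": [3, 0, 0], "B": [0, 3, 0], "A+B": [0, 3, 3]}

def sample(cls, n, d=8):
    base = MEANS[cls] + [0] * (d - 3)
    return [[m + random.gauss(0, 1) for m in base] for _ in range(n)]

train = {c: sample(c, 40) for c in MEANS}
test = [(c, x) for c in MEANS for x in sample(c, 20)]

# Configural (multiclass) model: one class per stimulus. Weighted F1
# averages per-class F1 by class support (equal supports here).
cents = {c: centroid(train[c]) for c in MEANS}
true = [c for c, _ in test]
pred = [nearest(x, cents) for _, x in test]
f1_configural = sum(f1(true, pred, c) for c in MEANS) / len(MEANS)

# Elemental (multilabel) model: an independent "is component X
# present?" detector per odor; weighted F1 over the positive labels.
scores = []
for comp in ("A", "B"):
    cents2 = {True: centroid([x for c in MEANS if comp in c for x in train[c]]),
              False: centroid([x for c in MEANS if comp not in c for x in train[c]])}
    t = [comp in c for c, _ in test]
    p = [nearest(x, cents2) for _, x in test]
    scores.append(f1(t, p, True))
f1_elemental = sum(scores) / len(scores)
```

The key difference is in the label coding: the configural model treats A+B as its own category, while the elemental model must recover the mixture as "A present and B present." On the real neural data, only the configural coding classified well, which is the evidence for configural processing.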
Given humans' habitual use of screens, they rarely consider potential differences between viewing two-dimensional (2D) stimuli and their real-world, three-dimensional (3D) counterparts. Dogs also have access to many forms of screens and touch pads, with owners even subscribing to dog-directed content. Humans understand that 2D stimuli are representations of real-world objects, but do dogs? In canine cognition studies, 2D stimuli are almost always used to study what is normally 3D, like faces, and researchers may assume that both 2D and 3D stimuli are represented in the brain in the same way. Here, we used awake fMRI of 15 dogs to examine the neural mechanisms underlying dogs' perception of two- and three-dimensional objects after the dogs were trained on either a two- or three-dimensional version of the objects. Activation within reward-processing regions and parietal cortex of the dog brain to 2D and 3D versions of objects was determined by the dogs' training experience, as dogs trained on one dimensionality showed greater activation to the dimension on which they were trained. These results show that dogs do not automatically generalize between two- and three-dimensional stimuli and caution against implicit assumptions when using pictures or videos with dogs.
The perception and representation of objects in the world are foundational to all animals. The relative importance of objects' physical properties versus how the objects are interacted with continues to be debated. Neural evidence in humans and nonhuman primates suggests animate-inanimate and face-body dimensions of objects are represented in the temporal cortex. However, because primates have opposable thumbs and interact with objects in similar ways, the question remains as to whether this similarity represents the evolution of a common cognitive process or whether it reflects a similarity of physical interaction. Here, we used functional magnetic resonance imaging (fMRI) in dogs to test whether the type of interaction affects object processing in an animal that interacts primarily with its mouth. In Study 1, we identified object-processing regions of cortex by having dogs passively view movies of faces and objects. In Study 2, dogs were trained to interact with two new objects with either the mouth or the paw. Then, we measured responsivity in the object regions to the presentation of these objects. Mouth-objects elicited significantly greater activity in object regions than paw-objects. Mouth-objects were also associated with activity in somatosensory cortex, suggesting dogs were anticipating mouthing interactions. These findings suggest that object perception in dogs is affected by how dogs expect to interact with familiar objects.