Several norms have been proposed for how people should assess a question's usefulness, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment, medical diagnosis, and the selection task are shown not to discriminate among these norms as descriptive models of human intuitions and behavior. Computational optimization identified situations in which information gain, probability gain, and impact strongly contradict Bayesian diagnosticity; in these situations, diagnosticity's claims are normatively inferior. Results of a new experiment strongly contradict the predictions of Bayesian diagnosticity. Normative theoretical considerations also argue against the use of diagnosticity. It is concluded that Bayesian diagnosticity is normatively flawed and empirically unjustified.
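To make these competing norms concrete, here is a minimal sketch (not part of the original abstract; the helper names and example probabilities are illustrative) that computes each measure for a yes/no question about two hypotheses, assuming the standard definitions: diagnosticity as the likelihood ratio, information gain as the reduction in Shannon entropy, Kullback-Leibler distance from posterior to prior, probability gain as the change in the probability of a correct guess, and impact as the total absolute change in beliefs. A question's expected usefulness is then the answer-probability-weighted average of its answers' usefulness.

```python
import math

def posterior(prior, likelihoods):
    """Bayes' rule: P(h|d) for each hypothesis h, given P(h) and P(d|h)."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def usefulness_of_answer(prior, likelihoods):
    """Per-answer usefulness under each norm (binary-hypothesis case)."""
    post = posterior(prior, likelihoods)
    lr = likelihoods[0] / likelihoods[1]  # assumes nonzero likelihoods
    return {
        # Bayesian diagnosticity: likelihood ratio, oriented to be >= 1
        "diagnosticity": max(lr, 1 / lr),
        # information gain: reduction in Shannon entropy over hypotheses
        "information_gain": entropy(prior) - entropy(post),
        # Kullback-Leibler distance from posterior to prior
        "kl_distance": sum(q * math.log2(q / p)
                           for q, p in zip(post, prior) if q > 0),
        # probability gain: improvement in probability of a correct guess
        "probability_gain": max(post) - max(prior),
        # impact: total absolute change in beliefs
        "impact": sum(abs(q - p) for q, p in zip(post, prior)),
    }

def expected_usefulness(prior, lik_by_answer):
    """Expected usefulness of a question: P(answer)-weighted average."""
    norms = {}
    for likelihoods in lik_by_answer:
        p_answer = sum(p * l for p, l in zip(prior, likelihoods))
        for name, u in usefulness_of_answer(prior, likelihoods).items():
            norms[name] = norms.get(name, 0.0) + p_answer * u
    return norms

# Hypothetical example: P(h1) = 0.7, a yes/no question with
# P(yes|h1) = 0.9, P(yes|h2) = 0.3 (and complementary "no" likelihoods).
prior = [0.7, 0.3]
print(expected_usefulness(prior, [[0.9, 0.3], [0.1, 0.7]]))
```

Because diagnosticity depends only on the likelihood ratio, a rare answer with an extreme ratio can dominate its expectation while contributing little expected information or probability gain, which is one way the norms can come apart in the manner the abstract's optimization results suggest.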
We used functional magnetic resonance imaging (fMRI) to map the cortical representations of executed reaching, observed reaching, and imagined reaching in humans. Whereas previous studies have mostly examined hand actions related to grasping, hand-object interactions, or local finger movements, here we were interested in reaching only (i.e., the transport phase of the hand to a particular location in space), without grasping. We hypothesized that mirror neuron areas specific to reaching-related representations would be active in all three conditions. An overlap between executed, observed, and imagined reaching activations was found in dorsal premotor cortex as well as in the superior parietal lobe and the intraparietal sulcus, in accord with our hypothesis. Activations for observed reaching were more dorsal than activations typically reported in the literature for observation of hand-object interactions (grasping). Our results suggest that the mirror neuron system is specific to the type of hand action performed, and that these fronto-parietal activations are a putative human homologue of the neural circuits underlying reaching in macaques. The parietal activations reported here for executed, imagined, and observed reaching are also consistent with previous functional imaging studies on planned reaching and delayed pointing movements, and extend the proposed localization of human reach-related brain areas to observation as well as imagery of reaching.
When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
Reaching toward a visual target involves at least two sources of information. One is the visual feedback from the hand as it approaches the target. Another is proprioception from the moving limb, which informs the brain of the location of the hand relative to the target even when the hand is not visible. Where these two sources of information are represented in the human brain is unknown. In the present study, we investigated the cortical representations for reaching with or without visual feedback from the moving hand, using functional magnetic resonance imaging. To identify reach-dominant areas, we compared reaching with saccades. Our results show that a reach-dominant region in the anterior precuneus (aPCu), extending into the medial intraparietal sulcus, is equally active in visual and nonvisual reaching. A second region, at the superior end of the parieto-occipital sulcus (sPOS), is more active for visual than for nonvisual reaching. These results suggest that aPCu is a sensorimotor area whose sensory input is primarily proprioceptive, while sPOS is a visuomotor area that receives visual feedback during reaching. In addition to the precuneus, medial intraparietal, anterior intraparietal, and superior parietal cortex were also activated during both visual and nonvisual reaching, with more anterior areas responding to hand movements only and more posterior areas responding to both hand and eye movements. Our results suggest that cortical networks for reaching are differentially activated depending on the sensory conditions during reaching. This indicates the involvement of multiple parietal reach regions in humans, rather than a single homogeneous parietal reach region.
Framing effects are well established: Listeners' preferences depend on how outcomes are described to them, or framed. Less well understood is what determines how speakers choose frames. Two experiments revealed that reference points systematically influenced speakers' choices between logically equivalent frames. For example, speakers tended to describe a 4-ounce cup filled to the 2-ounce line as half full if it was previously empty but described it as half empty if it was previously full. Similar results were found when speakers could describe the outcome of a medical treatment in terms of either mortality or survival (e.g., 25% die vs. 75% survive). Two additional experiments showed that listeners made accurate inferences about speakers' reference points on the basis of the selected frame (e.g., if a speaker described a cup as half empty, listeners inferred that the cup used to be full). Taken together, the data suggest that frames reliably convey implicit information in addition to their explicit content, which helps explain why framing effects are so robust.