Little is known about how attention changes the cortical representation of sensory information in humans. Based on neurophysiological evidence, we hypothesized that attention causes tuning changes that expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional MRI (fMRI) to measure how semantic representation changes when searching for different object categories in natural movies. We find that many voxels across occipito-temporal and fronto-parietal cortex shift their tuning toward the attended category. These tuning shifts expand the representation of the attended category and of semantically related but unattended categories, and compress the representation of categories semantically dissimilar to the target. Attentional warping of semantic representation occurs even when the attended category is not present in the movie; thus, the effect is not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet scan-time limitations may prohibit acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. Current methods for multi-contrast synthesis learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from loss of structural details in synthesized images. Here we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high-frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams.
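The abstract above aggregates several loss terms (adversarial, pixel-wise, and cycle-consistency; the perceptual term is omitted here for brevity). As a rough illustration of how such terms can be combined, here is a minimal NumPy sketch; the function name, weighting values, and loss formulations are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def composite_synthesis_loss(fake, target, disc_score_fake,
                             cycle_recon=None, source=None,
                             lambda_pix=100.0, lambda_cyc=100.0):
    """Toy composite generator loss: adversarial term + pixel-wise L1
    (for registered pairs) + optional cycle-consistency L1 (for
    unregistered pairs). All weights are illustrative."""
    # Non-saturating adversarial loss on discriminator scores in (0, 1)
    adv = -np.mean(np.log(disc_score_fake + 1e-8))
    # Pixel-wise L1 between the synthesized and registered target image
    pix = np.mean(np.abs(fake - target))
    loss = adv + lambda_pix * pix
    # Cycle-consistency: source -> target -> source reconstruction error
    if cycle_recon is not None and source is not None:
        loss += lambda_cyc * np.mean(np.abs(cycle_recon - source))
    return loss
```

In practice the adversarial term comes from a trained discriminator and the images are network outputs; the sketch only shows how the terms are weighted and summed.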
Purpose: Neural networks have received recent interest for reconstruction of undersampled MR acquisitions. Ideally, network performance should be optimized by drawing the training and testing data from the same domain. In practice, however, large datasets comprising hundreds of subjects scanned under a common protocol are rare. The goal of this study is to introduce a transfer-learning approach to address the problem of data scarcity in training deep networks for accelerated MRI. Methods: Neural networks were trained on thousands (up to 4,000) of samples from public datasets of either natural images or brain MR images. The networks were then fine-tuned using only tens of brain MR images in a distinct testing domain. Domain-transferred networks were compared to networks trained directly in the testing domain. Network performance was evaluated for varying acceleration factors (4-10), numbers of training samples (0.5-4k), and numbers of fine-tuning samples (0-100). Results: The proposed approach achieves successful domain transfer between MR images acquired with different contrasts (T1- and T2-weighted images) and between natural and MR images (ImageNet and T1- or T2-weighted images). Networks obtained via transfer learning using only tens of images in the testing domain achieve nearly identical performance to networks trained directly in the testing domain using thousands of images. Conclusion: The proposed approach might facilitate the use of neural networks for MRI reconstruction without the need for collection of extensive imaging datasets. Keywords: accelerated MRI, compressive sensing, deep learning, image reconstruction, transfer learning
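The fine-tuning strategy described here (pretrain on abundant out-of-domain data, then adapt with only tens of in-domain samples) can be illustrated with a toy linear-regression analogue in NumPy; the linear model, sample sizes, and learning rates are illustrative stand-ins for the deep networks in the study, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def sgd_train(X, y, w, lr=0.05, epochs=100):
    """Plain full-batch gradient descent on squared error,
    starting from the given weight vector (warm start)."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

d = 10
# "Source domain": abundant data from a related linear mapping
w_true_src = rng.normal(size=d)
w_true_tgt = w_true_src + 0.1 * rng.normal(size=d)   # related target domain
X_src = rng.normal(size=(2000, d)); y_src = X_src @ w_true_src
X_tgt = rng.normal(size=(20, d));   y_tgt = X_tgt @ w_true_tgt  # only tens of samples

w_pre = sgd_train(X_src, y_src, np.zeros(d), epochs=300)   # pretrain on source domain
w_ft = sgd_train(X_tgt, y_tgt, w_pre.copy())               # fine-tune with few samples
w_scratch = sgd_train(X_tgt, y_tgt, np.zeros(d))           # train from scratch

X_test = rng.normal(size=(500, d)); y_test = X_test @ w_true_tgt
err_ft = np.mean((X_test @ w_ft - y_test) ** 2)
err_scratch = np.mean((X_test @ w_scratch - y_test) ** 2)
```

Because the pretrained weights start near the target-domain solution, a few fine-tuning steps on the small target dataset suffice, whereas training from scratch on the same few samples remains far from convergence.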
Utilization of external motion-tracking devices is an emerging approach to head-motion correction in MRI. However, cross-calibration between the reference frames of the external tracking device and the MRI scanner can be tedious and remains a challenge in practical applications. In this study, we present two hybrid methods, both of which combine prospective, optical motion correction with retrospective entropy-based autofocusing to remove residual motion artifacts. Our results reveal that in the presence of cross-calibration errors between the optical tracking device and the MR scanner, applying retrospective correction to prospectively corrected data significantly improves image quality. With this hybrid prospective and retrospective motion-correction approach, the requirement for a high-quality calibration scan can be significantly relaxed, to the extent that external prospective motion tracking is possible without any prior cross-calibration step, provided a crude approximation of the cross-calibration matrix exists. Moreover, the motion-tracking data also benefit the retrospective step by reducing the dimensionality of the autofocusing problem.
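Entropy-based autofocusing treats residual motion parameters as unknowns and searches for the values that minimize the entropy of the reconstructed image, since sharp images concentrate intensity into fewer pixels. A minimal NumPy sketch, assuming a single in-plane shift corrupting the k-space lines acquired after one motion event; the function names and the brute-force grid search are illustrative, not the authors' implementation.

```python
import numpy as np

def image_entropy(img):
    """Normalized intensity entropy; sharper images score lower."""
    p = np.abs(img).ravel()
    p = p / (p.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

def autofocus_rows(kspace, moved_rows, candidates):
    """Try each candidate in-plane shift for the k-space rows acquired
    after the motion event; keep the one whose corrected image has the
    lowest entropy (the retrospective autofocusing step)."""
    nx = kspace.shape[1]
    kx = np.fft.fftfreq(nx)[None, :]
    def entropy_for(s):
        corr = kspace.copy()
        # A shift in image space is a linear phase ramp in k-space
        corr[moved_rows] *= np.exp(-2j * np.pi * kx * s)
        return image_entropy(np.fft.ifft2(corr))
    return min(candidates, key=entropy_for)

# Simulate: sharp test image, 5-pixel x-shift halfway through acquisition
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
k = np.fft.fft2(img)
kx = np.fft.fftfreq(64)[None, :]
moved = np.arange(32, 64)                         # rows acquired after motion
k_corrupt = k.copy()
k_corrupt[moved] *= np.exp(2j * np.pi * kx * 5)   # phase error from the shift
best = autofocus_rows(k_corrupt, moved, candidates=range(-8, 9))
```

Tracking data shrink the candidate set (here a 1-D grid of 17 shifts), which is the dimensionality reduction the abstract refers to.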
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight-sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI and for synthesizing CT images from MRI. Our results indicate superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.
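To make the block structure concrete, here is a shape-level NumPy sketch of an aggregated residual block that concatenates a self-attention (contextual) path and a convolutional (local) path with the input, then distills the result via channel compression; all names, random weights, and the 1x1-convolution simplification are illustrative assumptions, not ResViT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over a token sequence of shape (n, c)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    a = q @ k.T / np.sqrt(k.shape[1])
    a = np.exp(a - a.max(axis=1, keepdims=True))   # stable softmax
    a = a / a.sum(axis=1, keepdims=True)
    return a @ v

def art_block(x, conv_w, Wq, Wk, Wv, compress_w):
    """Toy aggregated residual block: concatenate the input with a
    transformer path and a pointwise-convolution path, then distill
    back to c channels (the channel compression step)."""
    t = self_attention(x, Wq, Wk, Wv)         # contextual (transformer) path
    c = np.maximum(x @ conv_w, 0.0)           # local path: 1x1 conv + ReLU
    agg = np.concatenate([x, t, c], axis=1)   # residual aggregation, (n, 3c)
    return agg @ compress_w                   # channel compression, (n, c)

n, ch = 16, 8   # 16 tokens (flattened spatial positions), 8 channels
x = rng.normal(size=(n, ch))
Ws = [rng.normal(size=(ch, ch)) * 0.1 for _ in range(4)]
compress = rng.normal(size=(3 * ch, ch)) * 0.1
y = art_block(x, Ws[0], Ws[1], Ws[2], Ws[3], compress)
```

The key design point echoed here is that the block's output mixes local and contextual representations before compression, so downstream layers see both.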