Human hearing loss is a common neurosensory disorder about which many basic research and clinically relevant questions remain unresolved. This review on hereditary deafness focuses on three examples that appear uncomplicated at first glance but, upon inspection, are enigmatic and ripe for future research efforts. The three examples of clinical and genetic complexity are drawn from studies of (1) Pendred syndrome/DFNB4 (PDS, OMIM 274600), (2) Perrault syndrome (deafness and infertility) due to mutations of CLPP (PRTLS3, OMIM 614129), and (3) the unexplained, extensive clinical variability associated with TBC1D24 mutations. At present, it is unknown how different mutations of TBC1D24 cause nonsyndromic deafness (DFNB86, OMIM 614617), epilepsy (OMIM 605021), epilepsy with deafness, or DOORS syndrome (OMIM 220500), which is characterized by deafness, onychodystrophy (alteration of toenail or fingernail morphology), osteodystrophy (defective development of bone), intellectual disability and seizures. A comprehensive understanding of the multifaceted roles of each gene associated with human deafness is expected to provide future opportunities for restoration as well as preservation of normal hearing.
Funding Acknowledgements
Type of funding sources: None.

Background
Real-time cine imaging does not require breath-holding and is a robust cine technique in the presence of irregular heartbeats. It is a good alternative to conventional breath-hold retro-gated cine, offering simplified acquisition and improved patient comfort. Real-time acquisition is achieved with a single-shot bSSFP readout without retro-gating. To maintain good temporal and spatial resolution, higher acceleration (e.g. >4x parallel imaging) is required. As a result, real-time cine images suffer reduced signal-to-noise ratio (SNR), which limits their clinical acceptance.

Purpose
We developed a novel deep learning model architecture, the Convolutional Neural Network Transformer (CNNT), to improve the quality of real-time cine under 4x, 5x and 6x acceleration.

Method
Convolutional neural networks (CNNs) are widely used in CMR research to process cardiac images. Cardiac images are often acquired as a time series with strong inter-phase correlation. We therefore combined the CNN with the more recent transformer model to develop the novel CNNT architecture. It takes the entire 2D+T time series as input and retains the advantages of CNNs, efficient computation and spatial invariance. It further inherits the advantages of the transformer's attention layers and can efficiently exploit the temporal correlation within a time series. A CNNT model was developed to improve the SNR of real-time cine imaging. N=10 patients were scanned at a heart center with 4x, 5x and 6x acceleration. Typical imaging parameters: FOV 360×270 mm², flip angle 50°, acquired matrix size 160×90 for R=4 and 192×108 for R=5 and 6, temporal resolution 40 ms for R=4, 42 ms for R=5 and 35 ms for R=6. The real-time images went through a TGRAPPA reconstruction [1] followed by the CNNT model. The SNR of TGRAPPA was measured in SNR units [2]; the Monte-Carlo pseudo-replica test was used to measure SNR for the CNNT model.
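The core idea behind the CNNT, mixing information across the frames of a 2D+T series via attention, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' model: spatial convolutions, learned projections, and multi-head attention are all omitted, and each frame is treated as a single flattened token.

```python
# Minimal sketch of temporal self-attention over a 2D+T cine series.
# Hypothetical simplification of the CNNT idea: no convolutions, no
# learned weights; each frame is one token and attention weights come
# from scaled dot-product similarity between frames.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def temporal_self_attention(frames):
    """frames: list of T equal-length pixel vectors (flattened images).

    Returns T output vectors; each is a weighted average of all frames,
    with weights given by the frame's similarity to every other frame.
    Correlated structure across time reinforces itself, while
    uncorrelated noise tends to average out.
    """
    d = len(frames[0])
    out = []
    for q in frames:
        # similarity of this frame (query) to every frame (keys)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in frames]
        w = softmax(scores)
        # attention output: weighted sum of all frames (values)
        out.append([sum(wt * f[j] for wt, f in zip(w, frames))
                    for j in range(d)])
    return out
```

In the degenerate case where all frames are identical, the attention weights are uniform and each output reproduces the input frame, which hints at how strong inter-phase correlation lets the model denoise without blurring.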
For every cine series, two phases were picked, at end-systole and end-diastole. For every image picked, two regions of interest were drawn, one in the myocardium and one in the LV blood pool. The CNNT model was deployed inline on the MR scanner using the Gadgetron InlineAI [3].

Results
Figure 1 shows real-time cine images for the three accelerations, reconstructed with TGRAPPA and with CNNT. The parallel imaging TGRAPPA reconstruction suffers significant SNR loss from the elevated g-factor and the smaller amount of acquired data. The deep learning CNNT model recovered SNR even at the very high 6x acceleration, without observable loss of boundary sharpness. Table 1 lists the SNR measurements. TGRAPPA SNR decreased ∼4x from R=4 to R=6 for both blood and myocardium. For the blood, CNNT increased the SNR by 170%, 335% and 371% at R=4, 5 and 6, respectively; for the myocardium, the increases were 335%, 634% and 828%.

Conclusion
We developed a convolutional neural network transformer model that recovers the SNR of real-time cine imaging at higher acceleration.
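The Monte-Carlo pseudo-replica test mentioned above can be sketched in a few lines: known Gaussian noise is repeatedly added to the input, each noisy replica is pushed through the reconstruction, and the output noise is estimated as the pixel-wise standard deviation across replicas. This is a schematic stand-in, assuming a generic `recon` callable rather than the actual TGRAPPA or CNNT pipeline.

```python
# Hedged sketch of the Monte-Carlo pseudo-replica SNR measurement.
# The reconstruction here is an arbitrary callable (hypothetical
# stand-in), not the actual TGRAPPA or CNNT reconstruction.
import random
import statistics

def pseudo_replica_snr(signal, recon, sigma=1.0, n_replicas=200, seed=0):
    """signal: list of noise-free pixel values (an ROI, say).
    recon:  callable mapping a pixel list to a pixel list.
    Returns the ROI SNR: mean reconstructed signal divided by the
    mean per-pixel noise standard deviation across replicas."""
    rng = random.Random(seed)
    replicas = []
    for _ in range(n_replicas):
        noisy = [s + rng.gauss(0.0, sigma) for s in signal]
        replicas.append(recon(noisy))
    n_pix = len(signal)
    # per-pixel noise std and mean across the replica stack
    noise_sd = [statistics.stdev(r[i] for r in replicas) for i in range(n_pix)]
    mean_recon = [statistics.fmean(r[i] for r in replicas) for i in range(n_pix)]
    return statistics.fmean(mean_recon) / statistics.fmean(noise_sd)
```

With an identity reconstruction and sigma = 1, an ROI of constant value 10 yields an SNR of about 10, which is the sanity check one would run before applying the procedure to a nonlinear reconstruction such as a deep learning model.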
Funding Acknowledgements
Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Supported in part by the Division of Intramural Research of the National Heart, Lung, and Blood Institute, National Institutes of Health (grants Z1A-HL006214-05 and Z1A-HL006242-02).

Background
Dark blood late gadolinium enhancement (DB-LGE) imaging shows superior delineation of myocardial infarction (MI), especially at the sub-endocardial boundary. Our previous study [1] developed a free-breathing DB-LGE with a single-shot SSFP readout, phase-sensitive inversion recovery (PSIR) reconstruction, and respiratory motion-corrected (MOCO) averaging. To compensate for the potential signal-to-noise ratio loss, that protocol doubled the number of measurements, thereby increasing the acquisition time.

Purpose
In this study, we developed a deep learning image enhancement model based on a novel network architecture, the Convolutional Neural Network Transformer (CNNT), to improve the image quality of DB-LGE and to reduce the acquisition time by decreasing the number of measurements.

Methods
The image enhancement model uses the CNNT architecture proposed by us, which is well suited to 2D+time CMR acquisitions because it exploits the temporal correlation between images over multiple averages. The evaluation was first conducted retrospectively on a cohort of 12 patients acquired with the original protocol [1] using the full 16 measurements. For every subject, a complete short-axis stack (typically 12 slices) was acquired to cover the entire left ventricle. The imaging data were reconstructed in three ways. Original: using all 16 acquired measurements; this is our baseline protocol. Original 50%: using only the first 8 measurements. CNNT 50%: using only the first 8 measurements, but applying the CNNT deep learning image enhancement before the MOCO PSIR reconstruction.
Two experienced imaging researchers (PK and MF, each with >10 years of experience) scored all DB-LGE images for overall quality, diagnostic confidence and delineation of MI/boundaries (5 = excellent, 4 = good, 3 = fair, 2 = poor, 1 = non-diagnostic). The CNNT DB-LGE was deployed to the MR scanner using the Gadgetron InlineAI [2].

Results
Figure 1 gives examples of DB-LGE images from the three reconstruction methods. The CNNT images have higher SNR and well-delineated MI. The Original images, with the longest acquisition, have good quality; the Original-50% images acquired with 8 measurements are also of good quality but have reduced SNR. The mean scores of the two reviewers for overall image quality, diagnostic confidence and MI delineation were 4.88±0.23, 4.88±0.23 and 4.83±0.25 for CNNT, versus 4.96±0.14, 4.96±0.14 and 4.67±0.39 for the original approach. No significant differences were found between the original and CNNT reconstructions (P>0.15 for all). Figure 2 shows an acute MI patient acquired prospectively with the 50% scan-time reduction, with and without CNNT enhancement. The resulting PSIR images clearly delineate the microvascular obstruction (MVO) due to the acute MI, with improved SNR.

Conclusion
A novel CNNT model was proposed and evaluated to speed up free-breathing MOCO DB-LGE by 50% without sacrificing image quality.
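The PSIR reconstruction used in this DB-LGE workflow restores the sign of the inverting magnetization using the phase of a proton-density reference image. A minimal sketch of just that sign-restoration step follows; it is a hypothetical simplification, since the product PSIR reconstruction also includes surface-coil intensity correction, averaging, and windowing.

```python
# Minimal sketch of the PSIR sign-restoration step (hypothetical
# simplification of a full PSIR reconstruction): remove the background
# phase using a reference image, then keep the real part so negative
# (still-inverting) magnetization stays negative.
import cmath

def psir(ir_pixels, ref_pixels):
    """ir_pixels:  complex pixels of the inversion-recovery image.
    ref_pixels: complex pixels of the proton-density phase reference.
    Returns a real-valued, signed image."""
    out = []
    for ir, ref in zip(ir_pixels, ref_pixels):
        phase = cmath.phase(ref) if ref != 0 else 0.0
        # rotate by the negative reference phase, keep the real part
        out.append((ir * cmath.exp(-1j * phase)).real)
    return out
```

Keeping the sign is what gives PSIR its stable contrast: nulled myocardium stays near zero while enhanced scar is reliably bright, independent of small inversion-time errors.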