Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across the different imaging modalities and techniques used in clinical practice, and to apply this technique to automate liver biometry.

Materials and Methods: A two-dimensional U-Net CNN was trained for liver segmentation in two stages using 330 abdominal MRI and CT examinations. First, the network was trained on unenhanced multiecho spoiled gradient-echo images from 300 MRI examinations, yielding multiple signal weightings. Transfer learning was then used to generalize the CNN with additional images from 30 contrast material-enhanced MRI and CT examinations. Performance of the CNN was assessed on a distinct multi-institutional dataset curated from multiple sources (498 subjects). Segmentation accuracy was evaluated by computing Dice scores. These segmentations were used to compute liver volume from CT and T1-weighted MRI examinations and to estimate hepatic proton density fat fraction (PDFF) from multiecho T2*-weighted MRI examinations. Volumetry and PDFF estimates from automated and manual segmentation were compared using Pearson correlation and Bland-Altman statistics.

Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1-weighted MRI (n = 100), and 0.92 ± 0.05 for T2*-weighted MRI (n = 168). Liver volumes measured with manual and automated segmentation agreed closely for CT (95% limits of agreement: −298 mL, 180 mL) and T1-weighted MRI (95% limits of agreement: −358 mL, 180 mL). Hepatic PDFF measured with the two segmentations also agreed closely (95% limits of agreement: −0.62%, 0.80%).

Conclusion: Using a transfer-learning strategy, this study demonstrates that a CNN can be generalized to perform liver segmentation across different imaging techniques and modalities.
With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
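The evaluation quantities named above (Dice score, liver volume from a voxel mask, and Bland-Altman 95% limits of agreement) can be sketched in a few lines of NumPy. This is a generic illustration of the metrics, not the authors' code; function names and the voxel-volume parameter are ours:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def liver_volume_ml(mask, voxel_volume_mm3):
    """Liver volume in mL: count segmented voxels, scale by voxel size."""
    return np.asarray(mask).astype(bool).sum() * voxel_volume_mm3 / 1000.0

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, `bland_altman_limits(automated_volumes, manual_volumes)` returns the bias and the interval reported in the Results section as "95% limits of agreement."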
Electronic skins, often with recognition and sensing capabilities beyond those of biological skin, can provide important medical information for human health. However, making electronic skins with both tactile and touchless perception for real-time health monitoring remains a challenge because of biological complexity. Herein, flexible bimodal e-skins are demonstrated using a lamellated MXene/chitosan film as the kernel sensing layer. These biomimetic hybrid films show excellent biocompatibility in a cytotoxicity test, indicating reduced risk for application in the human body. The flexible devices operate in two modes, detecting pressure (limit of detection (LoD) of 3 Pa, stability over more than 3500 cycles, and response time of 143 ms) and humidity (stable for more than 20 days). The bifunctional sensor can also detect and discriminate physiological signals (including voice recognition, human pulses, and finger bending) and breathing-related humidity signals (respiratory rate). This work may advance the application of biocomposite materials in multifunctional flexible sensing technology.
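As an illustration of how a humidity-mode trace could be converted into a respiratory rate, the sketch below counts upward zero crossings of the mean-removed signal, one per breathing cycle. The signal model, sampling rate, and function name are assumptions for illustration, not details from the work above:

```python
import numpy as np

def respiratory_rate_bpm(humidity, fs):
    """Estimate breaths per minute from a humidity trace.

    Removes the mean (slow baseline) and counts upward zero crossings,
    each corresponding to one inhale/exhale cycle.
    """
    x = np.asarray(humidity, dtype=float)
    x = x - x.mean()
    # Upward crossing: sample below zero followed by a sample at/above zero.
    crossings = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    duration_min = len(x) / fs / 60.0
    return crossings / duration_min
```

A real device would also need baseline drift removal (e.g., a high-pass filter) before crossing detection; this sketch assumes a stable baseline over the analysis window.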