Patient-derived organoids (PDOs) serve as excellent tools for
personalized drug screening to predict clinical outcomes of cancer
treatment. However, current methods for efficient quantification of
drug response are limited. Herein, we develop a method for label-free,
continuous imaging-based tracking and quantitative analysis of drug efficacy
using PDOs. A self-developed optical coherence tomography (OCT) system
was used to monitor the morphological changes of PDOs within 6 days of
drug administration. OCT image acquisition was performed every
24 h. An analytical method for organoid segmentation and
morphological quantification was developed based on a deep learning
network (EGO-Net) to simultaneously analyze multiple morphological
parameters of organoids under drug treatment. Adenosine triphosphate
(ATP) testing was conducted on the last day of drug treatment.
Finally, a corresponding aggregated morphological indicator (AMI) was
established using principal component analysis (PCA) based on the
correlation analysis between OCT morphological quantification and ATP
testing. Determining the AMI of organoids allowed quantitative
evaluation of the PDOs' responses to gradient concentrations and
combinations of drugs. Results showed that there was a strong
correlation (correlation coefficient >90%) between the
results using the AMI of organoids and those from ATP testing, which
is the standard test used for bioactivity measurement. Compared with
single-time-point morphological parameters, the introduction of
time-dependent morphological parameters can reflect drug efficacy with
improved accuracy. Additionally, the AMI of organoids was found to
improve the efficacy of 5-fluorouracil (5FU) against tumor cells by
allowing determination of the optimum concentration, and discrepancies
in response among different PDOs treated with the same drug
combinations could also be measured. Collectively, the AMI established
by the OCT system combined with PCA could quantify the multidimensional
morphological changes of organoids under drug treatment, providing
a simple and efficient tool for drug screening in PDOs.
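To make the analysis described above concrete, the following Python sketch shows how a PCA-derived aggregated morphological indicator (AMI) could be computed from time-resolved morphological parameters and compared against end-point ATP readings. The feature layout, placeholder data, and the choice of the first principal component as the AMI are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: PCA-based aggregated morphological indicator (AMI) and its
# correlation with ATP readings. All data below are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import pearsonr

# rows = organoid wells; columns = time-resolved morphological parameters
# (assumed here: 3 parameters x 6 daily time points = 18 features).
rng = np.random.default_rng(0)
morph_features = rng.normal(size=(24, 18))
atp_readings = rng.normal(size=24)   # end-point ATP luminescence per well

# Standardize the features, then use the first principal component as the AMI.
scaled = StandardScaler().fit_transform(morph_features)
ami = PCA(n_components=1).fit_transform(scaled).ravel()

# Pearson correlation between the AMI and the ATP readings.
r, p = pearsonr(ami, atp_readings)
print(f"AMI vs ATP: r = {r:.2f}, p = {p:.3g}")
```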
Organoids are advancing the accurate in vitro prediction of drug efficacy and toxicity. These advancements are attributed to the ability of organoids to recapitulate key structural and functional features of organs and parent tumors. However, organoids are self-organized assemblies with multi-scale structures of 30–800 μm, which makes non-destructive three-dimensional (3D) imaging, tracking, and classification analysis of organoid clusters difficult with traditional microscopy techniques. Here, we devise a 3D imaging, segmentation, and analysis method based on optical coherence tomography (OCT) and deep convolutional neural networks (CNNs) for printed organoid clusters (Organoid Printing and optical coherence tomography-based analysis, OPO). The results demonstrate that organoid scale influences the segmentation performance of the neural network. The multi-scale information-guided optimized EGO-Net we designed achieved the best results, in particular showing better recognition performance than other neural networks for biologically significant organoids with diameters ≥50 μm. Moreover, OPO reconstructs the multiscale structure of organoid clusters within printed microbeads and calibrates printing errors by segmenting the edges of the printed microbeads. Overall, image-based classification, tracking, and quantitative analysis reveal that organoid growth involves morphological changes such as volume growth, cavity creation, and fusion, and quantitative volume calculation demonstrates that the growth rate of organoids is associated with their initial scale. The proposed method enables the study of growth, structural evolution, and heterogeneity of organoid clusters, which is valuable for organoid-based drug screening and tumor drug sensitivity detection.
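As a concrete illustration of the morphological quantification step, the sketch below labels a 3D segmentation mask (such as one produced by a network like EGO-Net), computes per-organoid volume, and keeps organoids whose equivalent sphere diameter is at least 50 μm. The voxel size and the mask are placeholders; none of the real pipeline's parameters are reproduced here.

```python
# Minimal sketch: per-organoid morphology from a 3D segmentation mask.
import numpy as np
from skimage import measure

VOXEL_SIZE_UM = 5.0                      # assumed isotropic OCT voxel pitch (um)
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True         # placeholder "organoid" for illustration

labels = measure.label(mask)             # connected components = individual organoids
for region in measure.regionprops(labels):
    volume_um3 = region.area * VOXEL_SIZE_UM ** 3     # 'area' = voxel count in 3D
    # Diameter of a sphere with the same volume, in micrometers.
    diameter_um = 2.0 * (3.0 * volume_um3 / (4.0 * np.pi)) ** (1.0 / 3.0)
    if diameter_um >= 50:
        print(f"organoid {region.label}: volume {volume_um3:.0f} um^3, "
              f"diameter {diameter_um:.1f} um")
```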
Large-scale variations can cause serious problems in crowd counting. In recent years, most methods for this problem use convolutional neural networks with a fixed scale for encoding and decoding image features. The scale of the convolutional layer is usually adjusted manually and may have to deal with image features at unsuitable scales. In this paper, a method called the scale-aware convolutional neural network (SCNet) is proposed, which adds a scale selection mechanism to the dilated convolution operation. A shared-weight multi-branch structure is used to deal with features at different scales, and an attention mechanism is introduced to determine the weights of the branches that fit the scale. Experimental results demonstrate that the proposed SCNet outperforms most existing methods.
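A minimal PyTorch sketch of the idea described here: a single shared convolution kernel applied at several dilation rates, with a lightweight attention head that weights the branches. The branch count, channel sizes, and attention design are assumptions for illustration and do not reproduce the authors' SCNet.

```python
# Sketch of a scale-aware block: shared-weight multi-branch dilated convolution
# with branch-level attention. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        # One 3x3 weight tensor shared by every branch (shared-weight multi-branch).
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        # Attention head: globally pooled features -> one weight per branch.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same kernel, different dilation per branch; padding keeps spatial size.
        branches = [
            F.conv2d(x, self.weight, padding=d, dilation=d) for d in self.dilations
        ]
        # Softmax over branches so the weights form a convex combination.
        weights = torch.softmax(self.attn(x), dim=1)        # (N, B, 1, 1)
        stacked = torch.stack(branches, dim=1)               # (N, B, C, H, W)
        out = (weights.unsqueeze(2) * stacked).sum(dim=1)    # (N, C, H, W)
        return F.relu(out)

# Example: a 64-channel feature map from a counting backbone.
feats = torch.randn(2, 64, 48, 64)
block = ScaleAwareBlock(64)
print(block(feats).shape)  # torch.Size([2, 64, 48, 64])
```

The softmax over branch scores makes the output a convex combination of the branch responses, so the block effectively selects the dilation rate (receptive-field scale) that best fits each input.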