Recent years have witnessed tremendous growth in the application of machine learning (ML) and deep learning (DL) techniques in medical physics. Embracing the current big data era, medical physicists equipped with these state-of-the-art tools should be able to solve pressing problems in modern radiation oncology. Here, a review of the basic aspects involved in ML/DL model building, including data processing, model training, and validation for medical physics applications, is presented and discussed. Machine learning can be categorized by the underlying task into supervised learning, unsupervised learning, or reinforcement learning; each of these categories has its own input/output dataset characteristics and aims to solve a different class of problems in medical physics, ranging from automation of processes to predictive analytics. It is recognized that data size requirements may vary depending on the specific medical physics application and the nature of the algorithms applied. Data processing, a crucial step for model stability and precision, should be performed before training the model. Deep learning, as a subset of ML, is able to learn multilevel representations from raw input data, eliminating the need for hand-crafted features in classical ML. It can be thought of as an extension of classical linear models, but with multilayer (deep) structures and nonlinear activation functions. The logic of going “deeper” is related to learning complex data structures, and its realization has been aided by recent advances in parallel computing architectures and the development of more robust optimization methods for efficient training of these algorithms. Model validation is an essential part of ML/DL model building; without it, the model being developed cannot be trusted to generalize to unseen data. Whenever applying ML/DL, one should keep in mind Amara’s law: humans tend to overestimate the ability of a technology in the short term and underestimate its capability in the long term. To establish the role of ML/DL in the standard clinical workflow, models should be developed with a balance between accuracy and interpretability in mind. ML/DL algorithms have potential in numerous radiation oncology applications, including the automation of mundane procedures and improvements in the efficiency and safety of auto-contouring, treatment planning, quality assurance, motion management, and outcome prediction. Medical physicists have been at the frontier of technology translation into medicine, and they ought to be prepared to embrace the inevitable role of ML/DL in the practice of radiation oncology and to lead its clinical implementation.
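The review frames deep learning as an extension of classical linear models through stacked layers and nonlinear activations, and stresses held-out validation as the basis for trusting a model on unseen data. The sketch below is purely illustrative and not taken from the reviewed work; the synthetic dataset, architecture, and hyperparameters are all assumptions. It contrasts a linear classifier with a one-hidden-layer network and scores both on a hold-out validation split.

```python
# Illustrative only (not from the reviewed work): a linear classifier versus a
# one-hidden-layer network with a nonlinear (ReLU) activation, trained on an
# assumed synthetic dataset and scored on a held-out validation split.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two noisy concentric rings, which no linear boundary can separate.
n = 1000
radius = np.where(rng.random(n) < 0.5, 1.0, 3.0)
angle = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.c_[radius * np.cos(angle), radius * np.sin(angle)] + rng.normal(0.0, 0.3, (n, 2))
y = (radius > 2.0).astype(float)

# Hold-out validation split: these samples are never seen during training.
idx = rng.permutation(n)
train_idx, val_idx = idx[:800], idx[800:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(hidden=0, lr=1.0, epochs=3000):
    """hidden=0 -> logistic regression; hidden>0 -> one ReLU hidden layer."""
    Xt, yt = X[train_idx], y[train_idx]
    if hidden:
        W1 = rng.normal(0.0, 0.5, (2, hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(0.0, 0.5, hidden)
    else:
        W2 = rng.normal(0.0, 0.5, 2)
    b2 = 0.0
    for _ in range(epochs):
        h = np.maximum(Xt @ W1 + b1, 0.0) if hidden else Xt   # nonlinear layer
        p = sigmoid(h @ W2 + b2)
        g = (p - yt) / len(yt)                                # cross-entropy gradient
        if hidden:
            gh = np.outer(g, W2) * (h > 0)                    # backprop through ReLU
            W1 -= lr * (Xt.T @ gh)
            b1 -= lr * gh.sum(axis=0)
        W2 -= lr * (h.T @ g)
        b2 -= lr * g.sum()

    def predict(Xq):
        hq = np.maximum(Xq @ W1 + b1, 0.0) if hidden else Xq
        return sigmoid(hq @ W2 + b2) > 0.5

    return predict

for hidden in (0, 16):
    acc = (fit(hidden)(X[val_idx]) == y[val_idx].astype(bool)).mean()
    print(f"hidden units = {hidden:2d} -> validation accuracy = {acc:.2f}")
```

The linear model cannot represent the circular decision boundary while the nonlinear hidden layer can, which is the sense in which deeper, nonlinear models extend classical linear ones; the separate validation split is what justifies quoting either accuracy at all.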
The major aim of radiation therapy is to provide curative or palliative treatment for cancerous malignancies while minimizing damage to healthy tissues. Charged particle radiotherapy using carbon ions or protons is uniquely suited to this task because of its ability to achieve highly conformal dose distributions around the tumor volume. For these treatment modalities, uncertainties in the localization of patient anatomy due to inter- and intra-fractional motion present a heightened risk of undesired dose delivery. A diverse range of mitigation strategies has been developed and clinically implemented across disease sites to monitor and correct for patient motion, but much work remains. This review provides an overview of current clinical practices for inter- and intra-fractional motion management in charged particle therapy, including motion control, current imaging and motion-tracking modalities, and treatment planning and delivery techniques. We also cover progress to date on emerging technologies, including particle-based radiography, novel delivery methods such as tumor tracking and FLASH, and artificial intelligence, and discuss how each may improve motion mitigation in charged particle therapy or add to its challenges.
Pancreatic cancer is one of the deadliest cancers, with a 5-year survival rate of <10%. The current approach to confirming a tissue diagnosis, endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA), requires time-consuming, qualitative cytology analysis and may be limited by sampling error. We designed and engineered a miniaturized optoelectronic sensor to assist in situ, real-time, and objective evaluation of human pancreatic tissues during EUS-FNA. A proof-of-concept prototype sensor, compatible with a commercially available 19-gauge hollow needle used for EUS-FNA, was constructed using microsized optoelectronic chips and microfabrication techniques to perform multisite tissue optical sensing. In bench-top verification and a pilot validation on freshly excised human pancreatic tissues from four surgical patients, the fabricated sensors showed performance comparable to that of our previous fiber-based system. The flexibility in source-detector configuration afforded by microsized chips potentially allows a variety of light-based sensing techniques to be applied inside a confined channel such as a hollow needle or an endoscope.
Purpose: Modern inverse radiotherapy treatment planning requires nonconvex, large-scale optimizations that must be solved within a clinically feasible timeframe. We have developed and tested a quantum-inspired, stochastic algorithm for intensity-modulated radiotherapy (IMRT): quantum tunnel annealing (QTA). By modeling the probability of accepting a higher-energy solution as that of a particle tunneling through a potential energy barrier, QTA features an additional degree of freedom (the barrier width, w) not shared by traditional stochastic optimization methods such as simulated annealing (SA). This additional degree of freedom can improve convergence rates and yield a more efficient and, potentially, more effective treatment planning process. Methods: To analyze the behavior of the proposed QTA algorithm, we chose two stereotactic body radiation therapy (SBRT) liver cases of differing complexity. The “easy” first case was used to confirm functionality, while the second case, with a more challenging geometry, was used to characterize and evaluate QTA performance. Plan quality was assessed using dose-volume histogram-based objectives and dose distributions. Because of the stochastic nature of the solution search space, extensive tests were also conducted to determine the optimal smoothing technique, ensuring a balance between plan deliverability and the resulting plan quality. QTA convergence rates were investigated in relation to the chosen barrier-width function, and QTA and SA were compared with regard to sensitivity to the choice of solution initialization, annealing schedule, and complexity of the dose-volume constraints. Finally, we investigated the extension from beamlet intensity optimization to direct aperture optimization (DAO). Influence matrices were calculated using the Eclipse scripting application programming interface (API), and the optimizations were run on the University of Michigan's high-performance computing cluster, Flux. Results: Our results indicate that QTA's barrier-width function can be tuned to achieve faster convergence rates. The QTA algorithm reached convergence up to 46.6% faster than SA for beamlet intensity optimization and up to 26.8% faster for DAO. QTA and SA were found to be equally insensitive to the initialization process, but the convergence rate of QTA was more sensitive to the complexity of the dose-volume constraints. The optimal smoothing technique was a combination of a Laplacian-of-Gaussian (LoG) edge-finding filter, implemented as a penalty within the objective function, and a two-dimensional Savitzky-Golay filter applied to the final iteration; this achieved total monitor units more than 20% lower than those of plans optimized by commercial treatment planning software. Conclusions: We have characterized the performance of a stochastic, quantum-inspired optimization algorithm, QTA, for radiotherapy treatment planning. This proof-of-concept study suggests that QTA can be tuned to achieve faster convergence than SA; therefore, ...
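The Methods describe QTA as simulated annealing with one extra acceptance parameter, the barrier width w. The abstract does not give the acceptance probability, so the sketch below assumes a tunneling-inspired form exp(-w·sqrt(ΔE)/T) purely for illustration; the annealing loop, toy objective, and all names are hypothetical stand-ins for the actual beamlet-weight optimization and are not the authors' implementation.

```python
# Hypothetical sketch of the acceptance step only; the abstract does not give
# QTA's exact formula, so the tunneling-inspired form below (with barrier
# width w) is an assumption for illustration, not the authors' implementation.
import math
import random

def sa_accept(delta_e: float, temperature: float) -> bool:
    """Classical simulated annealing: Metropolis acceptance of a worse solution."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)

def qta_accept(delta_e: float, temperature: float, barrier_width: float) -> bool:
    """Quantum-tunneling-inspired acceptance (assumed form): the probability of
    accepting a worse solution decays with both the energy increase and the
    barrier width w, giving an extra tuning knob beyond the temperature."""
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-barrier_width * math.sqrt(delta_e) / temperature)

def anneal(cost, neighbor, x0, accept, steps=10_000, t0=1.0, t_end=1e-3, **kw):
    """Generic annealing loop shared by SA and the QTA-style variant."""
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)          # geometric cooling schedule
        x_new = neighbor(x)
        delta = cost(x_new) - e
        if accept(delta, t, **kw):
            x, e = x_new, e + delta
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

if __name__ == "__main__":
    # Toy 1-D multimodal objective standing in for a beamlet-weight cost function.
    cost = lambda x: math.sin(5 * x) + 0.1 * (x - 2.0) ** 2
    neighbor = lambda x: x + random.gauss(0.0, 0.2)
    print("SA :", anneal(cost, neighbor, 0.0, sa_accept))
    print("QTA:", anneal(cost, neighbor, 0.0, qta_accept, barrier_width=0.5))
```

In a full planning system the toy objective would be replaced by the dose-volume cost built from the influence matrix, and the neighbor move by a perturbation of beamlet weights or, for DAO, aperture shapes and weights.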
Advancements in data-driven technologies and the inclusion of information-rich multi-omics features have significantly improved the performance of outcomes modeling in radiation oncology. For this trend to be sustainable, challenges related to robust data modeling, such as small sample sizes, low sample-to-feature ratios, and noisy data, as well as issues related to algorithmic modeling, such as complexity, uncertainty, and interpretability, need to be mitigated if not resolved. Emerging computational technologies and new paradigms such as federated learning, human-in-the-loop approaches, quantum computing, and novel interpretability methods show great potential for overcoming these challenges and bridging the gap toward precision outcome modeling in radiotherapy. Examples of these promising technologies will be presented and their potential role in improving outcome modeling will be discussed.
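Federated learning is named here as one route around small single-institution cohorts: models are trained where the data live and only parameters are shared. The sketch below is a minimal, hypothetical illustration of federated averaging over simulated institutions; it is not the authors' method, and the linear model and toy cohorts are assumptions chosen for brevity.

```python
# Minimal, hypothetical sketch of federated averaging (FedAvg) across simulated
# institutions, to illustrate the federated-learning paradigm named in the
# abstract; not the authors' method. NumPy only, linear model for brevity.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One institution trains on its private data; only the weights leave the site."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

# Three "institutions" with private cohorts drawn from the same underlying model.
w_true = np.array([0.8, -0.5, 1.2])
sites = []
for n in (60, 90, 120):
    X = rng.normal(size=(n, 3))
    y = X @ w_true + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Federated rounds: broadcast global weights, train locally, aggregate by cohort size.
w_global = np.zeros(3)
for _ in range(10):
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("true weights     :", w_true)
print("federated weights:", np.round(w_global, 3))
```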