An ordered mesoporous WO3 material with a highly crystalline framework was synthesized by using amphiphilic poly(ethylene oxide)-b-polystyrene (PEO-b-PS) diblock copolymers as a structure-directing agent through a solvent-evaporation-induced self-assembly method combined with a simple template-carbonization strategy. The obtained mesoporous WO3 materials have a large uniform mesopore size (ca. 10.9 nm) and a high surface area (ca. 121 m² g⁻¹). The mesoporous WO3-based H2S gas sensor shows an excellent performance for H2S sensing at low concentration (0.25 ppm) with fast response (2 s) and recovery (38 s). The high mesoporosity and continuous crystalline framework are responsible for the excellent performance in H2S sensing.
Purpose: In the treatment planning process of intensity-modulated radiation therapy (IMRT), a human planner operates the treatment planning system (TPS) to adjust treatment planning parameters, for example, dose volume histogram (DVH) constraints' locations and weights, to achieve a satisfactory plan for each patient. This process is usually time-consuming, and the plan quality depends on the planner's experience and the available planning time. In this study, we proposed to model the behaviors of human planners in treatment planning with a deep reinforcement learning (DRL)-based virtual treatment planner network (VTPN), such that it can operate the TPS in a human-like manner for treatment planning.
Methods and Materials: Using prostate cancer IMRT as an example, we established the VTPN using a deep neural network. We considered an in-house optimization engine with a weighted quadratic objective function. The VTPN was designed to observe the DVHs of an intermediate plan and decide actions to improve the plan by changing the weights and threshold doses in the objective function. We trained the VTPN in an end-to-end DRL process on 10 patient cases. A plan score was used to measure plan quality. We demonstrated the feasibility and effectiveness of the trained VTPN on another 64 patient cases.
Results: The VTPN was trained to spontaneously learn how to adjust treatment planning parameters to generate high-quality treatment plans. In the 64 testing cases, the quality score with the initialized parameters was 4.97 (±2.02), with 9.0 being the highest possible score. Using the VTPN to perform treatment planning improved the quality score to 8.44 (±0.48).
Conclusions: To our knowledge, this was the first time that the intelligent treatment planning behaviors of a human planner in external-beam IMRT were autonomously encoded in an artificial intelligence system. The trained VTPN is capable of behaving in a human-like way to produce high-quality plans.
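The workflow described above, in which a network observes the DVHs of an intermediate plan and selects a discrete action to adjust a weight or threshold dose in the objective function, can be pictured as a small Q-network. The snippet below is a hypothetical PyTorch illustration, not the authors' implementation; the number of structures, DVH sample points, tunable parameters, and layer sizes are all assumptions.

    # Hypothetical PyTorch sketch of a VTPN-style Q-network: it maps the sampled
    # DVH curves of an intermediate plan to Q-values over discrete actions that
    # raise or lower one tunable parameter (a weight or a threshold dose) in the
    # weighted quadratic objective. Sizes and the action set are assumptions.
    import torch
    import torch.nn as nn

    N_STRUCTURES = 3        # e.g. target plus two organs at risk (assumed)
    DVH_POINTS = 50         # dose levels at which each DVH is sampled (assumed)
    N_PARAMS = 5            # tunable weights / threshold doses (assumed)
    ACTIONS_PER_PARAM = 2   # 0 = increase, 1 = decrease, by a fixed factor

    class VTPN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_STRUCTURES * DVH_POINTS, 256), nn.ReLU(),
                nn.Linear(256, 128), nn.ReLU(),
                nn.Linear(128, N_PARAMS * ACTIONS_PER_PARAM),
            )

        def forward(self, dvh):
            # dvh: (batch, N_STRUCTURES * DVH_POINTS), fractional volumes in [0, 1]
            return self.net(dvh)  # one Q-value per (parameter, direction) action

    # Greedy action selection for one intermediate plan (random DVH as a stand-in)
    dvh = torch.rand(1, N_STRUCTURES * DVH_POINTS)
    q = VTPN()(dvh)
    param_idx, direction = divmod(int(q.argmax()), ACTIONS_PER_PARAM)
    print(f"adjust parameter {param_idx}: {'increase' if direction == 0 else 'decrease'}")

In the full loop described in the abstract, the chosen adjustment would be applied to the objective function, the in-house optimization engine re-run, and the resulting change in plan score used as the reward signal for training.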
Inverse treatment planning in radiation therapy is formulated as solving optimization problems. The objective function and constraints consist of multiple terms designed for different clinical and practical considerations. Weighting factors of these terms are needed to define the optimization problem. While a treatment planning optimization engine can solve the optimization problem with given weights, adjusting the weights to yield a high-quality plan is typically performed by a human planner. Yet the weight-tuning task is labor-intensive and time-consuming, and it critically affects the final plan quality. An automatic weight-tuning approach is strongly desired. The procedure of weight adjustment to improve the plan quality is essentially a decision-making problem. Motivated by the tremendous success of deep learning in decision-making with human-level intelligence, we propose a novel framework to adjust the weights in a human-like manner. This study uses inverse treatment planning in high-dose-rate brachytherapy (HDRBT) for cervical cancer as an example. We develop a weight-tuning policy network (WTPN) that observes the dose volume histograms of a plan and outputs an action to adjust organ weighting factors, similar to the behaviors of a human planner. We train the WTPN via end-to-end deep reinforcement learning. Experience replay is performed with the epsilon-greedy algorithm. After training is completed, we apply the trained WTPN to guide the treatment planning of five testing patient cases. It is found that the trained WTPN successfully learns the treatment planning goals and is able to guide the weight-tuning process. On average, the quality score of plans generated under the WTPN's guidance is improved by ~8.5% compared to the initial plans with arbitrarily set weights, and by 10.7% compared to the plans generated by human planners. To our knowledge, this is the first time that a tool has been developed to adjust organ weights for the treatment planning optimization problem in a human-like fashion based on intelligence learnt from a training process. This is different from existing strategies based on pre-defined rules. The study demonstrates the potential feasibility of developing intelligent treatment planning approaches via deep reinforcement learning.
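To make the training procedure concrete, the sketch below shows a minimal end-to-end Q-learning loop with experience replay and epsilon-greedy exploration, in the spirit of the WTPN training described above. It is an illustrative assumption rather than the authors' implementation: the state dimension, action set, and hyperparameters are made up, and the placeholder state and reward values stand in for the DVH features, the HDRBT optimization engine, and the plan quality score.

    # Minimal DRL sketch (PyTorch): epsilon-greedy action selection plus
    # experience replay for a weight-tuning policy network. Placeholders mark
    # where the real DVH features, optimization engine, and plan score would go.
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 150, 8          # DVH features, weight-tuning actions (assumed)
    wtpn = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
    optimizer = torch.optim.Adam(wtpn.parameters(), lr=1e-4)
    replay = deque(maxlen=10_000)          # experience replay buffer
    gamma, epsilon = 0.99, 0.1

    def select_action(state):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(wtpn(state).argmax())

    def train_step(batch_size=32):
        # Regress Q(s, a) toward r + gamma * max_a' Q(s', a') on a replayed batch.
        if len(replay) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(replay, batch_size))
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r)
        q = wtpn(s).gather(1, a).squeeze(1)
        target = r + gamma * wtpn(s2).max(dim=1).values.detach()
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # One schematic episode: adjust organ weights, re-optimize, score the plan.
    state = torch.rand(STATE_DIM)          # DVH features of the initial plan (placeholder)
    for step in range(20):
        action = select_action(state)
        # In practice: apply the weight adjustment, re-run the optimization engine,
        # recompute the DVH features, and use the change in plan score as the reward.
        next_state, reward = torch.rand(STATE_DIM), random.random()  # placeholders
        replay.append((state, action, reward, next_state))
        train_step()
        state = next_state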