Purpose: We calculated setup margins for whole-breast radiotherapy during voluntary deep-inspiration breath-hold (vDIBH) using real-time surface imaging (SI).
Methods and Materials: Patients (n = 58), with a 27-to-31 split between right- and left-sided cancers, were analyzed. Treatment beams were gated using AlignRT by registering the whole-breast region of interest to the surface generated from the simulation CT scan. AlignRT recorded three-dimensional (3D) displacements and the beam-on state every 0.3 s. Means and standard deviations of the displacements during vDIBH for each fraction were used to calculate setup margins. Intra-DIBH stability and intrafraction reproducibility were estimated from the medians of the 5th-to-95th percentile range of the translations in each breath-hold and in each fraction, respectively.
Results: A total of 7269 breath-holds were detected over 1305 fractions, in which a median dose of 200 cGy was delivered. Each fraction was monitored for 5.95 ± 2.44 min. Calculated setup margins were 4.8 mm (A/P), 4.9 mm (S/I), and 6.4 mm (L/R). Intra-DIBH stability and intrafraction reproducibility were ≤0.7 mm and ≤2.2 mm, respectively. The isotropic margin according to SI (9.2 mm) was comparable to other institutions' calculations that relied on x-ray imaging and/or spirometry for patients with left-sided cancer (9.8–11.0 mm). Likewise, intra-DIBH variability and intrafraction reproducibility of the breast surface measured with SI agreed with spirometry-based positioning to within 1.2 mm and 0.36 mm, respectively.
Conclusions: We demonstrated that intra-DIBH variability, intrafraction reproducibility, and setup margins are similar to those reported by peer studies that utilized spirometry-based positioning.
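The abstract states that per-fraction means and standard deviations of the displacements were used to compute setup margins but does not name the margin recipe. A minimal sketch of one common approach, assuming the van Herk formulation (2.5Σ + 0.7σ) applied per axis, might look like the following; the function, variable names, and example numbers are hypothetical and are not taken from the study.

```python
import numpy as np

def setup_margin_per_axis(per_fraction_means_mm, per_fraction_sds_mm, patient_ids):
    """Estimate a setup margin for one axis from per-fraction displacement
    statistics, assuming the van Herk recipe M = 2.5*Sigma + 0.7*sigma.
    (The recipe itself is an assumption; the abstract does not name one.)"""
    means = np.asarray(per_fraction_means_mm, dtype=float)
    sds = np.asarray(per_fraction_sds_mm, dtype=float)
    ids = np.asarray(patient_ids)

    # Per-patient mean displacement (each patient's systematic error)
    patient_means = np.array([means[ids == p].mean() for p in np.unique(ids)])
    # Sigma: SD of the systematic errors across the patient population
    Sigma = patient_means.std(ddof=1)
    # sigma: root-mean-square of the per-fraction random-error SDs
    sigma = np.sqrt(np.mean(sds ** 2))
    return 2.5 * Sigma + 0.7 * sigma

# Hypothetical example: 3 patients x 4 fractions of A/P displacement stats (mm)
ids = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
frac_means = [1.0, 1.4, 0.8, 1.2, -0.5, -0.2, -0.8, -0.4, 0.3, 0.1, 0.6, 0.2]
frac_sds = [1.1, 0.9, 1.3, 1.0, 0.8, 1.2, 0.9, 1.1, 1.0, 0.7, 1.2, 0.9]
print(f"A/P margin ~ {setup_margin_per_axis(frac_means, frac_sds, ids):.1f} mm")
```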
DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, its utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) in two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft tissue, and bone images and (b) distinguishing between posteroanterior (PA) and anteroposterior (AP) chest radiographs. CNNs with the AlexNet architecture were trained from scratch. For task (a), the network was trained on 1910 manually classified radiographs and tested with an independent test set of 3757 images. For task (b), frontal radiographs from the two datasets were combined to train a network, which was then tested using an independent test set of 1000 radiographs. ROC analysis was performed for each trained CNN with area under the curve (AUC) as the performance metric. Classification between frontal images (AP/PA) and other image types yielded an AUC of 0.997 [95% confidence interval (CI): 0.996, 0.998]. Classification between PA and AP radiographs resulted in an AUC of 0.973 (95% CI: 0.961, 0.981). CNNs were able to rapidly classify thoracic radiographs with high accuracy, thus potentially contributing to an effective and efficient workflow.
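As a rough illustration only (not the authors' code), a from-scratch AlexNet for the four-class view-classification task and an AUC evaluation could be set up along the following lines, assuming PyTorch/torchvision and scikit-learn; the class names, grayscale-to-RGB replication, and training details are placeholders not specified in the abstract.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet
from sklearn.metrics import roc_auc_score

# Four view classes for task (a); names are placeholders, not from the paper.
CLASSES = ["frontal", "lateral", "soft_tissue", "bone"]

# AlexNet initialized with random weights, i.e., "trained from scratch".
model = alexnet(weights=None, num_classes=len(CLASSES))

def to_three_channel(gray_batch):
    """Replicate single-channel radiographs across three channels so they
    match AlexNet's expected RGB input (one simple, assumed preprocessing)."""
    return gray_batch.repeat(1, 3, 1, 1)   # (N, 1, H, W) -> (N, 3, H, W)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(gray_batch, labels):
    """One supervised update on a mini-batch (labels are class indices)."""
    optimizer.zero_grad()
    logits = model(to_three_channel(gray_batch))
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def auc_frontal_vs_rest(gray_batch, labels):
    """Binary ROC AUC for 'frontal vs. other images' from the frontal-class
    softmax score, mirroring the paper's AUC metric in spirit (details assumed)."""
    probs = torch.softmax(model(to_three_channel(gray_batch)), dim=1)
    frontal_score = probs[:, CLASSES.index("frontal")].cpu().numpy()
    is_frontal = (labels == CLASSES.index("frontal")).cpu().numpy()
    return roc_auc_score(is_frontal, frontal_score)
```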