The pilot urine plutonium bioassay testing program engaged 100 atomic veterans and 58 persons representative of the United States general population using the fission track analysis (FTA) technique developed by Brookhaven National Laboratory. The methodology, collected data, scientific challenges, and test results are presented herein. The Defense Threat Reduction Agency and its predecessor agencies conducted this program from the late 1990s to the mid-2000s to determine the feasibility of bioassay testing to supplement dose reconstruction in estimating atomic veterans' doses under the agency's Nuclear Test Personnel Review Program. In summary, Pu urine bioassay using fission track analysis showed inconsistencies in sample measurement repeatability. Using the analysis as a tool to enhance current processes for accurate dose reconstruction would require understanding and resolving the measurement inconsistencies, an effort well beyond the scope of this pilot study. Therefore, FTA was found not to be feasible for wide-scale implementation for atomic veterans.
The purpose of this paper is to assist those who might be confronted by non-normal and non-homoscedastic error distributions representable by continuous probability density functions. Methods are presented to demonstrate how mathematical algorithms can be developed to obtain a "best fit" calibration line and how uncertainty ranges in interpretations of unknowns can be obtained from the calibration. The data used to demonstrate these methods were obtained from Brookhaven National Laboratory fission track analysis data for plutonium in urine. Examination of the variability in the fission track analysis data during the period that the demonstration data were collected revealed that the deviations from the mean were neither normal nor lognormal, but the ratios of tracks divided by the median at each plutonium level were lognormally distributed. Consequently, the differences between the logarithms of the observed tracks and the logarithm of the median were normally distributed. The new "best fit" line was obtained by minimizing a reduced chi-square statistic made up of the squared differences in logarithms, divided by the variance in logarithms and the degrees of freedom. Thus, detecting a worker urine sample above the 58-person "control" population 95th percentile [about 3.2 microBq (85 aCi)] at the 95% probability level (0.05 Type I error) would now require an average of about 11 microBq (300 aCi) per sample, compared with 5 microBq (132 aCi) per sample in a previous paper. This paper presents the algorithms used to obtain the new calibration line and the uncertainty distributions of interpretations at various analyte levels. The importance of maintaining process control over the statistical interpretation of bioassay data, as well as over the radiochemical procedures, for achieving the lowest feasible level of detection is demonstrated.
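The fitting step described in the abstract can be illustrated with a short numerical sketch. The example below is hypothetical: the data values, the linear form of the calibration, the assumed log-variance, and the use of scipy.optimize.minimize are illustrative assumptions for demonstration, not the authors' actual implementation or data.

```python
# Hypothetical sketch of the "best fit" calibration described above:
# minimize a reduced chi-square built from squared differences between
# log(observed tracks) and log(fitted median tracks), divided by the
# variance of the log deviations and the degrees of freedom.
import numpy as np
from scipy.optimize import minimize

# Illustrative calibration points (Pu activity in microBq, observed fission
# tracks); these numbers are invented for demonstration only.
pu_activity = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
tracks = np.array([3.1, 6.4, 14.8, 31.0, 58.9])

sigma_log = 0.25              # assumed standard deviation of log-track deviations
dof = len(tracks) - 2         # degrees of freedom for a two-parameter line

def reduced_chi_square(params):
    """Reduced chi-square of log differences for a linear calibration line."""
    slope, intercept = params
    predicted_median = slope * pu_activity + intercept
    if np.any(predicted_median <= 0):
        return np.inf         # keep the fit in the physical (positive) region
    log_diff = np.log(tracks) - np.log(predicted_median)
    return np.sum(log_diff**2 / sigma_log**2) / dof

result = minimize(reduced_chi_square, x0=[3.0, 0.1], method="Nelder-Mead")
slope_fit, intercept_fit = result.x
print(f"best-fit calibration: tracks ~ {slope_fit:.2f} * activity + {intercept_fit:.2f}")
```

Weighting the squared log differences rather than the raw residuals reflects the paper's observation that the track-to-median ratios, not the deviations themselves, are lognormally distributed.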