Abstract. Detection of long-term, linear trends is affected by a number of factors, including the size of the trend to be detected, the time span of available data, and the magnitude of the variability and autocorrelation of the noise in the data. The number of years of data necessary to detect a trend is strongly dependent on, and increases with, the magnitude of the variance (σ²_N) and the autocorrelation coefficient (φ) of the noise. For a typical range of values of σ²_N and φ, the number of years of data needed to detect a trend of 5%/decade can vary from ~10 to >20 years, implying that in choosing sites to detect trends some locations are likely to be more efficient and cost-effective than others. Additionally, some environmental variables allow earlier detection of trends than others because of their low variability and autocorrelation. The detection of trends can be confounded when sudden changes occur in the data, such as when an instrument is changed or a volcano erupts. Sudden level shifts in data sets, whether due to artificial sources, such as changes in instrumentation or site location, or natural sources, such as volcanic eruptions or local changes to the environment, can strongly affect the number of years necessary to detect a given trend, increasing it by as much as 50% or more. This paper provides formulae for estimating the number of years necessary to detect trends, along with estimates of the impact of interventions on trend detection. The uncertainty associated with these estimates is also explored. The results presented are relevant for a variety of practical decisions in managing a monitoring station, such as whether to move an instrument, change monitoring protocols in the middle of a long-term monitoring program, or try to reduce uncertainty in the measurements through improved calibration techniques.
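The dependence of detection time on noise variance and autocorrelation can be made concrete. The sketch below implements a standard approximation from the trend-detection literature for the number of years n* needed to detect a linear trend with probability 0.90: n* ≈ [(3.3 σ_N / |ω|) √((1 + φ)/(1 − φ))]^(2/3), where ω is the trend per year, σ_N the noise standard deviation, and φ the lag-1 autocorrelation. The exact formulae developed in the paper may differ; the function name and example parameter values here are illustrative assumptions.

```python
import math

def years_to_detect(trend_per_decade, sigma_N, phi):
    """Approximate number of years of data needed to detect a linear
    trend with probability 0.90, using the common approximation
    n* = [(3.3 * sigma_N / |omega|) * sqrt((1 + phi) / (1 - phi))]^(2/3).

    trend_per_decade : magnitude of the trend (same units as the data, per decade)
    sigma_N          : standard deviation of the noise in the data
    phi              : lag-1 autocorrelation coefficient of the noise
    """
    omega = abs(trend_per_decade) / 10.0  # convert to trend per year
    factor = math.sqrt((1.0 + phi) / (1.0 - phi))  # autocorrelation penalty
    return ((3.3 * sigma_N / omega) * factor) ** (2.0 / 3.0)
```

With σ_N = 5 (in percent of the mean), φ = 0.2, and a 5%/decade trend, this gives roughly 12 years; raising φ to 0.6 pushes the estimate past 16 years, consistent with the ~10 to >20 year range quoted in the abstract.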
The results are also useful for establishing reasonable expectations for trend detection and can be helpful in selecting sites and environmental variables for the detection of trends. An important implication of these results is that it will take several decades of high-quality data to detect the trends likely to occur in nature.

Introduction

The impact of human intervention in a changing environment has brought about increased concern for detecting trends in various types of environmental data. A variety of studies
Two major reasons for the popularity of the EM algorithm are that its maximization step involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When complete-data maximum likelihood estimation is itself complicated, EM is less attractive because the M-step becomes computationally expensive. In many cases, however, complete-data maximum likelihood estimation is relatively simple when conditional on some function of the parameters being estimated. We introduce a class of generalized EM algorithms, which we call the ECM algorithm, for Expectation/Conditional Maximization (CM), that takes advantage of the simplicity of complete-data conditional maximum likelihood estimation by replacing a complicated M-step of EM with several computationally simpler CM-steps. We show that the ECM algorithm shares all the appealing convergence properties of EM, such as always increasing the likelihood, and present several illustrative examples.
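As a toy illustration of the ECM structure described above, the sketch below fits a two-component Gaussian mixture with equal, known mixing weights, replacing the single M-step with two CM-steps: the means are updated with the variances held fixed, then the variances are updated given the new means. For this simple model the split happens to coincide with ordinary EM, so it shows only the structure of the algorithm, not a case where the split is required; all names and data here are hypothetical.

```python
import math
import random

def ecm_two_gaussians(x, n_iter=50):
    """Fit means and variances of an equal-weight two-component Gaussian
    mixture. E-step: responsibilities. M-step split into two CM-steps:
      CM-1: update means, variances held fixed;
      CM-2: update variances, given the new means.
    Returns (means, variances, log-likelihood history); each iteration
    should not decrease the log-likelihood.
    """
    mu = [min(x), max(x)]   # crude but well-separated initialization
    var = [1.0, 1.0]

    def loglik():
        ll = 0.0
        for xi in x:
            p = sum(0.5 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k]))
                    / math.sqrt(2.0 * math.pi * var[k]) for k in range(2))
            ll += math.log(p)
        return ll

    history = [loglik()]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        r = []
        for xi in x:
            w = [0.5 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k]))
                 / math.sqrt(2.0 * math.pi * var[k]) for k in range(2)]
            s = w[0] + w[1]
            r.append([w[0] / s, w[1] / s])
        # CM-step 1: means, conditional on the current variances
        for k in range(2):
            den = sum(r[i][k] for i in range(len(x)))
            mu[k] = sum(r[i][k] * x[i] for i in range(len(x))) / den
        # CM-step 2: variances, conditional on the newly updated means
        for k in range(2):
            den = sum(r[i][k] for i in range(len(x)))
            num = sum(r[i][k] * (x[i] - mu[k]) ** 2 for i in range(len(x)))
            var[k] = max(num / den, 1e-6)  # guard against degenerate variance
        history.append(loglik())
    return mu, var, history
```

Because each CM-step maximizes the expected complete-data log-likelihood over one block of parameters with the other held fixed, the observed-data log-likelihood is non-decreasing across iterations, which is the convergence property the abstract emphasizes.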
This paper provides a rationale and overview of procedures used to develop the National Latino and Asian American Study (NLAAS). The NLAAS is a nationally representative community household survey that estimates the prevalence of mental disorders and rates of mental health service utilization of Latinos and Asian Americans in the United States. The central aims of the NLAAS are to: 1) describe the lifetime and 12-month prevalence of psychiatric disorders and the rates of mental health service use for Latino and Asian American populations using nationwide representative samples of Latinos and Asian Americans, 2) assess the associations of social position, environmental context, and psychosocial factors with the prevalence of psychiatric disorders and utilization rates of mental health services, and 3) compare the lifetime and 12-month prevalence of psychiatric disorders, and utilization of mental health services, of Latinos and Asian Americans with nationally representative samples of non-Latino whites (from the National Comorbidity Study-Replication;
Rubin's multiple imputation is a three-step method for handling complex missing data, or more generally, incomplete-data problems, which arise frequently in medical studies. At the first step, m (> 1) completed-data sets are created by imputing the unobserved data m times using m independent draws from an imputation model, which is constructed to reasonably approximate the true distributional relationship between the unobserved data and the available information, and thus reduce potentially very serious nonresponse bias due to systematic difference between the observed data and the unobserved ones. At the second step, m complete-data analyses are performed by treating each completed-data set as a real complete-data set, and thus standard complete-data procedures and software can be utilized directly. At the third step, the results from the m complete-data analyses are combined in a simple, appropriate way to obtain the so-called repeated-imputation inference, which properly takes into account the uncertainty in the imputed values. This paper reviews three applications of Rubin's method that are directly relevant for medical studies. The first is about estimating the reporting delay in acquired immune deficiency syndrome (AIDS) surveillance systems for the purpose of estimating survival time after AIDS diagnosis. The second focuses on the issue of missing data and noncompliance in randomized experiments, where a school choice experiment is used as an illustration. The third looks at handling nonresponse in United States National Health and Nutrition Examination Surveys (NHANES). The emphasis of our review is on the building of imputation models (i.e. the first step), which is the most fundamental aspect of the method.
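The third step described above, combining the m complete-data analyses, uses Rubin's combining rules: the pooled estimate is the average of the m point estimates, and the total variance adds the average within-imputation variance to the between-imputation variance inflated by a factor of (1 + 1/m) to account for imputation uncertainty. A minimal sketch (the function name and example numbers are ours):

```python
def combine_mi(estimates, variances):
    """Rubin's rules for pooling m complete-data analyses (step 3).

    estimates : list of m point estimates, one per imputed data set
    variances : list of m squared standard errors from those analyses
    Returns (pooled estimate, total variance).
    """
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled point estimate
    w = sum(variances) / m                                 # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b                            # total variance
    return qbar, t
```

For example, three imputed analyses giving estimates 1.0, 1.2, 1.1 with squared standard errors 0.040, 0.050, 0.045 pool to an estimate of 1.1 with total variance 0.045 + (4/3)·0.01 ≈ 0.058; the inflation term is what "properly takes into account the uncertainty in the imputed values."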