We analyze a sample of optical light curves for 100 quasars, 70 of which have black hole mass estimates. Our sample is the largest and broadest yet used for modeling quasar variability. The sources in our sample have z < 2.8, 10^42 ≲ λL_λ(5100 Å) ≲ 10^46 erg s^-1, and 10^6 ≲ M_BH/M_⊙ ≲ 10^10. We model the light curves as a continuous-time stochastic process, which provides a natural means of estimating the characteristic time scale and amplitude of quasar variations. We employ a Bayesian approach to estimate these quantities; our approach is not affected by biases introduced by discrete sampling effects. We find that the characteristic time scales strongly correlate with black hole mass and luminosity and are consistent with disk orbital or thermal time scales. In addition, the amplitude of short-time-scale variations is significantly anti-correlated with black hole mass and luminosity. We interpret the optical flux fluctuations as resulting from thermal fluctuations that are driven by an underlying stochastic process, such as a turbulent magnetic field. Furthermore, the intranight variations in optical flux implied by our empirical model are ≲ 0.02 mag, consistent with current microvariability observations of radio-quiet quasars. Our stochastic model is therefore able to unify both long and short time scale optical variations in radio-quiet quasars as resulting from the same underlying process, while radio-loud quasars have an additional variability component that operates on time scales ≲ 1 day.
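For concreteness, the continuous-time stochastic process used in this kind of analysis is a first-order continuous autoregressive (CAR(1)) process, also known as a damped random walk or Ornstein-Uhlenbeck process, with characteristic time scale tau and driving amplitude sigma. The following minimal Python/NumPy sketch, with illustrative parameter values rather than anything fitted in the paper, simulates such a light curve on an irregular time grid using the exact conditional distribution between successive epochs:

```python
import numpy as np

def simulate_drw(times, tau, sigma, mean_mag, seed=42):
    """Simulate a damped random walk (Ornstein-Uhlenbeck) light curve.

    times    : increasing array of observation times (days)
    tau      : characteristic (damping) time scale (days)
    sigma    : amplitude of the driving short-time-scale variability
    mean_mag : long-term mean magnitude
    """
    rng = np.random.default_rng(seed)
    mags = np.empty_like(times)
    # Start from the stationary distribution, variance sigma^2 * tau / 2.
    mags[0] = mean_mag + rng.normal(0.0, sigma * np.sqrt(tau / 2.0))
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        decay = np.exp(-dt / tau)  # memory of the previous epoch
        var = 0.5 * sigma**2 * tau * (1.0 - decay**2)
        mags[i] = (mean_mag + decay * (mags[i - 1] - mean_mag)
                   + rng.normal(0.0, np.sqrt(var)))
    return mags

# Irregular, survey-like cadence over ~3 years (illustrative values only).
t = np.sort(np.random.default_rng(0).uniform(0.0, 1000.0, 200))
lc = simulate_drw(t, tau=200.0, sigma=0.01, mean_mag=19.0)
```

Because the conditional update is exact, the simulation involves no discretization error regardless of how uneven the sampling is, which is the same property that makes the likelihood of the observed light curve tractable.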
The likelihood ratio test (LRT) and the related F test, popularized in astrophysics by Eadie et al. (1971), Bevington (1969), Lampton, Margon, and Bowyer (1976), Cash (1979), and Avni et al. (1978), do not (even asymptotically) adhere to their nominal χ² and F distributions in many statistical tests common in astrophysics, thereby casting many marginal line or source detections and non-detections into doubt. Although these references illustrate the many legitimate uses of the statistics, in some important cases it can be impossible to compute the correct false positive rate. For example, it has become common practice to use the LRT or the F test to detect a line in a spectral model or a source above background, despite the lack of certain required regularity conditions. (These applications were not originally suggested by Cash (1979) or by Bevington (1969).) In these and other settings that involve testing a hypothesis that is on the boundary of the parameter space, contrary to common practice, the nominal χ² distribution for the LRT or the F distribution for the F test should not be used. In this paper, we characterize an important class of problems where the LRT and the F test fail, and we illustrate this non-standard behavior. We briefly sketch several possible acceptable alternatives, focusing on Bayesian posterior predictive p-values, which we present in some detail because they are a simple, robust, and intuitive approach. (A p-value is the probability, under the null model, of observing a value of the test statistic as extreme as or more extreme than the value actually observed; small p-values are taken as evidence against the null model, and posterior predictive p-values are a Bayesian analogue.) This alternative method is illustrated using the gamma-ray burst of May 8, 1997 (GRB 970508) to investigate the presence of an Fe K emission line during the initial phase of the observation. There are many legitimate uses of the LRT and the F test in astrophysics, and even when these tests are inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). Nevertheless, there are numerous cases of inappropriate use of the LRT and similar tests in the literature, bringing substantive scientific results into question.
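To see why the boundary matters, consider a toy problem (my construction, not the paper's GRB analysis): testing whether a non-negative signal amplitude is zero. Because the null value sits on the edge of the parameter space, the LRT statistic follows a 50:50 mixture of a point mass at zero and a χ² distribution with one degree of freedom, not the nominal χ² with one degree of freedom, so the nominal cutoff gives the wrong false positive rate. A short Monte Carlo check in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sim = 50, 100_000

# Null model: y ~ N(0, 1); alternative: y ~ N(mu, 1) with mu >= 0.
# The MLE of mu is max(0, ybar), so the LRT statistic is n * max(0, ybar)^2.
ybar = rng.normal(0.0, 1.0, (n_sim, n)).mean(axis=1)
lrt = n * np.maximum(0.0, ybar) ** 2

crit = stats.chi2.ppf(0.95, df=1)  # nominal 5% chi-squared_1 cutoff
print("nominal false-positive rate: 0.050")
print("actual  false-positive rate:", (lrt > crit).mean())  # ~0.025
```

In this particular toy case the nominal test is merely conservative; the point is that the calibration cannot be trusted at a boundary, and the actual behavior must be determined case by case, for instance by simulation or by posterior predictive checks.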
The Chandra Source Catalog (CSC) is a general-purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public Advanced CCD Imaging Spectrometer (ACIS) imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲ 30″. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲ 1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected. In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes these models extremely flexible and able to reproduce a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing the models on a statistically rigorous foundation, and we develop a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate the use, applicability, and interpretation of these models.
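As an illustration of the PSD expression, the spectral density of a CARMA(p, q) process can be evaluated as sigma² times the ratio of the squared moduli of the moving-average and autoregressive characteristic polynomials at frequency 2πif. The Python sketch below uses made-up coefficients (chosen only so the autoregressive roots are complex and produce a Lorentzian quasi-periodic peak), not values fitted to any of the objects in the paper:

```python
import numpy as np

def carma_psd(freqs, ar_coefs, ma_coefs, sigma):
    """PSD of a CARMA(p, q) process.

    freqs    : frequencies at which to evaluate the PSD (e.g., 1/day)
    ar_coefs : [alpha_0, ..., alpha_p], lowest order first
    ma_coefs : [beta_0, ..., beta_q], lowest order first
    sigma    : amplitude of the driving white noise
    """
    w = 2.0j * np.pi * freqs
    num = np.polyval(ma_coefs[::-1], w)  # sum_j beta_j  * (2*pi*i*f)^j
    den = np.polyval(ar_coefs[::-1], w)  # sum_k alpha_k * (2*pi*i*f)^k
    return sigma**2 * np.abs(num) ** 2 / np.abs(den) ** 2

# Illustrative CARMA(2,1): complex AR roots give a quasi-periodic
# oscillation, i.e., a Lorentzian peak at nonzero frequency.
f = np.logspace(-3, 1, 500)
psd = carma_psd(f, ar_coefs=[0.04, 0.1, 1.0], ma_coefs=[1.0, 2.0], sigma=0.5)
```

Partial-fraction expansion of this same ratio over the autoregressive roots is what yields the sum-of-Lorentzians form mentioned in the abstract.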
A commonly used measure to summarize the nature of a photon spectrum is the so-called hardness ratio, which compares the numbers of counts observed in different passbands. The hardness ratio is especially useful to distinguish between and categorize weak sources as a proxy for detailed spectral fitting. However, in this regime classical methods of error propagation fail, and the estimates of spectral hardness become unreliable. Here we develop a rigorous statistical treatment of hardness ratios that properly deals with detected photons as independent Poisson random variables and correctly deals with the non-Gaussian nature of the error propagation. The method is Bayesian in nature and thus can be generalized to carry out a multitude of source-population-based analyses. We verify our method with simulation studies and compare it with the classical method. We apply this method to real-world examples, such as the identification of candidate quiescent low-mass X-ray binaries in globular clusters and tracking the time evolution of a flare on a low-mass star.
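As a minimal sketch of the idea (ignoring background contamination and instrument response, and adopting a Jeffreys-type gamma prior as an illustrative choice rather than reproducing the paper's full model), the posterior distribution of a hardness ratio can be obtained by Monte Carlo draws of the band intensities:

```python
import numpy as np

def hardness_ratio_posterior(soft_counts, hard_counts,
                             n_draws=100_000, seed=0):
    """Monte Carlo posterior for HR = (H - S) / (H + S).

    Counts in each band are treated as independent Poisson variables;
    with a Jeffreys prior, each band intensity has a Gamma(N + 1/2, 1)
    posterior. Background and instrument effects are ignored here.
    """
    rng = np.random.default_rng(seed)
    lam_s = rng.gamma(soft_counts + 0.5, 1.0, n_draws)
    lam_h = rng.gamma(hard_counts + 0.5, 1.0, n_draws)
    return (lam_h - lam_s) / (lam_h + lam_s)

# A weak source: 3 soft counts, 8 hard counts.
hr = hardness_ratio_posterior(3, 8)
print("posterior median HR:", np.median(hr))
print("68% credible interval:", np.percentile(hr, [16, 84]))
```

Because the draws respect the Poisson nature of the counts, the resulting credible interval stays within [-1, 1] and remains well defined even when one band has very few or zero counts, which is exactly the regime where Gaussian error propagation breaks down.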