Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm WAVDETECT, part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e. flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis.
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.

Footnotes:
4. The CIAO software package may be downloaded from http://asc.harvard.edu/ciao/. WAVDETECT is composed of WTRANSFORM, a source detector, and WRECON, a source list generator; these programs may be run separately.
5. While it can operate to a limited extent if nothing at all is known about the PSF, our algorithm is most effective if characteristic PSF sizes, e.g. the radii of circles containing 50% of the encircled energy at different off-axis angles, are computable.
12. These maps are later combined into a single map used in the calculation of source properties. See §3.2.1.
13. If one does not provide an exposure map, a flat one is assumed, to account for the edge of the FOV.
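The correlation step described above can be illustrated with a minimal sketch: build a 2-D Marr ("Mexican Hat") kernel at a chosen scale, correlate it with a binned counts image, and look for large coefficients near sources. This is not the WAVDETECT implementation; the function names, the kernel truncation radius, and the toy image are all illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat_2d(sigma, half_size):
    """2-D Marr ("Mexican Hat") wavelet: (2 - r^2/sigma^2) * exp(-r^2 / (2 sigma^2))."""
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    r2 = (x**2 + y**2) / sigma**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_correlate(image, sigma):
    """Correlate binned image data with a scaled wavelet.
    The kernel is symmetric, so convolution and correlation coincide."""
    kernel = mexican_hat_2d(sigma, half_size=int(5 * sigma))
    return fftconvolve(image, kernel, mode="same")

# Toy example: flat Poisson background plus one injected point source.
rng = np.random.default_rng(0)
img = rng.poisson(1.0, size=(64, 64)).astype(float)
img[32, 32] += 50.0

coeffs = wavelet_correlate(img, sigma=2.0)
# Because the wavelet has (near-)zero mean, the flat background largely
# cancels and the coefficient map peaks near the injected source.
peak = np.unravel_index(np.argmax(coeffs), coeffs.shape)
```

In the full algorithm this step is repeated over a sequence of wavelet scales, and each coefficient is compared against its sampling distribution given the local background rather than against a simple maximum.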
The ever-increasing quality and complexity of astronomical data underscores the need for new and powerful data analysis applications. This need has led to the development of Sherpa, a modeling and fitting program in the CIAO software package that enables the analysis of multi-dimensional, multi-wavelength data. In this paper, we present an overview of Sherpa's features, which include: support for a wide variety of input and output data formats, including the new Model Descriptor List (MDL) format; a model language which permits the construction of arbitrarily complex model expressions, including ones representing instrument characteristics; a wide variety of fit statistics and methods of optimization, model comparison, and parameter estimation; multi-dimensional visualization, provided by ChIPS; and new interactive analysis capabilities provided by embedding the S-Lang interpreted scripting language. We conclude by showing example Sherpa analysis sessions.
We use the multi-epoch radial velocities acquired by the APOGEE survey to perform a large-scale statistical study of stellar multiplicity for field stars in the Milky Way, spanning the evolutionary phases between the main sequence and the red clump. We show that the distribution of maximum radial velocity shifts (ΔRV_max) for APOGEE targets is a strong function of log g, with main sequence stars showing ΔRV_max as high as ∼300 km s^−1, and steadily dropping down to ∼30 km s^−1 for log g ∼ 0, as stars climb up the Red Giant Branch (RGB). Red clump stars show a distribution of ΔRV_max values comparable to that of stars at the tip of the RGB, implying they have similar multiplicity characteristics. The observed attrition of high-ΔRV_max systems in the RGB is consistent with a lognormal period distribution in the main sequence and a multiplicity fraction of 0.35, which is truncated at an increasing period as stars become physically larger and undergo mass transfer after Roche Lobe Overflow during H shell burning. The ΔRV_max distributions also show that the multiplicity characteristics of field stars are metallicity dependent, with metal-poor ([Fe/H] ≲ −0.5) stars having a multiplicity fraction a factor of 2-3 higher than metal-rich ([Fe/H] ≳ 0.0) stars. This has profound implications for the formation rates of interacting binaries observed by astronomical transient surveys and gravitational wave detectors, as well as the habitability of circumbinary planets.
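The ΔRV_max statistic used above is simple to state concretely: for a set of per-epoch radial velocities of one target, it is the largest pairwise velocity shift, which for scalar RVs reduces to max(RV) − min(RV). The sketch below is illustrative only; the epoch values are made up, and the APOGEE analysis additionally accounts for measurement uncertainties and visit selection.

```python
import numpy as np

def delta_rv_max(rvs):
    """Maximum radial-velocity shift among all epoch pairs.
    For a 1-D array of per-epoch RVs this reduces to max(RV) - min(RV)."""
    rvs = np.asarray(rvs, dtype=float)
    return float(rvs.max() - rvs.min())

# Hypothetical multi-epoch RVs (km/s) for a single target.
epochs = [-14.2, 3.7, 11.9, -8.4]
shift = delta_rv_max(epochs)  # ≈ 26.1 km/s; large shifts suggest a close binary
```

A nearly single star shows ΔRV_max at the level of the measurement noise, while a short-period binary can produce shifts of tens to hundreds of km/s, which is why the upper envelope of this statistic traces multiplicity.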
Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing twelve photo-z algorithms applied to mock data produced for the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). By supplying perfect prior information, in the form of the complete template library and a representative training set as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/under-breadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate (CDE) loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
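The CDE loss singled out above can be estimated from gridded PDFs and true redshifts alone: up to an additive constant that does not depend on the estimator, it is the mean of ∫ p̂(z|x)² dz − 2 p̂(z_true|x) over objects. The sketch below is a minimal illustration under assumed conventions (a uniform redshift grid, nearest-grid-point evaluation at z_true), not the metric implementation used in the comparison.

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Empirical CDE loss (up to an estimator-independent constant):
    mean_i [ integral p_hat(z|x_i)^2 dz  -  2 * p_hat(z_true_i | x_i) ].
    pdf_grid: (n_obj, n_z) PDFs on a uniform z_grid; z_true: (n_obj,)."""
    dz = z_grid[1] - z_grid[0]                      # assumes a uniform grid
    term1 = (pdf_grid ** 2).sum(axis=1) * dz        # integral of p_hat^2, per object
    idx = np.abs(z_grid[None, :] - z_true[:, None]).argmin(axis=1)
    term2 = pdf_grid[np.arange(len(z_true)), idx]   # p_hat at the true redshift
    return float(np.mean(term1 - 2.0 * term2))

# Toy check: a PDF centred on the true redshift scores better (lower)
# than an equally narrow PDF offset from it.
z = np.linspace(0.0, 3.0, 301)

def norm_pdf(mu, sigma=0.05):
    p = np.exp(-0.5 * ((z - mu) / sigma) ** 2)
    return (p / (p.sum() * (z[1] - z[0])))[None, :]

z_true = np.array([1.0])
loss_centred = cde_loss(norm_pdf(1.0), z, z_true)
loss_offset = cde_loss(norm_pdf(1.5), z, z_true)
```

Unlike point-estimate metrics, this loss rewards both placing probability mass at the true redshift and not spreading it where the truth is unlikely, which is why it can rank PDF estimators without access to true PDFs.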