We have publicly released a blinded mix of simulated SNe, with types (Ia, Ib, Ic, II) selected in proportion to their expected rates. The simulation is realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point spread function, and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia-type SNe are based on spectroscopically confirmed light curves, including unpublished non-Ia samples donated by the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). We challenge scientists to run their classification algorithms and report a type for each SN. A spectroscopically confirmed subset is provided for training. The goals of this challenge are to (1) learn the relative strengths and weaknesses of the different classification algorithms, (2) use the results to improve classification algorithms, and (3) understand what spectroscopically confirmed subsets are needed to properly train these algorithms. The challenge is available at www.hep.anl.gov/SNchallenge, and the due date for classifications is May 1, 2010.
Supernova cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey-II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for statistical inference from data containing multiple species when one has the probability that each data point belongs to a given species; in this context the species are different types of supernovae, with probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 supernovae, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric supernova cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples cut using typical selection criteria. The latter are typically either biased due to contamination or yield significantly larger contours in the cosmological parameters due to the smaller data sets. We then apply BEAMS to the 792 SDSS-II photometric supernovae with spectroscopic host-galaxy redshifts. In this case, BEAMS reduces the area of the (Ω_m, Ω_Λ) contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 supernovae). Assuming flatness, the constraint on the matter density from applying BEAMS to the photometric SDSS-II data is Ω_m^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric supernova surveys such as LSST.
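The heart of BEAMS, a likelihood in which each object contributes a probability-weighted mixture over species instead of being hard-cut by type, can be illustrated on a toy one-parameter problem. Everything below (the Gaussian species models, the offset, the probability noise) is a hypothetical stand-in, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the SN problem (all numbers hypothetical): species A
# ("Ia-like") scatters tightly about the parameter of interest mu, while
# the contaminant B is offset and broader.
mu_true, sig_a, off_b, sig_b = 0.0, 0.1, 1.0, 0.5
n = 1000
is_a = rng.random(n) < 0.7
x = np.where(is_a,
             rng.normal(mu_true, sig_a, n),
             rng.normal(mu_true + off_b, sig_b, n))

# Imperfect per-object membership probabilities, as a light-curve
# classifier might supply them.
p_a = np.clip(np.where(is_a, 0.9, 0.1) + rng.normal(0.0, 0.05, n), 0.01, 0.99)

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

def beams_loglike(mu):
    # Each object contributes a probability-weighted sum of the two
    # species likelihoods, so no hard cut on type is ever made.
    return np.sum(np.log(p_a * gauss(x, mu, sig_a)
                         + (1.0 - p_a) * gauss(x, mu + off_b, sig_b)))

grid = np.linspace(-0.5, 0.5, 1001)
mu_hat = grid[np.argmax([beams_loglike(m) for m in grid])]
```

Even with 30% contamination and noisy probabilities, the mixture likelihood recovers mu without discarding any objects, which is the effect the abstract reports for the cosmological contours.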
Future photometric supernova surveys will produce vastly more candidates than can be followed up spectroscopically, highlighting the need for effective classification methods based on light curves alone. Here we introduce boosting and kernel density estimation techniques which have minimal astrophysical input, and compare their performance on 20 000 simulated Dark Energy Survey light curves. We demonstrate that these methods perform very well provided a representative sample of the full population is used for training. Interestingly, we find that they do not require the redshift of the host galaxy or candidate supernova. However, training on the types of spectroscopic subsamples currently produced by supernova surveys leads to poor performance due to the resulting bias in training, and we recommend that special attention be given to the creation of representative training samples. We show that given a typical non-representative training sample, S, one can expect to pull out a representative subsample of about 10 per cent of the size of S, which is large enough to outperform the methods trained on all of S.
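The sensitivity to non-representative training described above is easy to reproduce on synthetic data. The sketch below is illustrative and is not the paper's pipeline: it uses scikit-learn's gradient boosting on two invented features, with a magnitude cut standing in for spectroscopic follow-up selection:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Invented features standing in for light-curve summaries: a "brightness"
# (magnitude, smaller = brighter) that also drives spectroscopic follow-up,
# and a shape feature carrying most of the class signal. All numbers are
# hypothetical.
n = 4000
y = (rng.random(n) < 0.5).astype(int)          # 1 = "Ia", 0 = "non-Ia"
brightness = rng.normal(22.0 - 1.5 * y, 1.0)   # "Ia" slightly brighter
curve_shape = rng.normal(1.0 * y, 0.8)
X = np.column_stack([brightness, curve_shape])

idx = rng.permutation(n)
pool, test = idx[:3000], idx[3000:]

# Representative training sample: a random subset of the pool.
rep = pool[:1000]
acc_rep = GradientBoostingClassifier().fit(X[rep], y[rep]).score(X[test], y[test])

# Biased "spectroscopic" sample: only the brightest pool objects get
# followed up, skewing both the class balance and the feature
# distributions seen during training.
bright = pool[np.argsort(brightness[pool])[:1000]]
acc_bias = GradientBoostingClassifier().fit(X[bright], y[bright]).score(X[test], y[test])
# Expect acc_bias < acc_rep: the magnitude limit distorts the training set.
```

The same classifier, trained on the magnitude-limited subset, performs noticeably worse on the full test population, mirroring the bias the abstract warns about.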
Classifying transients from multi-band light curves is a challenging but crucial problem in the era of GAIA and LSST, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a nonparametric classifier that predicts a transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology (a characterization of functional data using hierarchical wavelet coefficients) and the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, and a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is nonparametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve timespans greater than 100 days (roughly 55% of the dataset). For such data, we obtain a Ia efficiency of 80.5% and a purity of 82.4%, yielding a highly competitive challenge score of 0.49. This indicates that our "model-blind" approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
Sequential (or online) quantile estimation refers to incorporating observations into quantile estimates incrementally, thus furnishing an online estimate of one or more quantiles at any given point in time. This area is relevant to the analysis of data streams and to the one-pass analysis of massive data sets; applications include network traffic and latency analysis, real-time fraud detection, and high-frequency trading. We introduce new techniques for online quantile estimation based on Hermite series estimators, in the settings of both static and dynamic quantile estimation. In the static setting we apply the existing Gauss-Hermite expansion in a novel manner; in particular, we exploit the fact that the Gauss-Hermite coefficients can be updated sequentially. To treat dynamic quantile estimation we introduce a novel expansion with an exponentially weighted estimator for the Gauss-Hermite coefficients, which we term the Exponentially Weighted Gauss-Hermite (EWGH) expansion. These algorithms go beyond existing sequential quantile estimation algorithms in that they allow arbitrary quantiles (as opposed to pre-specified quantiles) to be estimated at any point in time. In doing so we provide a solution to online distribution function and quantile function estimation on data streams. In particular, we derive an analytical expression for the CDF and prove consistency results for it under certain conditions, and we analyse the associated quantile estimator. Simulation studies and tests on real data show the Gauss-Hermite based algorithms to be competitive with a leading existing algorithm.
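The sequential-update property highlighted above, that Gauss-Hermite coefficients are sample means of Hermite functions and so can be maintained in one pass, can be sketched as follows. The truncation order, forgetting factor, and test distribution are illustrative choices, and the final CDF-inversion step for quantiles is omitted:

```python
import numpy as np

def hermite_functions(x, K):
    """Evaluate the orthonormal Hermite functions psi_0..psi_K at x,
    via the standard three-term recurrence."""
    x = np.asarray(x, dtype=float)
    psi = np.empty((K + 1,) + x.shape)
    psi[0] = np.pi ** -0.25 * np.exp(-0.5 * x * x)
    if K >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for k in range(1, K):
        psi[k + 1] = (np.sqrt(2.0 / (k + 1)) * x * psi[k]
                      - np.sqrt(k / (k + 1.0)) * psi[k - 1])
    return psi

K = 6          # truncation order (hypothetical choice)
lam = 0.01     # forgetting factor for the exponentially weighted variant
a_static = np.zeros(K + 1)  # running-mean coefficients (static setting)
a_ew = np.zeros(K + 1)      # exponentially weighted coefficients (dynamic)

rng = np.random.default_rng(0)
for n, x in enumerate(rng.standard_normal(20000), start=1):
    psi_x = hermite_functions(x, K)
    a_static += (psi_x - a_static) / n           # one-pass mean update
    a_ew = (1.0 - lam) * a_ew + lam * psi_x      # EWGH-style update

def density_estimate(t):
    """Truncated Hermite-series density estimate at t."""
    return float(a_static @ hermite_functions(t, K))
```

Because each coefficient is just a (possibly exponentially weighted) running mean, an arbitrary quantile can be requested at any time by integrating the current series to a CDF and inverting it numerically, which is what distinguishes this family of estimators from fixed-quantile streaming algorithms.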