2018
DOI: 10.1051/0004-6361/201732134
Machine learning in APOGEE

Abstract: Context. The volume of data generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra, which is perfect for testing such alternatives. Aims. Our rese…

Cited by 22 publications (6 citation statements); references 40 publications.
“…The project is collecting hundreds of thousands of stellar spectra with high signal-to-noise ratios across the Milky Way, focusing on the regions where dust causes dramatic extinction at optical wavelengths, namely the disk and the central parts of the Galaxy. The APOGEE observations are providing a chemical map of our Galaxy with unprecedented quality (Badenes et al. 2018; Fernández-Alvar et al. 2018; Fu et al. 2018; García-Dias et al. 2018; García-Pérez et al. 2018; Hayes et al. 2018; Mackereth et al. 2019; Palicio et al. 2018; Souto et al. 2018; Weinberg et al. 2018; Wilson et al. 2018).…”
Section: Introduction
confidence: 99%
“…The v-measure score and homogeneity score are invariant under permutations of the labels, but the accuracy score requires all the clusters to be cross-matched. In this case, we matched each group of stars found by the unsupervised tool to the star cluster with the highest number of member stars inside the group, as was done in Sánchez Almeida & Allende Prieto (2013) and García-Dias et al. (2018). However, in this work, when the number of clusters in the real dataset did not match the number of clusters in the predicted model, or when the objects in one group did not match any of the available clusters, we assigned the group to the cluster with the highest number of coincident objects, even when the cluster had previously been assigned to another group.…”
Section: Clustering Algorithms
confidence: 99%
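The cross-matching procedure described in the excerpt above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: each predicted group is mapped to the true cluster contributing the most of its members (reusing a cluster when counts do not line up), after which a plain accuracy can be computed. The function name and toy labels are invented for the example.

```python
import numpy as np

def match_groups_to_clusters(true_labels, pred_labels):
    """Map each predicted group to the true cluster that contributes
    the most members to it, then return the resulting accuracy.
    A true cluster may be assigned to more than one group, as in the
    fallback described in the excerpt."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    mapping = {}
    for g in np.unique(pred_labels):
        members = true_labels[pred_labels == g]
        values, counts = np.unique(members, return_counts=True)
        # Majority vote: the cluster with the most coincident objects.
        mapping[g] = values[np.argmax(counts)]
    remapped = np.array([mapping[g] for g in pred_labels])
    return np.mean(remapped == true_labels)

# Toy example: groups 5, 7, 9 map to clusters 0, 1, 2; one star is misplaced.
true = [0, 0, 0, 1, 1, 2, 2, 2]
pred = [5, 5, 5, 7, 7, 9, 9, 7]
print(match_groups_to_clusters(true, pred))  # 0.875
```

Unlike the Hungarian (optimal one-to-one) assignment, this greedy majority-overlap mapping still yields a score when the predicted and true cluster counts differ, which is the situation the excerpt addresses.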
“…For example, supervised spectral classification has been adopted in works such as those by Bailer-Jones et al. (1998); Singh et al. (1998); Bailer-Jones (2001); Rodríguez et al. (2004); Giridhar et al. (2006); Manteiga et al. (2009), and Navarro et al. (2012). Unsupervised spectral classification was also explored in works such as Sánchez Almeida & Allende Prieto (2013); Bovy (2017); Traven et al. (2017); Valentini et al. (2017); García-Dias et al. (2018); Reis et al. (2018), and Price-Jones & Bovy (2019).…”
Section: Introduction
confidence: 99%
“…Among all classification methods, K-means clustering has been the most commonly used in analyzing astronomical data, e.g., searching for extremely metal-poor galaxies (Sánchez Almeida & Allende Prieto 2013) and classifying stellar spectra obtained by SDSS/SEGUE and APOGEE (Sánchez Almeida et al. 2016; García-Dias et al. 2018). Classical methods including K-means belong to "hard classification," which adopts a nonprobabilistic model and deduces the classification result from a decision function, so that each sample is definitely assigned to exactly one class.…”
Section: Introduction
confidence: 99%
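The "hard classification" character of K-means noted in the excerpt above can be seen in a minimal sketch: each sample is assigned to exactly one cluster by a nonprobabilistic nearest-centroid rule, with no membership probabilities. The toy 4-D "spectral" features and the function name are invented for illustration; this is Lloyd's algorithm in plain NumPy, not the paper's pipeline.

```python
import numpy as np

def kmeans_hard(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm. Returns one integer label per sample:
    a hard assignment to the nearest centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Hard assignment: each point belongs only to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Two well-separated toy "spectral" groups in a 4-D feature space.
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)),
               rng.normal(5.0, 0.1, (20, 4))])
labels = kmeans_hard(X, k=2)
# With hard labels, each well-separated group ends up internally homogeneous.
print(len(set(labels[:20])), len(set(labels[20:])))  # 1 1
```

A soft (probabilistic) alternative such as a Gaussian mixture model would instead return a membership probability per cluster for every sample, which is the contrast the excerpt draws.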