2021
DOI: 10.1093/mnras/stab1513
Benchmarking and scalability of machine-learning methods for photometric redshift estimation

Abstract: Obtaining accurate photometric redshift (photo-z) estimations is an important aspect of cosmology, remaining a prerequisite of many analyses. In creating novel methods to produce photo-z estimations, there has been a shift towards using machine learning techniques. However, there has not been as much of a focus on how well different machine learning methods scale or perform with the ever-increasing amounts of data being produced. Here, we introduce a benchmark designed to analyse the performance and scalabilit…

Cited by 25 publications (13 citation statements)
References 37 publications
“…Inception blocks proposed by Szegedy et al (2014) can extract information at different scales in parallel and combine it effectively. Pasquet et al (2019), Henghes et al (2021), and our previous work build their networks on inception blocks to predict photometric redshifts from images and achieve quite accurate results. Therefore, we construct Bayesian inception blocks with flipout layers.…”
Section: Network Architecturementioning
confidence: 99%
“…The larger size of the Galaxy Zoo dataset allowed more general learning of galactic features and helped increase the predictiveness of the XAI-based methods. A smaller dataset would reduce the performance of the initial model (Henghes et al 2021) and thus the ability of XAI methods to extract galactic features.…”
Section: Discussionmentioning
confidence: 99%
“…A convolutional neural network (CNN) and an inception-module CNN, which both used the image data as input; a random forest (RF) and extremely randomised trees (ERT), which had previously been found to be the best-performing traditional methods (Henghes et al 2021) and which used only magnitude features; and two experimental mixed-input models, which combined a CNN or inception-module CNN with a multi-layer perceptron to use both the image data and magnitude features as inputs.…”
Section: Methodsmentioning
confidence: 99%
“…Benchmarking is the process of running a set of standardised tests to determine the relative performance of an object; in this case, it meant iteratively running the training and testing of different machine learning algorithms. Here, benchmarking was performed in a similar vein to Henghes et al (2021): we recorded the time taken throughout the machine learning process and varied the size of the training dataset so as to compare the efficiency of the various models.…”
Section: Benchmarkingmentioning
confidence: 99%
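The benchmarking procedure quoted above — timing training while the training-set size grows — can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: the random forest, the synthetic "magnitude" features, and the chosen sample sizes are all assumptions standing in for the real pipeline.

```python
# Hypothetical benchmarking sketch: time model training as the
# training-set size is varied, as described in the citing paper.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for magnitude features and redshift targets.
n_total, n_features = 5000, 5
X = rng.normal(size=(n_total, n_features))
z = rng.uniform(0.0, 1.0, size=n_total)

timings = {}
for n_train in (500, 1000, 2000, 4000):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    t0 = time.perf_counter()
    model.fit(X[:n_train], z[:n_train])          # train on the subset
    timings[n_train] = time.perf_counter() - t0  # record wall-clock time

for n_train, dt in timings.items():
    print(f"n_train={n_train:5d}  fit time={dt:.3f}s")
```

Plotting `timings` against `n_train` (or fitting a power law to it) is then what allows the scaling of different models to be compared on equal footing.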