2019
DOI: 10.1093/mnras/stz2477

Morpho-photometric redshifts

Abstract: Machine learning (ML) is one of two standard approaches (together with SED fitting) for estimating the redshifts of galaxies when only photometric information is available. ML photo-z solutions have traditionally ignored the morphological information available in galaxy images or partly included it in the form of hand-crafted features, with mixed results. We train a morphology-aware photometric redshift machine using modern deep learning tools. It uses a custom architecture that jointly trains on galaxy fluxes…
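
The "custom architecture" referred to in the abstract is of the non-sequential, two-branch kind also described in the citation statements below: a convolutional branch for the galaxy image and a dense branch for the photometric fluxes, merged into a joint redshift head. The following is a minimal sketch of that general design in Keras, assuming placeholder input shapes, band counts, and layer sizes; it is an illustration of the idea, not the published model.

```python
# Illustrative sketch only: a generic two-branch ("mixed input") network with a
# convolutional branch for multi-band galaxy cutouts and a dense branch for
# catalogue fluxes, merged before a shared redshift-regression head. All shapes
# and layer sizes are placeholder assumptions, not the published architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_morpho_photo_z_model(image_shape=(64, 64, 5), n_fluxes=5):
    # Convolutional branch: learns morphological features from image cutouts.
    img_in = layers.Input(shape=image_shape, name="image")
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)

    # Dense branch: processes the photometric fluxes / colours.
    flux_in = layers.Input(shape=(n_fluxes,), name="fluxes")
    y = layers.Dense(64, activation="relu")(flux_in)
    y = layers.Dense(64, activation="relu")(y)

    # Joint head: both branches are trained together on the redshift target.
    z = layers.concatenate([x, y])
    z = layers.Dense(64, activation="relu")(z)
    z_out = layers.Dense(1, name="photo_z")(z)

    model = Model(inputs=[img_in, flux_in], outputs=z_out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_morpho_photo_z_model()
model.summary()
```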

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
9
0

Year Published

2020
2020
2023
2023

Publication Types

Select...
5
3

Relationship

0
8

Authors

Journals

citations
Cited by 13 publications
(9 citation statements)
references
References 13 publications
(19 reference statements)
0
9
0
Order By: Relevance
“…The galaxy colors are related to both the redshift and the morphology. However, the contradictory results obtained by other researchers for and against including morphological data to increase the redshift estimate accuracy (Tagliaferri et al 2003; Singal et al 2011; Menou 2019; Wilson et al 2020) suggest that the morphological properties provide limited information to attain this goal. Therefore, we used bootstrap resampling to analyze the effect of the galaxy classification probabilities passed onto the redshift module in our neural network model.…”
Section: Effect of the Classification on the Redshift Estimate
Citation type: mentioning (confidence: 96%)
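
As an illustration of the bootstrap analysis mentioned in this statement, the sketch below shows one standard way to resample a validation set and compare redshift scatter obtained with and without the classification probabilities as inputs. The prediction arrays and the NMAD scatter statistic are assumptions made for the example, not the cited authors' code.

```python
# Minimal sketch, assuming two arrays of validation-set redshift predictions
# are already available: `z_with` (from a model using galaxy classification
# probabilities as extra inputs) and `z_without` (photometry only). Bootstrap
# resampling gives an error bar on the change in scatter attributable to the
# classification information.
import numpy as np

def nmad(z_pred, z_true):
    # Normalised median absolute deviation, a common photo-z scatter statistic.
    dz = (z_pred - z_true) / (1.0 + z_true)
    return 1.4826 * np.median(np.abs(dz - np.median(dz)))

def bootstrap_scatter_difference(z_true, z_with, z_without, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(z_true)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample galaxies with replacement
        diffs[i] = nmad(z_with[idx], z_true[idx]) - nmad(z_without[idx], z_true[idx])
    # Mean change in scatter and its bootstrap uncertainty.
    return diffs.mean(), diffs.std()
```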
“…In contrast, Tagliaferri et al (2003) include photometry, surface brightnesses, and Petrosian radii and fluxes, reducing the errors by a factor of about 30%. More recently, Menou (2019) presents a nonsequential network with two inputs and one output, a convolutional branch to analyze images, and a branch composed of a dense set of layers to deal with the photometric data, reporting an improvement in the redshift estimates. For our redshift analysis, we take into account the probability yielded by the Classification module.…”
Section: Photometric Redshifts
Citation type: mentioning (confidence: 99%)
“…Galaxy morphology from images can supplement photometric measurements when neural networks are used. More recently, some more advanced methods have been used, including the use of different flavors of deep convolutional networks to derive photometric redshift directly from multi-band images (D'Isanto & Polsterer 2018, Pasquet et al 2019), and to build a morphology-aware photo-z estimator (Menou 2019). Comparisons of performance are presented by Dahlen et al (2013), Rau et al (2015), Salvato et al (2019), and Schmidt et al (2020).…”
Section: The Photo-z Conundrum
Citation type: mentioning (confidence: 99%)
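
In the image-only approaches mentioned in this statement, one common design choice is to discretize the redshift into narrow bins and have the network output a probability distribution over them, with a point estimate taken as the distribution's expectation value. The sketch below illustrates that idea; the bin grid, image shape, and layer sizes are illustrative assumptions, not the configuration of any of the cited works.

```python
# Illustrative sketch of an image-only photo-z CNN whose output is a probability
# distribution over discrete redshift bins. Shapes, bin grid, and layer sizes
# are placeholder assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_BINS, Z_MAX = 180, 0.4                       # assumed redshift grid
bin_centres = np.linspace(0.0, Z_MAX, N_BINS)

img_in = layers.Input(shape=(64, 64, 5), name="image")   # multi-band cutout
x = layers.Conv2D(32, 3, activation="relu", padding="same")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)
pdf_out = layers.Dense(N_BINS, activation="softmax", name="z_pdf")(x)

model = Model(img_in, pdf_out)
# Trained against integer bin labels derived from spectroscopic redshifts.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def point_estimate(pdf_batch):
    # Expectation value of each galaxy's predicted redshift PDF (NumPy array in).
    return pdf_batch @ bin_centres
```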