2017
DOI: 10.1051/0004-6361/201629159

A neural network gravitational arc finder based on the Mediatrix filamentation method

Abstract: Context. Automated arc detection methods are needed to scan the ongoing and next-generation wide-field imaging surveys, which are expected to contain thousands of strong lensing systems. Arc finders are also required for a quantitative comparison between predictions and observations of arc abundance. Several algorithms have been proposed to this end, but machine learning methods have remained a relatively unexplored step in the arc-finding process. Aims. In this work we introduce a new arc finder based on p…


Cited by 38 publications (31 citation statements). References 95 publications (173 reference statements).
“…We have three dense layers (units: 32, 64, and 128), five convolutional layers (filter sizes: 8, 16, 32, 64, and 128) using the ReLU activation function (Nair & Hinton 2010), and an extra convolutional layer (filter: 1) using the softmax function (Bishop 2006), $f(z_i) = \exp(z_i)/\sum_j \exp(z_j)$, as the output for the decoder. Each convolutional layer apart from the last (output) layer is followed by an upsampling layer, which performs the opposite operation to a pooling layer and is used to recover the resolution.…”
Section: Convolutional Autoencoder (CAE)
confidence: 99%
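The layer stack quoted above is concrete enough to sketch. Below is a minimal, hypothetical tf.keras reconstruction of the described decoder; the latent size, the 8×8 starting grid, the 3×3 kernels, and the axis over which the softmax is normalised are assumptions, since the excerpt only fixes the layer counts, unit/filter sizes, and activations.

```python
# Hypothetical sketch of the decoder described in the excerpt (tf.keras).
# Only the layer counts, unit/filter sizes, and activations come from the text;
# everything else (shapes, kernel sizes, softmax axis) is a placeholder.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_decoder(latent_dim=64):
    model = models.Sequential(name="cae_decoder_sketch")
    model.add(layers.Input(shape=(latent_dim,)))
    # Three dense layers (units 32, 64, 128); ReLU is assumed here.
    for units in (32, 64, 128):
        model.add(layers.Dense(units, activation="relu"))
    # Reshape to a small feature map before the convolutional stack
    # (8 x 8 x 2 = 128 is an arbitrary choice matching the last dense layer).
    model.add(layers.Reshape((8, 8, 2)))
    # Five Conv2D layers (filters 8, 16, 32, 64, 128) with ReLU, each followed
    # by an UpSampling2D layer -- the inverse of pooling, recovering resolution.
    for filters in (8, 16, 32, 64, 128):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.UpSampling2D(size=(2, 2)))
    # Extra 1-filter convolution with softmax output,
    # f(z_i) = exp(z_i) / sum_j exp(z_j). With a single channel the softmax
    # must act over the spatial axes to be non-trivial; this axis choice is an
    # assumption, not stated in the excerpt.
    model.add(layers.Conv2D(1, 3, padding="same"))
    model.add(layers.Softmax(axis=(1, 2)))
    return model

decoder = build_decoder()
decoder.summary()  # 8x8 grid upsampled five times -> 256x256 output map
```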
“…The Bayesian Gaussian mixture model (BGM) is a variational Gaussian mixture model (Kullback & Leibler 1951; Attias 2000; Bishop 2006) which maximises the evidence lower bound (ELBO; Kullback & Leibler 1951) in the log-likelihood. In this study, we apply the BGM from the scikit-learn library (Pedregosa et al. 2011).…”
Section: Bayesian Gaussian Mixture Model (BGM)
confidence: 99%
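For context on the quoted method, here is a minimal sketch of fitting a variational (Bayesian) Gaussian mixture with scikit-learn's BayesianGaussianMixture, the class the excerpt refers to; the toy data, number of components, and prior settings are placeholders rather than values from the citing paper.

```python
# Minimal sketch of a variational (Bayesian) Gaussian mixture fit with
# scikit-learn. The synthetic data and hyperparameters below are placeholders.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two synthetic 2-D clusters standing in for real feature vectors.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(500, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(500, 2)),
])

bgm = BayesianGaussianMixture(
    n_components=10,          # upper bound; unused components get ~zero weight
    covariance_type="full",
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
)
bgm.fit(X)                    # variational inference maximises the ELBO

labels = bgm.predict(X)       # hard cluster assignments
print(np.round(bgm.weights_, 3))  # effective mixture weights
```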
“…Chen et al [3] and Levin and Narendra [4] demonstrated that nonlinear systems can be identified using neural networks. Furthermore, free open-source libraries such as the Fast Artificial Neural Network Library (FANN) [5] for network learning have already enabled researchers in various fields to use neural networks [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. In fact, neural networks have recently been used for the identification of a wide range of nonlinear systems, including biological systems [23][24][25][26][27][28][29][30][31][32][33][34][35][36].…”
Section: Introduction
confidence: 99%
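As an illustration of the idea in the quoted passage (neural-network identification of a nonlinear system), the sketch below trains a one-step-ahead predictor on simulated input/output data; scikit-learn's MLPRegressor stands in for the FANN library, and the toy dynamics, network size, and horizon are arbitrary assumptions.

```python
# Illustrative sketch of neural-network identification of a nonlinear system:
# learn x[t+1] = f(x[t], u[t]) from simulated data. The toy plant and network
# size are arbitrary choices, not taken from the cited works.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def true_dynamics(x, u):
    """Hypothetical nonlinear plant used only to generate training data."""
    return 0.8 * np.sin(x) + 0.3 * u

# Simulate a trajectory driven by random inputs.
T = 5000
x = np.zeros(T + 1)
u = rng.uniform(-1.0, 1.0, size=T)
for t in range(T):
    x[t + 1] = true_dynamics(x[t], u[t])

# Features: current state and input; target: next state.
features = np.column_stack([x[:-1], u])
target = x[1:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(features, target)

# One-step-ahead prediction error on the training trajectory.
pred = model.predict(features)
print("RMS one-step error:", np.sqrt(np.mean((pred - target) ** 2)))
```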