Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. Achieving these objectives demands automated methods. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100,000 candidate objects as gravitational lenses or non-lenses, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVMs) and convolutional neural networks (CNNs). We find that many of the methods are easily fast enough to analyse the anticipated data flow. In test data, several methods were able to identify upwards of half the lenses, after applying thresholds on lens characteristics such as lensed image brightness, size or contrast with the lens galaxy, without making a single false-positive identification; significantly better than direct visual inspection by humans achieved. Multi-band, ground-based data are found to be better for this purpose than single-band, space-based data with lower noise and higher resolution, suggesting that multi-colour information is crucial; multi-band, space-based data will be superior to ground-based data. The most difficult challenge for a lens finder is differentiating between rare, irregular, ring-like face-on galaxies and true gravitational lenses.
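The figure of merit implied above, completeness achieved while making zero false-positive identifications, can be computed directly from classifier scores. The following is a minimal illustrative sketch (the function name and toy data are ours, not from the challenge):

```python
# Sketch: completeness at zero false positives, one way to score a
# lens finder when false positives are costly. Names and data are
# illustrative, not from the actual challenge pipeline.

def completeness_at_zero_fp(scores, labels):
    """Fraction of true lenses (label 1) recovered when the detection
    threshold is set just above the highest-scoring non-lens (label 0),
    i.e. at exactly zero false positives."""
    max_nonlens = max(s for s, l in zip(scores, labels) if l == 0)
    recovered = sum(1 for s, l in zip(scores, labels) if l == 1 and s > max_nonlens)
    return recovered / sum(labels)

# Toy example: four lenses and four non-lenses with classifier scores.
scores = [0.95, 0.90, 0.40, 0.20, 0.85, 0.30, 0.10, 0.05]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(completeness_at_zero_fp(scores, labels))  # 0.5: two lenses beat 0.85
```

Sweeping the threshold instead of pinning it at zero false positives yields the full ROC curve, which is how competing finders are usually compared.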
The degree to which the efficiency and biases of lens finders can be quantified largely depends on the realism of the simulated data on which the finders are trained.
We present an algorithm using principal component analysis (PCA) to subtract galaxies from imaging data, together with two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual images. The combined method is optimised to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual images with two different methods. The first uses a curvelet (curved wavelet) filter of the residual images to enhance any curved or ring-like feature. The resulting image is transformed into polar coordinates, centred on the lens galaxy. In these coordinates a ring becomes a line, allowing us to detect very faint rings by taking advantage of the signal-to-noise integrated along the ring. The second way of analysing the PCA-subtracted images identifies structures in the residuals and assesses whether they are lensed images according to their orientation, multiplicity, and elongation. We applied the two methods to a sample of simulated Einstein rings as they would be observed with the ESA Euclid satellite in the VIS band. The polar coordinate transform allowed us to reach a completeness of 90% for a purity of 86% as soon as the signal-to-noise ratio integrated over the ring exceeded 30, almost independently of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed and improved further using machine learning and dictionary learning methods, which would extend the capabilities of the method to more complex and diverse galaxy shapes.
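The ring-to-line idea above can be illustrated with a minimal polar resampling sketch. This is not the authors' implementation, only a pure-Python, nearest-neighbour version showing why summing along the angular axis concentrates a ring's flux at a single radius:

```python
import math

def to_polar(img, cx, cy, n_r, n_theta):
    """Nearest-neighbour resampling of a 2-D image (list of rows) onto
    an (r, theta) grid centred on (cx, cy). A ring of radius r in the
    image becomes a horizontal line in the polar grid."""
    h, w = len(img), len(img[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    polar = [[0.0] * n_theta for _ in range(n_r)]
    for i in range(n_r):
        r = r_max * i / (n_r - 1)
        for j in range(n_theta):
            t = 2 * math.pi * j / n_theta
            x = int(round(cx + r * math.cos(t)))
            y = int(round(cy + r * math.sin(t)))
            polar[i][j] = img[y][x]
    return polar

def ring_signal(polar):
    """Flux integrated over angle at each radius: a ring appears as a peak."""
    return [sum(row) for row in polar]

# Toy 21x21 residual image containing only a ring of radius 6 pixels.
img = [[0.0] * 21 for _ in range(21)]
for t in range(360):
    x = int(round(10 + 6 * math.cos(math.radians(t))))
    y = int(round(10 + 6 * math.sin(math.radians(t))))
    img[y][x] = 1.0

profile = ring_signal(to_polar(img, 10, 10, 11, 90))
print(profile.index(max(profile)))  # 6: the ring radius dominates the profile
```

A real pipeline would interpolate rather than take nearest neighbours and would compare the integrated flux at each radius to the local noise, but the geometric advantage is the same: the ring's signal-to-noise accumulates along the whole line.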
We present e-MERLIN and EVN observations which reveal unambiguous jet activity within the radio-quiet quasar HS 0810+2554. With an intrinsic flux density of 880 nJy, this is the faintest radio source ever imaged. The findings present new evidence against the idea that radio-loud and radio-quiet quasars are powered by different underlying radio emission mechanisms, showing instead that the same AGN mechanism can operate as the dominant source of radio emission even in the very lowest radio luminosity quasars. Thanks to strong gravitational lensing, our source is not only visible, but with VLBI is imaged to a scale of just 0.27 pc: the highest-resolution image ever made of a radio-quiet quasar. Brightness temperatures of at least 8.4 × 10⁶ K are associated with two highly compact components. Subsequent modelling of the lensed system has revealed that the components are linearly aligned on opposing sides of the optical quasar core, with the typical morphology of a compact symmetric object (CSO). Given that this source has been found to fall on the radio-FIR correlation, we suggest that the radio-FIR correlation cannot always be used to rule out AGN activity in favour of star-formation activity. The correlation, or at least its scatter, may conceal the coexistence of kinetic and radiative feedback modes in AGN. Modelling of the lensing mass itself points to a non-smooth mass distribution, hinting at the presence of dark matter substructure which has manifested as astrometric perturbations of the VLBI lensed images.
The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass.¹ As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100,000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey.

¹ The exact number depends on details of the source population; populations with steep number counts, notably at submillimetre wavelengths, have much higher lensing rates (e.g. Blain 1996).
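A Gabor filterbank, as used above to build SVM features, is a set of Gaussian-windowed sinusoids at several orientations and scales; arc-like lensed images respond strongly to filters aligned with the arc. Below is a minimal pure-Python sketch of generating such a bank (parameter choices are illustrative, not those of the paper):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope modulating a
    sinusoid oriented at angle theta. gamma < 1 elongates the filter,
    making it sensitive to extended, arc-like features."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate image coordinates into the filter's frame.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def gabor_bank(size, wavelengths, n_orientations, sigma):
    """Bank covering several scales and evenly spaced orientations;
    convolving an image with each filter yields one feature map."""
    return [gabor_kernel(size, w, k * math.pi / n_orientations, sigma)
            for w in wavelengths
            for k in range(n_orientations)]

bank = gabor_bank(size=15, wavelengths=[4, 8], n_orientations=6, sigma=3.0)
print(len(bank))  # 12 filters: 2 scales x 6 orientations
```

In a pipeline like the one described, each candidate cutout would be convolved with every filter in the bank and summary statistics of the responses fed to the SVM as a feature vector.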