Systematic comparison of neural networks used in discovering strong gravitational lenses
- Authors
More, Anupreeta, Canameras, Raoul, Jaelani, Anton T., Shu, Yiping, Ishida, Yuichiro, Wong, Kenneth C., Inoue, Kaiki Taro, Schuldt, Stefan, and Sonnenfeld, Alessandro
- Subjects
Astrophysics - Astrophysics of Galaxies, Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
Efficient algorithms are being developed to search for strong gravitational lens systems, driven by increasingly large imaging surveys. Over the last few years, neural networks have been used successfully to discover galaxy-scale lens systems in imaging surveys such as the Kilo-Degree Survey, the Hyper Suprime-Cam (HSC) Survey, and the Dark Energy Survey. Thus, it has become imperative to understand how these networks compare, what their strengths are, and what role the training datasets play, since most of the networks use supervised learning algorithms. In this work, we present the first-of-its-kind systematic comparison and benchmarking of networks from four teams that have analysed the HSC Survey data. Each team designed its training samples and developed its neural networks independently, but coordinated a priori in reserving specific datasets strictly for test purposes. The test sample consists of mock lenses, real (candidate) lenses, and real non-lenses gathered from various sources to benchmark and characterise the performance of each network. While each team's network performed much better on its own constructed test samples than on those from the other teams, all networks performed comparably on the test sample of real (candidate) lenses and non-lenses. We also investigate the impact of swapping the training samples amongst the teams while retaining the same network architectures, and find that this improved the performance of some networks. These results have direct implications for the measures to be taken for lens searches with upcoming imaging surveys such as the Rubin Observatory Legacy Survey of Space and Time, Roman, and Euclid.
- Comment
13 pages, 8 figures
- Published
2024
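
The cross-team benchmarking described in the abstract amounts to scoring a shared, held-out test set with each team's classifier and comparing a common figure of merit. Below is a minimal sketch of that logic, not the authors' code: the team names, the `fake_network_scores` helper, the skill parameters, and the synthetic labels are all placeholders, and ROC AUC plus TPR at a fixed low FPR are assumed metrics chosen for illustration.

```python
# Minimal sketch (not the paper's implementation) of benchmarking several
# lens-finding classifiers on one shared test set using synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)

# Synthetic stand-in for a shared test set: 1 = (mock or candidate) lens, 0 = non-lens.
y_true = rng.integers(0, 2, size=2000)

def fake_network_scores(y, skill):
    """Placeholder for one team's network output; larger `skill` separates classes better."""
    return y + rng.normal(0.0, 1.0 / skill, size=y.size)

# Hypothetical per-team classifiers, each represented only by its scores on the shared set.
teams = {"team_A": 2.0, "team_B": 1.5, "team_C": 1.2, "team_D": 1.0}

for name, skill in teams.items():
    scores = fake_network_scores(y_true, skill)
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, _ = roc_curve(y_true, scores)
    # TPR at a fixed low FPR is a common figure of merit when a pure candidate sample matters.
    tpr_at_1pct = np.interp(0.01, fpr, tpr)
    print(f"{name}: AUC = {auc:.3f}, TPR @ FPR=1% = {tpr_at_1pct:.3f}")
```

The training-sample swap the abstract mentions would correspond to retraining each architecture on another team's training set and re-running this same evaluation, so that any change in the metrics isolates the effect of the data rather than the architecture.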