
A multi-band AGN-SFG classifier for extragalactic radio surveys using machine learning

Authors :
Karsten, J.
Wang, L.
Margalef-Bentabol, B.
Best, P. N.
Kondapally, R.
La Marca, A.
Morganti, R.
Röttgering, H. J. A.
Vaccari, M.
Sabater, J.
Source :
A&A 675, A159 (2023)
Publication Year :
2023

Abstract

Extragalactic radio continuum surveys play an increasingly important role in galaxy evolution and cosmology studies. While radio galaxies and radio quasars dominate at the bright end, star-forming galaxies (SFGs) and radio-quiet active galactic nuclei (AGNs) are more common at fainter flux densities. Our aim is to develop a machine learning classifier that can efficiently and reliably separate AGNs and SFGs in radio continuum surveys. We perform supervised classification of SFGs versus AGNs using the Light Gradient Boosting Machine (LGBM) on three LOFAR Deep Fields (Lockman Hole, Boötes and ELAIS-N1), which benefit from a wide range of high-quality multi-wavelength data and classification labels derived from extensive spectral energy distribution (SED) analyses. Our trained model has a precision of 0.92 ± 0.01 and a recall of 0.87 ± 0.02 for SFGs. For AGNs, the model performs slightly worse, with a precision of 0.87 ± 0.02 and a recall of 0.78 ± 0.02. These results demonstrate that our trained model can successfully reproduce the classification labels derived from detailed SED analysis. The model performance decreases towards higher redshifts, mainly due to smaller training sample sizes. To make the classifier more adaptable to other radio galaxy surveys, we also investigate how it performs with poorer multi-wavelength sampling of the SED; in particular, we find that the far-infrared (FIR) and radio bands are of great importance. We also find that a higher S/N in some photometric bands leads to a significant boost in the model's performance. In addition to 150 MHz radio data, our model can also be used with 1.4 GHz radio data; converting 1.4 GHz data to 150 MHz reduces performance by about 4% in precision and 3% in recall. The final trained model is publicly available at https://github.com/Jesper-Karsten/MBASC

Comment: 14 pages, 9 figures. Accepted for publication in A&A.
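As a concrete illustration of the approach described in the abstract, the sketch below shows how an LGBM-based AGN/SFG classifier of this kind could be assembled with the lightgbm and scikit-learn Python packages, including a simple 1.4 GHz to 150 MHz flux-density conversion under an assumed power-law spectrum. This is a minimal sketch, not the authors' pipeline: the feature set, the spectral index value (alpha = -0.7) and the synthetic training data are illustrative assumptions only; the actual trained model and feature definitions are available in the MBASC repository linked above.

```python
# Minimal sketch of an LGBM AGN/SFG classifier (illustrative only).
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score


def flux_1400_to_150(s_1400, alpha=-0.7):
    """Scale a 1.4 GHz flux density to 150 MHz assuming a power-law
    spectrum S_nu ∝ nu^alpha. alpha = -0.7 is a commonly adopted value,
    not necessarily the one used in the paper."""
    return s_1400 * (150.0 / 1400.0) ** alpha


# Placeholder multi-wavelength feature table: each row is a source with
# stand-in photometric features (e.g. optical/IR/FIR fluxes) plus a radio
# flux density. Real inputs would come from the LOFAR Deep Fields catalogues.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 8))      # synthetic stand-in for multi-band features
y = rng.integers(0, 2, size=n)   # 0 = SFG, 1 = AGN (in practice from SED fits)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted decision trees via the scikit-learn-style LGBM wrapper.
clf = LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("AGN precision:", precision_score(y_test, y_pred))
print("AGN recall:   ", recall_score(y_test, y_pred))
```

With real catalogue data, the same pattern (fit on SED-derived labels, evaluate precision and recall per class) reproduces the kind of metrics quoted in the abstract; the flux_1400_to_150 helper indicates how 1.4 GHz surveys could be mapped onto the 150 MHz training band before prediction.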

Details

Database :
arXiv
Journal :
A&A 675, A159 (2023)
Publication Type :
Report
Accession number :
edsarx.2306.05062
Document Type :
Working Paper
Full Text :
https://doi.org/10.1051/0004-6361/202346770