
Semantic segmentation network stacking with genetic programming

Authors :
Bakurov, I.
Buzzelli, M.
Schettini, R.
Castelli, M.
Vanneschi, L.
Publication Year :
2023

Abstract

Semantic segmentation consists of classifying each pixel of an image and constitutes an essential step towards scene recognition and understanding. Deep convolutional encoder–decoder neural networks are now the state-of-the-art methods in the field of semantic segmentation. The segmentation of street scenes for automotive applications is an important application field for such networks and imposes a set of strict requirements: since the models must run on self-driving vehicles and make fast decisions in response to a constantly changing environment, they are expected not only to operate reliably but also to process input images rapidly. In this paper, we explore genetic programming (GP) as a meta-model that combines four different efficiency-oriented networks for the analysis of urban scenes. Notably, we present and examine two approaches. In the first approach, solutions are represented as GP trees that combine the networks' outputs, so that every output class's prediction is obtained through the same meta-model. In the second approach, solutions are represented as lists of GP trees, each providing a dedicated meta-model for a given target class. The main objective is to develop efficient and accurate combination models that are easy to interpret, thereby providing hints on how to improve the existing networks. Experiments performed on the Cityscapes dataset of urban scene images with semantic pixel-wise annotations confirm the effectiveness of the proposed approach. Specifically, our best-performing models improve generalization ability by approximately 5% compared to traditional ensembles and by 30% compared to the least-performing state-of-the-art CNN, and they show competitive results with respect to state-of-the-art ensembles. Additionally, they are small in size, interpretable, and use fewer features thanks to GP's automatic feature selection.
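
The sketch below is a minimal illustration (not the authors' code) of the two solution representations described in the abstract: a single GP-evolved expression applied identically to every class channel, versus one dedicated expression per target class. The array shapes, the toy expressions, and all function names are illustrative assumptions; in the paper the expressions themselves are discovered by GP search.

```python
# Minimal sketch, assuming the base networks expose per-pixel class-probability maps.
import numpy as np

H, W, C = 4, 4, 3          # toy image size and number of classes
N_NETWORKS = 4             # four efficiency-oriented base networks

rng = np.random.default_rng(0)
# Stand-ins for the softmax outputs of the base networks: (networks, H, W, classes)
outputs = rng.random((N_NETWORKS, H, W, C))
outputs /= outputs.sum(axis=-1, keepdims=True)

def single_tree_meta_model(nets: np.ndarray) -> np.ndarray:
    """Approach 1: one GP tree applied identically to every class channel.

    The arithmetic below is only an example of what an evolved expression
    might look like; GP searches over such compositions of network outputs.
    """
    n0, n1, n2, n3 = nets
    return 0.5 * (n0 + n3) + np.maximum(n1, n2)

def per_class_meta_model(nets: np.ndarray) -> np.ndarray:
    """Approach 2: a list of GP trees, one dedicated expression per target class."""
    n0, n1, n2, n3 = nets
    class_trees = [
        lambda c: 0.7 * n0[..., c] + 0.3 * n1[..., c],          # tree for class 0
        lambda c: np.maximum(n2[..., c], n3[..., c]),           # tree for class 1
        lambda c: (n0[..., c] + n1[..., c] + n2[..., c]) / 3.0, # tree for class 2
    ]
    return np.stack([tree(c) for c, tree in enumerate(class_trees)], axis=-1)

# Final segmentation: argmax over the combined class scores at each pixel.
combined = per_class_meta_model(outputs)
segmentation = combined.argmax(axis=-1)
print(segmentation.shape)  # (H, W)
```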

Details

Database :
OAIster
Notes :
Print, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1427431348
Document Type :
Electronic Resource