
Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation

Authors:
Arman Avesta
Sajid Hossain
MingDe Lin
Mariam Aboian
Harlan M. Krumholz
Sanjay Aneja
Source:
Bioengineering, Vol 10, Iss 2, p 181 (2023)
Publication Year:
2023
Publisher:
MDPI AG, 2023.

Abstract

Deep-learning methods for auto-segmenting brain images process either one slice of the image at a time (2D), five consecutive slices at a time (2.5D), or the entire image volume at once (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) used to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We evaluated the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. Across all models, the 3D approach achieved the highest Dice scores, followed by the 2.5D and then the 2D approach. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models required 20 times more computational memory than 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
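To make the 2D/2.5D/3D distinction concrete, the sketch below shows how each approach would slice a single MRI volume into model inputs. It is a minimal illustration, not the authors' implementation: the volume dimensions (155 axial slices of 240×240 voxels) and the function names are hypothetical, and a channel axis is prepended in the shape convention common to deep-learning frameworks.

```python
import numpy as np

# Hypothetical MRI volume: 155 axial slices, each 240 x 240 voxels.
volume = np.random.rand(155, 240, 240).astype(np.float32)

def input_2d(vol, i):
    """2D approach: a single slice, shape (1, H, W)."""
    return vol[i][np.newaxis]

def input_25d(vol, i):
    """2.5D approach: five consecutive slices centered on slice i,
    stacked along the channel axis, shape (5, H, W)."""
    return vol[i - 2 : i + 3]

def input_3d(vol):
    """3D approach: the entire volume as one input, shape (1, D, H, W)."""
    return vol[np.newaxis]

print(input_2d(volume, 77).shape)   # one slice per forward pass
print(input_25d(volume, 77).shape)  # five slices per forward pass
print(input_3d(volume).shape)       # whole volume per forward pass
```

The trade-off reported in the abstract follows directly from these shapes: the 3D input carries the whole volume through the network at once, which preserves inter-slice context and reduces the number of forward passes, but multiplies the memory footprint of every activation map by the depth dimension.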

Details

Language:
English
ISSN:
23065354
Volume:
10
Issue:
2
Database:
Directory of Open Access Journals
Journal:
Bioengineering
Publication Type:
Academic Journal
Accession number:
edsdoj.9d15ac8e2d74cf783536d434d46a870
Document Type:
article
Full Text:
https://doi.org/10.3390/bioengineering10020181