
Automatic multi-view pose estimation in focused cardiac ultrasound.

Authors :
Freitas, João
Gomes-Fonseca, João
Tonelli, Ana Claudia
Correia-Pinto, Jorge
Fonseca, Jaime C.
Queirós, Sandro
Source :
Medical Image Analysis. May 2024, Vol. 94.
Publication Year :
2024

Abstract

Focused cardiac ultrasound (FoCUS) is a valuable point-of-care method for evaluating cardiovascular structures and function, but its scope is limited by the equipment and the operator's experience, resulting in primarily qualitative 2D exams. This study presents a novel framework to automatically estimate the 3D spatial relationship between standard FoCUS views. The proposed framework uses a multi-view U-Net-like fully convolutional neural network to regress line-based heatmaps representing the most likely areas of intersection between input images. The lines that best fit the regressed heatmaps are then extracted, and a system of nonlinear equations based on the intersections between view triplets is created and solved to determine the relative 3D pose of all input images. The feasibility and accuracy of the proposed pipeline were validated on a novel realistic in silico FoCUS dataset, demonstrating promising results. Notably, as shown in preliminary experiments, estimating the 2D images' relative poses enables the application of 3D image analysis methods and paves the way for 3D quantitative assessments in FoCUS examinations.

• Focused cardiac ultrasound is primarily qualitative and bidimensional.
• Novel framework for multi-view pose estimation in focused cardiac ultrasound.
• Validated using a multi-view realistic in-silico cardiac ultrasound dataset.
• Results demonstrate the framework's feasibility and accuracy.
• A preliminary experiment corroborates its potential for 3D quantitative analysis.
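To illustrate one step of the pipeline described above, the following is a minimal sketch of extracting the best-fitting line from a regressed heatmap. It uses weighted PCA over pixel coordinates (weights taken from heatmap intensities), a generic line-fitting approach; the paper's exact extraction method is not detailed in the abstract, so the function name and procedure here are illustrative assumptions.

```python
import numpy as np

def fit_line_to_heatmap(heatmap):
    """Fit the dominant line (centroid + unit direction) to a 2D heatmap.

    Generic sketch: weighted PCA over the coordinates of non-zero pixels,
    weighted by heatmap intensity. Not the authors' exact method.
    """
    ys, xs = np.nonzero(heatmap > 0)
    w = heatmap[ys, xs].astype(float)
    pts = np.stack([xs, ys], axis=1).astype(float)  # (N, 2) as (x, y)
    centroid = np.average(pts, axis=0, weights=w)
    # Scale centered points by sqrt(weight) so the SVD solves weighted PCA
    centered = (pts - centroid) * np.sqrt(w)[:, None]
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]  # principal axis = line direction
    return centroid, direction

# Example: a synthetic heatmap with a bright main diagonal
hm = np.zeros((64, 64))
for i in range(64):
    hm[i, i] = 1.0
c, d = fit_line_to_heatmap(hm)  # centroid near (31.5, 31.5), direction ~ (1, 1)/sqrt(2)
```

The recovered lines from each view pair would then feed the system of nonlinear intersection equations that yields the relative 3D poses.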

Details

Language :
English
ISSN :
1361-8415
Volume :
94
Database :
Academic Search Index
Journal :
Medical Image Analysis
Publication Type :
Academic Journal
Accession number :
176588656
Full Text :
https://doi.org/10.1016/j.media.2024.103146