
A deep audio-visual model for efficient dynamic video summarization.

Authors :
El-Nagar, Gamal
El-Sawy, Ahmed
Rashad, Metwally
Source :
Journal of Visual Communication & Image Representation. Apr 2024, Vol. 100.
Publication Year :
2024

Abstract

The adage "a picture is worth a thousand words" resonates in the digital video domain: a video, composed of countless frames, can be seen as a composition of millions of such words. Video summarization condenses shots and segments into cohesive visual scenes, and it has gained prominence because it shortens lengthy videos while retaining their crucial content. Although existing techniques based on keyframes or keyshots are effective, integrating the audio component remains largely unaddressed. This paper applies deep learning techniques to generate dynamic summaries enriched with audio. To address that gap, an efficient model exploits combined audio-visual features, yielding more robust and informative video summaries. The model selects keyshots according to their significance scores, safeguarding essential content; assigning these scores to individual video shots is a pivotal yet demanding task in video summarization. The model is evaluated on the benchmark datasets TVSum and SumMe. Experimental results confirm its efficacy, showing considerable performance gains: F-scores of 79.33% on TVSum and 66.78% on SumMe, surpassing previous state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
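The abstract does not spell out how per-shot significance scores are turned into a dynamic summary. A common protocol on TVSum and SumMe is to select the keyshots that maximize the total score subject to a summary-length budget (often 15% of the video duration), solved as a 0/1 knapsack. The sketch below illustrates only that selection step under those assumptions; the function name, the 15% budget, and the toy scores are illustrative, not the authors' implementation.

    import numpy as np

    def select_keyshots(shot_scores, shot_lengths, budget_ratio=0.15):
        """Pick keyshots maximizing total significance score while keeping
        the summary within budget_ratio of the full video length (0/1 knapsack).

        shot_scores  : per-shot significance scores (e.g., averaged frame scores)
        shot_lengths : per-shot lengths in frames
        budget_ratio : summary length budget; 15% is a common TVSum/SumMe protocol
        """
        capacity = int(budget_ratio * sum(shot_lengths))
        n = len(shot_scores)
        # Dynamic-programming table: best achievable score for each capacity.
        dp = np.zeros((n + 1, capacity + 1))
        keep = np.zeros((n + 1, capacity + 1), dtype=bool)
        for i in range(1, n + 1):
            w, v = shot_lengths[i - 1], shot_scores[i - 1]
            for c in range(capacity + 1):
                dp[i, c] = dp[i - 1, c]
                if w <= c and dp[i - 1, c - w] + v > dp[i, c]:
                    dp[i, c] = dp[i - 1, c - w] + v
                    keep[i, c] = True
        # Backtrack to recover the indices of the selected shots.
        selected, c = [], capacity
        for i in range(n, 0, -1):
            if keep[i, c]:
                selected.append(i - 1)
                c -= shot_lengths[i - 1]
        return sorted(selected)

    # Toy usage: five shots with made-up scores and lengths (in frames).
    scores = [0.9, 0.2, 0.75, 0.4, 0.85]
    lengths = [120, 300, 150, 200, 100]
    print(select_keyshots(scores, lengths))  # -> [0]: only the top-scoring shot fits the 15% budget

In practice, the selected keyshots are concatenated in temporal order to form the dynamic summary, and evaluation compares the resulting frame selection against user-annotated summaries via the F-score reported above.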

Details

Language :
English
ISSN :
1047-3203
Volume :
100
Database :
Academic Search Index
Journal :
Journal of Visual Communication & Image Representation
Publication Type :
Academic Journal
Accession Number :
176784560
Full Text :
https://doi.org/10.1016/j.jvcir.2024.104130