
Learning Graph Representation with Randomized Neural Network for Dynamic Texture Classification

Authors :
Jarbas Joaci de Mesquita Sá Junior
Lucas Correia Ribas
Odemir Martinez Bruno
Antoine Manzanera
Instituto de Ciências Matemáticas e de Computação (ICMC-USP), Universidade de São Paulo (USP), São Carlos
Universidade Federal do Ceará (UFC)
Unité d'Informatique et d'Ingénierie des Systèmes (U2IS), École Nationale Supérieure de Techniques Avancées (ENSTA Paris)
Instituto de Física de São Carlos (IFSC-USP), Universidade de São Paulo (USP)
Source :
Applied Soft Computing, Elsevier, 2021; Repositório Institucional da USP (Biblioteca Digital da Produção Intelectual), Universidade de São Paulo (USP)
Publication Year :
2021
Publisher :
HAL CCSD, 2021.

Abstract

Dynamic textures (DTs) are pseudo-periodic data on a space × time support that can represent many natural phenomena captured in video footage. Their modeling and recognition are useful in many computer vision applications. This paper presents an approach to DT analysis that combines a graph-based description from the complex network framework with a learned representation from the randomized neural network (RNN) model. First, a directed space × time graph model with a single parameter (the radius) is used to represent both the motion and the appearance of the DT. Then, instead of using classical graph measures as features, the DT descriptor is learned with an RNN trained to predict the gray level of pixels from local topological measures of the graph. The weight vector of the output layer of the RNN forms the descriptor. Several RNN structures are evaluated, resulting in networks with a single hidden layer of 4, 24, or 29 neurons and an input layer of 4 or 10 neurons, i.e., 6 different RNNs. Experimental results on DT recognition conducted on the Dyntex++ and UCLA datasets show the high discriminatory power of our descriptor, with accuracies of 99.92%, 98.19%, 98.94% and 95.03% on the UCLA-50, UCLA-9, UCLA-8 and Dyntex++ databases, respectively. These results outperform various approaches from the literature, particularly on UCLA-50. More significantly, our method is competitive in terms of computational efficiency and descriptor size. It is therefore a good option for real-time dynamic texture segmentation, as illustrated by experiments conducted on videos acquired from a moving boat.
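
The descriptor construction summarized in the abstract follows the usual randomized (extreme-learning-machine-style) training scheme: a fixed random projection to a small hidden layer, followed by a closed-form least-squares solve for the output weights, which are then used as the texture signature. The sketch below is a minimal, hypothetical Python illustration of that idea, not the authors' implementation; the function name, tanh activation, and regularization term are assumptions. Here X would hold per-pixel local topological measures of the space × time graph and y the corresponding gray levels.

    import numpy as np

    def rnn_descriptor(X, y, hidden_neurons=4, reg=1e-3, seed=0):
        """Sketch of an ELM-style randomized neural network descriptor.

        X : (n_samples, n_features) local graph measures per pixel (assumed input).
        y : (n_samples,) gray levels to be predicted.
        Returns the output-layer weight vector, used as the DT descriptor.
        """
        rng = np.random.default_rng(seed)

        # Fixed random input-to-hidden weights (never trained).
        W = rng.uniform(-1.0, 1.0, size=(X.shape[1] + 1, hidden_neurons))
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # add bias column
        H = np.tanh(Xb @ W)                             # hidden activations

        # Regularized least-squares solve for the output-layer weights.
        Hb = np.hstack([np.ones((H.shape[0], 1)), H])
        beta = np.linalg.solve(Hb.T @ Hb + reg * np.eye(Hb.shape[1]), Hb.T @ y)
        return beta                                     # descriptor vector

Because only the small output layer is solved for, training reduces to a single linear system per video, which is consistent with the computational efficiency and compact descriptor size reported in the abstract.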

Details

Language :
English
ISSN :
1568-4946
Database :
OpenAIRE
Journal :
Applied Soft Computing
Accession number :
edsair.doi.dedup.....6e9ff61d4c5d80ca41ca29cbd74b9bc0