
Multi-source multimodal deep learning to improve situation awareness : an application of emergency traffic management : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Emergency Management at Massey University, Wellington, New Zealand

Authors :
Rahubadde Kankanamge, Raj Prasanna
Hewa Algiriyage, Rangika Nilani
Publication Year :
2023

Abstract

Traditionally, disaster management has placed great emphasis on institutional warning systems, and people have been treated as victims rather than active participants. However, with the evolution of communication technology, the general public now contributes significantly to disaster management tasks, challenging traditional hierarchies of information distribution and acquisition. With mobile phones and Social Media (SM) platforms being widely used, people at disaster scenes act as non-technical sensors that provide contextual information in multiple modalities (e.g., text, image, audio and video) through these content-sharing applications. Research has shown that the general public has extensively used SM applications during disasters to report injuries or deaths, damage to infrastructure and utilities, caution, evacuation needs and missing or trapped people.

Disaster responders depend significantly on data for their Situation Awareness (SA), the dynamic understanding of "the big picture" in space and time, for decision-making. However, despite the benefits, processing SM data for disaster response brings multiple challenges. The most significant is that SM data contain rumours as well as fake and false information. Responding agencies therefore have concerns about utilising SM for disaster response, and a high volume of important, real-time data that could support responders' SA goes to waste.

In addition to SM, many other data sources produce information during disasters, including CCTV monitoring, emergency call centres, and online news. The data from these sources come in multiple modalities such as text, images, video, audio and metadata. To date, researchers have investigated how such data can be automatically processed for disaster response with machine learning and deep learning approaches applied to a single source/single modality of data, and only a few have investigated the use of multiple so

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1393201211
Document Type :
Electronic Resource