
Cognitive Image Fusion and Assessment

Authors :
Alexander Toet
Source :
Image Fusion (Ukimura, O., Ed.), pp. 303-340
Publication Year :
2011
Publisher :
InTech, 2011.

Abstract

The increasing availability and deployment of imaging sensors operating in multiple spectral bands have led to a requirement for methods that combine the signals from these sensors in an effective and ergonomic way for presentation to the human operator. Effective combinations of complementary and partially redundant multispectral imagery can provide information that is not directly evident from the individual input images. Image fusion for human inspection should combine information from two or more images of a scene into a single composite image that is more informative than each of the input images alone, and that requires minimal cognitive effort to understand. The fusion process should therefore maximize the amount of relevant information in the fused image, while minimizing the amount of irrelevant detail, uncertainty and redundancy in the output. Thus, image fusion should preserve task-relevant information from the source images, prevent the occurrence of artifacts or inconsistencies in the fused image, and suppress irrelevant features (e.g. noise) from the source images (Smith & Heather, 2005). The representation of fused imagery should optimally agree with human cognition, so that humans can quickly grasp the gist and meaning of the displayed scenes. For instance, the representation of spatial details should effortlessly elicit the recognition of known Gestalts, and the color schemes used should be natural (ecologically correct) and thus agree with human intuition. Irrelevant details (clutter) should be suppressed to minimize cognitive workload and to maximize recognition speed. Some potential benefits of image fusion are: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased robustness of the system.
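The pixel-level combination described above can be illustrated with a minimal "choose-max" fusion rule: at each pixel, keep the value from whichever source shows higher local activity (here approximated by gradient magnitude), so that salient detail from either band survives in the composite. This is a generic baseline sketch for two co-registered grayscale images, not the chapter's own method; the array names and the activity measure are illustrative assumptions.

```python
import numpy as np

def fuse_choose_max(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered grayscale images by keeping, per pixel,
    the value from the source with higher local activity (gradient
    magnitude) -- a common baseline pixel-level fusion rule."""
    def activity(img: np.ndarray) -> np.ndarray:
        gy, gx = np.gradient(img.astype(float))  # per-axis finite differences
        return np.hypot(gx, gy)                  # gradient magnitude
    mask = activity(a) >= activity(b)
    return np.where(mask, a, b)

# Toy example: each hypothetical source carries a structure the other lacks.
visible = np.zeros((8, 8)); visible[:, 4:] = 1.0   # stand-in for a visible-band image
thermal = np.zeros((8, 8)); thermal[4:, :] = 0.5   # stand-in for a thermal-band image
fused = fuse_choose_max(visible, thermal)
# The fused image retains the vertical edge from `visible` and the
# horizontal edge from `thermal`.
```

Real systems replace the simple gradient measure with multiresolution activity measures (e.g. pyramid or wavelet coefficients), but the select-the-most-salient-source principle is the same.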
Image fusion has applications in defense for situation awareness (Toet et al., 1997b), surveillance (Riley & Smith, 2006), target tracking (Zou & Bhanu, 2005), intelligence gathering (O'Brien & Irvine, 2004), and person authentication (Kong et al., 2007). Other important applications are found in industry and medicine (for a recent survey of different applications of image fusion techniques see Blum & Liu, 2006). The way images are combined depends on the specific application and on the type of information that is relevant in the given context (Smith & Heather, 2005). By examining the effects of several image fusion methods on different cognitive tasks, Krebs and Ahumada (2002) showed that the benefits of sensor fusion are task dependent. However, until now, the human end-user has not been involved in the design process and the development of image fusion algorithms to any great extent. Mostly, image fusion algorithms are developed in isolation, and the human end-user is little more than an

Details

Language :
English
Database :
OpenAIRE
Journal :
Image Fusion (Ukimura, O., Ed.), pp. 303-340
Accession number :
edsair.doi.dedup.....7a6110a25ef8391a4211ffc6ab297d65