154 results for "Alexander Toet"
Search Results
2. Virtual Visits
- Author
- Marina Alvarez, Alexander Toet, and Sylvie Dijkstra-Soudarissanane
- Published
- 2022
- Full Text
- View/download PDF
3. A neural network framework for cognitive bias
- Author
- Anne-Marie Brouwer, Johan Egbert (Hans) Korteling, and Alexander Toet
- Subjects
- rationality, human irrationality, heuristics, cognitive biases, information processing, neural information processing, neural networks, decision making, perception, training, cognitive psychology
- Abstract
Human decision making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. In order to substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
- Published
- 2022
- Full Text
- View/download PDF
4. Grasping Temperature: Thermal Feedback in VR Robot Teleoperation
- Author
- Leonor Fermoselle, Alexander Toet, Nirul Hoeba, Jeanine van Bruggen, Nanda van der Stap, Frank B. ter Haar, and Jan Van Erp
- Published
- 2022
- Full Text
- View/download PDF
5. Gaze Behavior as an Objective Measure to Assess Social Presence During Immersive Mediated Communication
- Author
- Ivo Stuldreher, Linsey Roijendijk, Maarten Michel, and Alexander Toet
- Abstract
Immersive communication systems provide increasingly realistic virtual environments, which may afford immersive social interactions that approach the quality of face-to-face (F2F) meetings by eliciting a sense of social presence: the feeling of being physically together with another person and having an affective and intellectual connection. To optimize a system’s ability to convey social presence, there is a need for tools that efficiently and reliably measure the degree to which users experience social presence. Currently, the most widely used tools to measure (social) presence are questionnaires. As their ecological validity is questionable, there is a need for objective and non-intrusive measures of social presence during naturalistic social interactions. In our study, we aimed to identify a set of determinants of social presence that enable the assessment of a system’s ability to convey social presence, preferably using easy-to-use, off-the-shelf tools. Considering that eye gaze behavior is modulated by social presence and can be measured with relative ease for both F2F and mediated communication, we propose three eye gaze measures as an accessible means to assess the level of social presence a system can elicit.
- Published
- 2022
- Full Text
- View/download PDF
6. Cognitive Biases
- Author
- J.E. (Hans) Korteling and Alexander Toet
- Published
- 2022
- Full Text
- View/download PDF
7. Connected Through Mediated Social Touch
- Author
- Martijn T. van Hattum, Gijs Huisman, Alexander Toet, and Jan B. F. van Erp
- Abstract
In recent years, there has been a significant increase in research on mediated communication
- Published
- 2021
8. The Relative Importance of Social Cues in Immersive Mediated Communication
- Author
- Omar Aziz Niamut, Tina Mioch, Jan B. F. van Erp, Alexander Toet, and Navya N. Sharan
- Subjects
- body language, proxemics, facial expression, mediated communication, interpersonal communication, social cues, paralanguage, psychology
- Abstract
Effective interpersonal communication is important to maintain relationships and build trust, empathy, and confidence. In this digital age, communication has become mediated, which filters out many of the social cues that are essential to facilitate interpersonal communication. This paper investigates the extent to which social cues influence social presence in mediated, bidirectional, multiparty interaction. Literature related to six social cues – paralinguistic cues, linguistic cues, body language, eye movements, facial expressions, and proxemic cues – was reviewed. These cues were ranked based on how relevant they are in creating a sense of social presence in mediated social communication (MSC). The most relevant cue was eye movements, followed by facial expression and linguistic cues, and lastly, body language and proxemic cues. Paralinguistic cues could not be ranked due to sparse literature in the context of MSC. Further research is required to better understand how social cues can be incorporated into MSC systems.
- Published
- 2021
- Full Text
- View/download PDF
9. Holistic Framework for Quality Assessment of Mediated Social Communication
- Author
- Alexander Toet, Tina Mioch, Simon N.B. Gunkel, Omar Niamut, and Jan B.F. van Erp
- Abstract
Modern immersive multisensory communication systems can provide compelling mediated social communication experiences that approach face-to-face (F2F) communication. Existing frameworks to assess the quality of mediated social communication experiences are typically targeted at specific communication technologies and do not address all relevant aspects of social presence (i.e., the feeling of being in the presence of, and having an affective and intellectual connection with, other persons). Also, they are typically unsuitable for application to social communication in virtual (VR), augmented (AR) or mixed (MR) reality. Here we present a comprehensive and general holistic mediated social communication (H-MSC) framework and associated questionnaire (the H-MSC-Q) for measuring the quality of mediated social communication. The H-MSC framework comprises both the experience of Spatial Presence (i.e., the perceived fidelity, internal and external plausibility, and cognitive, reasoning and behavioral affordances of an environment) and the experience of Social Presence (i.e., perceived mutual proximity, intimacy, credibility, reasoning and behavior of the communication partners). Since social presence is inherently bidirectional (involving a sense of mutual awareness) the H-MSC-Q distinguishes between the internal (‘own’) and external (‘the other’) assessment perspectives. The H-MSC-Q is efficient and parsimonious, using only a single item to tap into each of the relevant processing levels in the human brain: sensory, emotional, cognitive, reasoning, and behavioral. It is also sufficiently general to measure social presence experienced with any (including VR, AR, and MR) type of multi-sensory (visual, auditory, haptic, and olfactory) mediated communication system.
- Published
- 2021
- Full Text
- View/download PDF
10. Distraction for the eye and ear
- Author
- Mark Bray, Alexander Toet, Bill Macken, Simon Rushton, Aline Bompas, Dylan Marc Jones, and Philip Morgan
- Subjects
- perception, distraction, vulnerability, auditory stimuli, human factors and ergonomics, cognition, sensory system, adaptability, cognitive psychology
- Abstract
The ways in which extraneous visual and auditory stimuli impair human performance are reviewed with the aim of distinguishing the sensory, perceptual and cognitive effects relevant to the design of human-machine systems. Although commonly regarded as disruptive, distractions reflect the adaptability of the organism to changing circumstances. Depending on the context, our knowledge of the ways in which distraction works can be exploited in the form of alarms or other attention-getting devices, or resisted by changing the physical and psychological properties of the stimuli. The research described here draws from contemporary research on distraction. The review underscores the vulnerability of performance even to stimuli of modest magnitude, while acknowledging that distraction is a necessary consequence of our adaptive brain that leads to effects that are sometimes, but not always, beneficial to safety, efficiency and wellbeing. Low-intensity distractors are particularly sensitive to the context in which they occur. The mechanisms outlined can be exploited either to grab attention (at the extreme temporarily disabling the individual, but more usefully warning or redirecting them) or to modify it in subtle ways across the gamut of human activity.
- Published
- 2020
- Full Text
- View/download PDF
11. Do food cinemagraphs evoke stronger appetitive responses than stills?
- Author
- Alexander Toet, Daisuke Kaneko, M. van Schaik, and Johannes Bernardus Fransiscus van Erp
- Subjects
- appetite, food advertisements, affect, wanting, liking, food products, image motion, cinemagraphs, food science, cognitive psychology
- Abstract
Viewing images of food triggers the desire to eat and this effect increases when images represent food in a more vivid way. Cinemagraphs are a new medium that is intermediate between photographs and videos: most of the frame is static, while some details are animated in a seamless loop, resulting in a vivid viewing experience. On social media cinemagraphs are increasingly used for food-related communication. Given their vivid appearance we hypothesized that food cinemagraphs may evoke stronger appetitive responses than their static counterparts (stills). This would make them a promising medium for food advertisements on the Internet or on digital menu boards. In this study we measured the ‘wanting’ (appetitive) and ‘liking’ (affective) responses to both cinemagraphs and stills representing a wide range of different food products. Our results show that food cinemagraphs slightly increase ‘wanting’ scores while not affecting ‘liking’ scores, compared to similar stills. Although we found no overall main effect of image dynamics on ‘liking’, we did observe a significant effect for some individual food items. The effects of image dynamics on ‘wanting’ and ‘liking’ appear to be product specific: while dynamic images were scored higher on ‘wanting’ or ‘liking’ for some products, static images were scored higher on these factors for other products. Observer responses to a free association task indicate that image dynamics can affect the appeal of a food product in two ways: by emphasizing its hedonic qualities (lusciousness, freshness) and by enhancing the observers’ awareness of their own core affect (‘liking’) for the product. We conclude that the effective use of cinemagraphs in food advertisements therefore requires a careful consideration of the characteristics (hedonic aspects) of the food product that are to be highlighted through image motion and the inherent preferences (core liking) of the target group.
- Published
- 2019
- Full Text
- View/download PDF
12. Fundamental limitations of AR symbology in accidented terrain
- Author
- Alexander Toet, Maarten A. Hogervorst, Frank L. Kooi, and Piet Bijl
- Subjects
- situation awareness, augmented reality, terrain, transparency, cognitive load, human-computer interaction
- Abstract
While AR has been deployed successfully in the military air domain for decades, its use in the ground domain poses serious challenges. Some of these challenges result from technological limitations. Others, however, are more difficult or even impossible to resolve, since they reflect fundamental human characteristics. The toughest-to-solve limitations are caused by our physiology, anatomy, and cognition. Eye-physiology limitations involve masking, contrast, and occlusion. The anatomical shape of the human head forces optics to be mounted in front of the inherently glare-protecting eye sockets. With respect to AR, the problems of the brain are that (i) it is not ‘built’ to perceive transparency, (ii) its cognitive capacity is limited, and (iii) we intuitively use a world-referenced system. In this paper, we provide an in-depth analysis of these human-factors limitations. Conclusion: AR does not come for free. Fundamental human limitations seriously constrain see-through AR systems for the infantry and should be considered in their design and deployment.
- Published
- 2021
- Full Text
- View/download PDF
13. Towards Augmented Reality-Based Remote Family Visits in Nursing Homes
- Author
- Bram Smeets, Eva Abels, Alexander Toet, Tessa Klunder, Audrey van der Weerden, and Hans Maarten Stokking
- Subjects
- COVID-19, SARS-CoV-2, nursing homes, social contact, augmented reality, user experience design, focus group, psychology
- Abstract
Family visiting restrictions in nursing homes due to COVID-19-related measures have a major impact on elderly residents and their families. As an alternative communication means, TNO is developing an augmented reality (AR)-based solution to realize high-quality virtual social contact. To investigate its suitability for remote family visits in nursing homes, the AR-based solution will be compared to regular video calling in a user study involving elderly residents and their family members. Based on focus groups with residents, family and caretakers, user experience (UX) indicators have been established to evaluate these virtual family visits, of which social presence was the most prominent. Remote family visits via AR-based and regular video calling are expected to result in different UX. It is hypothesized that participants will report the highest levels of social presence in the AR condition. If AR-based video calling is indeed preferred, TNO will continue and upscale the development of this technology.
- Published
- 2021
- Full Text
- View/download PDF
14. Review of camouflage assessment techniques
- Author
- Alexander Toet and Maarten A. Hogervorst
- Subjects
- visual search, camouflage, survivability, eye tracking, machine learning, artificial intelligence, ranking
- Abstract
In military operations, signature-reduction techniques such as camouflage nets, low-emissive paints, and camouflage patterns are typically deployed to optimize the survivability of high-value assets by minimizing their detectability. Various methods have been developed to assess the effectiveness of these camouflage measures. There are three main approaches to the evaluation of camouflage measures: (1) a subjective approach through observer experiments, (2) an objective computational approach through image analysis, and (3) an objective approach through physical measurements. Although subjective evaluation methods have a direct relation with operational practice, they are often difficult to implement because of time and budget restrictions, or simply because the associated conditions are not safe for the observers. Objective computational evaluation methods are typically based on the outcomes of psychophysical laboratory experiments using simple artificial stimuli, presented under extremely restricted (impoverished) conditions and in different experimental paradigms. Objective methods based on signal-processing techniques have no obvious counterpart in human vision. So far, no attempts have been made to validate any of these objective metrics against the performance of human observers in realistic military scenarios. As a result, there are currently no standard, internationally accepted methods and procedures to evaluate camouflage equipment and techniques and to indicate their military effectiveness. In this review paper we present an overview of the various subjective (psychophysical) and objective (computational, image- or video-based) evaluation methods that are currently available and have been used to assess camouflage effectiveness. In addition, we discuss the relative merits of field experiments versus laboratory experiments.
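The objective computational approach can be illustrated with a toy metric of our own choosing (not one of the metrics covered in the review): the difference between mean target and background luminance, normalized by the background's spread, computed on a synthetic scene.

```python
import numpy as np

def target_background_distinctness(image, target_mask):
    """Toy camouflage metric: absolute difference between mean target
    and mean background luminance, normalized by background spread."""
    target = image[target_mask]
    background = image[~target_mask]
    spread = background.std() + 1e-9  # avoid division by zero
    return abs(target.mean() - background.mean()) / spread

# Synthetic example: a slightly brighter "target" patch on noisy ground.
rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.05, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
scene[mask] += 0.2  # poorly camouflaged target
print(target_background_distinctness(scene, mask))
```

A score near zero indicates the target blends with its surroundings; as the review notes, such image-based scores still need validation against human observer performance before they say anything about operational camouflage effectiveness.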
- Published
- 2020
- Full Text
- View/download PDF
15. Let's Get in Touch! Adding Haptics to Social VR
- Author
- Leonor Fermoselle, Simon Gunkel, Frank ter Haar, Sylvie Dijkstra-Soudarissanane, Alexander Toet, Omar Niamut, and Nanda van der Stap
- Subjects
- Virtual Reality, Social VR, Tactile VR, Haptic Feedback
- Abstract
Social VR should allow natural communication between users with high social presence, as if the users were in the same room. One way to increase social presence is to add haptic interaction, allowing users, for example, to give each other a "high-five" or to pass documents between them. In this paper, we present our web-based VR communication framework with an added haptic component to simulate touch. The goal of this framework is to enhance the VR communication experience and the exchange of social cues between users in VR. We describe our method for rendering haptic feedback within the web-based framework and evaluate the perceived quality of our system with a user survey (119 participants). Our proof-of-concept system was rated positively, with the haptic component offering an enhanced quality of the VR experience for 78% of the participants.
- Published
- 2020
16. The EmojiGrid as a Rating Tool for the Affective Appraisal of Touch
- Author
- Jan Van Erp and Alexander Toet
- Published
- 2020
- Full Text
- View/download PDF
17. Let’s Get in Touch! Adding Haptics to Social VR
- Author
- Sylvie Dijkstra-Soudarissanane, Alexander Toet, Leonor Fermoselle, Omar Aziz Niamut, Simon Gunkel, Nanda van der Stap, and Frank ter Haar
- Subjects
- user survey, haptic interaction, virtual reality, rendering, perceived quality, social cues, natural communication, human-computer interaction, haptic technology
- Abstract
Social VR should allow natural communication between users with high social presence, as if the users were in the same room. One way to increase social presence is to add haptic interaction, allowing users, for example, to give each other a "high-five" or to pass documents between them. In this paper, we present our web-based VR communication framework with an added haptic component to simulate touch. The goal of this framework is to enhance the VR communication experience and the exchange of social cues between users in VR. We describe our method for rendering haptic feedback within the web-based framework and evaluate the perceived quality of our system with a user survey (119 participants). Our proof-of-concept system was rated positively, with the haptic component offering an enhanced quality of the VR experience for 78% of the participants.
- Published
- 2020
- Full Text
- View/download PDF
18. Explicit and Implicit Responses to Tasting Drinks Associated with Different Tasting Experiences
- Author
- Anne-Marie Brouwer, Victor Kallen, Maarten A. Hogervorst, Jan B. F. van Erp, Daisuke Kaneko, and Alexander Toet
- Subjects
- food-evoked emotion, explicit measure, implicit measure, (neuro)physiological measure, behavioral measure, discriminative power, heart rate, heart rate variability, electroencephalography, facial expression, taste, valence, arousal, disgust
- Abstract
Probing food experience or liking through verbal ratings has its shortcomings. We compare explicit ratings to a range of (neuro)physiological and behavioral measures with respect to their performance in distinguishing drinks associated with different emotional experiences. Seventy participants tasted and rated the valence and arousal of eight regular drinks and a "ground truth" high-arousal, low-valence vinegar solution. The discriminative power for distinguishing between the vinegar solution and the regular drinks was highest for sip size, followed by valence ratings, arousal ratings, heart rate, skin conductance level, facial expression of "disgust," pupil diameter, and electroencephalogram (EEG) frontal alpha asymmetry. Within the regular drinks, a positive correlation was found between rated arousal and heart rate, and a negative correlation between rated arousal and heart rate variability (HRV). Most physiological measures showed consistent temporal patterns following the announcement of the drink and the taking of a sip. This was consistent over all nine drinks, but the peaks were substantially higher for the vinegar solution than for the regular drinks, likely caused by emotion. Our results indicate that implicit variables have the potential to differentiate between drinks associated with different emotional experiences. In addition, this study gives us insight into the physiological temporal response patterns associated with taking a sip.
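Ranking measures by discriminative power, as above, amounts to comparing standardized effect sizes between the vinegar solution and the regular drinks. A minimal sketch using Cohen's d with made-up sip sizes (the numbers are illustrative, not the study's data):

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

# Hypothetical sip sizes (ml): smaller sips for the aversive vinegar solution.
vinegar = [4, 5, 3, 6, 4, 5]
regular = [12, 10, 14, 11, 13, 12]
print(round(cohens_d(regular, vinegar), 1))  # → 6.0
```

Computing such an effect size per measure (sip size, valence rating, heart rate, …) and sorting yields the kind of discriminative-power ranking the abstract reports.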
- Published
- 2019
19. Effects of Likeness and Synchronicity on the Ownership Illusion over a Moving Virtual Robotic Arm and Hand
- Author
- Alexander Toet, Jan B. F. van Erp, Milene Catoire, Roelof J. E. van Dijk, Bouke N. Krom, and Human Media Interaction
- Subjects
- body ownership, ownership illusion, visual appearance (likeness), synchronicity, agency, robotic arm, haptic technology, cognitive psychology
- Abstract
In this study we investigated body ownership over a virtual hand and arm as a function of their visual appearance (likeness) and synchronicity of visuo-tactile stimulation with a virtual electric toothbrush and a vibrotactile glove. In all conditions, participants controlled the movement of arm and fingers, maintaining synchronicity in motor-proprioceptive-visual signals. While the effects of varying likeness and temporal synchronicity of visual and haptic stimuli on the ownership illusion have both been investigated individually before, their relative contribution is still unknown. We find that likeness should be complete: making only the hand robotic reduces the subjective ownership illusion to the same level as that of a full robotic arm and hand. Visuo-tactile synchronicity is not a hard prerequisite for an ownership illusion to occur: a high degree of agency with congruent motor-proprioceptive-visual cues and an arm/hand layout similar to one's own body can be sufficiently strong to overrule incongruent visuo-tactile cues. This work is part of a larger study on the relative contribution of factors such as likeness, viewing mode, tactile stimulation and degree of agency on the body ownership illusion. The results may contribute to the enhancement of dexterous performance in remote telemanipulation tasks.
- Published
- 2019
- Full Text
- View/download PDF
20. Affective rating of audio and video clips using the EmojiGrid
- Author
- Jan Van Erp, Alexander Toet, and Human Media Interaction
- Subjects
- EmojiGrid, emoji, emotions, affective appraisal, affective response, valence, arousal, audio clips, video clips, self report
- Abstract
Background: In this study we measured the affective appraisal of sounds and video clips using a newly developed graphical self-report tool: the EmojiGrid. The EmojiGrid is a square grid, labeled with emoji that express different degrees of valence and arousal. Users rate the valence and arousal of a given stimulus by simply clicking on the grid. Methods: In Experiment I, observers (N=150, 74 males, mean age=25.2±3.5) used the EmojiGrid to rate their affective appraisal of 77 validated sound clips from nine different semantic categories, covering a large area of the affective space. In Experiment II, observers (N=60, 32 males, mean age=24.5±3.3) used the EmojiGrid to rate their affective appraisal of 50 validated film fragments varying in positive and negative affect (20 positive, 20 negative, 10 neutral). Results: For both sound and video, the agreement between the mean ratings obtained with the EmojiGrid and those obtained with alternative, validated affective rating tools in previous studies is excellent for valence and good for arousal. Our results also show the typical universal U-shaped relation between mean valence and arousal that is commonly observed for affective sensory stimuli, both for sound and video. Conclusions: We conclude that the EmojiGrid can be used as an affective self-report tool for the assessment of sound- and video-evoked emotions.
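How a click on such a square valence-arousal grid maps to a pair of scores can be sketched as follows; the coordinate conventions (pixel origin at the top-left, a 1-9 output scale) are our assumptions for illustration, not the published tool's specification.

```python
def emojigrid_rating(x_px, y_px, grid_size_px=500, scale=(1.0, 9.0)):
    """Map a click on a square rating grid to a (valence, arousal) pair.

    Assumes pixel origin at the top-left corner, valence increasing
    left-to-right and arousal increasing bottom-to-top.
    """
    lo, hi = scale
    valence = lo + (x_px / grid_size_px) * (hi - lo)
    arousal = lo + (1.0 - y_px / grid_size_px) * (hi - lo)
    return valence, arousal

# A click in the exact center is neutral on both dimensions.
print(emojigrid_rating(250, 250))  # → (5.0, 5.0)
```

A single click thus yields both dimensions at once, which is what makes grid-based self-report faster than answering separate valence and arousal items.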
- Published
- 2021
- Full Text
- View/download PDF
21. The TNO Multiband Image Data Collection
- Author
- Alexander Toet
- Subjects
- image fusion, color fusion, false color, color mapping, multispectral image, night vision, realtime, data collection, computer vision
- Abstract
Despite ongoing interest in the fusion of multi-band images for surveillance applications and a steady stream of publications in this area, only a very small number of static registered multi-band test images (and no dynamic image sequences at all) are publicly available for the development and evaluation of image fusion algorithms. To fill this gap, the TNO Multiband Image Collection provides intensified visual (390–700 nm), near-infrared (700–1000 nm), and longwave infrared (8–12 µm) nighttime imagery of different military and surveillance scenarios, showing different objects and targets (e.g., people, vehicles) against a range of different (e.g., rural, urban) backgrounds. The dataset will be useful for the development of static and dynamic image fusion algorithms, color fusion algorithms, multispectral target detection and recognition algorithms, and dim target detection algorithms. Keywords: Image fusion, Color fusion, False color, Color mapping, Realtime, Fusion, Night vision
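Two baseline pixel-level fusion rules that such registered visual/infrared pairs can be used to evaluate are sketched below; the weighted-average and choose-max rules are common textbook baselines of our own choosing, not the collection's reference algorithms.

```python
import numpy as np

def fuse_average(visual, infrared, w=0.5):
    """Weighted-average grayscale fusion; inputs are float arrays in [0, 1]."""
    return np.clip(w * visual + (1 - w) * infrared, 0.0, 1.0)

def fuse_max(visual, infrared):
    """Choose-max fusion: keep the locally brighter band per pixel."""
    return np.maximum(visual, infrared)

# Synthetic 4x4 stand-ins for a registered visual / LWIR frame pair.
vis = np.full((4, 4), 0.2)
ir = np.zeros((4, 4))
ir[1:3, 1:3] = 0.9  # warm target visible only in the infrared band
fused = fuse_max(vis, ir)
print(fused[2, 2], fused[0, 0])  # target pixel vs background pixel
```

Choose-max preserves the hot target at full contrast, while the weighted average dims it; comparing such trade-offs across scenes is exactly what a registered multi-band dataset enables.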
- Published
- 2017
- Full Text
- View/download PDF
22. Controlling readability of head-fixed large field-of-view displays
- Author
- Frank L. Kooi and Alexander Toet
- Subjects
- visual acuity, luminance polarity, aniseikonia, eye movement, legibility, crowding, stereopsis, optometry, ophthalmology
- Abstract
Background. The information displayed on a Head Mounted Display (HMD) can only be read by making eye movements, since head movements have no effect on the ocular image position. Aniseikonia (a common visual deficit) is expected to cause eye strain and limit the readability of large-FoV binocular HMDs. As the FoV increases, the screen layout needs to optimize overall display readability by preventing clutter while taking common optometric conditions into account. Methods. We measured the ability to quickly determine the orientation of a target T (Т vs Ʇ) surrounded by 4 randomly oriented (up, down, left, right) flanker T’s as a function of target-flanker spacing and eccentricity, in conditions where the target had either the same or the opposite luminance polarity as the flankers. All 12 subjects scored normal on relevant optometric tests (stereopsis, visual acuity, Awaya aniseikonia test, phoria). An aniseikonic lens placed in front of one eye optically enlarged the image by 2½%, simulating a common optometric condition. The additional delay caused by the presence of the four flankers is adopted as the ‘crowding’ component of the reaction time. Results. Compared to the same-polarity condition, opposite polarity reduced the crowding time by a factor of 2.3 (p < 0.001). The crowding times can be described as an extension of Fitts’ law. Unexpectedly, the mild aniseikonia condition doubled the crowding time (p < 0.001) and caused the highest level of eye strain (p < 0.001). Conclusion. For all eccentricities and target-flanker spacings, the crowding time more than halved in the opposite-polarity condition, while it doubled with the addition of just 2½% aniseikonia. Practical implications. Even users with mild aniseikonia are likely to experience problems while reading a large-FoV HMD. ‘Polarity decluttering’ can significantly enhance symbology legibility.
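The claim that crowding times behave like an extension of Fitts' law can be made concrete with the classic index-of-difficulty formulation, where tighter target-flanker spacing acts like a smaller effective target width. The coefficients and spacings below are illustrative placeholders, not fitted values from the study.

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Fitts' law: response time grows linearly with the
    index of difficulty ID = log2(2D / W)."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Smaller target-flanker spacing -> smaller effective width -> longer time.
for spacing in (4.0, 2.0, 1.0):  # degrees of visual angle, hypothetical
    print(round(fitts_time(distance=15.0, width=spacing), 2))
```

Each halving of the spacing adds a constant `b` to the predicted time, which is the logarithmic trade-off Fitts' law describes.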
- Published
- 2019
- Full Text
- View/download PDF
23. Visual processing of symbology in head-fixed large Field-of-View displays
- Author
-
Alexander Toet, Frank L. Kooi, and S. Hoving
- Subjects
Visual processing ,Ophthalmology ,Large field of view ,Computer science ,Head (linguistics) ,business.industry ,Computer vision ,Artificial intelligence ,business ,Sensory Systems - Abstract
Background. A Head Mounted Display (HMD) is, unlike all other displays, fixed to the head, making eye movements the sole option to scan the display. While the largest saccades easily exceed 50 deg (Collewijn et al., 1988), naturally occurring saccades typically stay within 15 degrees (Adler & Stark, 1975). While attractive for many applications, an HMD also forms a liability: large-FoV HMDs are known to cause eye-strain (Kooi, 1997) and the rate of information uptake is expected to decrease towards the edges. Methods. We measured the ability of 12 subjects to quickly determine the orientation (Т vs Ʇ) of a target T surrounded by 4 randomly oriented (up, down, left, right) flanker T’s as a function of 1) target-flanker spacing or ‘crowding’ (small/medium/large), 2) flanker polarity, and 3) eccentricity (15/30/45 deg). The one-hour test was repeated in reverse order after a 15 min break. Visual comfort was assessed with questionnaires. Results. Reaction time increased with crowding and symbol eccentricity, and decreased with opposite target-flanker polarity (all p values < 0.001). Contrary to our expectations, reaction time decreased after the break, suggesting saccadic motility improves over time (Parsons & Ivry, 2018). Eye strain showed a small increase with eccentricity (p < 0.037). Conclusions. These results confirm that ocular motility appears to be trainable. The dynamics of HMD information uptake resembles Fitts’ law. Practical implications. Initial training reduces eye strain. Combined with the ocular motility data from the references, a 30 deg Field-of-View is a compromise between maximal overall symbology uptake and minimal eye strain.
- Published
- 2019
- Full Text
- View/download PDF
24. List of Contributors
- Author
-
Nounagnon F. Agbangla, Atahan Agrali, Cédric T. Albinet, Awad Aljuaid, Guillaume Andéol, Jean M. André, Pietro Aricò, Branthomme Arnaud, Romain Artico, Michel Audiffren, Hasan Ayaz, Fabio Babiloni, Wendy Baccus, Carryl L. Baldwin, Hubert Banville, Klaus Bengler, Bruno Berberian, Jérémy Bergeron-Boucher, Ali Berkol, Pierre Besson, Siddharth Bhatt, Arianna Bichicchi, Martijn Bijlsma, Nikolai W.F. Bode, Vincent Bonnemains, Gianluca Borghini, Guillermo Borragán, Marc-André Bouchard, Angela Bovo, Eric Brangier, Anne-Marie Brouwer, Heinrich H. Bülthoff, Christopher Burns, Vincent Cabibel, Tuna E. Çakar, Daniel Callan, Aurélie Campagne, Travis Carlson, William D. Casebeer, Deniz Zengin Çelik, Cindy Chamberland, Caroline P.C. Chanel, Peter Chapman, Luc Chatty, Laurent Chaudron, Philippe Chevrel, Lewis L. Chuang, Caterina Cinel, Bernard Claverie, Antonia S. Conti, Yves Corson, Johnathan Crépeau, Adrian Curtin, Frédéric Dehais, Arnaud Delafontaine, Gaétane Deliens, Arnaud Delorme, Stefano I. Di Domenico, Gianluca Di Flumeri, Jean-Marc Diverrez, Manh-Cuong Do, Mengxi Dong, Andrew T. Duchowski, Anirban Dutta, Lydia Dyer, Sonia Em, Kate Ewing, Stephen Fairclough, Brian Falcone, Tiago H. Falk, Sara Feldman, Ying Xing Feng, Victor S. Finomore, Nina Flad, Alice Formwalt, Alexandra Fort, Paul Fourcade, Marc A. Fournier, Jérémy Frey, C. Gabaude, Olivier Gagey, Marc Garbey, Liliana Garcia, Thibault Gateau, Lukas Gehrke, Nancy Getchell, Evanthia Giagloglou, Christiane Glatz, Kimberly Goodyear, Robert J. Gougelet, Jonas Gouraud, Klaus Gramann, Dhruv Grewal, Carlos Guerrero-Mosquera, Céline Guillaume, Martin Hachet, Alain Hamaoui, Gabriella M. Hancock, Peter A. Hancock, Ahmad Fadzil M. Hani, Amanda E. Harwood, Mitsuhiro Hayashibe, Terry Heiman-Patterson, Girod Hervé, Maarten A.J. Hogervorst, Amy L. Holloway, Jean-Louis Honeine, Keum-Shik Hong, Klas Ihme, Kurtulus Izzetoglu, Meltem Izzetoglu, Philip L. Jackson, Christophe Jallais, Christian P. 
Janssen, Branislav Jeremic, Meike Jipp, Evelyn Jungnickel, Hélio Kadogami, Gozde Kara, Waldemar Karwowski, Quinn Kennedy, Theresa T. Kessler, Muhammad J. Khan, Rayyan A. Khan, Marius Klug, Amanda E. Kraft, Michael Krein, Ute Kreplin, Bartlomiej Kroczek, Lauens R. Krol, Frank Krueger, Ombeline Labaune, Daniel Lafond, Claudio Lantieri, Paola Lanzi, Amine Laouar, Dargent Lauren, Rachel Leproult, Véronique Lespinet-Najib, Ling-Yin Liang, Fabien Lotte, Ivan Macuzic, Nicolas Maille, Horia A Maior, S. Malin, Alexandre Marois, Franck Mars, Nicolas Martin, Nadine Matton, Magdalena Matyjek, Kevin McCarthy, Ryan McKendrick, Tom McWilliams, Bruce Mehler, Ranjana Mehta, Ranjana K. Mehta, Mathilde Menoret, Yoshihiro Miyake, Alexandre Moly, Rabia Murtza, Makii Muthalib, Mark Muthalib, Noman Naseer, Jordan Navarro, Roger Newport, Anton Nijholt, Michal Ociepka, Morellec Olivier, Ahmet Omurtag, Banu Onaral, Hiroki Ora, Bob Oudejans, Özgürol Öztürk, Martin Paczynski, Nico Pallamin, Raja Parasuraman, Mark Parent, René Patesson, Kou Paul, Philippe Peigneux, Matthias Peissner, G. Pepin, Stephane Perrey, Vsevolod Peysakhovich, Markus Plank, Riccardo Poli, Kathrin Pollmann, Simone Pozzi, Nancy M. Puccinelli, Jean Pylouster, Kerem Rızvanoğlu, Martin Ragot, Bryan Reimer, Emanuelle Reynaud, Joohyun Rhee, Jochem W. Rieger, Anthony J. Ries, Benoit Roberge-Vallières, Achala H. Rodrigo, Anne L. Roggeveen, Ricardo Ron-Angevin, Guillaume Roumy, Raphaëlle N. Roy, Anthony C. Ruocco, Bartlett A. Russell, Jon Russo, Richard M. Ryan, Amanda Sargent, Kelly Satterfield, Ben D. Sawyer, Sébastien Scannella, Menja Scheer, Melissa Scheldrup, Alex Schilder, Nicolina Sciaraffa, Lee Sciarini, Magdalena Senderecka, Sarah Sharples, Tyler H. Shaw, Patricia A. Shewokis, Andrea Simone, Hichem Slama, Alastair D. Smith, Bertille Somon, Hiba Souissi, Moritz Späth, Kimberly L. Stowers, Clara Suied, Junfeng Sun, Rajnesh Suri, Tong Boon Tang, Yingying Tang, Emre O. 
Tartan, Nadège Tebbache, Franck Techer, Cengiz Terzibas, Catherine Tessier, Claudine Teyssedre, Hayley Thair, Jean-Denis Thériault, Alexander Toet, Shanbao Tong, Jonathan Touryan, Amy Trask, Sébastien Tremblay, Anirudh Unni, François Vachon, Davide Valeriani, Benoît Valéry, Helma van den Berg, Valeria Vignali, Mathias Vukelić, Jijun Wang, Max L. Wilson, Emily Wusch, Petros Xanthopoulos, Eric Yiou, Amad Zafar, Thorsten O. Zander, Matthias D. Ziegler, and Ivana Živanovic-Macuzic
- Published
- 2019
- Full Text
- View/download PDF
25. Fusion of Images from Different Electro-Optical Sensing Modalities for Surveillance and Navigation Tasks
- Author
-
Alexander Toet
- Published
- 2018
- Full Text
- View/download PDF
26. Effects of an Acute Social Stressor on Trustworthiness Judgements, Physiological and Subjective Measures – Differences Between Civilians and Military Personnel
- Author
-
Martijn Bijlsma, Anne-Marie Brouwer, Helma van den Berg, and Alexander Toet
- Subjects
Social stress ,Fight-or-flight response ,Military personnel ,Trustworthiness ,genetic structures ,media_common.quotation_subject ,Stressor ,Stress (linguistics) ,Pupil size ,Psychological resilience ,Psychology ,media_common ,Clinical psychology - Abstract
We induced acute social stress in military and civilian participants using a novel well-controlled paradigm. The paradigm resulted in strong increases of heart rate, skin conductance, and pupil size. Military participants showed weaker physiological and subjective stress response than civilians, suggesting stronger resilience to stress in military personnel compared to civilians. Both groups judged neutral faces less trustworthy after, compared to before the stressor, indicating an effect of stress on an in-principle unrelated task.
- Published
- 2018
- Full Text
- View/download PDF
27. Effects of mediated social touch on affective experiences and trust
- Author
-
Alexander Toet, Stefanie M. Erk, and Johannes Bernardus Fransiscus van Erp
- Subjects
Social touch ,media_common.quotation_subject ,lcsh:Medicine ,Psychiatry and Psychology ,computer.software_genre ,Trust ,General Biochemistry, Genetics and Molecular Biology ,Mediated touch ,Affective experience ,Psychophysics ,Medicine ,media_common ,Haptic technology ,Extraversion and introversion ,EWI-26744 ,Multimedia ,business.industry ,General Neuroscience ,lcsh:R ,Shared experience ,Interpersonal touch ,General Medicine ,Sadness ,Human–Computer Interaction ,Feeling ,IR-99257 ,General Agricultural and Biological Sciences ,Skin conductance ,business ,METIS-315559 ,Social psychology ,computer - Abstract
This study investigated whether communication via mediated hand pressure during a remotely shared experience (watching an amusing video) can (1) enhance recovery from sadness, (2) enhance the affective quality of the experience, and (3) increase trust towards the communication partner. Thereto participants first watched a sad movie clip to elicit sadness, followed by a funny one to stimulate recovery from sadness. While watching the funny clip they signaled a hypothetical fellow participant every time they felt amused. In the experimental condition the participants responded by pressing a hand-held two-way mediated touch device (a Frebble), which also provided haptic feedback via simulated hand squeezes. In the control condition they responded by pressing a button and they received abstract visual feedback. Objective (heart rate, galvanic skin conductance, number and duration of joystick or Frebble presses) and subjective (questionnaires) data were collected to assess the emotional reactions of the participants. The subjective measurements confirmed that the sad movie successfully induced sadness while the funny movie indeed evoked more positive feelings. Although their ranking agreed with the subjective measurements, the physiological measurements confirmed this conclusion only for the funny movie. The results show that recovery from movie induced sadness, the affective experience of the amusing movie, and trust towards the communication partner did not differ between both experimental conditions. Hence, feedback via mediated hand touching did not enhance either of these factors compared to visual feedback. 
Further analysis of the data showed that participants scoring low on Extraversion (i.e., persons who are more introverted) or low on Touch Receptivity (i.e., persons who do not like to be touched by others) felt better understood by their communication partner when receiving mediated touch feedback instead of visual feedback, while the opposite was found for participants scoring high on these factors. The implications of these results for further research are discussed, and some suggestions for follow-up experiments are presented.
- Published
- 2015
- Full Text
- View/download PDF
28. IR Contrast Enhancement Through Log-Power Histogram Modification
- Author
-
Alexander Toet and Tirui Wu
- Subjects
Contrast enhancement ,Optics ,Computer science ,business.industry ,Histogram modification ,business ,Power (physics) - Published
- 2015
- Full Text
- View/download PDF
29. Are food cinemagraphs more yummy than stills?
- Author
-
Alexander Toet, Daisuke Kaneko, Jan B. F. van Erp, and Martin G. van Schaik
- Subjects
03 medical and health sciences ,0302 clinical medicine ,Dynamics (music) ,Food products ,0502 economics and business ,05 social sciences ,050211 marketing ,030212 general & internal medicine ,Food science ,Psychology ,Affect (psychology) ,Cognitive psychology - Abstract
Cinemagraphs are a new medium that is intermediate between photographs and videos: most of the frame is static, while some details are animated in a seamless loop. Given their vivid appearance we expected that food cinemagraphs evoke stronger affective and appetitive responses than their static counterparts (stills). In this study we measured the Liking (affective) and Wanting (appetitive) responses to both cinemagraphs and stills representing a wide range of different food products. Our results show that food cinemagraphs only slightly increase Wanting scores and do not affect Liking scores, compared to similar stills. Although we found no main effect of image dynamics on Liking, we did observe a significant effect for some individual food items. However, the effects of image dynamics on Liking and Wanting appeared to be product specific: for some products dynamic images were scored higher on Liking or Wanting, while static images were scored higher for other products. This suggests that image dynamics intensifies subjective Liking and Wanting judgements but does not alter their polarity. Further research is needed to resolve this issue.
- Published
- 2017
- Full Text
- View/download PDF
30. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview
- Author
-
Piet Bijl, Maarten A. Hogervorst, and Alexander Toet
- Subjects
Target Acquisition ,ECOMOS ,Thermal test ,TOD ,Computer science ,business.industry ,Near-infrared spectroscopy ,Image processing ,02 engineering and technology ,01 natural sciences ,Superresolution ,Target acquisition ,EOSTAR ,010309 optics ,sensor model ,020210 optoelectronics & photonics ,EO ,0103 physical sciences ,IR ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,business ,sensor test - Abstract
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview. © 2017 COPYRIGHT SPIE. The Society of Photo-Optical Instrumentation Engineers (SPIE)
- Published
- 2017
- Full Text
- View/download PDF
31. Semi-hidden target recognition in gated viewer images fused with thermal IR images
- Author
-
Menno A. Smeelen, Alexander Toet, Piet B. W. Schwering, and Marco Loog
- Subjects
Fusion scheme ,Ir camera ,Image fusion ,Pixel ,business.industry ,Computer science ,Image quality ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Observer (special relativity) ,Hardware and Architecture ,Signal Processing ,Optimal combination ,Contextual information ,Computer vision ,Artificial intelligence ,business ,Software ,Information Systems - Abstract
Defense and security surveillance scenarios typically involve the detection and classification of targets in complex and dynamic backgrounds. Imaging systems deployed for this purpose should therefore provide imagery that enables optimal simultaneous recognition of both targets and their context. Here we investigate the recognition of semi-hidden targets, which are targets that are embedded in complex scenes, and which may either be occluded by or merged with other details in the scene. Imagery of semi-hidden targets obtained with conventional visual (TV) and Infra-Red (IR) cameras is typically not optimal for recognition and classification purposes. Previous studies on image fusion did not consider semi-hidden targets. This study investigates the potential benefits of (1) adding a laser range gated viewer (GV) to an IR camera and of (2) fusing GV and IR imagery for the recognition of semi-hidden targets. A combination of an Image Quality Metric (IQM) and an accurate saliency metric is used to select a fusion method that is optimal for semi-hidden target recognition. The results of both metrics are validated through a human observer experiment. For application in very complex scenes (in which target recognition remains difficult after fusion) we designed a background dimming algorithm that either uniformly dims the entire background or applies less dimming in the local target background or in regions with important contextual information, without affecting the target representation itself. The optimal combination of fusion method and amount of dimming is determined through a second observer experiment. In a third observer experiment, we tested if target motion influences the preferred amount of dimming. We find that fusing GV with IR imagery improves human recognition of semi-hidden targets. A simple pixel-based approach with a PCA-based weighted fusion scheme appears to be the optimal fusion method. 
Contextual dimming improves target recognition in complex backgrounds. In addition, moving objects appear to affect observers' dimming preference, but further research is needed to quantify this effect.
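The abstract above names a pixel-based PCA-weighted fusion scheme as the optimal method for combining the gated viewer and IR imagery. A minimal sketch of such a rule, assuming global weights taken from the leading eigenvector of the two images' joint covariance (the function name and the global-weight simplification are illustrative, not the paper's exact implementation):

```python
import numpy as np

def pca_weighted_fuse(a, b):
    """Fuse two registered single-band images using weights derived from
    the leading eigenvector of their 2x2 joint covariance matrix."""
    X = np.stack([a.ravel(), b.ravel()])   # 2 x N sample matrix
    cov = np.cov(X)                        # 2x2 covariance of the two bands
    vals, vecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    v = np.abs(vecs[:, np.argmax(vals)])   # leading eigenvector, made non-negative
    w = v / v.sum()                        # normalise to convex weights
    return w[0] * a + w[1] * b
```

Because the weights are non-negative and sum to one, the fused image is everywhere a convex combination of the two inputs.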
- Published
- 2014
- Full Text
- View/download PDF
32. The Perception of Visual Uncertainty Representation by Non-Experts
- Author
-
Susanne Tak, Alexander Toet, and Jan B. F. van Erp
- Subjects
Information retrieval ,Point (typography) ,business.industry ,Computer science ,media_common.quotation_subject ,Computer Graphics and Computer-Aided Design ,Visualization ,Normal distribution ,Information visualization ,Data visualization ,Perception ,Signal Processing ,Computer Vision and Pattern Recognition ,business ,Representation (mathematics) ,Spatial analysis ,Software ,media_common ,Cognitive psychology - Abstract
We tested how non-experts judge point probability for seven different visual representations of uncertainty, using a case from an unfamiliar domain. Participants (n = 140) rated the probability that the boundary between two earth layers passed through a given point, for seven different visualizations of the positional uncertainty of the boundary. For all types of visualizations, most observers appear to construct an internal model of the uncertainty distribution that closely resembles a normal distribution. However, the visual form of the uncertainty range (i.e., the visualization type) affects this internal model and the internal model relates to participants' numeracy. We conclude that perceived certainty is affected by its visual representation. In a follow-up experiment we found no indications that the absence (or presence) of a prominent center line in the visualization affects the internal model. We discuss if and how our results inform which visual representation is most suitable for representing uncertainty and make suggestions for future work.
- Published
- 2014
- Full Text
- View/download PDF
33. Look Out, There is a Triangle behind You! The Effect of Primitive Geometric Shapes on Perceived Facial Dominance
- Author
-
Susanne Tak and Alexander Toet
- Subjects
Facial expression ,Facial affect ,media_common.quotation_subject ,lcsh:BF1-990 ,Experimental and Cognitive Psychology ,facial dominance ,Geometric shape ,triangles ,Sensory Systems ,Short and Sweet ,Ophthalmology ,lcsh:Psychology ,Artificial Intelligence ,Perception ,facial affect ,Valence (psychology) ,Psychology ,Social psychology ,facial expression ,media_common - Abstract
Previous research has shown that perceived facial valence is biased toward background valence. Here, we examine whether background dominance also affects perceived facial dominance. In particular, we hypothesized that downward-pointing triangles, which are known to convey threat, would affect perceived facial dominance. Participants judged perceived facial dominance of neutral faces presented overlaid on downward- or upward-pointing background triangles. Our results show that neutral faces are indeed judged more dominant when seen with a downward-pointing triangle in the background. The fact that simple geometric background shapes can affect facial judgments may have important implications for the design and experience of our daily environment and multimedia content.
- Published
- 2013
- Full Text
- View/download PDF
34. Multiscale image fusion through guided filtering
- Author
-
Alexander Toet and Maarten A. Hogervorst
- Subjects
Image fusion ,business.industry ,Computer science ,Multispectral image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Binary number ,02 engineering and technology ,01 natural sciences ,Image (mathematics) ,Weighting ,010309 optics ,Computer Science::Computer Vision and Pattern Recognition ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Noise (video) ,Scale (map) ,business - Abstract
We introduce a multiscale image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multiscale image fusion scheme achieves optimal spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multiscale fusion process. First, size-selective iterative guided filtering is applied to decompose the source images into base and detail layers at multiple levels of resolution. Then, frequency-tuned filtering is used to compute saliency maps at successive levels of resolution. Next, at each resolution level a binary weighting map is obtained as the pixelwise maximum of corresponding source saliency maps. Guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. The final fused image is obtained as the weighted recombination of the individual detail layers and the mean of the lowest resolution base layers. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. The method has a simple implementation and is computationally efficient.
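A single-level sketch of the pipeline described above (base/detail decomposition, a saliency-driven binary weight map taken as the pixelwise maximum, guided filtering of the weights with a source image as guidance, weighted recombination). The guided filter follows the standard box-filter formulation; the local-energy saliency here is a crude stand-in for the frequency-tuned saliency the abstract mentions, and all names are illustrative:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((h, w), dtype=float)
    k = 2 * r + 1
    for dy in range(k):          # sum the shifted windows, then normalise
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter(I, p, r, eps):
    """Edge-preserving smoothing of p, guided by I (standard formulation)."""
    mI, mp = box_filter(I, r), box_filter(p, r)
    var_I = box_filter(I * I, r) - mI * mI
    cov_Ip = box_filter(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)   # local linear model: q = a*I + b
    b = mp - a * mI
    return box_filter(a, r) * I + box_filter(b, r)

def fuse_single_level(x, y, r=2, eps=1e-3):
    """One decomposition level: base/detail split, saliency-driven binary
    weights, guided-filter weight smoothing, weighted recombination."""
    base_x, base_y = box_filter(x, r), box_filter(y, r)
    det_x, det_y = x - base_x, y - base_y
    sal_x = box_filter(np.abs(det_x), r)    # crude local-energy saliency
    sal_y = box_filter(np.abs(det_y), r)
    w = (sal_x >= sal_y).astype(float)      # pixelwise-maximum binary map
    w = np.clip(guided_filter(x, w, r, eps), 0.0, 1.0)
    return 0.5 * (base_x + base_y) + w * det_x + (1.0 - w) * det_y
```

The full method iterates this over multiple resolution levels; this sketch keeps one level to show how guided filtering restores spatial consistency of the binary weight map.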
- Published
- 2016
- Full Text
- View/download PDF
35. Improved colour matching technique for fused nighttime imagery with daytime colours
- Author
-
Alexander Toet and Maarten A. Hogervorst
- Subjects
business.industry ,media_common.quotation_subject ,Sensor fusion ,Visualization ,Transformation (function) ,Geography ,Night vision ,Contrast (vision) ,Daylight ,Computer vision ,Artificial intelligence ,Focus (optics) ,business ,Visibility ,media_common - Abstract
Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt.Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness (Toet et al., Opt.Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exact matching reference images are available the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (1st method) in a smooth transformation (2nd method). We designed and evaluated three new fusion schemes that focus on: i) a closer match with the daytime luminances, ii) improved saliency of hot targets and iii) improved discriminability of materials.
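The statistics-based mapping the abstract refers to (deriving the colour transformation from first-order statistical properties of a reference image) can be sketched as a per-channel mean/standard-deviation match. This is an assumption-laden illustration of the general technique, not the authors' exact implementation:

```python
import numpy as np

def match_first_order_stats(src, ref):
    """Shift and scale each channel of src so that its mean and standard
    deviation match those of the corresponding channel of ref."""
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(float)
        r = ref[..., c].astype(float)
        # standardise src channel, then re-express in ref's statistics
        out[..., c] = (s - s.mean()) / (s.std() + 1e-9) * r.std() + r.mean()
    return out
```

After the transform, each output channel carries the reference image's first-order statistics while preserving the spatial structure of the source.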
- Published
- 2016
- Full Text
- View/download PDF
36. Feature long axis size and local luminance contrast determine ship target acquisition performance: strong evidence for the TOD case
- Author
-
Alexander Toet, Piet Bijl, and Frank L. Kooi
- Subjects
Contrast transfer function ,business.product_category ,business.industry ,0211 other engineering and technologies ,Feature recognition ,02 engineering and technology ,01 natural sciences ,Luminance ,Target acquisition ,010309 optics ,Mast (sailing) ,Geography ,Minimum resolvable contrast ,Hull ,021105 building & construction ,0103 physical sciences ,Computer vision ,Funnel ,Artificial intelligence ,business - Abstract
Visual images of a civilian target ship on a sea background were produced using a CAD model. The total set consisted of 264 images and included 3 different color schemes, 2 ship viewing aspects, 5 sun illumination conditions, 2 sea reflection values, 2 ship positions with respect to the horizon and 3 values of atmospheric contrast reduction. In a perception experiment, the images were presented on a display in a long darkened corridor. Observers were asked to indicate the range at which they were able to detect the ship and classify the following 5 ship elements: accommodation, funnel, hull, mast, and hat above the bridge. This resulted in a total of 1584 Target Acquisition (TA) range estimates for two observers. Next, the ship contour, ship elements and corresponding TA ranges were analyzed applying several feature size and contrast measures. Most data coincide on a contrast versus angular size plot using (1) the long axis as characteristic ship/ship feature size and (2) local Weber contrast as characteristic ship/ship feature contrast. Finally, the data were compared with a variety of visual performance functions assumed to be representative for Target Acquisition: the TOD (Triangle Orientation Discrimination), MRC (Minimum Resolvable Contrast), CTF (Contrast Threshold Function), TTP (Targeting Task Performance) metric and circular disc detection data for the unaided eye (Blackwell). The results provide strong evidence for the TOD case: both position and slope of the TOD curve match the ship detection and classification data without any free parameter. In contrast, the MRC and CTF are too steep, the TTP and disc detection curves are too shallow and all these curves need an overall scaling factor in order to coincide with the ship and ship feature recognition data.
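The two measures the analysis above converges on, local Weber contrast and the angular subtense of the feature's long axis, can be written down directly. A small sketch (function names are illustrative):

```python
import math

def weber_contrast(l_feature, l_background):
    """Local Weber contrast: (L_t - L_b) / L_b, with luminances in cd/m^2."""
    return (l_feature - l_background) / l_background

def angular_size_deg(long_axis_m, range_m):
    """Angular subtense (in degrees) of a feature's long axis at a given range."""
    return math.degrees(2.0 * math.atan(long_axis_m / (2.0 * range_m)))
```

Plotting each ship element's Weber contrast against its angular size at the acquisition range is what lets the data from different elements and conditions coincide on a single curve.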
- Published
- 2016
- Full Text
- View/download PDF
37. Effects of signals of disorder on fear of crime in real and virtual environments
- Author
-
Martin G. van Schaik and Alexander Toet
- Subjects
Soundscape ,Social Psychology ,Ecological validity ,media_common.quotation_subject ,Poison control ,Fear of crime ,computer.software_genre ,Occupational safety and health ,Feeling ,Virtual machine ,Injury prevention ,Psychology ,computer ,Social psychology ,Applied Psychology ,media_common - Abstract
Despite the fact that virtual environments are increasingly deployed to study the relation between urban planning, physical and social disorder, and fear of crime, their ecological validity for this type of research has not been established. This study compares the effects of similar signs of public disorder (litter, warning signs, cameras, signs of vandalism and car burglary) in an urban neighborhood and in its virtual counterpart on the subjective perception of safety and livability of the neighborhood. Participants made a walking tour through either the real or the virtual neighborhood, which was either in an orderly (baseline) state or adorned with numerous signs of public disorder. During their tour they reported the signs of disorder they noticed and the degree to which each of these affected their emotional state and feelings of personal safety. After finishing their tour they appraised the perceived safety and livability of the environment. Both in the real and in the simulated urban neighborhood, signs of disorder evoked associations with social disorder. In all conditions, neglected greenery was spontaneously reported as a sign of disorder. Disorder did not inspire concern for personal safety in reality and in the virtual environment with a realistic soundscape. However, in the absence of sound disorder compromised perceived personal safety in the virtual environment. Signs of disorder were associated with negative emotions more frequently in the virtual environment than in its real-world counterpart, particularly in the absence of sound. Also, signs of disorder degraded the perceived livability of the virtual, but not of the real neighborhood. Hence, it appears that people focus more on details in a virtual environment than in reality. We conclude that both a correction for this focusing effect and realistic soundscapes are required to make virtual environments an appropriate medium for both etiological (e.g. 
the effects of signs of disorder on fear of crime) and intervention (e.g. CPTED) research. © 2012 Elsevier Ltd.
- Published
- 2012
- Full Text
- View/download PDF
38. Augmenting full colour-fused multi-band night vision imagery with synthetic imagery in real-time
- Author
-
Alexander Toet, R. van Son, Hogervorst, and Judith Dijk
- Subjects
PCS - Perceptual and Cognitive Systems ,MSG - Modelling Simulation & Gaming ,II - Intelligent Imaging ,Vision ,Image quality ,Computer science ,image fusion ,real-time fusion ,night vision ,natural colour mapping ,Inertial measurement unit ,Night vision ,Human ,Organisation ,Physics & Electronics ,Computer vision ,Visibility ,BSS - Behavioural and Societal Sciences ,TS - Technical Sciences ,Remote sensing ,Image fusion ,Orientation (computer vision) ,business.industry ,false colour ,augmented reality ,color ,Computer Science Applications ,Global Positioning System ,General Earth and Planetary Sciences ,Augmented reality ,Artificial intelligence ,business - Abstract
We present the design and first field trial results of an all-day all-weather enhanced and synthetic-fused multi-band colour night vision surveillance and observation system. The system augments a fused and dynamic three-band natural-colour night vision image with synthetic 3D imagery in real-time. The night vision sensor suite consists of three cameras, sensitive in, respectively, the visual (400–700 nm), the near-infrared (NIR, 700–1000 nm) and the long-wave infrared (LWIR, 8–14 μm) bands of the electromagnetic spectrum. The optical axes of the three cameras are aligned. Image quality of the fused sensor signals is enhanced in real-time through dynamic noise reduction, super resolution and local adaptive contrast enhancement. The quality of the LWIR image is enhanced through scene-based non-uniformity correction. The visual and NIR signals are used to represent the fused multi-band night vision image in natural daytime colours, using the Colour-the-Night colour remapping technique. Colour remapping can also be deployed to enhance the visibility of thermal targets that are camouflaged in the visual and NIR range of the spectrum. The dynamic false-colour night-time images are augmented with corresponding synthetic 3D scene views, generated in real-time using a geometric 3D scene model in combination with position and orientation information supplied by the Global Positioning System and inertial sensors of the system. Initial field trials show that this system provides enhanced situational information in various low-visibility conditions.
- Published
- 2011
- Full Text
- View/download PDF
39. Emotional Effects of Dynamic Textures
- Author
-
Alexander Toet, Menno Henselmans, Marcel P. Lucassen, Theo Gevers, and Intelligent Sensory Information Systems (IVI, FNWI)
- Subjects
Pleasure ,PCS - Perceptual and Cognitive Systems ,media_common.quotation_subject ,lcsh:BF1-990 ,emotion ,Experimental and Cognitive Psychology ,pleasure ,02 engineering and technology ,Texture (music) ,dominance ,050105 experimental psychology ,Motion (physics) ,arousal ,Artificial Intelligence ,Perception ,0202 electrical engineering, electronic engineering, information engineering ,dynamic textures ,Natural (music) ,0501 psychology and cognitive sciences ,Set (psychology) ,Dominance ,ComputingMethodologies_COMPUTERGRAPHICS ,media_common ,Emotion ,05 social sciences ,020207 software engineering ,Spatial contrast ,Contrast (music) ,BSS - Behavioural and Societal Sciences ,Sensory Systems ,Ophthalmology ,lcsh:Psychology ,Dynamics (music) ,Dynamic textures ,Arousal ,Psychology ,Social psychology ,Human ,Research Article ,Cognitive psychology - Abstract
This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (e.g., music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various other media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval.
- Published
- 2011
- Full Text
- View/download PDF
40. Structural similarity determines search time and detection probability
- Author
-
Alexander Toet and TNO Defensie en Veiligheid
- Subjects
Structural similarity index (SSIM) ,genetic structures ,Vision ,Computer science ,Structural similarity ,media_common.quotation_subject ,Luminance ,Similarity (network science) ,Clutter ,Contrast (vision) ,Detection time ,media_common ,Visual search ,business.industry ,Matched filter ,Pattern recognition ,Condensed Matter Physics ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Target structure similarity (TSSIM) ,Metric (mathematics) ,Artificial intelligence ,Detection probability ,business - Abstract
The recently introduced TSSIM clutter metric is currently the best predictor of human visual search performance for natural images (Chang and Zhang [1]). The TSSIM quantifies the similarity of a target to its background in terms of luminance, contrast, and structure. It correlates more strongly with experimental mean search times and detection probabilities than other clutter metrics (Chang and Zhang [1,2]). Here we show that it is predominantly the structural similarity component of the TSSIM which determines human visual search performance, whereas the luminance and contrast components of the TSSIM show no relation with human performance. This result agrees with previous reports that human observers mainly rely on structural features to recognize image content. Since the structural similarity component of the TSSIM is equivalent to a matched filter, it appears that matched filtering predicts human visual performance when searching for a known target. © 2010 Elsevier B.V. All rights reserved.
- Published
- 2010
- Full Text
- View/download PDF
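The structure component that the abstract above identifies as the decisive factor of the TSSIM can be illustrated with a short sketch. This is our own minimal reconstruction of the standard SSIM structure term, not the implementation from the cited work; the function name and the small stabilizing constant `c3` are our assumptions.

```python
import numpy as np

def structure_similarity(target, patch, c3=1e-6):
    """Structure component of SSIM between a target template and an
    equally sized image patch: the correlation of the mean-removed
    signals, s = (sigma_xy + C3) / (sigma_x * sigma_y + C3)."""
    t = target.astype(float) - target.mean()
    p = patch.astype(float) - patch.mean()
    n = t.size - 1
    sigma_xy = (t * p).sum() / n          # sample cross-covariance
    sigma_x = np.sqrt((t * t).sum() / n)  # sample standard deviations
    sigma_y = np.sqrt((p * p).sum() / n)
    return (sigma_xy + c3) / (sigma_x * sigma_y + c3)
```

Because both signals are mean-removed and normalized by their standard deviations, the score reduces to a normalized cross-correlation, which is exactly the response of a matched filter for the target template — the equivalence the abstract refers to.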
41. Effects of Third Person Perspective on Affective Appraisal and Engagement: Findings From SECOND LIFE
- Author
-
Alexander Toet and Ellen Schuurink
- Subjects
Perspective (graphical) ,General Social Sciences ,computer.software_genre ,Metaverse ,Computer Science Applications ,Task (project management) ,Arousal ,Virtual machine ,Valence (psychology) ,Affective appraisal ,Psychology ,Social psychology ,computer ,Avatar - Abstract
This study investigates the influence of a first-person perspective (1PP) and a third-person perspective (3PP), respectively, on the affective appraisal and on the user engagement of a three-dimensional virtual environment in SECOND LIFE. Participants explored the environment while searching for five targets during a limited time span, using either a 1PP or a 3PP. No significant overall effect was found for viewing perspective on the appraisal of the three-dimensional virtual environment on the dimensions of arousal and valence. However, a 3PP yields more perceived control over the avatar and the events, which is a requirement for engagement. Analysis of the performance on the search task shows that participants using a 3PP find more objects but also need more time to find them. The present results suggest that a 3PP conveys a more distinct impression of the environment, thereby increasing engagement, and probably induces a different viewing strategy. Hence, a 3PP appears preferable for simulation and training applications in which the correct assessment of the affective properties of an environment is essential.
- Published
- 2010
- Full Text
- View/download PDF
42. Towards cognitive image fusion
- Author
-
Stavri G. Nikolov, John J. Lewis, David Bull, C N Canagarajah, Timothy D. Dixon, Maarten A. Hogervorst, and Alexander Toet
- Subjects
Image fusion ,Fusion ,Modalities ,Computer science ,business.industry ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image (mathematics) ,Hardware and Architecture ,Salient ,Perception ,Signal Processing ,Segmentation ,Computer vision ,Artificial intelligence ,Set (psychology) ,business ,Software ,Information Systems ,media_common - Abstract
The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, the cognitive aspects of multisensor image fusion have not received much attention in the development of these methods. In this study we investigate how humans interpret visual and infrared images, and we compare the interpretation of these individual image modalities to their fused counterparts, for different image fusion schemes. This was done in an attempt to test to what degree image fusion schemes can enhance human perception of the structural layout and composition of realistic outdoor scenes. We asked human observers to manually segment the details they perceived as most prominent in a set of corresponding visual, infrared and fused images. For each scene, the segmentations of the individual input image modalities were used to derive a joint reference ('gold standard') contour image that represents the visually most salient details from both of these modalities and for that particular scene. The resulting reference images were then used to evaluate the manual segmentations of the fused images, using a precision-recall measure as the evaluation criterion. In this sense, the best fusion method provides the largest number of correctly perceived details (originating from each of the individual modalities that were used as input for the fusion scheme) and the smallest amount of false alarms (fusion artifacts or illusory details). A comparison with an objective score of subject performance indicates that the reference contour method indeed appears to characterize the performance of observers using the results of the fusion schemes. The results show that this evaluation method can provide valuable insight into the way fusion schemes combine perceptually important details from the individual input image modalities.
Given a reference contour image, the method can potentially be used to design image fusion schemes that are optimally tuned to human visual perception for different applications and scenarios (e.g. environmental or weather conditions).
- Published
- 2010
- Full Text
- View/download PDF
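The precision-recall criterion described in the abstract above can be sketched for binary contour masks. This is a hypothetical illustration of the measure, not the authors' evaluation code; the array shapes and function name are our assumptions.

```python
import numpy as np

def precision_recall(fused_mask, reference_mask):
    """Precision/recall of a binary segmentation of a fused image
    against a joint reference ('gold standard') contour mask.
    Precision: fraction of marked pixels that lie in the reference
    (penalizes fusion artifacts / illusory details).
    Recall: fraction of reference pixels that were marked
    (rewards correctly perceived details)."""
    fused = fused_mask.astype(bool)
    ref = reference_mask.astype(bool)
    tp = np.logical_and(fused, ref).sum()    # true positives
    precision = tp / max(fused.sum(), 1)     # guard against empty masks
    recall = tp / max(ref.sum(), 1)
    return precision, recall
```

Under this criterion the best fusion scheme is the one whose segmentations score high on both axes: high recall means few salient details were lost from either input modality, and high precision means few spurious details were introduced by the fusion.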
43. Fast natural color mapping for night-time imagery
- Author
-
Alexander Toet and Maarten A. Hogervorst
- Subjects
Color histogram ,Color image ,business.industry ,Computer science ,Binary image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color balance ,False color ,Color printing ,Color quantization ,Hardware and Architecture ,Signal Processing ,Color mapping ,Computer vision ,Artificial intelligence ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems - Abstract
We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the combination of a multi-band image and a corresponding natural color daytime reference image. The mapping optimizes the match between the multi-band image and the reference image, and yields a night-vision image with a natural daytime color appearance. The lookup-table-based mapping procedure is extremely simple and fast and provides object color constancy. Once it has been derived, the color mapping can be deployed in real-time to different multi-band image sequences of similar scenes. Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times.
- Published
- 2010
- Full Text
- View/download PDF
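The lookup-table-based colour mapping summarized above can be sketched for a two-band sensor image and a co-registered daytime reference. This is a simplified reconstruction under our own assumptions (uint8 inputs, a coarse quantization into `levels` bins per band, table entries set to the mean reference colour per bin); it is not the published Colour-the-Night implementation.

```python
import numpy as np

def derive_color_lut(multiband, reference, levels=32):
    """Derive a (levels x levels x 3) lookup table mapping each quantized
    two-band sensor combination to the average natural color of the
    co-registered daytime reference pixels with that combination.
    multiband: (H, W, 2) uint8 sensor image; reference: (H, W, 3) uint8."""
    idx = (multiband.astype(int) * levels) // 256   # quantize both bands
    lut_sum = np.zeros((levels, levels, 3))
    lut_cnt = np.zeros((levels, levels, 1))
    np.add.at(lut_sum, (idx[..., 0], idx[..., 1]), reference.astype(float))
    np.add.at(lut_cnt, (idx[..., 0], idx[..., 1]), 1.0)
    lut = lut_sum / np.maximum(lut_cnt, 1.0)        # mean color per bin
    return lut.astype(np.uint8)

def apply_color_lut(multiband, lut):
    """Render a multi-band night-time image in natural daytime colors
    via a single table lookup per pixel."""
    levels = lut.shape[0]
    idx = (multiband.astype(int) * levels) // 256
    return lut[idx[..., 0], idx[..., 1]]
```

Because applying the mapping is a single indexing operation, it runs in real time, and because identical sensor combinations always map to the same colour, it provides the object colour constancy the abstract mentions. Once derived, the same table can be reused for other multi-band sequences of similar scenes.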
44. Restricting the Vertical and Horizontal Extent of the Field-of-View: Effects on Manoeuvring Performance
- Author
-
Alexander Toet, Sander E. M. Jansen, and Nicolaas Johannes Delleman
- Subjects
tv.genre ,Heading (navigation) ,Traverse ,Horizontal and vertical ,Control theory ,Computer science ,Obstacle course ,Orientation (geometry) ,Field of view ,tv ,Simulation ,Visual field ,Course (navigation) - Abstract
It is known that field-of-view restrictions affect distance estimation, postural equilibrium, and the ability to control heading. These are all important factors when manoeuvring on foot through complex structured environments. Although considerable research has been devoted to the horizontal angular extent of the Field-of-View (FoV), rather less attention has been paid to the vertical angle. The present study investigated the effects of both vertical and horizontal FoV restriction on manoeuvring performance and head movement while traversing an obstacle course consisting of three different types of obstacles. A restriction of both the horizontal and vertical angle of the visual field resulted in increased time needed to traverse the course. In addition, the extent of head movement during traversal was affected by vertical, but not horizontal, viewing restriction. Furthermore, it was investigated whether performance could be improved by altering the orientation of the visual field instead of its dimensions. The results do not indicate this. The findings of this study can be used to formulate requirements for the selection and development of field-of-view limiting devices, such as head-mounted displays and night-vision goggles.
- Published
- 2010
- Full Text
- View/download PDF
45. Locomotion through a Complex Environment with Limited Field-of-View
- Author
-
Mirela Kahrimanovic, Alexander Toet, Nico J. Delleman, and TNO Defensie en Veiligheid
- Subjects
Male ,Time Factors ,Traverse ,Computer science ,Experimental and Cognitive Psychology ,Angular velocity ,Environment ,Time ,Course (navigation) ,Young Adult ,Control theory ,Humans ,Motor skill ,Communication ,tv.genre ,business.industry ,Obstacle course ,Sensory Systems ,tv ,Preferred walking speed ,Obstacle ,Female ,Perception ,Visual Fields ,Constant (mathematics) ,business ,Locomotion - Abstract
Restrictions of field-of-view are known to impair human performance for a range of different tasks. However, such effects on human locomotion through a complex environment are still not clear. Effects of both horizontal (30 degrees, 75 degrees, 112 degrees, 120 degrees, 140 degrees, 160 degrees, and 180 degrees) and vertical (18 degrees and 48 degrees) field-of-view restrictions on the walking speed and head movements of participants maneuvering through an obstacle course were investigated. All field-of-view restrictions tested significantly increased time to complete the entire course, compared to the unrestricted condition. The time to traverse the course was significantly longer for a vertical field-of-view of 18 degrees than for a vertical field-of-view of 48 degrees. For a fixed vertical field-of-view size, the traversal time was constant for horizontal field-of-view sizes ranging between 75 degrees and 180 degrees and increased significantly for the 30 degrees horizontal field-of-view condition. In the restricted viewing conditions, the angular velocity of head movements made while stepping over an obstacle increased significantly over that for the unrestricted field-of-view condition, but no difference was found between the different field-of-view sizes. Implications of the current findings for the development of devices with field-of-view restrictions are discussed.
- Published
- 2008
- Full Text
- View/download PDF
46. Effects of field-of-view restriction on manoeuvring in a 3-D environment
- Author
-
Sander E. M. Jansen, Nico J. Delleman, and Alexander Toet
- Subjects
Adult ,Male ,business.industry ,Computer science ,Physical Therapy, Sports Therapy and Rehabilitation ,Human Factors and Ergonomics ,Field of view ,Imaging, Three-Dimensional ,Computer graphics (images) ,Task Performance and Analysis ,Humans ,Visual field ,Female ,Computer vision ,Artificial intelligence ,Visual Fields ,business ,Locomotion
Field-of-view (FOV) restrictions are known to affect human behaviour and to degrade performance for a range of different tasks. However, the relationship between human locomotion performance in complex environments and FOV size is currently not fully known. This paper examined the effects of FOV restrictions on the performance of participants manoeuvring through an obstacle course with horizontal and vertical barriers. All FOV restrictions tested (the horizontal FOV was either 30 degrees , 75 degrees or 120 degrees , while the vertical FOV was always 48 degrees ) significantly reduced performance compared to the unrestricted condition. Both the time and the number of footsteps needed to traverse the entire obstacle course increased with a decreasing FOV size. The relationship between FOV restriction and manoeuvring performance that was determined can be used to formulate requirements for FOV restricting devices that are deployed to perform time-limited human locomotion tasks in complex structured environments, such as night-vision goggles and head-mounted displays used in training and entertainment systems.
- Published
- 2008
- Full Text
- View/download PDF
47. Effects of Field-of-View Restrictions on Speed and Accuracy of Manoeuvring
- Author
-
Alexander Toet, Sander E. M. Jansen, and Nico J. Delleman
- Subjects
Adult ,Male ,Traverse ,Computer science ,Movement ,Poison control ,Experimental and Cognitive Psychology ,Field of view ,Models, Biological ,050105 experimental psychology ,Orientation ,Range (aeronautics) ,Task Performance and Analysis ,Humans ,0501 psychology and cognitive sciences ,050107 human factors ,Simulation ,Lenses ,Vision, Binocular ,Orientation (computer vision) ,Distance Perception ,05 social sciences ,Darkness ,Sensory Systems ,Task (computing) ,Eyeglasses ,Visual Perception ,Female ,Sensory Deprivation ,Visual Fields ,Eye Protective Devices ,Night vision device ,Binocular vision ,Locomotion ,Psychomotor Performance - Abstract
Effects of field-of-view restrictions on the speed and accuracy of participants performing a real-world manoeuvring task through an obstacled environment were investigated. Although field-of-view restrictions are known to affect human behaviour and to degrade performance for a range of different tasks, the relationship between human manoeuvring performance and field-of-view size is not known. This knowledge is essential to evaluate a trade-off between human performance, cost, and ergonomic aspects of field-of-view limiting devices like head-mounted displays and night vision goggles which are frequently deployed for tasks involving human motion through environments with obstacles. In this study the speed and accuracy of movement were measured in 15 participants (8 men, 7 women, 22.9 ± 2.8 yr. of age) traversing a course formed by three wall segments for different field-of-view restrictions. Analysis showed speed decreased linearly with decreasing field-of-view extent, while accuracy was consistently reduced for all restricted field-of-view conditions. Present results may be used to evaluate cost and performance trade-offs for field-of-view restricting devices deployed to perform time-limited human-locomotion tasks in complex structured environments, such as night-vision goggles and head-mounted displays.
- Published
- 2007
- Full Text
- View/download PDF
48. Effects of simulated darkness on the affective appraisal of a virtual environment
- Author
-
Alexander Toet, Paul E. Vreugdenhil, and J.M. Houtkamp
- Subjects
Virtual machine ,Darkness ,Affective appraisal ,computer.software_genre ,Psychology ,computer ,Cognitive psychology - Abstract
This study investigated whether simulated darkness influences the affective appraisal of a desktop virtual environment (VE). In the real world darkness often evokes thoughts of vulnerability, threat, and danger, and may automatically precipitate emotional responses consonant with those thoughts (fear of darkness). This influences the affective appraisal of a given environment after dark and the way humans behave in that environment in conditions of low lighting. Desktop VEs are increasingly deployed to study the effects of environmental qualities and (architectural or lighting) interventions on human behaviour and feelings of safety. Their (ecological) validity for these purposes depends critically on their ability to correctly address the user’s cognitive and affective experience. However, it is currently not known how and to what extent simulated darkness in desktop (i.e., non-immersive) VEs affects the user’s affective appraisal of the represented environment. In this study young female volunteers explored either a daytime or a night-time version of a desktop VE representing a deserted prototypical Dutch polder landscape. The affective appraisal of the VE and the emotional response of the participants were measured through self-report. To enhance the personal relevance of the simulation, a fraction of the participants was led to believe that the virtual exploration tour would prepare them for a follow-up tour through the real world counterpart of the VE. The results show that the VE was appraised as slightly less pleasant and more arousing in simulated darkness (compared to a daylight) condition. The fictitious follow-up assignment had no emotional effects and did not influence the affective appraisal of the VE. 
Further research is required to assess the validity of desktop VEs for both etiological (e.g., the effects of signs of darkness on navigation behaviour and fear of crime) and intervention (e.g., effects of street lighting on feelings of safety) research.
- Published
- 2015
- Full Text
- View/download PDF
49. Visual Search: Experiments
- Author
-
Piet Bijl and Alexander Toet
- Subjects
Visual search ,business.industry ,Computer science ,Computer vision ,Artificial intelligence ,business - Published
- 2015
- Full Text
- View/download PDF
50. Display Systems: Binocular and 3-D, Visual Comfort
- Author
-
Alexander Toet and Frank L. Kooi
- Published
- 2015
- Full Text
- View/download PDF