112 results for "Hogervorst, M.A."
Search Results
2. Neurophysiological responses during cooking food associated with different emotions
- Author
-
Brouwer, A.-M., Hogervorst, M.A., Grootjen, M., van Erp, J.B.F., and Zandstra, E.H.
- Published
- 2017
- Full Text
- View/download PDF
3. Uncertainty management in regulatory and health technology assessment decision-making on drugs: guidance of the HTAi-DIA Working Group.
- Author
-
Hogervorst, M.A., Vreman, R., Heikkinen, I., Bagchi, I., Gutierrez-Ibarluzea, I., Ryll, B., Eichler, H.G., Petelos, E., Tunis, S., Sapede, C., Goettsch, W., Janssens, R., Huys, I., Barbier, L., DeJean, D., Strammiello, V., Lingri, D., Goodall, M., Papadaki, M., Toussi, M., Voulgaraki, D., Mitan, A., and Oortwijn, W.J.
- Abstract
Contains fulltext: 294548.pdf (Publisher’s version) (Open Access). OBJECTIVES: Uncertainty is a fundamental component of decision making regarding access to and pricing and reimbursement of drugs. The context-specific interpretation and mitigation of uncertainty remain major challenges for decision makers. Following the 2021 HTAi Global Policy Forum, a cross-sectoral, interdisciplinary HTAi-DIA Working Group (WG) was initiated to develop guidance to support stakeholder deliberation on the systematic identification and mitigation of uncertainties in the regulatory-HTA interface. METHODS: Six online discussions among WG members (Dec 2021-Sep 2022) who examined the output of a scoping review, two literature-based case studies and a survey; application of the initial guidance to a real-world case study; and two international conference panel discussions. RESULTS: The WG identified key concepts, clustered into twelve building blocks that were collectively perceived to define uncertainty: "unavailable," "inaccurate," "conflicting," "not understandable," "random variation," "information," "prediction," "impact," "risk," "relevance," "context," and "judgment." These were converted into a checklist to explain and define whether any issue constitutes a decision-relevant uncertainty. A taxonomy of domains in which uncertainty may exist within the regulatory-HTA interface was developed to facilitate categorization. The real-world case study was used to demonstrate how the guidance may facilitate deliberation between stakeholders and where additional guidance development may be needed. CONCLUSIONS: The systematic approach taken for the identification of uncertainties in this guidance has the potential to facilitate understanding of uncertainty and its management across different stakeholders involved in drug development and evaluation. This can improve consistency and transparency throughout decision processes. To further support uncertainty management, linkage to suitable mitigation strategies is necessary.
- Published
- 2023
4. Effects of multisensory context on tofu and soy sauce evaluation and consumption
- Author
-
Hiraguchi, H., Burg, E. van der, Stuldreher, I.V., Toet, A., Velut, S., Zandstra, E.H., Os, D.E. van, Hogervorst, M.A., Erp, J.B.F. van, and Brouwer, A.M.
- Abstract
Item does not contain fulltext. We examined the effects of informative pitch and multisensory contexts as potential factors influencing individuals’ affective and sensory experience of tofu with soy sauce and the amount consumed in a setting outside the lab. 216 participants watched one of two pitches, either promoting vegetarian diets or promoting exercise. Subsequently, the participants were guided into one of three multisensory contexts. These were designed following a ‘sustainable’, ‘meat’, or ‘neutral’ theme, with matching objects, odor, and music. A cup of soy sauce was served with four pieces of tofu. Participants rated the aroma and appearance of the soy sauce, and the taste of tofu dipped in soy sauce, using the ‘EmojiGrid’ valence (pleasantness) and arousal measuring tool on a smartphone, which allows for quick and intuitive ratings. We hypothesized that the vegetarian pitch and sustainable context would increase both the positive ratings of tofu and soy sauce and the number of tofu pieces consumed. Our results showed that the ‘meat’ context increased arousal ratings for soy sauce and tofu, and a tendency to consume more tofu relative to the other contexts. Pitch did not influence affective ratings or amount consumed. We conclude that multisensory context has the potential to positively affect food choice and perception, and discuss which elements of the multisensory context were likely drivers of the observed effect.
- Published
- 2023
5. Response to uncertainty management in regulatory and health technology assessment decision-making on drugs: guidance of the HTAi-DIA Working Group - author's reply.
- Author
-
Hogervorst, M.A.
- Subjects
- All institutes and research themes of the Radboud University Medical Center, Radboudumc 4: Infectious Diseases and Global Health, Health Evidence.
- Published
- 2023
6. Measuring the dynamics of camouflage in natural scenes using convolutional neural networks
- Author
-
van der Burg, E., Hogervorst, M.A., Toet, A., Stein, K., Schleijpen, R., and Brein en Cognitie (Psychologie, FMG)
- Abstract
Natural scenes are typically highly heterogeneous, making it difficult to assess camouflage effectiveness for moving objects since their local contrast varies with their momentary position. Camouflage performance is usually assessed through visual search and detection experiments involving human observers. However, such studies are time-consuming and expensive since they involve many observers and repetitions. Here, we show that a state-of-the-art convolutional neural network (YOLO) can be applied to measure the camouflage effectiveness of stationary and moving persons in a natural scene. The network is trained on human observer data. For each detection, it also provides the probability that the detected object is correctly classified as a person, which is directly related to camouflage effectiveness: more effective camouflage yields lower classification probabilities. By plotting the classification probability as a function of a person’s position in the scene, a ‘camouflage efficiency heatmap’ is obtained that reflects the variation of camouflage effectiveness over the scene. Such a heatmap can, for instance, be used to determine locations in a scene where the person is most effectively camouflaged. Also, YOLO can be applied dynamically during a scene traversal, providing real-time feedback on a person’s detectability. We compared the YOLO-predicted classification probability for a soldier in camouflage clothing moving through a rural scene to human performance. Camouflage effectiveness predicted by YOLO agrees closely with human observer assessment. Thus, YOLO appears to be an efficient tool for the assessment of camouflage of static as well as dynamic objects.
- Published
- 2022
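The probability-to-heatmap step described in the abstract of entry 6 can be sketched as follows. This is a minimal illustration, not the authors' code: the `classify` callable stands in for a trained YOLO person detector, and the function name and grid layout are assumptions.

```python
import numpy as np

def camouflage_heatmap(classify, grid_shape, frames):
    """Map per-position person-classification probability to camouflage
    effectiveness over a scene.

    classify   -- callable returning the detector's person-class
                  probability in [0, 1] for a frame (YOLO stand-in)
    grid_shape -- (rows, cols) of tested positions in the scene
    frames     -- {(row, col): frame rendered with the person there}
    """
    heat = np.zeros(grid_shape)
    for (r, c), frame in frames.items():
        p = classify(frame)   # lower probability = harder to recognize
        heat[r, c] = 1.0 - p  # higher heat = more effective camouflage
    return heat
```

A real detector would be queried once per candidate position, or once per video frame for the dynamic, real-time variant the abstract mentions.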
7. Towards cognitive image fusion
- Author
-
Toet, A., Hogervorst, M.A., Nikolov, S.G., Lewis, J.J., Dixon, T.D., Bull, D.R., and Canagarajah, C.N.
- Published
- 2010
- Full Text
- View/download PDF
8. The relation between visual search and visual conspicuity for moving targets
- Author
-
Van der Burg, E., Yu, J., Hogervorst, M.A., Lee, B., Culpepper, J., Toet, A., Stein, K.U., Schleijpen, R., Brein en Cognitie (Psychologie, FMG), and Brain and Cognition
- Subjects
Visual search ,Camouflage ,Motion (physics) ,Peripheral vision ,Fidelity ,Computer vision ,Artificial intelligence ,Computer science - Abstract
Simulation techniques are highly useful for assessing camouflage and the role of movement under widely varying (lighting, weather, background) conditions. However, a sufficient level of fidelity of the simulated scenes is required to draw conclusions. Here, live recordings were obtained of moving soldiers and simulations of similar scenes were created. To assess the fidelity of the simulation, a search experiment was carried out in which performance on recorded and simulated scenes was compared. Several movies of bushland environments were shown (recorded as well as simulated scenes) and participants were instructed to find the moving target as rapidly as possible within a time limit. In another experiment, visual conspicuity of the targets was measured. For static targets it is well known that conspicuity (i.e., the maximum distance at which a target can be detected in the visual periphery) is a valid measure of camouflage efficiency, as it predicts visual search performance. In the present study, we investigate whether conspicuity also predicts search performance for moving targets. In the conspicuity task, participants saw a short (560 ms) part of the movies used for the search experiments. This movie was presented in a loop such that the target moved forward, backward, forward, and so on. Conspicuity was determined as follows: a participant starts by fixating a location in the scene far away from the target so that he/she is not able to detect it. Next, the participant fixates progressively closer to the target location until the target can just be detected in peripheral vision; at this point the distance to the target is recorded. As with static stimuli, we show that visual conspicuity predicts search performance. This suggests that conspicuity may be used as a means to establish whether simulated scenes have sufficient fidelity to be used for the assessment of camouflage (and the effect of motion).
- Published
- 2021
9. Adaptive camouflage of moving targets
- Author
-
Van der Burg, E., Hogervorst, M.A., Toet, A., Stein, K.U., Schleijpen, R., Brein en Cognitie (Psychologie, FMG), Psychology Other Research (FMG), and Brain and Cognition
- Subjects
Camouflage ,Computer science ,Search ,Defence, Safety and Security ,Detection ,Motion ,Detection performance ,Computer vision ,Artificial intelligence ,Adaptation ,Urban environment - Abstract
Targets that are well camouflaged under static conditions are often easily detected as soon as they start moving. We investigated and evaluated ways to design camouflage that dynamically adapts to the background and conceals the target while taking the variation in potential viewing directions into account. In a human observer experiment, recorded imagery was used to simulate moving (either walking or running) and static soldiers, equipped with different types of camouflage patterns and viewed from different directions. Participants were instructed to search for the soldier and to make a speeded response as soon as they detected the soldier. Mean correct search times and mean detection probability were compared between soldiers in standard (Netherlands) Woodland uniform, in static camouflage (adapted to the local background) and in dynamically adapting camouflage. We investigated the effects of background type and variability on detection performance by varying the soldiers’ environment (e.g., bushland and urban). In general, performance was worse for dynamic soldiers compared to static soldiers, confirming the notion that motion breaks camouflage. Furthermore, camouflage performance of the static adaptive camouflage condition was generally much better than for the standard Woodland camouflage. Also, camouflage performance was found to depend on the background. When moving across a highly variable (heterogeneous) background, dynamic camouflage turned out to be especially beneficial (i.e., performance was better in a bush environment than in an urban environment). Interestingly, our dynamic camouflage design outperformed a method that simply displays the ‘exact’ background on the camouflage suit, since it is better capable of taking the variability in viewing directions into account. By combining new adaptive camouflage technologies with dynamic adaptive camouflage designs such as the one presented here, it may become possible to prevent detection of moving targets in the (near) future.
- Published
- 2020
- Full Text
- View/download PDF
10. Measuring cooking experience implicitly and explicitly: Physiology, facial expression and subjective ratings
- Author
-
Brouwer, A.M., Hogervorst, M.A., Erp, J.B.F. van, Grootjen, M., Dam, E. van, and Zandstra, E.H.
- Subjects
Emotion ,Facial expression ,Physiology ,food and beverages ,Cooking ,EEG ,Electrodermal ,Food preparation ,Heart rate - Abstract
Understanding consumers’ emotional experience during the process of cooking is important to enable the development of food products. In addition to verbal (‘explicit’) reports, physiological variables and facial expression may be helpful measures since they do not interfere with the experience itself and are of a continuous nature. This study investigated the potential of a range of implicit and explicit measures (1) to differentiate between subtle differences in pleasantness of ingredients, and (2) to identify emotionally salient phases during the process of cooking. 74 participants cooked and tasted a curry dish following standardized timed auditory instructions, either with ‘basic’ or ‘premium’ versions of ingredients. Heart rate, skin conductance, EEG and facial expression were recorded continuously during cooking and tasting. Subjective ratings of valence and arousal were taken directly after. Before and after cooking, participants performed ‘dry cooking’ sessions without ingredients to acquire changes in the physiological variables caused by physical activity only. We found no differences between the ‘basic’ and ‘premium’ groups, neither in implicit, nor in explicit measures. However, there were several robust physiological effects reflecting different cooking phases. Most notably, heart rate was relatively high for two specific phases: adding curry paste from a sachet during cooking, and tasting the prepared dish. The verbal reports of valence and arousal showed similar patterns over phases. Thus, our method suggests that physiological variables can be used as continuous, implicit measures to identify phases of affective relevance during cooking and may be a valuable addition to explicit measures of emotion.
- Published
- 2019
11. EO system design and performance optimization by image-based end-to-end modeling
- Author
-
Bijl, P. and Hogervorst, M.A.
- Subjects
Signal to noise ratio ,ECOMOS ,TOD ,Sensor model ,Cameras ,CMOS integrated circuits ,Sensor tests ,EOSTAR ,Thermography (imaging) ,Integrated circuit design ,EO ,Optical systems ,IR ,Imaging systems ,Target acquisition ,Optical data processing - Abstract
Image-based Electro-Optical system simulation including an end-to-end performance test is a powerful tool to characterize a camera system before it has been built. In particular, it can be used in the design phase to make an optimal trade-off between performance on the one hand and SWaPC (Size, Weight, Power and Cost) criteria on the other. During the design process, all components can be simulated in detail, including optics, sensor array properties, chromatic and geometrical lens corrections, signal processing, and compression. Finally, the overall effect on the outcome can be visualized, evaluated and optimized. In this study, we developed a detailed model of the CMOS camera system imaging chain (including scene, image processing and display). In parallel, a laboratory sensor test, analytical model predictions and an image-based simulation were applied to two operational high-end CMOS camera lens assemblies (CLA) with different FPA sizes (2.5K and 4K) and optics. The model simulation was evaluated by comparing simulated (display) stills and videos with recorded imagery using both physical (SNR) and psychophysical measures (acuity and contrast thresholds using the TOD methodology) at different shutter times, zoom settings, target sizes and contrasts, target positions in the visual field, and target speeds. The first results show that the model simulations are largely in line with the recorded sensor images, with some minor deviations. The final goal of the study is a detailed, validated and powerful sensor performance prediction model.
- Published
- 2019
12. Monitoring mental state during real life office work
- Author
-
Brouwer, A.M., Water, L. van de, Hogervorst, M.A., Kraaij, W., Schraagen, J.M.C., and Hogenelst, K.
- Subjects
Emotion ,Facial expression ,Artificial intelligence ,Privacy ,Computers ,Heart rate ,Affective computing ,Experience sampling ,Ecological momentary assessment ,Computer science ,Data privacy - Abstract
Monitoring an individual’s mental state using unobtrusively measured signals is regarded as an essential element in symbiotic human-machine systems. However, it is not straightforward to model the relation between mental state and such signals in real life, without resorting to (unnatural) emotion induction. We recorded heart rate, facial expression and computer activity of nineteen participants while they worked at the computer for ten days. In order to obtain ‘ground truth’ emotional state, participants indicated their current emotion using a valence-arousal affect grid every 15 min. We found associations between valence/arousal and the unobtrusively measured variables. We also had some initial success in predicting subjective valence/arousal using personal classification models. Thus, real-life office emotions appear to vary enough, and can be reported well enough, to uncover relations with unobtrusively measured variables. This is required to monitor individuals in real life at a finer temporal grain than the frequency with which emotion is probed.
- Published
- 2018
13. Spatial frequency tuning for 3-D corrugations from motion parallax
- Author
-
Hogervorst, M.A., Bradshaw, M.F., and Eagle, R.A.
- Published
- 2000
- Full Text
- View/download PDF
14. The role of perspective information in the recovery of 3D structure-from-motion
- Author
-
Eagle, R.A. and Hogervorst, M.A.
- Published
- 1999
- Full Text
- View/download PDF
15. EEG and eye tracking signatures of target encoding during structured visual search
- Author
-
Brouwer, A.M., Hogervorst, M.A., Oudejans, B.A., Ries, A.J., and Touryan, J.
- Subjects
Target detection ,PCS - Perceptual and Cognitive Systems ,Vision ,Visual search ,Pupil size ,Human & Operational Modelling ,EEG ,ELSS - Earth, Life and Social Sciences ,BCI ,SRP ,Fixation ,FRP - Abstract
EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later.
- Published
- 2017
16. A feasible BCI in real life : Using predicted head rotation to improve HMD imaging
- Author
-
Brouwer, A.M., Waa, J.S. van der, Hogervorst, M.A., Cacace, A., and Stokking, H.
- Subjects
User interfaces ,PCS - Perceptual and Cognitive Systems ,Movement ,Real-life applications ,Virtual Reality ,Electroencephalography ,Passive BCI ,Intelligent User Interfaces ,Labeled training data ,Predictive modeling ,HMD ,Interfaces (computer) ,Helmet mounted displays ,Immersive application ,Human & Operational Modelling ,ICT ,Head mounted displays ,ELSS - Earth, Life and Social Sciences ,EEG ,Human computer interaction ,TS - Technical Sciences ,NT - Network Technology ,Brain computer interface - Abstract
While brain signals potentially provide us with valuable information about a user, it is not straightforward to derive and use this information to smooth man-machine interaction in a real life setting. We here propose to predict head rotation on the basis of brain signals in order to improve images presented in a Head Mounted Display (HMD). Previous studies based on arm and leg movements suggest that this could be possible, and a pilot study showed promising results. From the perspective of the field of Brain-Computer Interfaces (BCI), this application provides a good case to put the field's achievements to the test and to further develop in the context of a real life application. The main reason for this is that within the proposed application, acquiring accurately labeled training data (whether and which head movement took place) and monitoring of the quality of the predictive model can happen on the fly. From the perspective of HMD technology and Intelligent User Interfaces, the proposed BCI potentially improves user experience and enables new types of immersive applications.
- Published
- 2017
17. Multiscale image fusion through guided filtering
- Author
-
Toet, A. and Hogervorst, M.A.
- Subjects
Image segmentation ,Saliency ,Surveillance ,PCS - Perceptual and Cognitive Systems ,Vision ,Nightvision ,Iterative methods ,Space surveillance ,Defence, Safety and Security ,Guided filter ,Thermal imagery ,Image fusion ,Guided filters ,Human & Operational Modelling ,ELSS - Earth, Life and Social Sciences ,Night vision ,Intensified imagery - Abstract
We introduce a multiscale image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves optimal spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multiscale fusion process. First, size-selective iterative guided filtering is applied to decompose the source images into base and detail layers at multiple levels of resolution. Then, frequency-tuned filtering is used to compute saliency maps at successive levels of resolution. Next, at each resolution level a binary weighting map is obtained as the pixelwise maximum of corresponding source saliency maps. Guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. The final fused image is obtained as the weighted recombination of the individual detail layers and the mean of the lowest resolution base layers. Application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. The method has a simple implementation and is computationally efficient.
- Published
- 2016
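The decomposition/saliency/recombination pipeline described in the abstract of entry 17 can be sketched as a single-level simplification. This is a hedged sketch, not the authors' implementation: smoothed detail magnitude replaces the paper's frequency-tuned saliency, one resolution level replaces the multiscale pyramid, and all parameter values are illustrative.

```python
import numpy as np

def boxfilter(a, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via summed areas."""
    h = 2 * r + 1
    pad = np.pad(a, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[h:, h:] - c[:-h, h:] - c[h:, :-h] + c[:-h, :-h]) / h**2

def guided_filter(I, p, r, eps):
    """Standard guided filter: edge-preserving smoothing of p, guided by I."""
    mI, mp = boxfilter(I, r), boxfilter(p, r)
    var_I = boxfilter(I * I, r) - mI * mI
    cov_Ip = boxfilter(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return boxfilter(a, r) * I + boxfilter(b, r)

def fuse(A, B, r=4, eps=1e-3):
    # 1. decompose each source into a base (smoothed) and a detail layer
    baseA, baseB = guided_filter(A, A, r, eps), guided_filter(B, B, r, eps)
    detA, detB = A - baseA, B - baseB
    # 2. saliency: smoothed detail magnitude (simplified stand-in for
    #    the frequency-tuned saliency of the abstract)
    salA, salB = boxfilter(np.abs(detA), r), boxfilter(np.abs(detB), r)
    # 3. binary weight map: pixelwise maximum of the saliency maps
    w = (salA >= salB).astype(float)
    # 4. guided filtering of the weight map (guided by a source image)
    #    restores spatial consistency and aligns weights with image edges
    w = np.clip(guided_filter(A, w, r, eps), 0.0, 1.0)
    # 5. recombine: mean of base layers plus weighted detail layers
    return 0.5 * (baseA + baseB) + w * detA + (1.0 - w) * detB
```

Calling `fuse(visible, thermal)` on co-registered grayscale arrays in [0, 1] returns a fused array of the same shape; step 4 is what removes blocky seams from the binary weights while keeping their transitions aligned with edges.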
18. Toward physiological indices of emotional state driving future ebook interactivity
- Author
-
Erp, J.B.F. van, Hogervorst, M.A., and Werf, Y.D. van der
- Subjects
Emotion ,PCS - Perceptual and Cognitive Systems ,Neurophysiology ,Ebook ,Human-computer interaction ,Creativity ,Interactivity ,Reading ,Multimedia ,Human & Operational Modelling ,EEG ,ELSS - Earth, Life and Social Sciences ,Brain-computer interfaces - Abstract
Ebooks of the future may respond to the emotional experience of the reader. (Neuro)physiological measures could capture a reader's emotional state and use this to enhance the reading experience by adding matching sounds or by changing the storyline, thereby creating a hybrid art form between literature and gaming. We describe the theoretical foundation of the emotional and creative brain and review the neurophysiological indices that can be used to drive future ebook interactivity in a real-life situation. As a case study, we report the neurophysiological measurements of a bestselling author during nine days of writing, which can later be compared to those of readers. In designated calibration blocks, the artist wrote emotional paragraphs for emotional (IAPS) pictures. Analyses showed that we can reliably distinguish writing blocks from resting, but we found no reliable differences related to the emotional content of the writing. The study shows that measurements of EEG, heart rate (variability), skin conductance, facial expression and subjective ratings can be done over several hours a day and for several days in a row. In follow-up phases, we will measure 300 readers with a similar setup.
- Published
- 2016
19. Physiological signals distinguish between reading emotional and non-emotional sections in a novel
- Author
-
Brouwer, A.M., Hogervorst, M.A., Reuderink, B., Werf, Y. van der, and Erp, J.B.F. van
- Subjects
PCS - Perceptual and Cognitive Systems ,Mental state estimation ,Reading ,Literature ,Physiology ,Affective computing ,Human & Operational Modelling ,Passive BCI ,EEG ,ELSS - Earth, Life and Social Sciences - Abstract
We are interested in monitoring an individual’s emotions during the reading of a novel. While physiological responses to experimentally induced emotions are often small and inconsistent, being engaged in a novel may elicit relatively strong responses. We analyzed EEG, ECG, skin conductance and respiration of 57 readers reading a complete, yet to be published novel written by a popular contemporary writer (Arnon Grunberg) and compared physiology during the reading of pre-defined sections that were either emotionally intense or not. Heart rate was lower while reading emotional sections. We also found effects of emotional intensity on breathing variability and alpha asymmetry. Most of the examined physiological variables were strongly affected by time on task. We could estimate whether the physiological data of an individual reader were collected during the reading of an emotional or non-emotional section with an accuracy of up to 72% using individualized models and after correcting data for individually modeled time-related effects. Our results imply that using our methodology, it is possible to examine fluctuating reader’s emotion during the reading of a novel after reading, but not (yet) in real time.
- Published
- 2015
20. Evidence for effects of task difficulty but not learning on neurophysiological variables associated with effort
- Author
-
Brouwer, A.M., Hogervorst, M.A., Holewijn, M., and Erp, J.B.F. van
- Subjects
Neurophysiology ,PCS - Perceptual and Cognitive Systems ,Physiology ,Physiological process ,Human Performances ,Workload ,Eye ,Task performance ,Electroencephalogram ,Response time ,Effort ,Skin conductance ,Training ,Learning ,Mental dissociation ,Mental task ,EEG ,ELSS - Earth, Life and Social Sciences ,Exercise ,Learning curve ,Eyelid reflex ,Psychophysiology - Abstract
Learning to master a task is expected to be accompanied by a decrease in effort during task execution. We examine the possibility of monitoring learning using physiological measures that have been reported to reflect effort or workload. Thirty-five participants performed different difficulty levels of the n-back task while a range of physiological and performance measurements were recorded. In order to dissociate non-specific time-related effects from effects of learning, we used the easiest level as a baseline condition. This condition is expected to only reflect non-specific effects of time. Performance and subjective measures confirmed more learning for the difficult level than for the easy level. The difficulty levels affected the physiological variables in the expected way, showing their sensitivity. However, while most of the physiological variables were also affected by time, time-related effects were generally the same for the easy and the difficult level. Thus, in a well-controlled experiment that enabled the dissociation of general time effects from learning, we did not find physiological variables to indicate decreasing effort associated with learning. Theoretical and practical implications are discussed.
- Published
- 2014
21. Fixation-related potentials : Foveal versus parafoveal target identification
- Author
-
Brouwer, A.M., Brinkhuis, M.A.B., Reuderink, B., Hogervorst, M.A., and Erp, J.B.F. van
- Subjects
PCS - Perceptual and Cognitive Systems ,genetic structures ,Human Performances ,ELSS - Earth, Life and Social Sciences - Abstract
The P300 event-related potential (ERP) can be used to infer whether an observer is looking at a target or not. Common practice in P300 BCIs and experiments is that observers are asked to fixate their eyes while stimuli are presented. Early studies have shown that it is also possible to distinguish between target and non-target fixations on the basis of single ERPs when eye movements are made and ERPs are synchronized to fixations (fixation-related potentials, or FRPs) rather than to stimulus onset. However, in these studies small object sizes ensured that participants could only identify whether the object was a target or non-target after fixating on it. Here we compare (non-)target FRPs when objects are identified before versus after fixation. We also examine ERPs from static-eyes conditions. FRP shapes are in accordance with the notion that the late component of the P300 is associated with identifying a target, and eye movements do not substantially affect the P300. Even when the time of object identification is unknown, it is possible to distinguish between target and non-target FRPs on a single-FRP basis. These results are important for fundamental science and for developing applications to covertly monitor observers’ interests.
- Published
- 2014
22. A TOD dataset to validate human observer models for target acquisition modeling and objective sensor performance testing
- Author
-
Bijl, P., Kooi, F.L., and Hogervorst, M.A.
- Subjects
Target Acquisition ,Infrared devices ,PCS - Perceptual and Cognitive Systems ,Vision ,TOD ,Sensors ,Human Performances ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Range performance ,Human observer ,Benchmarking ,Statistical tests ,Human observers ,Optical systems ,Triangle orientation discrimination ,Computer vision ,ELSS - Earth, Life and Social Sciences ,Triangle orientation discriminations ,Sensor - Abstract
End-to-end Electro-Optical system performance tests such as TOD, MRTD and MTDP require the effort of several trained human observers, each performing a series of visual judgments on the displayed output of the system. This significantly contributes to the costs of sensor testing. Currently, several synthetic human observer models exist that can replace real human observers in the TOD sensor performance test and can be used in a TOD based Target Acquisition (TA) model. The reliability that may be expected with such a model is of key importance. In order to systematically test HVS (Human Vision System) models for automated TOD sensor performance testing, two general sets of human observer TOD threshold data were collected. The first set contains TOD data for the unaided human eye. The second set was collected on imagery processed with sensor effects, systematically varying primary sensor parameters such as diffraction blur, pixel pitch, and spatial noise. The set can easily be extended to other sensor effects including dynamic noise, boost, E-zoom, or fused sensor imagery and may serve as a benchmark for competing human vision and sensor performance models.
- Published
- 2014
23. Perceptual evaluation of color transformed multispectral imagery
- Author
-
Toet, A., Jong, M.J. de, Hogervorst, M.A., and Hooge, I.T.C.
- Subjects
PCS - Perceptual and Cognitive Systems ,Scene gist ,Vision ,Human Performances ,Color mapping ,Image fusion ,Defence, Safety and Security ,ELSS - Earth, Life and Social Sciences ,Night vision ,Gaze behavior ,False color ,Color fusion - Abstract
Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.
- Published
- 2014
24. Perceptual evaluation of colorized nighttime imagery
- Author
-
Toet, A., Jong, M.J. de, Hogervorst, M.A., and Hooge, I.T.C.
- Subjects
Precision and recall ,PCS - Perceptual and Cognitive Systems ,Vision ,Human Performances ,Multispectral images ,Color ,Defence, Safety and Security ,Color night vision ,Eye movements ,Scene gist ,Image fusion ,Perceptual evaluation ,ELSS - Earth, Life and Social Sciences ,Information fusion ,Color transform ,Picture archiving and communication systems - Abstract
We recently presented a color transform that produces fused nighttime imagery with a realistic color appearance (Hogervorst & Toet, 2010, Information Fusion, 11-2, 69-77). To assess the practical value of this transform we performed two experiments in which we compared human scene recognition for monochrome intensified (II) and longwave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First we investigated the amount of detail observers can perceive in a short time span (the gist of the scene). Participants watched brief image presentations and provided a full report of what they had seen. Our results show that REF and CF imagery yielded the highest precision and recall measures, while both II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty extracting information from monochrome than from color imagery. Next, we measured eye fixations of participants who freely explored the images. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representation such that the resulting fixation behavior resembles the fixation behavior for daylight color imagery.
- Published
- 2014
25. NVG-the-day: Towards realistic night-vision training
- Author
-
Hogervorst, M.A. and Kooi, F.L.
- Subjects
Infrared devices ,PCS - Perceptual and Cognitive Systems ,Colour fusion ,Vision ,Night time ,Human Performances ,Image intensifier ,Near infrared light ,Defence, Safety and Security ,Night vision goggles ,NVG simulation ,Training ,Image intensifiers (solid state) ,ELSS - Earth, Life and Social Sciences ,Sensor modelling ,Goggles ,Night vision ,Simulation ,Flow visualization ,Visualization ,Personnel training - Abstract
Current night-time training using (flight or driving) simulators is hindered by a lack of realism. Effective night-time training requires the simulation of the illusions and limitations experienced while wearing Night Vision Goggles (NVGs) at night. Various methods exist that capture certain sensor effects, such as noise and the characteristic halos around lights. However, other effects are often discarded, such as the fact that image intensifiers are especially sensitive to near-infrared (NIR) light, which makes vegetation appear bright in the image (the chlorophyll effect) and strongly affects the contrast of objects against their background. Combined with the contrast and resolution reduction in NVG imagery, a scene at night may appear totally different than during the day. In practice these effects give rise to misinterpretations and illusions. When training people to deal with such illusions, it is essential to simulate them as accurately as possible. We present a method based on our Colour-Fusion technique (see Toet & Hogervorst, Opt. Eng. 2012) to create a realistic NVG simulation from daytime imagery, which allows for training of the typical effects experienced while wearing NVGs at night.
- Published
- 2014
26. Urban camouflage assessment through visual search and computational saliency
- Author
-
Toet, A. and Hogervorst, M.A.
- Subjects
Camouflage ,Saliency ,PCS - Perceptual and Cognitive Systems ,Vision ,Clutter ,Defence Research ,Search time ,Defence, Safety and Security ,Detection probability ,BSS - Behavioural and Societal Sciences ,Human - Abstract
We present a new method to derive a multiscale urban camouflage pattern from a given set of background image samples. We applied this method to design a camouflage pattern for a given (semi-arid) urban environment. We performed a human visual search experiment and a computational evaluation study to assess the effectiveness of this multiscale camouflage pattern relative to the performance of 10 other (multiscale, disruptive and monotonous) patterns that were also designed for deployment in the same operating theater. The results show that the pattern combines the overall lowest detection probability with an average mean search time. We also show that a frequency-tuned saliency metric predicts human observer performance to an appreciable extent. This computational metric can therefore be incorporated in the design process to optimize the effectiveness of camouflage patterns derived from a set of background samples. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE)
- Published
- 2013
27. Maritime Situation Awareness Capabilities from Satellite and Terrestrial Sensor Systems
- Author
-
Dekker, R.J., Bouma, H., Breejen, E. den, Broek, A.C. van den, Hanckmann, P., Hogervorst, M.A., Mohamoud, A.A., Schoemaker, R.M., Sijs, J., Tan, R.G., Toet, A., and Smith, A.J.E.
- Subjects
Signal processing ,Sensor systems ,Sensor fusion ,Radar ,DSS - Distributed Sensor Systems ,II - Intelligent Imaging ,PCS - Perceptual and Cognitive Systems ,ED - Electronic Defence ,RT - Radar Technology ,Multispectral images ,Defence Research ,Maritime situational awareness ,Defence, Safety and Security ,Physics & Electronics ,Human ,Data fusion ,TS - Technical Sciences ,BSS - Behavioural and Societal Sciences - Published
- 2013
28. Physiological correlates of stress in individuals about to undergo eye laser surgery
- Author
-
Hogervorst, M.A., Brouwer, A.M., and Vos, W.K.
- Subjects
User interfaces ,Physiology ,TPI - Training & Performance Innovations ,PCS - Perceptual and Cognitive Systems ,Heart rate ,Skin conductance ,Surgery ,Information Society ,Stress ,Classification ,BSS - Behavioural and Societal Sciences ,Heart rate variability ,Human - Abstract
We examined to what extent we can distinguish between ‘real-life’ stressed and relaxed participants on the basis of heart rate (HR), heart rate variability (HRV) and skin conductance level (SCL) as measured during rest. Physiological and subjective measures were compared between individuals who were about to undergo eye laser surgery and a control group. We found significantly higher HR and lower HRV in surgery clients, but no effect on SCL. Moreover, physiological indicators (HR) were found to correlate with subjective ones. Despite the inter-subject variation, we were able to discriminate (using an SVM classifier) between surgery clients and controls with an accuracy of 70%. An alternative method of measuring HR using a Vital Signs camera showed good correspondence (error of 0.50 bpm in mean HR) with HR determined from ECG, opening up a range of practical applications that require contactless measurement methods.
- Published
- 2013
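The resting-state HR/HRV discrimination described in entry 28 can be illustrated with a minimal sketch. The feature values, group sizes, and leave-one-out scheme below are illustrative assumptions, and a simple nearest-centroid classifier stands in for the SVM used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resting-state features per participant: [mean HR (bpm), HRV (ms)].
# Surgery clients are assumed to show higher HR and lower HRV, as reported above.
clients = np.column_stack([rng.normal(78, 6, 30), rng.normal(28, 8, 30)])
controls = np.column_stack([rng.normal(68, 6, 30), rng.normal(42, 8, 30)])
X = np.vstack([clients, controls])
y = np.array([1] * 30 + [0] * 30)  # 1 = surgery client, 0 = control

# Leave-one-subject-out evaluation with a nearest-centroid classifier
# (a dependency-free stand-in for the SVM used in the study).
correct = 0
for i in range(len(y)):
    mask = np.ones(len(y), bool)
    mask[i] = False
    Xtr, ytr = X[mask], y[mask]
    mu, sd = Xtr.mean(0), Xtr.std(0)
    Ztr, z = (Xtr - mu) / sd, (X[i] - mu) / sd  # z-score so HR and HRV weigh equally
    c1, c0 = Ztr[ytr == 1].mean(0), Ztr[ytr == 0].mean(0)
    pred = 1 if np.linalg.norm(z - c1) < np.linalg.norm(z - c0) else 0
    correct += pred == y[i]

accuracy = correct / len(y)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

The leave-one-out loop mimics classifying a new, unseen individual from group statistics alone, which is the practically relevant question for such a screening application.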
29. Beoordeling snelheidsaanduidingen hectometerbordjes [Assessment of speed indications on hectometre marker signs]
- Author
-
Alferdinck, J.W.A.M., Kroon, E.C.M., Hogervorst, M.A., and Horst, A.R.A. van der
- Subjects
Mobility ,PCS - Perceptual and Cognitive Systems ,Traffic ,Safe and Clean Mobility ,BSS - Behavioural and Societal Sciences ,Human - Abstract
Contents: 1 Introduction; 2 Measurement method (2.1 Signs; 2.2 Legibility; 2.3 Comprehensibility); 3 Results (3.1 Legibility; 3.2 Comprehensibility); 4 Discussion and conclusions (4.1 Legibility; 4.2 Comprehensibility; 4.3 Summary); 5 References; Appendix A: Comprehensibility questionnaire
- Published
- 2013
30. TOD characterization of the gatekeeper electro optical security system
- Author
-
Gosselink, G.A.B., Anbeek, H., Bijl, P., and Hogervorst, M.A.
- Subjects
Target Acquisition ,PCS - Perceptual and Cognitive Systems ,TOD ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Defence Research ,Perception ,Defence, Safety and Security ,Electronics ,Test method ,Classification ,range prediction ,BSS - Behavioural and Societal Sciences ,Human - Abstract
The Triangle Orientation Discrimination (TOD) test method was applied to characterize the thermal and visual range performance of the Gatekeeper Electro Optical Security System. Gatekeeper, developed by Thales Nederland BV, is currently in use with the Royal Netherlands Navy. The system houses uncooled infrared and colour TV cameras providing up to a 360° view in azimuth. The images displayed to the operator are automatically optimized based on the scene intensity distribution. Because of this built-in scene-based optimization, proper measurement of the system requires careful surround illumination of the TOD setup over a large part of the camera Field Of View. The tests provided very accurate threshold estimates with relatively small observer differences. The resulting TOD curves, which characterize the sensor system in terms of acuity and contrast sensitivity, can be used as input to a Target Acquisition model to predict range performance for operational scenarios.
- Published
- 2013
31. Face recognition at mesopic light levels and various light spectra
- Author
-
Alferdinck, J.W.A.M. and Hogervorst, M.A.
- Subjects
PCS - Perceptual and Cognitive Systems ,Mesopic vision ,Vision ,Light spectrum ,Information Society ,Face recognition ,BSS - Behavioural and Societal Sciences ,Human - Abstract
Light sources that are optimized for mesopic vision contain a relatively large amount of bluish light (high S/P ratio) and are therefore effective for peripheral visual tasks at mesopic light levels. Since the spectra of these light sources differ strongly from common public lighting, there were doubts about their performance in residential areas, where the faces of pedestrians should be recognizable at a sufficient distance. We performed a face recognition experiment at mesopic light levels using six light sources with different spectra and S/P ratios between 0.52 and 3.16. We measured the vertical and semi-cylindrical photopic illuminance at the faces of the target persons. The mesopic illuminance was calculated with the CIE-191 model. The face recognition distance appeared to depend strongly on the vertical photopic illuminance at the face of the target person, but no difference was found between the six light spectra. The mesopic and semi-cylindrical illuminances did not give a better prediction of the face recognition distance than the common vertical photopic illuminance. The experiment indicates that the lamp spectrum is not very important for face recognition, which is a foveal visual task in which only the central part of the visual field is involved and the mesopic effect does not play a role.
- Published
- 2013
32. Progress in color night vision
- Author
-
Toet, A. and Hogervorst, M.A.
- Subjects
PCS - Perceptual and Cognitive Systems ,Vision ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,false colour ,Information Society ,image fusion ,real-time fusion ,BSS - Behavioural and Societal Sciences ,augmented reality ,night vision ,color ,Vital ICT Infrastructure ,natural colour mapping ,ComputingMethodologies_COMPUTERGRAPHICS ,Human - Abstract
We present an overview of our recent progress and the current state-of-the-art techniques of color image fusion for night vision applications. Inspired by previously developed color opponent fusing schemes, we initially developed a simple pixel-based false color-mapping scheme that yielded fused false color images with large color contrast and preserved the identity of the input signals. This method has been successfully deployed in different areas of research. However, since this color mapping did not produce realistic colors, we continued to develop a statistical color-mapping procedure that would transfer the color distribution of a given example image to a multiband nighttime image. This procedure yields a realistic color rendering. However, it is computationally expensive and achieves no color constancy since the mapping depends on the relative amounts of the different materials in the scene. By applying the statistical mapping approach in a color look-up-table framework, we finally achieved both color constancy and computational simplicity. This sample-based color transfer method is specific for different types of materials in a scene and can be easily adapted for the intended operating theatre and the task at hand. The method can be implemented as a look-up-table transform and is highly suitable for real-time implementations.
- Published
- 2012
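The statistical color-mapping step mentioned in entry 32 (transferring the color distribution of an example image to a multiband nighttime image) can be sketched as a per-channel match of means and standard deviations. The random arrays below stand in for real imagery, and a full implementation would typically work in a decorrelated color space rather than on raw channels:

```python
import numpy as np

def transfer_color_statistics(source, reference):
    """Shift and scale each channel of `source` so its mean and standard
    deviation match those of `reference` (simple statistical color transfer)."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, r = source[..., c], reference[..., c]
        scale = r.std() / (s.std() + 1e-12)          # avoid division by zero
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return out

# Placeholder stand-ins for a multiband nighttime image and a daytime reference.
rng = np.random.default_rng(1)
night = rng.normal(0.2, 0.05, (64, 64, 3))
day = rng.normal(0.5, 0.15, (64, 64, 3))

recolored = transfer_color_statistics(night, day)
```

Because the mapping depends on the channel statistics of the particular image pair, the result changes with scene content, which is exactly the lack of color constancy that motivated the lookup-table approach described in the abstract.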
33. Recent progress in target acquisition modelling and testing
- Author
-
Bijl, P. and Hogervorst, M.A.
- Subjects
Target Acquisition ,test method ,PCS - Perceptual and Cognitive Systems ,TTP ,Vision ,TOD ,Defence Research ,MRTD ,MTDP ,Defence, Safety and Security ,range prediction ,BSS - Behavioural and Societal Sciences ,Human - Abstract
The MRTD and MRC end-to-end EO/IR system tests and corresponding models, described in NATO STANAGs 4347-4351, lead to incorrect Target Acquisition (TA) range predictions for modern imaging systems that 1) are usually under-sampled and/or 2) contain complex signal processing. Considerable efforts have been made to develop tests and models that can cope with these highly non-linear properties. This has led to new candidates for inclusion in the standards: the TTP (Targeting Task Performance) range prediction metric (US), the TOD (Triangle Orientation Discrimination) test method (NL), and the MTDP test with the TRM4 model (GE). In the present study, we directly compare TOD, TTP and US tactical vehicle identification performance data for a variety of sensors. The data show that the TOD test pattern is a good representative of the US tactical vehicle set and that the two approaches can be integrated into a single methodology for modelling and testing.
- Published
- 2012
34. Estimating workload using EEG spectral power and ERPs in the n-back task
- Author
-
Brouwer, A.M., Hogervorst, M.A., Erp, J.B.F. van, Heffelaar, T., Zimmerman, P.H., and Oostenveld, R.
- Subjects
PCS - Perceptual and Cognitive Systems ,Biomedical Innovation ,n-back taak ,Werklast ,BSS - Behavioural and Societal Sciences ,Spectral power ,passieve BCI ,Ergonomics ,EEG ,P300 ,Healthy Living ,ERP ,Human - Abstract
In a controlled workload experiment, we estimated workload level in a simulated online situation on the basis of EEG spectral power, ERPs, and a combination of the two. This succeeded for 33 of the 35 participants, with averages of 80-90% correct.
- Published
- 2012
35. Dynamic visual acuity
- Author
-
Brouwer, A.M., Bos, J.E., Hogervorst, M.A., and Ledegang, W.D.
- Published
- 2012
36. Hyperspectral Data Analysis and Visualisation
- Author
-
Hogervorst, M.A. and Schwering, P.B.W.
- Subjects
Target detection ,Informatics ,Situational Awareness ,PCS - Perceptual and Cognitive Systems ,ED - Electronic Defence ,Sensors ,Data analysis ,Defence Research ,Hyperspectral Imaging ,Defence, Safety and Security ,Electro-Optical ,visualisation ,Thermal Infrared ,BSS - Behavioural and Societal Sciences ,TS - Technical Sciences ,Human ,Physics & Electronics - Abstract
Electro-Optical (EO) imaging sensors are widely used for a range of tasks, e.g. Target Acquisition (TA: detection, recognition and identification of militarily relevant objects) or visual search. These tasks can be performed by a human observer, by an algorithm (Automatic Target Recognition) or by both (Aided Target Recognition). In the past decades, the development of night vision devices in the thermal infrared and image-intensifying systems has greatly extended the applicability of EO systems. Despite these rapid developments, the current generation of sensors has important limitations. Until now, operational thermal imagers have been sensitive to IR (infrared) radiation from a single spectral band in the Long Wave (8-14 μm, LWIR) or Mid Wave (3-5 μm, MWIR) infrared region. These so-called broadband sensors basically produce a monochrome (i.e. black-and-white, panchromatic) image that deviates considerably from a normal daylight view and is based on temperature contrasts in a scene. With these systems, the distinction between real targets and decoys, or between military and civilian targets, is often difficult to make. Also, camouflaged targets, or targets that are hidden deep in the woods, are difficult to detect. Recognizing different objects and materials may be difficult. Examples of misinterpretations when using an Image Intensifier system are grass that looks like snow, or trees that look like bushes, when seen from a helicopter. These misinterpretations may lead to disorientation (loss of Situational Awareness) or to a (fatal) error in distance estimation. Currently, multi-band and hyperspectral imaging sensors in the thermal infrared are under development. Traditionally, hyperspectral imagers were developed for satellites, with applications ranging from monitoring the environment and climate analysis to the detection of pollution and fires. These systems also promise significant improvements in military task performance.
With these new systems, targets may be distinguished not only on the basis of differences in radiation magnitude, but also on differences in spectral properties.
- Published
- 2011
37. TOD to TTP calibration
- Author
-
Bijl, P., Reynolds, J.P., Vos, W.K., Hogervorst, M.A., and Fanning, J.D.
- Subjects
Identification ,Range prediction ,PCS - Perceptual and Cognitive Systems ,TTP ,Vision ,TOD ,Perception ,Target acquisition ,Test method ,BSS - Behavioural and Societal Sciences ,Human - Abstract
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected by military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors in which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data, without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks. © 2011 SPIE.
- Published
- 2011
38. Real-time full color multiband night vision
- Author
-
Toet, A., Hogervorst, M.A., and TNO Defensie en Veiligheid
- Subjects
PCS - Perceptual and Cognitive Systems ,Artificial Intelligence ,Information Society ,BSS - Behavioural and Societal Sciences ,Human - Published
- 2010
39. Mesopic vision and public lighting – A literature review and a face recognition experiment
- Author
-
Alferdinck, J.W.A.M., Hogervorst, M.A., Eijk, A.M.J. van, Kusmierczyk, J.T., and TNO Defensie en Veiligheid
- Subjects
Vision - Abstract
Research question: Following the recommendations of the Taskforce Verlichting it established in 2007, the Ministry of Infrastructure and the Environment (I&M) commissioned a study to determine to what extent energy can be saved in public lighting by, among other things, applying light sources that are optimized for mesopic vision. The study was prompted by a recent CIE publication on mesopic vision and the rise of LED lighting, whose spectra can be optimized relatively easily. The spectrum of lighting optimized for mesopic vision contains a relatively large amount of bluish light (higher S/P ratio) and is therefore effective for peripheral visual tasks at mesopic light levels. Because the scientific understanding of mesopic vision has not yet matured sufficiently to adapt the current standards for public lighting, a number of questions concerning mesopic vision in public lighting were investigated. We performed a literature review on adaptation, the effect of age, atmospheric scattering, and face recognition. In addition, a face recognition experiment was carried out with different light spectra. Method: In the literature review we focused on modeling adaptation luminance, the effect of observer age on the S/P ratio, atmospheric scattering, and face recognition under light sources optimized for mesopic vision. In the face recognition experiment we measured, in a simulated residential street, the face recognition distance of target persons at different positions and light levels. Six different light sources were used for the lighting: a warm-white fluorescent lamp (S/P=1.26), a high-pressure sodium lamp (S/P=0.52), two white LED lamps (3000 K, S/P=1.16 and 4500 K, S/P=1.61), and two LED lamps with high S/P ratios of 2.73 and 3.16.
Two groups (young, mean age 16.5 years; old, mean age 60.2 years) totalling 45 participants judged the recognizability of the faces of the target persons. We measured the vertical and semi-cylindrical illuminance at the faces of the target persons. The mesopic illuminances were calculated with the CIE model. Results: Literature review • The visual adaptation system is complex, but models are available for calculating the time course of adaptation, which can take up to 3 minutes for the luminances encountered in public lighting (
- Published
- 2010
40. Fast natural color mapping for night-time imagery
- Author
-
Hogervorst, M.A., Toet, A., and TNO Defensie en Veiligheid
- Subjects
Reference image ,Look up table ,Thermal camera ,Nightvision ,Color mapping ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Natural colors ,Color ,Image intensifiers ,False color ,Table lookup ,Situational awareness ,Image processing ,Human observers ,Mapping ,Object colors ,Image fusion ,Color appearance ,Electromagnetic spectra ,Color printing ,Multi-band images ,Multiband ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers, thermal cameras) in natural daytime colors. The color mapping is derived from the combination of a multi-band image and a corresponding natural color daytime reference image. The mapping optimizes the match between the multi-band image and the reference image, and yields a nightvision image with a natural daytime color appearance. The lookup-table based mapping procedure is extremely simple and fast and provides object color constancy. Once it has been derived, the color mapping can be deployed in real time to different multi-band image sequences of similar scenes. Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times. © 2009 Elsevier B.V. All rights reserved.
- Published
- 2010
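The lookup-table based mapping of entry 40 can be sketched as follows; the two-band quantization, bin count, and averaging rule are illustrative assumptions, not the published implementation:

```python
import numpy as np

def build_color_lut(multiband, reference, bins=8):
    """Build a LUT from quantized multiband pixel values to the mean daytime
    RGB of the co-registered reference pixels sharing that quantized value."""
    idx = np.clip((multiband * bins).astype(int), 0, bins - 1)  # (H, W, 2)
    keys = idx[..., 0] * bins + idx[..., 1]                     # one key per pixel
    lut = np.zeros((bins * bins, 3))
    for k in np.unique(keys):
        lut[k] = reference[keys == k].mean(axis=0)
    return lut

def apply_color_lut(multiband, lut, bins=8):
    """Color a multiband image with a precomputed LUT: a plain table lookup."""
    idx = np.clip((multiband * bins).astype(int), 0, bins - 1)
    keys = idx[..., 0] * bins + idx[..., 1]
    return lut[keys]

# Toy co-registered pair: a 2-band "night" image and a daytime RGB reference.
rng = np.random.default_rng(2)
night = rng.random((32, 32, 2))
day = rng.random((32, 32, 3))

lut = build_color_lut(night, day)
colored = apply_color_lut(night, lut)
```

Because the mapping is a plain table lookup, every pixel with the same multiband value receives the same color (object color constancy), and application cost is independent of how the table was derived, which is what makes real-time deployment straightforward.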
41. Electronic imaging and signal processing toward realistic night-vision simulation
- Author
-
Hogervorst, M.A. and TNO Defensie en Veiligheid
- Subjects
Image processing - Published
- 2009
42. Are you really looking? Finding the answer through fixation patterns and EEG
- Author
-
Brouwer, A.-M., Hogervorst, M.A., Herman, P., Kooi, F., and TNO Defensie en Veiligheid
- Subjects
genetic structures ,Vision ,brain ,eye tracking ,attention - Abstract
Eye movement recordings do not tell us whether observers are 'really looking' or whether they are paying attention to something other than the visual environment. We want to determine whether an observer's main current occupation is visual or not by investigating fixation patterns and EEG. Subjects were presented with auditory and visual stimuli. In some conditions they focused on the auditory information, whereas in others they searched or judged the visual stimuli. Observers made more fixations, which were less cluttered, in the visual compared to the auditory tasks, and their average fixation location was less variable. The fixated features revealed which target the observers were looking for. Gaze was not attracted more by salient features when performing the auditory task. 8-12 Hz EEG oscillations recorded over the parieto-occipital regions were stronger during the auditory task than during visual search. Our results are directly relevant for monitoring surveillance workers. © 2009 Springer.
- Published
- 2009
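The 8-12 Hz parieto-occipital EEG measure in entry 42 amounts to a band-power estimate. Below is a minimal FFT-based sketch; the synthetic signal and assumed sampling rate stand in for real EEG:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Summed FFT power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].sum()

fs = 256                       # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)    # 4 s of synthetic "EEG"
rng = np.random.default_rng(3)
# A 10 Hz alpha component plus broadband noise, standing in for a
# parieto-occipital channel recorded during the auditory task.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)   # strong: the signal oscillates at 10 Hz
beta = band_power(eeg, fs, 13, 30)   # weak: only broadband noise falls here
```

In practice a windowed estimator (e.g. Welch's method) would be used on epoched data, but the band-selection logic is the same.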
43. TRICLOBS portable triband color lowlight observation system
- Author
-
Toet, A., Hogervorst, M.A., and TNO Defensie en Veiligheid
- Subjects
Near Infrared ,Look up table ,Vision ,Observation systems ,Table lookup ,Computer graphics ,Dichroic beamsplitter ,Infrared spectroscopy ,Noise reductions ,Sensor signals ,Intensive care units ,Color mapping ,Ethernet connections ,Navigation tools ,Microbolometer ,Cameras ,False color ,Optical beam splitters ,Night vision sensors ,Multimedia systems ,Color transform ,Infrared devices ,Thermal camera ,Transmitted radiation ,Uncooled ,Color ,Contrast Enhancement ,All-weather surveillance ,Lookup tables ,Real-time fusion ,Natural daylight ,Video output ,Image fusion ,Long wave infrared ,Electromagnetic spectra ,Digital image ,Color printing ,Lighting ,Analog video signal ,Sensors ,Natural color mapping ,Optical axes ,Detectability ,Processing units ,LCD displays ,Test results ,Sensor suite ,Digital video signals ,Information fusion ,Liquid crystal displays - Abstract
We present the design and first test results of the TRICLOBS (TRI-band Color Low-light OBServation) system. The TRICLOBS is an all-day, all-weather surveillance and navigation tool. Its sensor suite consists of two digital image intensifiers (Photonis ICUs) and an uncooled longwave infrared microbolometer (XenICS Gobi 384). The night vision sensor suite registers the visual (400-700 nm), near-infrared (700-1000 nm) and longwave infrared (8-14 μm) bands of the electromagnetic spectrum. The optical axes of the three cameras are aligned using two dichroic beam splitters: an ITO filter to reflect the LWIR part of the incoming radiation into the thermal camera, and a B43-958 hot mirror to split the transmitted radiation into visual and NIR parts. The individual images can be monitored on two LCD displays. The TRICLOBS provides both digital and analog video output. The digital video signals can be transmitted to an external processing unit through an Ethernet connection. The analog video signals can be digitized and stored on on-board hard disks. An external processor is deployed to apply a fast lookup-table based color transform (the Color-the-Night color mapping principle) to represent the TRICLOBS image in natural daylight colors (using information in the visual and NIR bands) and to maximize the detectability of thermal targets (using the LWIR signal). The external processor can also be used to enhance the quality of all individual sensor signals, e.g. through noise reduction and contrast enhancement. © 2009 SPIE.
- Published
- 2009
44. Toward realistic night-vision simulation
- Author
-
Hogervorst, M.A. and TNO Defensie en Veiligheid
- Subjects
Image processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
A novel color-transformation method converts daytime scenes into night-vision goggle images suitable for realistic nighttime training.
- Published
- 2009
45. Sensor performance as a function of sampling (d) and optical blur (Fλ)
- Author
-
Bijl, P., Hogervorst, M.A., and TNO Defensie en Veiligheid
- Subjects
Infrared devices ,Range prediction ,Mathematical models ,Detector size ,TTP ,Targets ,Vision ,TOD ,Sensors ,Diffraction blur ,Monte Carlo methods ,Optical resolving power ,Optoelectronic devices ,Mergers and acquisitions ,Thermography (imaging) ,target aquisition ,Light transmission ,triangle orientation discrimination tod ,Imaging systems ,Target acquisition ,Computer simulation languages ,Diffraction ,performance ,Forecasting - Abstract
Detector sampling and optical blur are two major factors affecting Target Acquisition (TA) performance with modern EO and IR systems. In order to quantify their relative significance, we simulated five realistic LWIR and MWIR sensors from very under-sampled (detector pitch d >> diffraction blur Fλ) to well-sampled (Fλ >> d). Next, we measured their TOD (Triangle Orientation Discrimination) sensor performance curve. The results show a region that is clearly detector-limited, a region that is clearly diffraction-limited, and a transition area. For a high contrast target, threshold size TFPA on the sensor focal plane can mathematically be described with a simple linear expression: TFPA = 1.5·d·w(d/Fλ) + 0.95·Fλ·w(Fλ/d), w being a steep weighting function between 0 and 1. Next, tactical vehicle identification range predictions with the TOD TA model and TTP (Targeting Task Performance) model were compared to measured ranges with human observers. The TOD predicts performance excellently for both well-sampled and under-sampled sensors. While earlier TTP versions (2001, 2005) showed a pronounced difference in the relative weight of sampling and blur to range, the predictions with the newest (2008) TTP version that considers in-band aliasing are remarkably close to the TOD. In conclusion, the TOD methodology now provides a solid laboratory sensor performance test, a Monte Carlo simulation model to assess performance from sensor physics, a Target Acquisition range prediction model and a simple analytical expression to quickly predict sensor performance as a function of sampling and blur. TTP approaches TOD with respect to field performance prediction. Keywords: TOD, TTP, Target Acquisition, range prediction, diffraction blur, detector size
- Published
- 2009
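The threshold expression from the abstract above can be evaluated directly to see the two limiting regimes. Note that the exact shape of the steep weighting function w is not given in the abstract, so the sigmoid below is an assumption chosen only to satisfy the stated behaviour (w near 0 for small arguments, near 1 for large ones).

```python
def w(x, steepness=4.0):
    """Steep 0-to-1 weighting function. The exact form is not specified in the
    abstract; a sigmoid in x is assumed here for illustration."""
    return x**steepness / (1.0 + x**steepness)

def t_fpa(d, f_lambda):
    """High-contrast TOD threshold size on the focal plane, per the abstract:
    T_FPA = 1.5 * d * w(d/(F*lambda)) + 0.95 * F*lambda * w((F*lambda)/d)."""
    return 1.5 * d * w(d / f_lambda) + 0.95 * f_lambda * w(f_lambda / d)

# Limiting behaviour (units arbitrary, e.g. micrometres on the focal plane):
print(t_fpa(25.0, 5.0))   # strongly under-sampled: close to 1.5 * d
print(t_fpa(5.0, 25.0))   # well-sampled: close to 0.95 * F*lambda
```

In the detector-limited regime (d >> Fλ) the first term dominates and the threshold scales with detector pitch; in the diffraction-limited regime (Fλ >> d) the second term dominates and the threshold scales with the diffraction blur, matching the regions reported in the abstract.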
46. Evaluatie camera systemen (Maxan) t.b.v. zichtveldverbetering vrachtwagen [Evaluation of camera systems (Maxan) for improving the field of view of trucks]
- Author
-
Hogervorst, M.A. and TNO Defensie en Veiligheid
- Subjects
Traffic ,Traffic safety - Published
- 2009
47. Modular target acquisition model & visualization tool
- Author
-
Bijl, P., Hogervorst, M.A., Vos, W.K., and TNO Defensie en Veiligheid
- Subjects
Mathematical models ,User interfaces ,Vision ,Atmosphere ,TOD ,EO-VISTA ,Image acquisition ,Visualization tool ,Image sensors ,Computer simulation ,TDA ,Triangle Orientation Discrimination (TOD) ,EOSTAR ,Human observer ,Target acquisition ,Decision making ,Simulation ,Sensor - Abstract
We developed a software framework for image-based simulation models in the chain scene-atmosphere-sensor-image enhancement-display-human observer: EO-VISTA. The goal is to visualize the steps and to quantify (Target Acquisition) task performance. EO-VISTA provides an excellent means to systematically determine the effects of certain factors on overall performance in the context of the whole chain. There is a wide range of applications in the areas of sensor design, maintenance, TA model development, tactical decision aids and R&D. The framework is set up in such a way that modules of different producers can be combined, once they comply with a standardized interface. At the moment the shell runs with the three modules required to calculate TA performance based on the TOD (Triangle Orientation Discrimination) method. In order to demonstrate the potential of a future comprehensive visualization tool, two example calculations are carried out using two programs not yet implemented: the pcSitoS sensor simulation model and the EOSTAR scene and atmosphere model. With the examples we show that: i) pcSitoS yields a TOD comparable to that of the real sensor that is simulated, ii) performance differences between the human visual system model implemented for automated TOD measurement and a human observer are consistent over different types of sensor and may be corrected for relatively easily, and iii) simulation results of thermal ship imagery are in line with acquisition ranges predicted with the TOD model. All these results can be studied more extensively with EO-VISTA in a systematic way.
- Published
- 2008
48. Target acquisition performance : Effects of target aspect angle, dynamic imaging and signal processing
- Author
-
Beintema, J.A., Bijl, P., Hogervorst, M.A., Dijk, J., and TNO Defensie en Veiligheid
- Subjects
Signal processing ,Digital cameras ,Mathematical models ,TOD ,Military applications ,Imaging techniques ,Super resolution ,Dynamic imaging ,Image enhancement ,Validation ,Sensor performance ,NVThermIP ,Target acquisition ,Local adaptive contrast enhancement - Abstract
In an extensive Target Acquisition (TA) performance study, we recorded static and dynamic imagery of a set of military and civilian two-handheld objects at a range of distances and aspect angles with an under-sampled uncooled thermal imager. Next, we applied signal processing techniques including DSR (Dynamic Super Resolution) and LACE (Local Adaptive Contrast Enhancement) to the imagery. In a perception experiment, we determined identification (ID) and threat/non-threat discrimination performance as a function of target range for a variety of conditions. The experiment was performed to validate and extend current TA models. In addition, range predictions were performed with two TA models: the TOD model and NVThermIP. The results of the study are: i) target orientation has a strong effect on performance, ii) the effect of target orientation is well predicted by the two TA models, iii) absolute identification range is close to the range predicted with the two models using the recommended criteria for two-handheld objects, iv) there was no positive effect of sensor motion on performance, contrary to expectations based on earlier studies, v) the benefit of DSR was smaller than expected on the basis of the model predictions, and vi) performance with LACE was similar to performance on an image optimized manually, indicating that LACE can be used to optimize the contrast automatically. The relatively poor results with motion and DSR are probably due to motion smear induced by a higher camera speed than used in earlier studies. Camera motion magnitude and smear are not yet implemented in TA models.
- Published
- 2008
49. Method for applying daytime colors to nighttime imagery in realtime
- Author
-
Hogervorst, M.A. and Toet, A.
- Subjects
Optimization ,Vision ,Multisensory perception ,Color mapping ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color image processing ,Real time systems ,False color ,Lookup tables ,Table lookup ,Realtime fusion ,Images ,Image fusion ,Natural color ,Multiband sensors ,Multi agent systems ,ComputingMethodologies_COMPUTERGRAPHICS ,Visualization - Abstract
We present a fast and efficient method to derive and apply natural colors to nighttime imagery from multiband sensors. The color mapping is derived from the combination of a multiband image and a corresponding natural color reference image. The mapping optimizes the match between the multiband image and the reference image, and yields a nightvision image with colors similar to those of the daytime image. The mapping procedure is simple and fast. Once it has been derived, the color mapping can be deployed in realtime. Different color schemes can be used, tailored to the environment and the application. The expectation is that by displaying nighttime imagery in natural colors, human observers will be able to interpret the imagery better and faster, thereby improving situational awareness and reducing reaction times.
- Published
- 2008
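The derivation step described above (optimizing the match between a multiband image and a natural-color reference image) can be sketched as a simple binned statistic: for each quantized multiband value, take the mean color of the corresponding reference pixels. This is a simplified illustration under stated assumptions, two 8-bit bands and 5-bit quantization; the authors' actual optimization procedure is not reproduced here.

```python
import numpy as np

def derive_color_mapping(multiband, reference_rgb, bits=5):
    """Build a LUT assigning to each quantized 2-band value the mean color of
    the matching pixels in a daytime reference image (simplified sketch)."""
    levels = 1 << bits
    idx = (multiband >> (8 - bits)).astype(np.int64)   # (H, W, 2) quantized bands
    flat = idx[..., 0] * levels + idx[..., 1]          # combined bin index
    lut = np.zeros((levels * levels, 3))
    counts = np.zeros(levels * levels)
    np.add.at(lut, flat.ravel(), reference_rgb.reshape(-1, 3))
    np.add.at(counts, flat.ravel(), 1)
    nonzero = counts > 0
    lut[nonzero] /= counts[nonzero, None]              # mean reference color per bin
    return lut.reshape(levels, levels, 3).astype(np.uint8)

def apply_mapping(multiband, lut, bits=5):
    """Once derived, applying the mapping is a single table lookup per pixel."""
    idx = multiband >> (8 - bits)
    return lut[idx[..., 0], idx[..., 1]]

rng = np.random.default_rng(1)
multiband = rng.integers(0, 256, (32, 32, 2), dtype=np.uint8)  # e.g. visual + NIR
reference = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)  # daytime reference
lut = derive_color_mapping(multiband, reference)
colored = apply_mapping(multiband, lut)
print(colored.shape)  # (32, 32, 3)
```

The derivation is done once, offline; the application loop is only an indexed read, which is why the abstract can claim realtime deployment after the mapping has been derived.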
50. What do colour-blind people really see?
- Author
-
Hogervorst, M.A., Alferdinck, J.W.A.M., and TNO Defensie en Veiligheid
- Subjects
colour blindness - Abstract
Problem: colour perception of dichromats (colour-blind persons). Background: various models have been proposed (e.g. Walraven & Alferdinck, 1997; Brettel et al., 1997) to model the reduced colour vision of colour-blind people. It is clear that colour-blind people cannot distinguish certain object colours that appear different to people with normal vision: such objects are depicted by the models in the same colour. However, which colours are perceived is not clear. Question: how well can the object colour be estimated from the responses of 2 cones using the colour statistics of the environment? Model: a system using 2 instead of 3 cones plus the colour statistics of the environment. The system learns/has learned (consciously or unconsciously) the relationship between cone responses and colour as perceived by someone with normal colour vision.
- Published
- 2008
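The model question posed above (estimating a missing cone signal from the two remaining ones plus environmental colour statistics) can be illustrated with a toy regression. Everything below is a hypothetical sketch: the correlated "scene statistics" are synthetic, and a linear least-squares estimator is assumed, which may differ from the model actually proposed by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "environment statistics": natural scenes yield strongly correlated
# L, M, S cone responses, modelled here as a shared component plus small noise.
base = rng.normal(size=(5000, 1))
lms = np.hstack([base + 0.1 * rng.normal(size=(5000, 1)) for _ in range(3)])
L, M, S = lms[:, 0], lms[:, 1], lms[:, 2]

# A protanope lacks the L cone: learn to predict L from the available M and S
# signals using the statistics of the environment (linear least squares).
X = np.column_stack([M, S, np.ones(len(M))])
coef, *_ = np.linalg.lstsq(X, L, rcond=None)

L_hat = X @ coef
corr = np.corrcoef(L, L_hat)[0, 1]
print(f"correlation between true and estimated L-cone response: {corr:.2f}")
```

Because cone signals in natural scenes are highly correlated, the estimate recovers most of the missing signal, which is the intuition behind asking how well object colour can be inferred from only two cone classes.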