20 results on '"H. Lee Task"'
Search Results
2. Depth and Collision Judgment Using Night Vision Goggles
- Author
-
H. Lee Task and Patricia R. DeLucia
- Subjects
Engineering, Injury control, Accident prevention, Aerospace Engineering, Poison control, Collision, Computer Science Applications, Education, Optometry, Depth perception, Night vision device, Applied Psychology, Simulation, Field conditions
- Abstract
There is a concern about depth perception with night vision goggles (NVGs). We measured judgments involving depth perception in laboratory and field conditions. When participants judged absolute distance, or collision, in a laboratory, results indicated that objects can appear closer with NVGs than with unaided vision, and underestimations of distance were greater with smaller distances. When participants were transported in a car and judged when to turn a vehicle to avoid collision with a simulated wall, there was no difference between goggle and unaided vision. Effects of NVGs on depth perception may be specific to task and viewing conditions.
- Published
- 1995
- Full Text
- View/download PDF
3. A comparison of Landolt C and triangle resolution targets using the synthetic observer approach to sensor resolution assessment
- Author
-
David W. Dommett, H. Lee Task, and Alan R. Pinkus
- Subjects
Image fusion, Image quality, Computer science, Orientation (computer vision), Resolution, Computer vision, Artificial intelligence, Landolt C
- Abstract
Resolution is often provided as one of the key parameters addressing the quality capability of a sensor. One traditional approach to determining the resolution of a sensor/display system is to use a resolution target pattern to find the smallest target size for which the critical target element can be "resolved" using the sensor/display system, which usually requires a human in the loop to make the assessment. In previous SPIE papers we reported on a synthetic observer approach to determining the point at which a Landolt C resolution target was resolved; a technique with marginal success when compared to human observers. This paper compares the results of the previously developed synthetic observer approach using a Landolt C with a new synthetic observer approach based on Triangle Orientation Detection (TOD). A large collection of multi-spectral (visible, near-infrared, and thermal) sensor images of triangle and Landolt C resolution targets was recorded at a wide range of distances. Each image contained both the triangle and the Landolt C resolution targets as well as a person holding a weapon or other object. The images were analyzed using the two different synthetic observer approaches, one for triangles and one for Landolt Cs, and the results were compared with each other for the three different sensors. This paper describes the results and a planned future effort to compare the results with human visual performance for both the resolution targets and the hand-held objects.
- Published
- 2012
- Full Text
- View/download PDF
4. A comparison of synthetic and human observer approaches to multispectral sensor resolution assessment
- Author
-
David W. Dommett, H. Lee Task, and Alan R. Pinkus
- Subjects
Image fusion, Computer science, Image quality, Multispectral image, Computer vision, Artificial intelligence, Spectral bands, Image resolution, Sub-pixel resolution
- Abstract
Resolution is one of the key parameters addressing the quality capability of a sensor. One approach to determining the resolution of a sensor/display system is to use a resolution target pattern to find the smallest resolved element using the system, which typically requires a human in the loop to make the assessment. This paper compares the results of a software approach to generate an effective resolution value for a sensor with human vision results using the same images. Landolt Cs were selected as the resolution target, which were imaged at multiple distances from multiple sensors. The images were analyzed using the software to determine the orientation of the C at each distance, resulting in a probability of correct orientation detection curve as a function of distance. Probability of correct orientation detection as a function of distance was also obtained directly from subjects that viewed the imagery. These curves were then used to generate resolution values for the sensor using the software results and the subject results. Resolution results for both the software and the participants were obtained for four different spectral band sensors as well as for fused images from two pairs of sensors.
Keywords: resolution, sensor resolution, image fusion, image quality, visual resolution
- Published
- 2011
- Full Text
- View/download PDF
5. Development of a dichoptic foveal/peripheral head-mounted display with partial binocular overlap
- Author
-
H. Lee Task, Dale R. Tyczka, Martha Jane Chatten, John O. Merritt, Bridget A. Fath, Darrel G. Hopper, and John B. Chatten
- Subjects
Binocular rivalry, Pixel, Machine vision, Computer science, Optical head-mounted display, Stereoscopy, Stereoscopic depth, Foveal, Perception, Computer graphics (images), Peripheral vision, Computer vision, Artificial intelligence
- Abstract
Previous foveal/peripheral display systems have typically combined the foveal and peripheral views optically, in a single eye, in order to provide simultaneously both high resolution and wide field of view from a limited number of pixels. While quite effective, this approach can lead to cumbersome optical designs that are not well suited to head-mounted displays. A simpler approach may be possible in the form of a dichoptic vision system, wherein each eye receives a different field of view (FOV) of the same scene, at different resolutions. One eye would be presented with high-resolution narrow-FOV foveal imagery, while the other would receive a much wider peripheral FOV. Binocular overlap in the central region would provide some degree of stereoscopic depth perception. It remains to be determined, however, if such a system would be acceptable to users, or if binocular rivalry or other adverse side-effects would degrade visual task performance compared to conventional head-mounted binocular displays. In this paper, we describe a preliminary dichoptic foveal/peripheral vision system and suggest methods by which its usability and performance can be assessed. This effort was funded by the U.S. Air Force Research Laboratory Human Performance Wing under SBIR Topic AF093-018.
- Published
- 2011
- Full Text
- View/download PDF
6. Synthetic observer approach to multispectral sensor resolution assessment
- Author
-
David W. Dommett, H. Lee Task, and Alan R. Pinkus
- Subjects
Image fusion, Computer science, Orientation (computer vision), Image quality, Multispectral image, Resolution, Detector, Image processing, Spectral bands, Sensor fusion, Computer vision, Artificial intelligence, Sub-pixel resolution
- Abstract
Resolution is often provided as one of the key parameters addressing the quality characteristics of a sensor. One traditional approach to determining the resolution of a sensor/display system is to use a resolution target pattern to detect the smallest element that can be "resolved" using the system. This requires a human in the loop to make the assessment. This study investigated the use of a custom-designed software approach to generate an effective resolution value for a sensor. Landolt Cs were selected as the resolution target and were imaged at multiple distances with different sensors. The images were analyzed using custom software to determine the orientation of the C at each distance, which resulted in a probability of correct orientation detection curve as a function of distance. This curve was used to generate a "resolution" for the sensor without involving human vision. Resolution results for four different spectral band sensors were obtained, as well as effective resolution of fused images from select pairs of sensors. These results and the possible use of this synthetic observer resolution approach are presented and discussed, as well as possible future research relating this resolution to human visual performance with fused image sources.
- Published
- 2010
- Full Text
- View/download PDF
7. Object recognition methodology for the assessment of multi-spectral fusion algorithms: phase 1
- Author
-
H. Lee Task, Alexander Toet, and Alan R. Pinkus
- Subjects
Image fusion, Computer science, Multispectral image, Object recognition, Sensor fusion, Test set, Pattern recognition, Detection theory, Computer vision, Artificial intelligence
- Abstract
In this effort we acquired and registered a multi-spectral dynamic image test set with the intent of using the imagery to assess the operational effectiveness of static and dynamic image fusion techniques for a range of relevant military tasks. This paper describes the image acquisition methodology, the planned human visual performance task approach, the lessons learned during image acquisition, and the plans for a future, improved image set, resolution assessment methodology, and human visual performance task.
- Published
- 2009
- Full Text
- View/download PDF
8. Quad-emissive display for multi-spectral sensor analyses
- Author
-
Alan R. Pinkus and H. Lee Task
- Subjects
Physics, Infrared, Near-infrared, Multispectral image, Detector, Hyperspectral imaging, Spectral bands, Stencil, Optics, Optoelectronics
- Abstract
The Quad-Emissive Display (QED) is a device that is designed to provide suitable emissive energy in four spectral bands to permit the simultaneous evaluation of sensors with different spectral sensitivities. A changeable target pattern, such as a Landolt C, a tumbling "E", a triangle or a bar pattern, is fabricated as a stencil (cutout) that is viewed against a second, black surface located several centimeters behind the stencil and thermally isolated from the stencil target. The sensor spectral bands of interest are visible (0.4 to 0.7 microns), near infrared (0.7 to 1.0 microns), short wave infrared (1.0 to 3.0 microns) and the long wave infrared (8.0 to 14.0 microns). This paper presents the details of the structure of the QED and preliminary results on the types of sensor/display resolution measurements and psychophysical studies that can be accomplished using the QED.
- Published
- 2009
- Full Text
- View/download PDF
9. Theoretical and applied aspects of night vision goggle resolution and visual acuity assessment
- Author
-
H. Lee Task and Alan R. Pinkus
- Subjects
Visual acuity, Image quality, Image intensifier, Night vision, Optical transfer function, Computer vision, Metric, Artificial intelligence, Image sensor, Night vision device
- Abstract
The image quality of night vision goggles is often expressed in terms of visual acuity, resolution or modulation transfer function. The primary reason for providing a measure of image quality is the underlying assumption that the image quality metric correlates with the level of visual performance that one could expect when using the device, for example, target detection or target recognition performance. This paper provides a theoretical analysis of the relationships between these three image quality metrics: visual acuity, resolution and modulation transfer function. Results from laboratory and field studies were used to relate these metrics to visual performance. These results can also be applied to non-image intensifier based imaging systems such as a helmet-mounted display coupled to an imaging sensor.
- Published
- 2007
- Full Text
- View/download PDF
10. Pixels, people, perception, pet peeves, and possibilities: a look at displays
- Author
-
H. Lee Task
- Subjects
Engineering, Monocular, Pixel, Interface (computing), Pet peeve, Visor, Computer graphics (images), Perception, System integration, Visual interface
- Abstract
This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San Antonio, Texas in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to address several topics associated primarily with helmet-mounted displays, systems integration, and the human-machine interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor-projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues, and others. In addition, this paper addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display-related areas.
- Published
- 2007
- Full Text
- View/download PDF
11. Night vision imaging system lighting evaluation methodology
- Author
-
H. Lee Task, Martha A. Hausmann, Maryann H. Barbato, and Alan R. Pinkus
- Subjects
Night vision, Radiance, Lighting system, Computer vision, Artificial intelligence, User interface, Night vision device, Visual field, Cockpit
- Abstract
In order for night vision goggles (NVGs) to be effective in aircraft operations, it is necessary for the cockpit lighting and displays to be NVG compatible. It has been assumed that the cockpit lighting is compatible with NVGs if the radiance values are compliant with the limits listed in Mil-L-85762A and Mil-Std-3009. However, these documents also describe a NVG-lighting compatibility field test procedure that is based on visual acuity. The objective of the study described in this paper was to determine how reliable and precise the visual acuity-based (VAB) field evaluation method is and compare it to a VAB method that employs less expensive equipment. In addition, an alternative, objective method of evaluating compatibility of the cockpit lighting was investigated. An inexpensive cockpit lighting simulator was devised to investigate two different interference conditions and six different radiance levels per condition. This paper describes the results, which indicate the objective method, based on light output of the NVGs, is more precise and reliable than the visual acuity-based method. Precision and reliability were assessed based on a probability of rejection (of the lighting system) function approach that was developed specifically for this study.
- Published
- 2005
- Full Text
- View/download PDF
12. The impact of target luminance and radiance on night vision device visual performance testing
- Author
-
H. Lee Task and Peter L. Marasco
- Subjects
Helmet-mounted display, Image intensifier, Luminance, Starlight, Night vision, Radiance, Computer vision, Scotopic vision, Artificial intelligence, Night vision device
- Abstract
Visual performance through night-vision devices (NVDs) is a function of many parameters such as target contrast, objective and eyepiece lens focus, signal/noise of the image intensifier tube, quality of the image intensifier, night-vision goggle (NVG) gain, and NVG output luminance to the eye. The NVG output luminance depends on the NVG sensitive radiance emitted (or reflected) from the visual acuity target (usually a vision testing chart). The primary topic of this paper is the standardization (or lack thereof) of the radiance levels used for NVG visual acuity testing. The visual acuity chart light level might be determined in either photometric (luminance) units or radiometric (radiance) units. The light levels are often described as “starlight,” “quarter moon,” or “optimum” light levels and may not actually provide any quantitative photometric or radiometric information. While these terms may be useful to pilots and the users of night-vision devices, they are inadequate for accurate visual performance testing. This is because there is no widely accepted agreement in the night vision community as to the radiance or luminance level of the target that corresponds to the various named light levels. This paper examines the range of values for “starlight,” “quarter moon,” and “optimum” light commonly used by the night vision community and referenced in the literature. The impact on performance testing of variations in target luminance/radiance levels is also examined. Arguments for standardizing on NVG-weighted radiometric units for testing night-vision devices instead of photometric units are presented. In addition, the differences between theoretical weighted radiance and actual weighted radiance are also discussed.
- Published
- 2003
- Full Text
- View/download PDF
13. Night vision goggle visual acuity assessment: results of an interagency test
- Author
-
H. Lee Task
- Subjects
Visual acuity, Computer science, Test (assessment), Night vision, Optometry, Computer vision, Scotopic vision, Artificial intelligence, Spatial frequency, Image resolution, Night vision device
- Abstract
There are several parameters that are used to characterize the quality of a night vision goggle (NVG) such as resolution, gain, field-of-view, visual acuity, etc. One of the primary parameters is visual acuity or resolution of the NVG. These two terms are often used interchangeably primarily because of the measurement methods employed. The objectives of this paper are to present: (1) an argument as to why NVG visual acuity and resolution should be considered as distinctly different parameters, (2) descriptions of different methods of measuring visual acuity and resolution, and (3) the results of a blind test by several agencies to measure the resolution of the same two NVGs (four oculars).
- Published
- 2001
- Full Text
- View/download PDF
14. Measurement of visual performance through scattering visors and aerospace transparencies
- Author
-
H. Lee Task and Peter L. Marasco
- Subjects
Physics, Haze, Scattering, Luminance, Transparency, Optics, Visor, Contrast (vision), Aerospace, Landolt C
- Abstract
Light scattered from helmet visors and aerospace transparencies is known to reduce visual performance. One popular measurement technique, maintained by the American Society for Testing and Materials, is ASTM D 1003, a standard procedure used to measure haze inherent in transparent materials, defined as the percent of the total transmitted light that is scattered. However, research has shown that visual acuity measured through several different types of helmet visors does not correlate well with visor haze. This is most likely because the amount of light scattered from a transparent material depends heavily on the light illuminating the transparency and on the viewing geometry, behavior that ASTM D 1003 does not characterize. Scattered light causes transparent parts to appear luminous and imparts a veiling luminance when superimposed over a target, reducing target contrast and inducing a visual performance loss. This paper describes an experiment in which threshold target background luminance, the luminance at which a target was barely visible, was measured for a number of observers viewing a Landolt C target through several levels of veiling luminance. Threshold luminance was examined for predictable behavior with respect to veiling luminance.
- Published
- 2001
- Full Text
- View/download PDF
15. Measurement of military helmet- and head-mounted display (HMD) visor optical properties
- Author
-
Dean F. Kocian and H. Lee Task
- Subjects
Engineering, Optical engineering, Optical head-mounted display, Optical coating, CRTs, Visor, Contrast (vision), Reflection, Neutral density filter, Simulation
- Abstract
This paper examines the light transmission, reflection, and scattering characteristics of military helmet visors used for see-through helmet-mounted displays (HMDs). HMDs used for the within-visual-range counter-air mission normally use the inner surface of the helmet visor to reflect the HMD image to the pilot's eye. This approach is popular because it minimizes any optical structures that interfere with the pilot's vision, while also maximizing see-through to the ambient scene. In most cases, a reflective coating, which increases the cost of the helmet visor significantly, must be applied to the inner surface in order to achieve enough contrast between the HMD image and the external light passing through the visor. Recently, with the development of high-luminance miniature cathode-ray tubes, it has become possible to eliminate the reflective coatings on neutral density helmet visors having a see-through range of 13-35%. This paper examines the light management properties of both types of visors. The paper stresses measurement techniques that produce repeatable results and what these results might imply about visual performance under operational lighting conditions.
- Published
- 2000
- Full Text
- View/download PDF
16. Effects of aircraft windscreens and canopies on HMT/D aiming accuracy: III
- Author
-
H. Lee Task and Chuck Goodyear
- Published
- 1999
- Full Text
- View/download PDF
17. Effects of aircraft windscreen on helmet-mounted display/tracker aiming accuracy
- Author
-
H. Lee Task
- Subjects
Transparency, Optics, Helmet-mounted display, Computer science, Refraction
- Abstract
Modern fighter aircraft windscreens are typically made of curved, transparent plastic for improved aerodynamics and bird-strike protection. Because they are curved, these transparencies often refract light in such a way that a pilot looking through the transparency will see a target in a location other than where it really is. This effect has been known for many years, and methods to correct the aircraft head-up display (HUD) for these angular deviations have been developed and employed. The same problem occurs for helmet-mounted displays (HMDs) used for target acquisition, only worse, because the pilot can look through any part of the transparency instead of being constrained to just the forward section as in the case of the HUD. To determine the potential impact of these windscreen refraction errors, two F-15 windscreens were measured: one acrylic and one multilayer acrylic and polycarbonate laminate. The average aiming error measured for the acrylic was 3.6 milliradians with a maximum error of 9.0 milliradians. The laminated windscreen was slightly worse at 4.1 milliradians average error and 10.5 milliradians maximum. These aiming errors were greatly reduced by employing correction algorithms which could be applied to the aiming information on the HMD. Subtleties of coordinate systems and roll correction are also addressed.
- Published
- 1996
- Full Text
- View/download PDF
18. Visually Coupled Systems Hardware and the Human Interface
- Author
-
Dean F. Kocian and H. Lee Task
- Abstract
A visually coupled system (VCS) has been defined as "... a special 'subsystem' which integrates the natural visual and motor skills of an operator into the system he is controlling" (Birt and Task, 1973). A basic VCS consists of three major components: (1) a head- or helmet-mounted (or head-directed) visual display, (2) a means of tracking head and/or eye pointing direction, and (3) a source of visual information which is dependent on eye/head viewing direction. The concept of a VCS is relatively simple: an operator looks in a particular direction, the head or eye tracker determines what that direction is, and the visual information source produces appropriate imagery to be viewed on the display by the operator. In this manner the operator is visually coupled to the system represented by the visual information source. The visual information source could be a physical imaging sensor such as a television camera, or it could be a synthetic source such as computer-generated imagery (the basis for a virtual reality (VR) or virtual environment system). Thus, a VR system is really a subset of a VCS, which can present both real-world and virtual information to an operator, often on a see-through display. The display is usually a helmet/head-mounted display (HMD), but it could also be the interior of a dome capable of displaying a projected image, or it could be a display that is not supported by the head but is mechanically mounted and follows head movement, which in recent times has been referred to as a binocular omni-oriented monitor (BOOM) display. Both eye-tracking and head-tracking devices have been developed, but by far the least expensive and most widely used is head tracking (based on the reasonable assumption that the eyes will be looking in the general direction that the head is pointing). Figures 6-1 through 6-4 are photographs of some early helmet-mounted and BOOM displays.
In this chapter we will concentrate primarily on helmet/head-mounted displays and helmet/head trackers. This section describes each of the three main components of a visually coupled system and defines characteristics that are used in the specification of these components.
- Published
- 1995
- Full Text
- View/download PDF
19. Visual acuity versus field of view and light level for night vision goggles (NVGs)
- Author
-
Sharon A. Dixon, Mary M. Donohue-Perry, and H. Lee Task
- Subjects
Visual acuity, Image intensifier, Field of view, Luminance, Starlight, Optics, Night vision, Computer vision, Spatial frequency, Artificial intelligence, Night vision device
- Abstract
Visual acuity (resolution) and field of view are two significant parameters used to characterize night vision goggles (NVGs). It is well established that these two parameters are coupled in an inverse relationship: an increase in field of view results in a reduction in visual acuity and vice versa. An experiment was conducted to determine how visual acuity through NVGs changes as a function of NVG field of view and ambient scene illumination level. Three trained observers, ranging in age from 33 to 42 years, participated in this study. The NVGs used in the study had fields of view of 40, 47, and 52 degrees, respectively. Five levels of ambient scene illumination (corresponding to NVG output luminance levels of 0.01, 0.03, 0.08, 0.26, and 1.9 fL), ranging from overcast starlight to quarter moon, were provided by a 2856 K light source. The targets used in the study were approximately 95+% contrast square-wave targets ranging in spatial frequency from 45 cycles per degree down to 5 cycles per degree. The method of adjustment was employed: the trained observer started at a distance of 30 feet and determined the highest spatial frequency target that was clearly discernible. The subject was then directed to walk back slowly from the target until it was just out of focus, and then walk forward until the target was barely discernible. The distance from the target was recorded and used to calculate the angular spatial frequency (and equivalent Snellen acuity). The results indicate that the simple geometric model of the inverse relationship between resolution and field of view is adequate for characterizing this design trade-off for the quality of image intensifier tubes currently available.
- Published
- 1994
- Full Text
- View/download PDF
20. Dynamic Spatial Filter For Optical Signal Processing Using A Liquid Crystal Light Valve
- Author
-
H. Lee Task and William R. Mallory
- Subjects
Materials science, Spatial filter, Polarization (waves), Frame rate, Fourier transform, Transducer, Optics, Light valve, Optical filter, Beam splitter
- Abstract
A device is described which is capable of performing dynamic spatial filtering. The filter, which can be changed at TV frame rates, utilizes a liquid crystal light valve (LCLV) in a controlled-reflectivity mode. A filter pattern can be generated by a CRT or by other optical methods and imaged onto the LCLV. The LCLV is placed in the Fourier plane of an optical transform system. The dynamic spatial filter is described in detail, and current experimental results are given.
Introduction: An experimental spatial filter has been constructed which utilizes reflection from a liquid crystal light valve (LCLV) in the Fourier plane. The driver or input side of the LCLV is illuminated with a spatial pattern by a light source capable of changing at the desired rate, up to the response rate of the LCLV. Only information on the output side which appears opposite the illuminated regions contributes to the final image. In our experiment, a slide projector was used as the light source. By using a CRT, the filter pattern can be changed at TV frame rates.
Liquid crystal light valve: The heart of the dynamic spatial filter is an LCLV. This device is a light-to-light image transducer. A low-level spatially distributed optical input on one side of a thin-film sandwich modifies the polarization of the light reflected from the other side.
- Published
- 1982
- Full Text
- View/download PDF