36 results for "Harley R"
Search Results
2. Gabor Difference Analysis of Digital Video Quality
- Author
-
Jing Guo, M. Van Dyke-Lewis, and Harley R. Myler
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Gabor transform ,Video quality ,Video compression picture types ,Gabor filter ,Human visual system model ,Media Technology ,Video denoising ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Data compression ,Reference frame - Abstract
The rapid increase in the development of digital video systems has generated a strong need for objective video quality metrics. Two methods are presented in this work. One is the Gabor difference analysis (GDA) full reference method, and the other is the reverse frame prediction (RFP) no reference video quality method. Both methods are based on the multi-channel properties of the human visual system (HVS). Gabor filtering is used in both methods. In the GDA method, a reference and a degraded digital video sequence are compared by taking into account various psycho-perceptual properties of the HVS. The RFP method does not require a reference video stream and is intended for in-service testing and on-line monitoring. The performances of the proposed methods are evaluated. The methods in this work are shown to be consistent with the data from subjective testing over a wide range of scenes. This work is critical to the evaluation of the effectiveness of compression schemes on HDTV imagery.
- Published
- 2004
- Full Text
- View/download PDF
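The full-reference idea behind the GDA method can be illustrated with a small sketch: filter the reference and degraded frames with a bank of Gabor channels and pool the per-channel differences into a single score. This is only a minimal illustration of the multi-channel comparison, not the authors' GDA metric; the filter parameters, the mean-squared-difference pooling, and the function names (gabor_kernel, gda_score, etc.) are assumptions for demonstration, and the psycho-perceptual weighting described in the abstract is omitted.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Real-valued Gabor kernel at the given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def gabor_channel(frame, kernel):
    """Filter one frame with one Gabor channel via FFT-based (circular) convolution."""
    F = np.fft.rfft2(frame, s=frame.shape)
    K = np.fft.rfft2(kernel, s=frame.shape)
    return np.fft.irfft2(F * K, s=frame.shape)

def gda_score(reference, degraded, freqs=(0.1, 0.25), orientations=4):
    """Pool per-channel Gabor differences into one distortion score
    (larger means more visible degradation under this toy model)."""
    score = 0.0
    for f in freqs:
        for k in range(orientations):
            kern = gabor_kernel(f, np.pi * k / orientations)
            r = gabor_channel(reference, kern)
            d = gabor_channel(degraded, kern)
            score += np.mean((r - d) ** 2)
    return score / (len(freqs) * orientations)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    deg = ref + 0.05 * rng.standard_normal(ref.shape)   # simulated degradation
    print("GDA-style score:", gda_score(ref, deg))
```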
3. Constraint mechanisms for knowledge acquisition from computer-aided design data
- Author
-
Avelino J. Gonzalez, Massood Towhidnejad, and Harley R. Myler
- Subjects
Constraint (information theory) ,Engineering ,Artificial Intelligence ,business.industry ,Computer Aided Design ,Artificial intelligence ,business ,computer.software_genre ,Knowledge acquisition ,computer ,Industrial and Manufacturing Engineering - Abstract
A number of automated reasoning systems find their basis in process control engineering. These programs are often model-based and use individual frames to represent component functionality. This representation scheme allows the process system to be dynamically monitored and controlled as the reasoning system need only simulate the behavior of the modeled system while comparing its behavior to real-time data. The knowledge acquisition task required for the construction of knowledge bases for these systems is formidable because of the necessity of accurately modeling hundreds of physical devices. We discuss a novel approach to the capture of this component knowledge entitled automated knowledge generation (AKG) that utilizes constraint mechanisms predicated on physical behavior of devices for the propagation of truth through the component model base. A basic objective has been to construct a complete knowledge base for a model-based reasoning system from information that resides in computer-aided design (CAD) databases. If CAD has been used in the design of a process control system, then structural information relating the components will be available and can be utilized for the knowledge acquisition function. Relaxation labeling is the constraint-satisfaction method used to resolve the functionality of the network of components. It is shown that the relaxation algorithm used is superior to simple translation schemes.
- Published
- 1993
- Full Text
- View/download PDF
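Relaxation labeling, the constraint-satisfaction method named in the abstract, can be sketched generically as an iterative update of label probabilities driven by neighbor compatibilities. The sketch below is a textbook-style version, not the AKG implementation; the compatibility matrix, the multiplicative update rule, and the toy two-component example are illustrative assumptions.

```python
import numpy as np

def relaxation_labeling(p, compat, neighbors, iterations=50):
    """Classic relaxation-labeling update: p[i, l] is the probability that
    object i has label l; compat[l, m] in [-1, 1] scores how compatible
    label l on an object is with label m on a neighboring object."""
    p = p.copy()
    for _ in range(iterations):
        q = np.zeros_like(p)                 # support gathered from neighbors
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                q[i] += compat @ p[j]
            if nbrs:
                q[i] /= len(nbrs)            # keep support roughly in [-1, 1]
        p = p * (1.0 + q)                    # reinforce supported labels
        p = np.clip(p, 1e-9, None)
        p /= p.sum(axis=1, keepdims=True)    # renormalize to probabilities
    return p

# toy example: two connected components, two candidate types that prefer to agree
p0 = np.array([[0.6, 0.4], [0.5, 0.5]])
compat = np.array([[1.0, -1.0], [-1.0, 1.0]])
neighbors = [[1], [0]]
print(relaxation_labeling(p0, compat, neighbors))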
4. Representation of process system knowledge through component constraint descriptions
- Author
-
Avelino J. Gonzalez, Harley R. Myler, and Frederic D. McKenzie
- Subjects
Theoretical computer science ,Knowledge representation and reasoning ,Artificial Intelligence ,Control and Systems Engineering ,Computer science ,Electrical and Electronic Engineering ,Constraint satisfaction ,Transfer function ,Knowledge acquisition - Abstract
Automated development of models for use in computer simulations of engineered systems (e.g. electronic, power, thermal, process systems, etc.) can represent a significant advantage to system designers and troubleshooters. The fact that most modern systems have been designed in Computer-Aided Design (CAD) environments represents a unique opportunity for automatically generating a model from the electronic representation of a system. Models generally require definition of the system structure (i.e. component connectivity) and of the behavioral description of its components. With some exceptions, determination of the system connectivity from a CAD representation is a relatively uncomplicated procedure. However, the assignment of a functional behavior to each component of the system depicted in the CAD representation is a significant problem. This is because behavioral information is usually not included in the CAD representation of the system, as it is not required by the typical users of the CAD graphic output. The overall issue addressed in this paper, therefore, is the determination of the correct behavioral attributes for the components making up the modeled system. This will be addressed through the identification and matching of system components to elements of an external base of generic component knowledge. Each component's behavioral representation (i.e. transfer function) will be set equal to that of its matching element in this external database. The compatibility of a hard-to-identify system component with its (known) neighboring components can be used to shed some light on its identity. Component compatibility can also be used to determine correct connectivity when that defined by the CAD database is incorrect. This compatibility can be determined through the use of domain (system) knowledge, in the form of system theory and/or practice. The representation of this knowledge as a series of constraints is the focus of this paper. Verification of this technique using a testbed system is also reported here.
- Published
- 1993
- Full Text
- View/download PDF
5. CONSTRAINT MECHANISMS IN AUTOMATED KNOWLEDGE GENERATION
- Author
-
Massood Towhidnejad, Avelino J. Gonzalez, and Harley R. Myler
- Subjects
Reasoning system ,Knowledge management ,Knowledge representation and reasoning ,business.industry ,Computer science ,Knowledge engineering ,Knowledge acquisition ,Knowledge-based systems ,Knowledge base ,Knowledge extraction ,Artificial Intelligence ,Domain knowledge ,Software engineering ,business - Abstract
In the past decade, the use of control and diagnostic reasoning systems in different areas of government, industry, and university operations has increased. A great number of these systems find their basis in engineering, specifically in process control. The majority of the time devoted to the development of these systems is spent in the areas of Knowledge Engineering (KE) and Knowledge Acquisition (KA). Extensive research for the development of systems that perform the KE task is under way. This article presents an approach toward automatic knowledge acquisition. The objective of this research was to construct a complete knowledge base for a diagnostic and control reasoning system from information that resides in Computer Aided Design (CAD) databases. This work will decrease the amount of time spent in the manual generation of knowledge bases for diagnostic reasoning systems. It will also enable the creation of more reliable knowledge bases since less hand coding is required.
- Published
- 1993
- Full Text
- View/download PDF
6. Object-oriented neural simulation tools for a hypercube parallel machine
- Author
-
Randall K. Gillis, Arthur Robert Weeks, Gary W. Hall, and Harley R. Myler
- Subjects
Object-oriented programming ,Artificial neural network ,Computer science ,business.industry ,Cognitive Neuroscience ,Machine learning ,computer.software_genre ,Computer Science Applications ,Software ,Parallel processing (DSP implementation) ,Artificial Intelligence ,Hypercube ,Artificial intelligence ,Software engineering ,business ,Computer-aided software engineering ,computer ,Graphical user interface - Abstract
A substantial amount of work has recently been completed at the University of Central Florida in the development of an Artificial Neural Network (ANN) simulation environment that overcomes traditional implementation problems normally associated with these types of programs. Researchers addressing the development and application of ANN systems seek modifiability, expansibility, and platform independence. Our system allows for these elements as well as parallel execution when parallel hardware is available. This is accomplished through the use of object-oriented programming and a Computer Aided Software Engineering (CASE) approach to the development environment that allows the user to modify the software describing the ANN model without understanding the overall implementation details. A sophisticated Graphical User Interface (GUI) is provided to allow rapid construction and evaluation of complex large-scale neural models.
- Published
- 1992
- Full Text
- View/download PDF
7. Display Design Guide for Visual Media
- Author
-
Richard D. Gilson, Harley R. Myler, and Jada D. Brooks
- Subjects
Engineering ,VISUAL TRAINING ,business.industry ,media_common.quotation_subject ,Fidelity ,Display design ,General Medicine ,Training (civil) ,Human–computer interaction ,Taxonomy (general) ,Visual media ,Computer vision ,Artificial intelligence ,business ,Relevant information ,Realism ,media_common - Abstract
Visual display systems have become so advanced, with so many available options, that it is difficult to decide which type of system is needed. Traditionally, the choice has been to select the system with the greatest level of functional and physical fidelity that can be procured. While most would assume that more realism results in better training and performance, research (Dwyer, 1972; Marsh, 1983; Richey, 1986) suggests otherwise. Realism in a display system may prove more distracting than helpful, depending upon the training objective. The initial intent of this project was to locate any and all information pertaining to visual display characteristics as they relate to human visual functions or to visual training requirements. A selected literature search was conducted to locate relevant information, databases, or taxonomies relating this information. No current taxonomy of this kind was located; therefore, the efforts of the project turned toward formulating a taxonomic structure based on information derived from various disciplines such as instructional technology, human psychology, and computer engineering. Only general guidelines, with no specific information, pertaining to the selection of visual media for given situations were found in the instructional technology literature. Based on those guidelines, four general types of presentation media were selected: alphanumerics, 2-dimensional graphics, 3-dimensional graphics, and scene quality. Next, display specifications for these presentations were identified with the aid of engineering and vendor-provided design parameters. Finally, human limitations were applied and psychophysical transformations allowed the determination of final display descriptions.
- Published
- 1990
- Full Text
- View/download PDF
8. Iterative image reconstruction: a wavelet approach
- Author
-
Harley R. Myler and Wissam A. Rabadi
- Subjects
Discrete wavelet transform ,Iterative method ,business.industry ,Applied Mathematics ,Second-generation wavelet transform ,Wavelet transform ,Cascade algorithm ,Iterative reconstruction ,Wavelet ,Signal Processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Harmonic wavelet transform ,Algorithm ,Mathematics - Abstract
Image reconstruction from the measurements of the image Fourier transform magnitude remains an important and difficult problem. Among all the approaches developed to solve this problem, the iterative transform algorithms are currently the most efficient. However, these algorithms suffer from major drawbacks that limit their practical application. We introduce a wavelet adaptation of the general iterative algorithm that can significantly improve the performance of the algorithm while dramatically reducing its computational complexity.
- Published
- 1998
- Full Text
- View/download PDF
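The iterative transform algorithms that this paper builds on alternate between enforcing the measured Fourier magnitude and the spatial-domain constraints. A minimal baseline (error-reduction style) loop is sketched below; it does not include the wavelet adaptation that is the paper's contribution, and the support/non-negativity constraints, iteration count, and function names are assumptions. A plain loop like this can stagnate, which is one of the drawbacks the paper addresses.

```python
import numpy as np

def error_reduction(magnitude, support, iterations=200, seed=0):
    """Baseline error-reduction loop: alternate between enforcing the measured
    Fourier magnitude and the spatial-domain constraints (support, non-negativity)."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support          # random initial estimate
    for _ in range(iterations):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))       # keep phase, replace magnitude
        x = np.real(np.fft.ifft2(X))
        x = np.where((x > 0) & support, x, 0.0)        # spatial constraints
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true = np.zeros((64, 64))
    true[24:40, 20:44] = rng.random((16, 24))
    support = np.zeros((64, 64), dtype=bool)
    support[24:40, 20:44] = True
    mag = np.abs(np.fft.fft2(true))                    # measured magnitude only
    est = error_reduction(mag, support)
    print("relative error:", np.linalg.norm(est - true) / np.linalg.norm(true))
```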
9. Knowledge base enhancement of visual tracking
- Author
-
Wiley E. Thompson, Harley R. Myler, and Gerald M. Flachs
- Subjects
Identification (information) ,Knowledge base ,business.industry ,Computer science ,Feature (computer vision) ,Laser tracker ,Eye tracking ,Computer vision ,Artificial intelligence ,Image segmentation ,business ,Tracking (particle physics) - Abstract
A proposed system for a visual tracker is presented that relies on information developed from previous tracking scenarios stored in a knowledge base to enhance tracking performance. The system is comprised of a centroid tracker front end that supplies segmented image features to a data reduction algorithm and subsequently to a track processor operating under one of two modes, learn or track. While in learn mode, a human operator provides identification cues for membership in a long-term storage relation within a knowledge base. In track mode the system operates autonomously with a cognitive processing algorithm replacing the human operator. The autonomous system functions as a correlation tracker by comparing processed input to data that was stored in the knowledge base during learn mode. Results determined from the classification generate tracker directives that either enhance the current track or cause the tracker to search for alternate targets based on a global target tracking list.
- Published
- 2005
- Full Text
- View/download PDF
10. An approach to the acquisition of a world frame using a visual associative memory
- Author
-
D.B. Clifton, Arthur Robert Weeks, and Harley R. Myler
- Subjects
Visual memory ,Computer science ,business.industry ,Content-addressable storage ,Gestalt psychology ,Grandmother cell ,Spatial intelligence ,Artificial intelligence ,Content-addressable memory ,business ,Memory map ,Associative property - Abstract
An approach to the autonomous acquisition of world knowledge is presented. Based on the theory that biological cognizance is essentially a Gestalt awareness of the world, our research attempts to develop a similar awareness using an associative memory. This associative memory receives inputs from several self-organizing feature maps which identify objects in particular parts of an image. Spatial relationships of objects in the image are reflected in the relationship of the feature maps to each other, so the associative memory records both identity and spatial information of objects. To this visual memory is added the memory of the viewer's internal status, in this case an estimate of position. Together, these associative memories provide a view-field model of location awareness similar to that originally presented by D. Zipser (1986). In keeping with the Gestalt approach taken here, the grandmother cell approach of the earlier work has been replaced by the use of distributed representations. View-fields are identified as a pattern of activity across a number of nodes. It is believed that this distributed approach will permit greater generalization on the part of the viewer and provide a larger memory capacity than can be achieved otherwise.
- Published
- 2002
- Full Text
- View/download PDF
11. Multicriterion vehicle pose estimation for SAR ATR
- Author
-
Liviu I. Voicu, Ronald Patton, and Harley R. Myler
- Subjects
Synthetic aperture radar ,business.industry ,Image segmentation ,Systems modeling ,3D modeling ,computer.software_genre ,Identification (information) ,Geography ,Automatic target recognition ,Motion estimation ,Computer vision ,Artificial intelligence ,Data mining ,business ,Pose ,computer - Abstract
Many approaches to target recognition on SAR images employ model-based techniques. These systems incorporate computationally intensive operations such as large database probing or complex 3D renderings that are used to produce simulations that are compared against unknown targets. These operations would achieve a significant improvement in speed performance if the target poses were known in advance. A study that addresses the problem of estimating the poses of vehicles in SAR images is reported in this paper. A pose estimation algorithm suite is proposed that is based on a set of partially independent criteria. A statistical analysis of the performance obtained by employing the established criteria, both individually and in combination, is also conducted and the results are comparatively discussed.
- Published
- 1999
- Full Text
- View/download PDF
12. Computationally intelligent approach to ATR
- Author
-
Harley R. Myler, Liviu I. Voicu, Steven P. Smith, and Ronald Patton
- Subjects
Synthetic aperture radar ,Engineering ,business.industry ,Computer programming ,Process (computing) ,Image processing ,Machine learning ,computer.software_genre ,Segmentation ,Artificial intelligence ,Geometric hashing ,business ,computer ,Evolutionary programming ,Scope (computer science) - Abstract
A model-based system employing a computationally intelligent search strategy has been developed for classifying military vehicles in SAR imagery. The system combines pose detection, Evolutionary Programming (EP) methods, and Geometric Hashing (GH). The design is based on an information filtering process that progressively narrows the scope of the problem space while maximizing the likelihood of success. While the current system has been trained to identify 12 military vehicles, the architecture is extensible to additional vehicle types.
- Published
- 1999
- Full Text
- View/download PDF
13. Detection performance prediction on IR images assisted by evolutionary learning
- Author
-
Ronald Patton, Liviu I. Voicu, and Harley R. Myler
- Subjects
Visual perception ,business.industry ,media_common.quotation_subject ,Feature extraction ,Evolutionary algorithm ,Image segmentation ,Visualization ,Data modeling ,Geography ,Perception ,Clutter ,Computer vision ,Artificial intelligence ,business ,media_common - Abstract
Background clutter characterization in IR imagery has become an actively researched field, and several clutter models have been reported. These models attempt to evaluate the target detection/recognition probabilities that are characteristic of a certain scene when specific target and human visual perception features are known. The prior knowledge assumed and required by these models is a severe limitation. Furthermore, the attempt to model subjective and intricate mechanisms such as human perception with simple mathematical formulae is controversial. In this paper, we introduce the idea of adaptive models that are dynamically derived from a set of examples by a supervised evolutionary learning scheme. A set of characteristic scene and target features with a demonstrated influence on the human visual perception mechanism is first extracted from the original images. Then, the correlation between these features and the results obtained by visual observer tests on the same set of images is captured into a model by the learning scheme. The effectiveness of the adaptive modeling principle is discussed in the final part of the paper.
- Published
- 1999
- Full Text
- View/download PDF
14. Semiotic foundation for multisensor-multilook fusion
- Author
-
Harley R. Myler
- Subjects
Data processing ,Modalities ,Situation awareness ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,Process (engineering) ,business.industry ,Feature extraction ,ComputingMilieux_PERSONALCOMPUTING ,Multimodality ,InformationSystems_MODELSANDPRINCIPLES ,Human–computer interaction ,Semiotics ,Artificial intelligence ,business ,Qualitative research - Abstract
This paper explores the concept of an application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative ones. The term semiotic refers to signs, or signatory data that encapsulates information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is examined, and the design for a multisensor system as an information fusion system is explored. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and a discussion of how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.
- Published
- 1998
- Full Text
- View/download PDF
15. Approach to multisensor/multilook information fusion
- Author
-
Ronald Patton and Harley R. Myler
- Subjects
Engineering ,Data processing ,Artificial neural network ,business.industry ,Evolutionary algorithm ,Information processor ,Swarm behaviour ,Sensor fusion ,Machine learning ,computer.software_genre ,Electronic data ,Artificial intelligence ,Geometric hashing ,business ,computer - Abstract
We are developing a multi-sensor, multi-look Artificial Intelligence Enhanced Information Processor (AIEIP) that combines classification elements of geometric hashing, neural networks and evolutionary algorithms in a synergistic combination. The fusion is coordinated using a piecewise level fusion algorithm that operates on probability data from statistics of the individual classifiers. Further, the AIEIP incorporates a knowledge-based system to aid a user in evaluating target data dynamically. The AIEIP is intended as a semi-autonomous system that not only fuses information from electronic data sources, but also has the capability to include human input derived from battlefield awareness and intelligence sources. The system would be useful in either advanced reconnaissance information fusion tasks where multiple fixed sensors and human observer inputs must be combined or for a dynamic fusion scenario incorporating an unmanned vehicle swarm with dynamic, multiple sensor data inputs. This paper represents our initial results from experiments and data analysis using the individual components of the AIEIP on FLIR target sets of ground vehicles.
- Published
- 1997
- Full Text
- View/download PDF
16. Application of the Neocognitron to target identification
- Author
-
Harley R. Myler
- Subjects
Image fusion ,Artificial neural network ,Contextual image classification ,business.industry ,Computer science ,Optical engineering ,Image processing ,Computer vision ,Neocognitron ,Artificial intelligence ,Forward looking infrared ,business ,Classifier (UML) - Abstract
A derivative of the Fukushima Neocognitron was trained with a set of preprocessed target image chips for the purpose of target classification. The Neocognitron was chosen because of robust performance on handwritten characters that contain contours and complex line shapes not unlike the processed target images used in our multi-sensor/multi-look information fusion system. Advantages of the Neocognitron include translation and distortion invariance, which are desirable properties of any classifier. This paper represents our initial results and conclusions from experiments and data analysis using the Neocognitron on FLIR target sets of ground vehicles.
- Published
- 1997
- Full Text
- View/download PDF
17. Image reconstruction using wavelets
- Author
-
Wissam A. Rabadi and Harley R. Myler
- Subjects
Computer science ,business.industry ,Image processing ,Iterative reconstruction ,symbols.namesake ,Fourier transform ,Wavelet ,symbols ,Probabilistic analysis of algorithms ,Computer vision ,Artificial intelligence ,business ,Image retrieval ,Algorithm ,Image restoration - Abstract
Image reconstruction from the measurements of the image Fourier transform modulus is an important and difficult problem. Among all the approaches developed to solve this problem, the iterative algorithms remain the most efficient. However, these algorithms suffer from a major drawback that limits their practical application. In this paper we introduce a wavelet adaptation of one of the iterative algorithms that can significantly improve the performance of these algorithms while dramatically reducing their computational complexity.
- Published
- 1996
- Full Text
- View/download PDF
18. Pyramid framework for image reconstruction from nonimaged laser speckle
- Author
-
Arthur Robert Weeks, Kevin J. Gamble, Wissam A. Rabadi, and Harley R. Myler
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Iterative reconstruction ,Speckle pattern ,symbols.namesake ,Fourier transform ,Wavelet ,symbols ,Computer vision ,Pyramid (image processing) ,Artificial intelligence ,business ,Image retrieval ,Image restoration ,Mathematics - Abstract
A multiresolution approach for image reconstruction from the magnitude of its Fourier transform has been developed and implemented by employing the concept of pyramid sampling. In this approach several iterations of the error reduction algorithm are performed at each level of the pyramid using a coarse-to-fine strategy, resulting in improved convergence and reduced computational cost.
- Published
- 1995
- Full Text
- View/download PDF
19. Edge detection of color images using the HSL color space
- Author
-
Carlos E. Felix, Arthur Robert Weeks, and Harley R. Myler
- Subjects
Color image ,business.industry ,Color space ,Edge detection ,Spectral color ,Computer Science::Graphics ,RGB color model ,Computer vision ,Artificial intelligence ,Chromaticity ,business ,Image gradient ,Mathematics ,Hue - Abstract
Various edge detectors have been proposed as well as several different types of adaptive edge detectors, but the performance of many of these edge detectors depends on the features and the noise present in the grayscale image. Attempts have been made to extend edge detection to color images by applying grayscale edge detection methods to each of the individual red, blue, and green color components as well as to the hue, saturation, and intensity color components of the color image. The modulus-2π nature of the hue color component makes its detection difficult. For example, a hue of 0 and 2π yields the same color tint. Normal edge detection of a color image containing adjacent pixels with hues of 0 and 2π could yield the presence of an edge when an edge is really not present. This paper presents a method of mapping the 2π-modulus hue space to a linear space enabling the edge detection of the hue color component using the Sobel edge detector. The results of this algorithm are compared against the edge detection methods using the red, blue, and green color components. By combining the hue edge image with the intensity and saturation edge images, more edge information is observed.
- Published
- 1995
- Full Text
- View/download PDF
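The wrap-around problem described in the abstract (hues of 0 and 2π are the same tint but numerically far apart) can be demonstrated with a short sketch. The paper's specific linear remapping of the hue space is not reproduced here; instead the sketch uses one common alternative, embedding hue on the unit circle before applying the Sobel operator, purely to show how a wrap-aware formulation suppresses the false edge that naive Sobel detection reports.

```python
import numpy as np
from scipy import ndimage

def hue_edges(hue):
    """Edge strength of a hue field (radians) that respects the 2*pi wrap-around:
    hue is embedded on the unit circle so that 0 and 2*pi map to the same point,
    then Sobel gradients of the two coordinates are combined."""
    c, s = np.cos(hue), np.sin(hue)
    cx, cy = ndimage.sobel(c, axis=1), ndimage.sobel(c, axis=0)
    sx, sy = ndimage.sobel(s, axis=1), ndimage.sobel(s, axis=0)
    return np.sqrt(cx**2 + cy**2 + sx**2 + sy**2)

# two regions whose hues differ only by the wrap-around (2*pi - 0.01 vs. 0):
hue = np.full((32, 32), 2 * np.pi - 0.01)
hue[:, 16:] = 0.0
print("max naive Sobel response:", np.abs(ndimage.sobel(hue, axis=1)).max())
print("max wrap-aware response: ", hue_edges(hue).max())
```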
20. RGB color enhancement using homomorphic filtering
- Author
-
Liviu I. Voicu, Arthur Robert Weeks, and Harley R. Myler
- Subjects
business.industry ,Computer science ,Image quality ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Butterworth filter ,Image processing ,Filter (signal processing) ,Homomorphic filtering ,RGB color model ,Computer vision ,Artificial intelligence ,business ,High-pass filter ,Linear filter ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Because the homomorphic filter can selectively enhance blurred images with poor contrast and nonuniform illumination, a study has been made of the possibility of applying it to RGB (24-bit true color) images. Moreover, the effects of different shapes for the linear filter employed by the process are discussed and illustrated using a classical high-pass Butterworth filter modified for more flexibility in the final enhancement. An image of poor quality in terms of blurring and nonuniform illumination was used to demonstrate the results of the different stages of the filtering process. It is shown that homomorphic filtering is a viable tool for enhancing poor-quality RGB images.
- Published
- 1995
- Full Text
- View/download PDF
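A minimal sketch of per-channel homomorphic filtering of an RGB image follows: take the log of each channel, apply a Butterworth-style high-emphasis transfer function in the frequency domain, and exponentiate. The cutoff, order, and gain values, and the use of log1p/expm1 for numerical safety, are assumptions; the modified Butterworth shape used by the authors is not reproduced here.

```python
import numpy as np

def homomorphic(channel, cutoff=0.1, order=2, low_gain=0.5, high_gain=2.0):
    """Homomorphic enhancement of one channel: work on the log of the pixel values,
    attenuate low frequencies (illumination) and boost high frequencies (reflectance)
    with a Butterworth-style high-emphasis transfer function."""
    rows, cols = channel.shape
    log_img = np.log1p(channel.astype(np.float64))
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u**2 + v**2)
    hp = 1.0 / (1.0 + (cutoff / (d + 1e-9)) ** (2 * order))   # Butterworth high-pass
    H = low_gain + (high_gain - low_gain) * hp                 # high-emphasis shape
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    return np.clip(np.expm1(filtered), 0, None)

def homomorphic_rgb(img):
    """Apply the same homomorphic filter independently to R, G, and B."""
    return np.dstack([homomorphic(img[..., k]) for k in range(3)])

rng = np.random.default_rng(0)
demo = rng.random((64, 64, 3))
print(homomorphic_rgb(demo).shape)
```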
21. Line drawing extraction from gray level images by feature integration
- Author
-
Hoi J. Yoo, Harley R. Myler, Richard Lepage, and Daniel Crevier
- Subjects
Line segment ,Similarity (geometry) ,Feature (computer vision) ,business.industry ,Orientation (computer vision) ,Computer science ,Feature extraction ,Human visual system model ,Closure (topology) ,Canny edge detector ,Computer vision ,Artificial intelligence ,business - Abstract
We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight-line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
- Published
- 1994
- Full Text
- View/download PDF
22. Histogram equalization of the saturation component for true-color images using the C-Y color space
- Author
-
Harley R. Myler, Arthur Robert Weeks, and G. Eric Hague
- Subjects
Color histogram ,Computer science ,business.industry ,Color image ,Color normalization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color space ,RGB color space ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Multimedia ,RGB color model ,Computer vision ,Adaptive histogram equalization ,Artificial intelligence ,business ,Histogram equalization ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Histogram equalization is a well known tool for enhancing the contrast and brightness of grayscale images. Grayscale histogram equalization has been extended to color images with limited success. One common method is to equalize the illumination component, while leaving the saturation and hue components unchanged. This method doesn't improve the overall color saturation of the image. Another approach is to apply equalization techniques in the RGB color space. The difficulty in using the RGB color space is that it does not correspond to human interpretation of color. This paper describes a method for histogram equalization of the saturation component using the color difference (or C-Y) color space. Since equalization of the saturation component alone leads to color artifacts, attention is given to the relationship that exists between saturation and intensity.
- Published
- 1994
- Full Text
- View/download PDF
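One plausible reading of saturation equalization in the color-difference (C-Y) space is sketched below: keep luminance Y and the chroma angle (hue), and histogram-equalize only the chroma magnitude taken as saturation. The Y weights, the definition of saturation as the magnitude of the (R-Y, B-Y) vector, and the rescaling by the original maximum are assumptions, and the sketch deliberately ignores the saturation-intensity coupling that the paper says must be handled to avoid color artifacts.

```python
import numpy as np

def equalize_saturation_cy(rgb, bins=256):
    """Hedged sketch: luminance Y and hue (chroma angle) are kept, and only the
    chroma magnitude ('saturation') is remapped by histogram equalization."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    ry, by = r - y, b - y                         # color-difference components
    sat = np.hypot(ry, by)
    hue = np.arctan2(ry, by)

    # histogram-equalize the saturation component
    hist, edges = np.histogram(sat, bins=bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    sat_eq = np.interp(sat, edges[:-1], cdf) * sat.max()

    # resynthesize RGB from the unchanged Y and hue plus the new saturation
    ry2, by2 = sat_eq * np.sin(hue), sat_eq * np.cos(hue)
    r2, b2 = ry2 + y, by2 + y
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.dstack([r2, g2, b2]), 0, 255)

rng = np.random.default_rng(0)
print(equalize_saturation_cy(rng.integers(0, 256, (32, 32, 3))).shape)
```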
23. Grayscale image preprocessing for viewpoint-independent 3D extraction of objects
- Author
-
Arthur Robert Weeks, Hoi J. Yoo, and Harley R. Myler
- Subjects
business.industry ,Computer science ,Feature extraction ,Cognitive neuroscience of visual object recognition ,Image processing ,Image segmentation ,Object (computer science) ,Grayscale ,Edge detection ,Computer graphics (images) ,Line (geometry) ,Computer vision ,Artificial intelligence ,business - Abstract
The process by which images are prepared for complex vision systems is nontrivial. In this paper we describe the sequence of steps required to preprocess a grayscale picture for input to the Viewpoint Independent 3-D Extraction and Recognition of Objects (VITREO) system. VITREO is capable of accepting grayscale pictures for processing into line drawings for input to object recognition subsystems. These drawings are analyzed for edge and surface features to allow the extraction of component parts for subsequent recognition of the object containing them from stored component descriptions. This analysis demands that the line drawing extraction be robust, and a combination of edge and line algorithms is employed. This process is described and examples of VITREO object extraction are shown.
Keywords: object recognition, computer vision, edge extraction, edge linking, VITREO
1. INTRODUCTION. The Viewpoint Independent 3-D Extraction of Objects system, or VITREO, was developed [1] to recognize complex objects from grayscale images. The approach used is that described by Biederman as Recognition by Components (RBC) theory, which is described in a number of references [2,3,4]. Simply put, RBC theory examines a complex object as a combination of what Biederman calls "geometric icons", or geons. Thirty-six geons have been specified as various extrusions of shapes across their central axis. This is best described by the example given in Figure 1. Consider a circle extruded along a straight line (Figure 1a); this yields a cylinder. Now repeat the extrusion, but this time collapse the circle as the extrusion progresses along the axis (Figure 1b); this gives a cone. Finally, if the axis curves during the extrusion, we can create a horn, as shown in Figure 1c. If the shape, i.e., the circle, is changed to a rectangle, then we get the figures shown in Figure 1d.
- Published
- 1994
- Full Text
- View/download PDF
24. Speckle simulation movies for analysis and evaluation of laser systems
- Author
-
Harley R. Myler, Wissam A. Rabadi, and Arthur Robert Weeks
- Subjects
Workstation ,business.industry ,Computer science ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Digital imaging ,Image processing ,Laser ,Supercomputer ,law.invention ,Speckle pattern ,Software ,law ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business - Abstract
Recent results achieved in the simulation of laser speckle in real time are presented and discussed. Far field speckle has been simulated by a number of researchers using computer-generated random phase screens to develop single-frame pictures; our simulation, however, develops a speckle picture in time, a movie, which we believe adequately illustrates speckle behavior as reflected from rotating and translating surfaces. The simulation was developed on a coarse array parallel supercomputer and the movies are formatted as QuickTime sequences. These sequences may be viewed on personal computers or workstations using easily obtained presentation software. Of interest is our correlation to speckle produced in the laboratory using lasers and translated surfaces and captured as frame sequences using a CCD camera.
- Published
- 1994
- Full Text
- View/download PDF
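A single-frame far-field speckle pattern is commonly simulated as the squared magnitude of the Fourier transform of a random phase screen, and a movie can then be approximated by translating the screen between frames. The sketch below follows that generic recipe, not the authors' supercomputer implementation or QuickTime output; the circular aperture, shift step, and frame count are arbitrary assumptions.

```python
import numpy as np

def speckle_frames(n_frames=16, size=256, aperture_radius=32, shift_per_frame=2, seed=0):
    """Far-field speckle movie from a drifting random phase screen: each frame is the
    squared magnitude of the FFT of a circular aperture times a rolled phase screen,
    mimicking a translating rough surface."""
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * 2 * np.pi * rng.random((size, size)))   # rough-surface phase
    yy, xx = np.mgrid[:size, :size] - size // 2
    aperture = (xx**2 + yy**2) <= aperture_radius**2
    frames = []
    for k in range(n_frames):
        screen = np.roll(phase, k * shift_per_frame, axis=1)    # translate the surface
        field = np.fft.fftshift(np.fft.fft2(aperture * screen))
        frames.append(np.abs(field) ** 2)
    return np.array(frames)

movie = speckle_frames()
print(movie.shape, movie.mean())
```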
25. Adaptive local thresholding algorithm that maximizes the contour features within the thresholded image
- Author
-
L. F. Apley, Harley R. Myler, and Arthur Robert Weeks
- Subjects
business.industry ,Balanced histogram thresholding ,Binary image ,Gaussian pyramid ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Pattern recognition ,Thresholding ,Computer Science::Computer Vision and Pattern Recognition ,Thresholding algorithm ,Entropy (information theory) ,Computer vision ,Artificial intelligence ,business ,Image restoration ,Mathematics - Abstract
Global thresholding is widely used in image processing to generate binary images, which are used by various pattern recognition systems. Typically, many features that are present in the original gray-level image are lost in the resulting binary image. This paper presents an adaptive thresholding algorithm that maximizes the edge features within the gray-level image. The Gaussian pyramid algorithm is used to find the local gray-level variations that are present in the original gray-level image. The resulting Gaussian pyramid image is then subtracted from the original gray-level image, removing the local variations in illumination. This new image is then adaptively thresholded using the adaptive contour entropy algorithm. The resulting binary images have been shown to contain more edge features than the binary images generated using global thresholding techniques.
- Published
- 1994
- Full Text
- View/download PDF
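The two stages described in the abstract, removal of slowly varying illumination followed by threshold selection that preserves edge content, can be sketched as follows. A heavy Gaussian blur stands in for the Gaussian-pyramid background estimate, and a simple edge-pixel count stands in for the contour-entropy criterion; both substitutions, and the helper names, are assumptions made only to keep the sketch short.

```python
import numpy as np
from scipy import ndimage

def remove_local_illumination(image, sigma=16):
    """Stand-in for the pyramid background estimate: a heavily smoothed copy of the
    image approximates the slowly varying illumination; subtracting it leaves the
    local gray-level structure to be thresholded."""
    background = ndimage.gaussian_filter(image.astype(np.float64), sigma)
    return image - background

def threshold_most_edges(flattened, n_levels=64):
    """Pick the global threshold whose binary image retains the most edge pixels
    (a simple surrogate for the paper's contour-entropy criterion)."""
    best_t, best_edges = None, -1
    for t in np.linspace(flattened.min(), flattened.max(), n_levels):
        binary = (flattened > t).astype(np.float64)
        gy, gx = np.gradient(binary)
        count = np.count_nonzero(np.abs(gy) + np.abs(gx))
        if count > best_edges:
            best_t, best_edges = t, count
    return best_t

rng = np.random.default_rng(0)
img = rng.random((128, 128)) + np.linspace(0, 2, 128)[None, :]   # ramp illumination
flat = remove_local_illumination(img)
print("chosen threshold:", threshold_most_edges(flat))
```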
26. Surface extraction using spatial position from line drawings
- Author
-
Arthur Robert Weeks, Hoi J. Yoo, and Harley R. Myler
- Subjects
Surface (mathematics) ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Digital imaging ,Boundary (topology) ,Edge (geometry) ,Edge detection ,Hough transform ,law.invention ,Digital image ,law ,Position (vector) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Line drawings can be obtained from digital images using a combination of various edge processing techniques such as edge detection, edge thinning, perceptual organization, the Hough transform, and others. Our interest has been the extraction of surfaces for use in subsequent object recognition algorithms. Current approaches to surface extraction require a predefined data structure of the vertices and edges of an object. Using this data structure, all edge directions are taken clockwise; thus, if an edge is counted twice with different directions, it is considered the common edge of two different surfaces. Consequently, the computation cost is very high and increases tremendously for complex objects. In this paper, we propose a very simple algorithm to extract both whole (bounding) and component surfaces. Our approach is based on the spatial position of contours without any geometric constraints. The approach locates boundaries of lines in an image that are easily measured by a city-block distance transformation. The surface is then obtained by peeling off the outside boundary of the contour. The component surfaces are then separated by the set of inside boundaries, if present.
- Published
- 1993
- Full Text
- View/download PDF
27. Maximization of contour edge detection using adaptive thresholding
- Author
-
Arthur Robert Weeks, Michelle Van Dyke-Lewis, and Harley R. Myler
- Subjects
Computer science ,Feature (computer vision) ,business.industry ,Computer Science::Computer Vision and Pattern Recognition ,Entropy (information theory) ,Computer vision ,Pattern recognition ,Maximization ,Artificial intelligence ,business ,Thresholding ,Edge detection - Abstract
A new adaptive thresholding technique is presented that maximizes the contour edge information within an image. Early work by Attneave suggested that visual information in images is concentrated at the contours. He concluded that the information associated with these points and their nearby neighbors is essential for image perception. Resnikoff has suggested a measurement of information gain in terms of direction. This measurement determines information gained from a measure of an angle direction along image contours relative to other measures of information gain for other positions along the curve. Hence, one form of information measure is the angular entropy of contours within an image. Our adaptive thresholding algorithm begins by varying the threshold value between a minimum and a maximum threshold value and then computing the total contour entropy over the entire binarized edge image. Next, the threshold value that yields the highest contour entropy is selected as the optimum threshold value. It is at this threshold value that the binarized image contains the greatest amount of image features.
- Published
- 1993
- Full Text
- View/download PDF
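A hedged sketch of the threshold-sweep idea: binarize the image at a series of candidate thresholds, measure the entropy of the contour-direction distribution of each binary result, and keep the threshold that maximizes it. The use of Sobel gradients to estimate contour directions, the 16-bin angle histogram, and the candidate grid are assumptions, not the paper's exact angular-entropy formulation.

```python
import numpy as np
from scipy import ndimage

def contour_angle_entropy(binary):
    """Entropy of the distribution of local contour directions in a binary image,
    used here as the quantity to maximize over candidate thresholds."""
    b = binary.astype(np.float64)
    gy, gx = ndimage.sobel(b, axis=0), ndimage.sobel(b, axis=1)
    mask = np.hypot(gx, gy) > 0                       # contour pixels only
    if not mask.any():
        return 0.0
    angles = np.arctan2(gy[mask], gx[mask])
    hist, _ = np.histogram(angles, bins=16, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_threshold(image, n_levels=64):
    """Sweep candidate global thresholds and keep the one whose binarized image
    has the highest contour-direction entropy."""
    candidates = np.linspace(image.min(), image.max(), n_levels + 2)[1:-1]
    scores = [contour_angle_entropy(image > t) for t in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((128, 128)), 3)
print("entropy-maximizing threshold:", entropy_threshold(img))
```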
28. Novel approach to aircraft silhouette recognition using genetic algorithms
- Author
-
Harley R. Myler, Arthur Robert Weeks, and Jill Laura Hooper-Giles
- Subjects
business.industry ,Computer science ,Medial axis ,parasitic diseases ,Genetic algorithm ,technology, industry, and agriculture ,Image processing ,Computer vision ,Artificial intelligence ,business ,human activities ,Classifier (UML) ,Silhouette - Abstract
An approach to aircraft silhouette recognition using a genetic algorithm for pattern analysis and search tasks and a bimorph shape classifier is presented. The bimorph classifier produces an assortment of shapes derived from a medial axis transform language (MAT) by establishing a set of genes, a chromosome, that portrays the genetic makeup of each shape produced. Each gene represents a unique shape feature for that object and each chromosome a unique object. The chromosomes are used to generate the shapes embodying the classification space. The genetic algorithm then performs a search on the space until the exemplar shape is found that matches an unknown aircraft. The outcome of the search is a chromosome that constitutes the aircraft shape characteristics. The chromosome may then be compared to that of known aircraft to determine the type of aircraft in question. The procedures and results of utilizing this classification system on various aircraft silhouettes are presented.
- Published
- 1992
- Full Text
- View/download PDF
29. Decision-directed entropy-based adaptive filtering
- Author
-
Arthur Robert Weeks, Harley R. Myler, and Michelle Van Dyke-Lewis
- Subjects
Adaptive filter ,Pixel ,Computer science ,business.industry ,Median filter ,Pattern recognition ,Image processing ,Salt-and-pepper noise ,Filter (signal processing) ,Artificial intelligence ,business ,Digital filter ,Edge detection - Abstract
A recurring problem in adaptive filtering is selection of control measures for parameter modification. A number of methods reported thus far have used localized order statistics to adaptively adjust filter parameters. The most effective techniques are based on edge detection as a decision mechanism to allow the preservation of edge information while noise is filtered. In general, decision-directed adaptive filters operate on a localized area within an image by using statistics of the area as a discrimination parameter. Typically, adaptive filters are based on pixel to pixel variations within a localized area that are due to either edges or additive noise. In homogeneous areas within the image where variances are due to additive noise, the filter should operate to reduce the noise. Using an edge detection technique, a decision directed adaptive filter can vary the filtering proportional to the amount of edge information detected. We show an approach using an entropy measure on edges to differentiate between variations in the image due to edge information as compared against noise. The method uses entropy calculated against the spatial contour variations of edges in the window.
- Published
- 1991
- Full Text
- View/download PDF
30. Calibration issues in the measurement of ocular movement and position using computer image processing
- Author
-
Alfred S. Jolson, Harley R. Myler, and Arthur Robert Weeks
- Subjects
Monocular ,genetic structures ,Pixel ,Computer science ,business.industry ,Pupillary distance ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Eye movement ,Image processing ,eye diseases ,Personal computer ,Digital image processing ,Calibration ,Computer vision ,sense organs ,Artificial intelligence ,Strabismus ,business - Abstract
There has been a desire for some time by practitioners and researchers in the ophthalmology field for an instrument to automatically measure eye alignment quickly and accurately. With MS DOS based Personal Computer image processing systems becoming readily available and inexpensive, a method is presented that uses image processing tools to easily diagnose eye misalignment and initiate the appropriate treatment. The MS DOS based Personal Computer image processing system discussed is able to accurately measure the monocular inner pupillary distance, compute the axial length of each eye, and measure angular deviations so that tropias and phorias can easily be measured. Calibration of the imaging system requires that the pixel displacement as a function of eye movement be known. It is the purpose of this paper to present experimental data collected over several patients with and without strabismus (ocular deviations) and a detailed theoretical analysis that calibrates the imaging system discussed. In designing the imaging system to measure ocular deviations, several calibration schemes were developed. The first uses estimated data collected and published in the literature on the axial length of the eye as a function of the patient's eye (Estimated), the second uses the axial length of the eye measured from an A-scan (Calibrated), and the third uses the imaging system directly to compute the axial length (Physiologic).
- Published
- 1991
- Full Text
- View/download PDF
31. Clutter modeling in infrared images using genetic programming
- Author
-
Anthony Gallagher, Mosleh Uddin, Liviu I. Voicu, Julien Schuler, and Harley R. Myler
- Subjects
Infrared ,Computer science ,business.industry ,Optical engineering ,media_common.quotation_subject ,Supervised learning ,General Engineering ,Genetic programming ,Observer (special relativity) ,Atomic and Molecular Physics, and Optics ,Data modeling ,Perception ,Clutter ,Computer vision ,Artificial intelligence ,business ,media_common - Abstract
Background clutter characterization in infrared imagery has become an actively researched field, and several clutter models have been reported. These models attempt to evaluate the target detection and recognition probabilities that are characteristic of a certain scene when specific target and human visual perception features are known. The prior knowledge assumed and required by these models is a severe limitation. Furthermore, the attempt to model subjective and intricate mechanisms such as human perception with general mathematical formulas is controversial. In this paper, we introduce the idea of adaptive models that are dynamically derived from a set of examples by a supervised learning mechanism based on genetic programming foundations. A set of characteristic scene and target features with a demonstrated influence on the human visual perception mechanism is first extracted from the original images. Then, the correlations between these features and detection performance results obtained by visual observer tests on the same set of images are captured into models by a learning algorithm. The effectiveness of the adaptive modeling principle is discussed in the final part of the paper.
- Published
- 2000
- Full Text
- View/download PDF
32. Histogram specification of 24-bit color images in the color difference (C-Y) color space
- Author
-
Arthur Robert Weeks, Lloyd J. Sartor, and Harley R. Myler
- Subjects
Color histogram ,Color image ,Color normalization ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Histogram matching ,Color space ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Multimedia ,RGB color model ,Adaptive histogram equalization ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Histogram equalization ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Histogram equalization and specification have been widely used to enhance the content of grayscale images, with histogram specification having the advantage of allowing the output histogram to be specified, as compared to histogram equalization, which attempts to produce an output histogram that is uniform. Unfortunately, extending histogram techniques to color images is not very straightforward. Performing histogram specification on color images in the RGB color space results in specified histograms that are hard to interpret for a particular enhancement that is desired. Human perception of color interprets a color in terms of its hue, saturation, and intensity components. In this paper, we describe a method of extending gray-level histogram specification to color images by performing histogram specification on the luminance (or intensity), saturation, and hue components in the color difference (C-Y) color space. This method takes into account the correlation between the hue, saturation, and intensity components while yielding specified histograms which have physical meaning. Histogram specification was performed on an example color image and was shown to enhance the color content and details within this image without introducing unwanted artifacts.
- Published
- 1999
- Full Text
- View/download PDF
33. Practical considerations on color image enhancement using homomorphic filtering
- Author
-
Arthur Robert Weeks, Liviu I. Voicu, and Harley R. Myler
- Subjects
Color histogram ,Color image ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color balance ,Color space ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,RGB color space ,Homomorphic filtering ,Color depth ,RGB color model ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
We present a study concerning the practical possibilities of using homomorphic filtering for color image enhancement. Two of the most popular color models, RGB and C-Y (color difference), are employed and the results are comparatively discussed. Homomorphic filtering has proven to be a viable tool for both color models considered.
- Published
- 1997
- Full Text
- View/download PDF
34. Adaptive thresholding algorithm that maximizes edge features within an image
- Author
-
Michelle Van Dyke-Lewis, Harley R. Myler, and Arthur Robert Weeks
- Subjects
Balanced histogram thresholding ,business.industry ,Evolutionary algorithm ,Pattern recognition ,Image segmentation ,Thresholding ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,Image (mathematics) ,Computer vision ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Mathematics ,Data compression ,Image compression - Abstract
We present a new adaptive thresholding algorithm that uses the theories of human visual perception to select a global threshold value. The algorithm is based on the total contour information contained within an image and selects a threshold value that maximizes the edge features within the binarized image.
- Published
- 1993
- Full Text
- View/download PDF
35. Computer-generated noise images for the evaluation of image processing algorithms
- Author
-
Holly Wenaas, Arthur Robert Weeks, and Harley R. Myler
- Subjects
Computer science ,Gaussian ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,symbols.namesake ,Dark-frame subtraction ,Digital image processing ,Image noise ,Median filter ,Computer vision ,Value noise ,Rayleigh scattering ,Optical filter ,Noise measurement ,business.industry ,Noise (signal processing) ,General Engineering ,Salt-and-pepper noise ,White noise ,Atomic and Molecular Physics, and Optics ,Uncorrelated ,Gradient noise ,Fourier transform ,Gaussian noise ,Colors of noise ,symbols ,Artificial intelligence ,business ,Digital filter ,Linear filter - Abstract
Effective implementation of image processing algorithms for enhancement and restoration often assumes that the images are degraded by known statistical noise. Depending on the application, the type of noise present may vary. The noise distributions that are commonly encountered in image processing are the Gaussian, Rayleigh, negative exponential, and gamma. Typically, when computer-generated noise images are used for algorithm development they are spatially uncorrelated. It is the purpose of this paper to present various types of computer-generated two-dimensional correlated and uncorrelated noise images along with suggestions of several applications.
- Published
- 1993
- Full Text
- View/download PDF
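The noise distributions listed in the abstract are all available from a modern random number generator, and spatial correlation can be introduced by low-pass filtering an uncorrelated field. The sketch below shows that generic recipe; the specific parameters (unit scales, gamma shape 2, Gaussian correlation filter) are assumptions rather than the generation method used in the paper.

```python
import numpy as np
from scipy import ndimage

def noise_image(kind, shape=(256, 256), seed=0):
    """Uncorrelated noise fields with the distributions discussed in the paper."""
    rng = np.random.default_rng(seed)
    if kind == "gaussian":
        return rng.normal(0.0, 1.0, shape)
    if kind == "rayleigh":
        return rng.rayleigh(1.0, shape)
    if kind == "neg_exponential":
        return rng.exponential(1.0, shape)
    if kind == "gamma":
        return rng.gamma(2.0, 1.0, shape)
    raise ValueError(kind)

def correlate(noise, sigma=3.0):
    """Introduce spatial correlation by low-pass filtering an uncorrelated field
    (one simple way to obtain correlated test noise)."""
    return ndimage.gaussian_filter(noise, sigma)

for kind in ("gaussian", "rayleigh", "neg_exponential", "gamma"):
    field = noise_image(kind)
    print(kind, round(field.mean(), 3), round(correlate(field).std(), 3))
```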
36. Application Of Expert System Techniques To A Visual Tracker
- Author
-
Gerald M. Flachs, Wiley E. Thompson, and Harley R. Myler
- Subjects
Relation (database) ,business.industry ,Computer science ,Fuzzy set ,Kanade–Lucas–Tomasi feature tracker ,Tracking system ,Tracking (particle physics) ,computer.software_genre ,Expert system ,Knowledge base ,Feature (computer vision) ,Computer vision ,Artificial intelligence ,business ,computer - Abstract
A structure for a visual tracking system is presented which relies on information developed from previous tracking scenarios stored in a knowledge base to enhance tracking performance. The system is comprised of a centroid tracker front end which supplies segmented image features to a data reduction algorithm, which holds the reduced data in a temporary database relation. This relation is then classified via two separate modes, learn and track. Under learn mode, an external teacher-director operator provides identification and weighting cues for membership in a long-term storage relation within a knowledge base. Track mode operates autonomously from the learn mode; the system determines feature validity by applying fuzzy set membership criteria to previously stored track information in the database. Results determined from the classification generate tracker directives which either enhance or permit current tracking to continue, or cause the tracker to search for alternate targets based upon analysis of a global target tracking list. The classification algorithm is based on correlative analysis of the tracker's segmented output presentation after low-pass filtering derives lower-order harmonics of the feature. The fuzzy set membership criteria are based on size, rotation, frame location, and past history of the feature. The first three factors are linear operations on the spectra, while the last is generated as a context relation in the knowledge base. The context relation interlinks data between features to facilitate tracker operation during feature occlusion or the presence of countermeasures.
- Published
- 1985
- Full Text
- View/download PDF