Search Results
196 results for "Landgrebe, David A."
2. Remote Sensing as a Tool for Achieving and Monitoring Progress Toward Sustainability
- Author
- Rochon, Gilbert L., Johannsen, Chris J., Landgrebe, David A., Engel, Bernard A., Harbor, Jonathan M., Majumder, Sarada, Biehl, Larry L., Sikdar, Subhas K., editor, Glavič, Peter, editor, and Jain, Ravi, editor
- Published
- 2004
- Full Text
3. Multispectral land sensing: where from, where to?
- Author
- Landgrebe, David A.
- Subjects
- Remote sensing -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
This paper begins with some brief historical comments to set the stage for a discussion of the long-term potential for land remote sensing technology. This is followed by comments about what is needed to accelerate the achievement of this potential. The discussion is concluded with what concomitant development is needed with regard to a hyperspectral data analysis system. Index Terms--Analysis methods, multispectral data, remote sensing uses.
- Published
- 2005
4. Remote sensing as a tool for achieving and monitoring progress toward sustainability
- Author
- Rochon, Gilbert L., Johannsen, Chris J., Landgrebe, David A., Engel, Bernard A., Harbor, Jonathan M., Majumder, Sarada, and Biehl, Larry L.
- Published
- 2003
- Full Text
5. Nonparametric weighted feature extraction for classification
- Author
- Kuo, Bor-Chen and Landgrebe, David A.
- Subjects
- Nonparametric statistics -- Analysis, Object recognition (Computers) -- Methods, Pattern recognition -- Methods, Discriminant analysis, Factor analysis, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, a new nonparametric feature extraction method is proposed for high-dimensional multiclass pattern recognition problems. It is based on a nonparametric extension of scatter matrices. There are at least two advantages to using the proposed nonparametric scatter matrices. First, they are generally of full rank. This provides the ability to specify the number of extracted features desired and reduces the effect of the singularity problem. This is in contrast to parametric discriminant analysis, which can usually extract only L-1 (number of classes minus one) features; in a real situation, this may not be enough. Second, the nonparametric nature of the scatter matrices reduces the effects of outliers and works well even for nonnormal datasets. The new method gives greater weight to samples near the expected decision boundary, which tends to provide increased classification accuracy. Index Terms--Dimensionality reduction, discriminant analysis, nonparametric feature extraction.
- Published
- 2004
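The weighting idea in this abstract can be sketched in a few lines. This is a simplified illustration of a distance-weighted between-class scatter matrix on an invented two-class toy dataset; it is not the exact NWFE formulation, and every name here is made up for the example:

```python
import numpy as np

def weighted_between_scatter(X, y):
    """Distance-weighted between-class scatter: each sample is compared to a
    weighted local mean of every other class, so samples near the decision
    boundary contribute more. Simplified sketch, not the exact NWFE matrices."""
    classes = np.unique(y)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    for ci in classes:
        for cj in classes:
            if ci == cj:
                continue
            Xj = X[y == cj]
            for x in X[y == ci]:
                dist = np.linalg.norm(Xj - x, axis=1) + 1e-12   # avoid /0
                w = (1.0 / dist) / np.sum(1.0 / dist)           # inverse-distance weights
                local_mean = w @ Xj
                diff = (x - local_mean)[:, None]
                Sb += diff @ diff.T                             # one rank-1 term per sample
    return Sb / len(X)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
Sb = weighted_between_scatter(X, y)
# Unlike the parametric between-class scatter (rank <= L-1), this matrix is
# generically full rank, so more than L-1 features can be extracted:
features = X @ np.linalg.eigh(Sb)[1][:, -2:]
```

Because the matrix is built from one rank-1 term per sample rather than one per class, it is generically of full rank, which is the property the abstract highlights.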
6. Toward an optimal supervised classifier for the analysis of hyperspectral data
- Author
- Dundar, M. Murat and Landgrebe, David A.
- Subjects
- Earth sciences -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, we propose a supervised classifier based on implementation of the Bayes rule with kernels. The proposed technique first applies an implicit nonlinear transformation of the data into a feature space, seeking to fit normal distributions having a common covariance matrix to the mapped data. One requirement of this approach is the evaluation of posterior probabilities. We express the discriminant function in dot-product form and then apply the kernel concept to efficiently evaluate the posterior probabilities. The proposed technique gives the flexibility required to model complex data structures that originate from a wide range of class-conditional distributions. Although we end up with piecewise linear decision boundaries in the feature space, these correspond to powerful nonlinear boundaries in the original input space. For the data we considered, we have obtained some encouraging results. Index Terms--Bayes rule, kernels, supervised classification.
- Published
- 2004
7. A cost-effective semisupervised classifier approach with kernels
- Author
- Dundar, M. Murat and Landgrebe, David A.
- Subjects
- Earth sciences -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, we propose a cost-effective iterative semisupervised classifier based on a kernel concept. The proposed technique incorporates unlabeled data into the design of a binary classifier by introducing and optimizing a cost function in a feature space that maximizes the Rayleigh coefficient while minimizing the total cost associated with misclassified labeled samples. The cost assigned to misclassified labeled samples accounts for the number of misclassified labeled samples as well as the amount by which they are on the wrong side of the boundary, and this counterbalances any potential adverse effect of unlabeled data on the classifier performance. Several experiments performed with remotely sensed data demonstrate that using the proposed semisupervised classifier shows considerable improvements over the supervised-only counterpart. Index Terms--Fisher's discriminant, semisupervised classifier, unlabeled data.
- Published
- 2004
8. A model-based mixture-supervised classification approach in hyperspectral data analysis
- Author
- Dundar, M. Murat and Landgrebe, David
- Subjects
- Analysis of covariance, Gaussian processes -- Usage, Business, Earth sciences, Electronics and electrical industries
- Abstract
It is well known that there is a strong relation between class definition precision and classification accuracy in pattern classification applications. In hyperspectral data analysis, usually classes of interest contain one or more components and may not be well represented by a single Gaussian density function. In this paper, a model-based mixture classifier, which uses mixture models to characterize class densities, is discussed. However, a key outstanding problem of this approach is how to choose the number of components and determine their parameters for such models in practice, and to do so in the face of limited training sets where estimation error becomes a significant factor. The proposed classifier estimates the number of subclasses and class statistics simultaneously by choosing the best model. The structure of class covariances is also addressed through a model-based covariance estimation technique introduced in this paper. Index Terms--Covariance estimator, expectation-maximization, Gaussian mixtures.
- Published
- 2002
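The model-selection step this abstract describes (choosing the number of subclass components) can be sketched with a tiny EM loop and a BIC score. This is an illustrative stand-in only: diagonal covariances, a crude deterministic initialization, and invented function names, not the paper's actual estimator:

```python
import numpy as np

def gmm_bic(X, k, iters=60):
    """Fit a k-component Gaussian mixture (diagonal covariances) with a small
    EM loop and return its BIC score. Illustrative only; the paper's
    model-based covariance estimation is more elaborate."""
    n, d = X.shape
    order = np.argsort(X[:, 0])                        # deterministic spread init
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]].astype(float)
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))
    pi = np.full(k, 1.0 / k)

    def log_joint():
        # log pi_j + log N(x | mu_j, diag(var_j)) for every sample/component
        return np.stack([
            np.log(pi[j]) - 0.5 * np.sum(
                (X - mu[j]) ** 2 / var[j] + np.log(2 * np.pi * var[j]), axis=1)
            for j in range(k)], axis=1)

    for _ in range(iters):
        lj = log_joint()
        lj = lj - lj.max(axis=1, keepdims=True)
        r = np.exp(lj)
        r /= r.sum(axis=1, keepdims=True)              # E-step responsibilities
        nk = r.sum(axis=0) + 1e-12
        pi, mu = nk / n, (r.T @ X) / nk[:, None]       # M-step updates
        var = np.stack([(r[:, j, None] * (X - mu[j]) ** 2).sum(axis=0) / nk[j]
                        for j in range(k)]) + 1e-6
    lj = log_joint()
    m = lj.max(axis=1, keepdims=True)
    loglik = float(np.sum(m + np.log(np.exp(lj - m).sum(axis=1, keepdims=True))))
    n_params = (k - 1) + 2 * k * d                     # weights + means + variances
    return -2.0 * loglik + n_params * np.log(n)

def choose_subclasses(X, kmax=3):
    """Pick the number of subclasses for one class by minimum BIC."""
    return 1 + int(np.argmin([gmm_bic(X, k) for k in range(1, kmax + 1)]))

rng = np.random.default_rng(7)
one_cluster = rng.normal(0.0, 1.0, (200, 2))
two_clusters = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
                          rng.normal(8.0, 1.0, (100, 2))])
k_one = choose_subclasses(one_cluster)
k_two = choose_subclasses(two_clusters)
```

The BIC penalty grows with the parameter count, so extra components are kept only when the likelihood gain justifies them, which is one simple way to "choose the best model" as the abstract puts it.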
9. A robust classification procedure based on mixture classifiers and nonparametric weighted feature extraction
- Author
- Kuo, Bor-Chen and Landgrebe, David A.
- Subjects
- Remote sensing -- Methods, Business, Earth sciences, Electronics and electrical industries
- Abstract
There are many factors to consider in carrying out a hyperspectral data classification. Perhaps chief among them are class training sample size, dimensionality, and distribution separability. The intent of this study is to design a classification procedure that is robust and maximally effective, but which provides the analyst with significant assists, thus simplifying the analyst's task. The result is a quadratic mixture classifier based on Mixed-LOOC2 regularized discriminant analysis and nonparametric weighted feature extraction. This procedure has the advantage of providing improved classification accuracy compared to typical previous methods but requires minimal need to consider the factors mentioned above. Experimental results demonstrating these properties are presented.
- Published
- 2002
10. Adaptive Bayesian contextual classification based on Markov random fields
- Author
- Jackson, Qiong and Landgrebe, David A.
- Subjects
- Bayesian statistical decision theory -- Usage, Markov processes -- Usage, Image processing -- Methods, Image processing -- Technology application, Technology application, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, an adaptive Bayesian contextual classification procedure that utilizes both spectral and spatial interpixel dependency contexts in the estimation of statistics and in classification is proposed. Essentially, this classifier is the constructive coupling of an adaptive classification procedure and a Bayesian contextual classification procedure. In this classifier, the joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Markov random field. The estimation of statistics and the classification are performed recursively to allow the establishment of the positive-feedback process in a computationally efficient manner. Experiments with real hyperspectral data show that, starting with a small training sample set, this classifier can reach classification accuracies similar to those obtained by a pixelwise maximum likelihood classifier with a very large training sample set. Additionally, the classification maps produced have significantly less speckle error. Index Terms--Adaptive iterative classification procedure, Bayesian contextual classification procedure, hyperspectral data, iterative conditional mode (ICM), semilabeled samples.
- Published
- 2002
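The contextual (ICM) step mentioned in the index terms can be illustrated compactly. This toy sketch shows only the Markov-random-field smoothing pass on invented log-likelihoods; the paper's classifier also re-estimates class statistics between passes:

```python
import numpy as np

def icm(loglik, beta=1.0, iters=5):
    """Iterated conditional modes on a 4-neighbour grid: each pixel takes the
    class maximizing its spectral log-likelihood plus beta times the number
    of agreeing neighbours. Toy version of the contextual step only."""
    h, w, _ = loglik.shape
    labels = loglik.argmax(axis=2)                      # noncontextual start
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                score = loglik[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        score[labels[ni, nj]] += beta   # MRF smoothness prior
                labels[i, j] = score.argmax()
    return labels

# A lone pixel weakly favouring class 1 inside a field of class 0:
loglik = np.zeros((5, 5, 2))
loglik[:, :, 0] = 1.0
loglik[2, 2] = [0.0, 1.5]
assert loglik.argmax(axis=2)[2, 2] == 1   # the pixelwise map is speckled
smoothed = icm(loglik)
```

The neighbour bonus outweighs the weak spectral evidence at the isolated pixel, which is exactly the speckle-reduction behaviour the abstract reports.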
11. An adaptive method for combined covariance estimation and classification
- Author
- Jackson, Qiong and Landgrebe, David A.
- Subjects
- Earth sciences -- Research, Analysis of covariance -- Usage, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, a family of adaptive covariance estimators is proposed to mitigate the problem of limited training samples for application to hyperspectral data analysis in quadratic maximum likelihood classification. These estimators combine adaptive classification procedures with regularized covariance estimators. In the proposed estimators, the semi-labeled samples (whose labels are determined by a decision rule) are incorporated in the process of determining the optimal regularization parameters and estimating the supporting covariance matrices that form the final regularized covariance estimators. In all experiments with simulated and real remote sensing data, the proposed combined covariance estimators achieved significant improvement in statistics estimation and classification accuracy over conventional regularized covariance estimators and an adaptive maximum likelihood classifier (MLC). The degree of improvement increases with dimensionality, especially for ill-posed or very ill-posed problems where the total number of training samples is smaller than the number of dimensions. Index Terms--Adaptive iterative classification procedure, high-dimensional data, hyperspectral data, regularized covariance estimation, semi-labeled samples.
- Published
- 2002
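The regularized-covariance idea behind this and the following entry can be sketched as shrinkage toward a pooled covariance, with the mixing weight chosen by leave-one-out log-likelihood. This is a simplified stand-in for the LOOC-style estimators (no semi-labeled samples here), with invented names and toy data:

```python
import numpy as np

def loo_shrinkage_cov(Xc, pooled, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Shrink the class sample covariance toward a pooled covariance, picking
    the mixing weight by average leave-one-out log-likelihood. Simplified
    sketch of the regularized-estimator idea, not the paper's estimator."""
    n, d = Xc.shape
    best_a, best_ll = alphas[0], -np.inf
    for a in alphas:
        ll = 0.0
        for i in range(n):
            Xi = np.delete(Xc, i, axis=0)          # hold one sample out
            mu = Xi.mean(axis=0)
            S = np.cov(Xi, rowvar=False, bias=True)
            C = (1 - a) * S + a * pooled + 1e-8 * np.eye(d)
            _, logdet = np.linalg.slogdet(C)
            diff = Xc[i] - mu
            ll += -0.5 * (logdet + diff @ np.linalg.solve(C, diff))
        if ll > best_ll:
            best_ll, best_a = ll, a
    S = np.cov(Xc, rowvar=False, bias=True)
    return (1 - best_a) * S + best_a * pooled, best_a

rng = np.random.default_rng(3)
Xc = rng.normal(0.0, 1.0, (6, 5))                  # only 6 samples in 5 dims
C_reg, alpha = loo_shrinkage_cov(Xc, np.eye(5))
```

With fewer samples than would support a stable sample covariance, the held-out likelihood strongly penalizes the unregularized estimate, so a nonzero shrinkage weight is selected and the result stays nonsingular.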
12. A Covariance Estimator for small sample size classification problems and its application to feature extraction
- Author
- Kuo, Bor-Chen and Landgrebe, David A.
- Subjects
- Geological research -- Methods, Multivariate analysis -- Methods, Remote sensing -- Analysis, Business, Earth sciences, Electronics and electrical industries
- Abstract
A key to successful classification of multivariate data is the definition of an accurate quantitative model of each class. This is especially the case when the dimensionality of the data is high, and the problem is exacerbated when the number of training samples is limited. For the commonly used quadratic maximum-likelihood classifier, the class mean vectors and covariance matrices are required and must be estimated from the available training samples. In high-dimensional cases, feature extraction methods have been found especially useful for transforming the problem to a lower-dimensional space without loss of information; however, here too class statistics estimation error is significant. Finding a suitable regularized covariance estimator is a way to mitigate these estimation error effects. The main purpose of this work is to find an improved regularized covariance estimator for each class that combines the advantages of the Leave-One-Out Covariance Estimator (LOOC) and the Bayesian LOOC (BLOOC). In addition, the use of the proposed covariance estimator to improve linear feature extraction methods when the multivariate data is singular or nearly singular is demonstrated. This work is specifically directed at analysis methods for hyperspectral remote sensing data. Index Terms--Feature extraction, hyperspectral data classification, regularized covariance estimator.
- Published
- 2002
13. A process model for remote sensing data analysis
- Author
- Madhok, Varun and Landgrebe, David A.
- Subjects
- Remote sensing -- Information management, Human-computer interaction -- Models, Information storage and retrieval -- Models, Business, Earth sciences, Electronics and electrical industries
- Abstract
Remote sensing data is collected and analyzed to enhance understanding of the terrestrial surface--in composition, in form, or in function. One approach for accomplishing this is to design the analysis process as an iterated composite of several analyst-directed modules. This paper proposes such a modular design for the data analysis. The proposed methodology was applied in a project to obtain the thematic map for a flightline over Washington, DC, with very satisfactory results--the qualification being in both the visual and the statistical sense. The project execution is presented as a case study in this paper. Index Terms--Digital elevation map, holistic solution, human--computer interaction, hyperspectral data classification, masking, methodology, segmentation.
- Published
- 2002
14. An adaptive classifier design for high-dimensional data analysis with a limited training data set
- Author
- Jackson, Qiong and Landgrebe, David A.
- Subjects
- Iterative methods (Mathematics) -- Usage, Dimensional analysis -- Usage, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, we propose a self-learning and self-improving adaptive classifier to mitigate the problem of small training sample size, which can severely affect the recognition accuracy of classifiers when the dimensionality of the multispectral data is high. The proposed adaptive classifier iteratively utilizes classified samples (referred to as semilabeled samples) in addition to the original training samples. In order to control the influence of semilabeled samples, the proposed method gives full weight to the training samples and reduced weight to semilabeled samples. We show that by using additional semilabeled samples, which are available without extra cost, additional class label information may be extracted and utilized to enhance statistics estimation and hence improve classifier performance, so the Hughes phenomenon (peak phenomenon) may be mitigated. Experimental results show that this proposed adaptive classifier can significantly improve both the classification accuracy and the representation of the estimated statistics. Index Terms--Adaptive iterative classifier, high-dimensional data, labeled samples, limited training data set, semilabeled samples.
- Published
- 2001
15. Hyperspectral data analysis and supervised feature reduction via projection pursuit
- Author
- Jimenez, Luis O. and Landgrebe, David A.
- Subjects
- Dimensional analysis -- Research, Remote sensing -- Research, Pattern recognition -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
As the number of spectral bands of high-spectral resolution data increases, the ability to detect more detailed classes should also increase, and the classification accuracy should increase as well. Often the number of labeled samples used for supervised classification techniques is limited, thus limiting the precision with which class characteristics can be estimated. As the number of spectral bands becomes large, the limitation on performance imposed by the limited number of training samples can become severe. A number of techniques for case-specific feature extraction have been developed to reduce dimensionality without loss of class separability. Most of these techniques require the estimation of statistics at full dimensionality in order to extract relevant features for classification. If the number of training samples is not adequately large, the estimation of parameters in high-dimensional data will not be accurate enough. As a result, the estimated features may not be as effective as they could be. This suggests the need for reducing the dimensionality via a preprocessing method that takes into consideration high-dimensional feature-space properties. Such reduction should enable the estimation of feature-extraction parameters to be more accurate. Using a technique referred to as projection pursuit (PP), such an algorithm has been developed. This technique is able to bypass many of the problems of the limitation of small numbers of training samples by making the computations in a lower-dimensional space, and optimizing a function called the projection index. A current limitation of this method is that, as the number of dimensions increases, it is likely that a local maximum of the projection index will be found that does not enable one to fully exploit hyperspectral-data capabilities. A method to estimate an initial value that can lead to a maximum that increases the classification accuracy significantly will be presented. 
This method also leads to a high-dimensional version of a feature-selection algorithm, which requires significantly less computation than the normal procedure. Index Terms: Band subset selection, dimensionality reduction, feature extraction, hyperspectral data analysis, pattern recognition, projection pursuit, supervised classification.
- Published
- 1999
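The projection-pursuit idea of optimizing a projection index can be sketched crudely. This example uses a simple Fisher-style index and random restarts as a stand-in for the paper's carefully chosen initial value; the data, names, and index choice are all invented for illustration:

```python
import numpy as np

def projection_index(w, X, y):
    """A simple Fisher-style projection index: squared gap between the
    projected class means over the pooled projected variance."""
    z = X @ w
    z0, z1 = z[y == 0], z[y == 1]
    return (z0.mean() - z1.mean()) ** 2 / (z0.var() + z1.var() + 1e-12)

def pursue(X, y, trials=300, seed=0):
    """Crude projection pursuit: score many random unit directions and keep
    the best. Random restarts stand in for a guided initial estimate; a real
    implementation would refine the winner with a local optimizer."""
    rng = np.random.default_rng(seed)
    best_w, best_v = None, -np.inf
    for _ in range(trials):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)                 # unit direction
        v = projection_index(w, X, y)
        if v > best_v:
            best_w, best_v = w, v
    return best_w, best_v

# Two classes separated only along the first band:
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (100, 6)), rng.normal(0, 1, (100, 6))])
X[100:, 0] += 4.0
y = np.array([0] * 100 + [1] * 100)
w_best, v_best = pursue(X, y)
```

The multiple restarts are one blunt way to dodge the local-maximum problem the abstract discusses; the paper's contribution is a smarter initial value rather than brute force.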
16. On the Classification of Classes with Nearly Equal Spectral Response in Remote Sensing Hyperspectral Image Data
- Author
- Haertel, Victor and Landgrebe, David A.
- Subjects
- Remote sensing -- Information management, Multispectral photography -- Information management, Image processing -- Digital techniques, Digital mapping -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
It is well known that high-dimensional image data allows for the separation of classes that are spectrally very similar, i.e., possess nearly equal first-order statistics, provided that their second-order statistics differ significantly. The aim of this study is to contribute to a better understanding, from a more geometrically oriented point of view, of the role played by the second-order statistics in remote sensing digital image classification of natural scenes when the classes of interest are spectrally very similar and high dimensional multispectral image data is available. A number of the investigations that have been developed in this area deal with the fact that as the data dimensionality increases, so does the difficulty in obtaining a reasonably accurate estimate of the within-class covariance matrices from the number of available labeled samples, which is usually limited. Several approaches have been proposed to deal with this problem. This study aims toward a complementary goal. Assuming that reasonably accurate estimates for the within-class covariance matrices have been obtained, we seek to better understand what kind of geometrically-oriented interpretation can be given to them as the data dimensionality increases and also to understand how this knowledge can help the design of a classifier. In order to achieve this goal, the covariance matrix is decomposed into a number of parameters that are then analyzed separately with respect to their ability to separate the classes. Methods for image classification based on these parameters are investigated. Results of tests using data provided by the sensor system AVIRIS are presented and discussed. Index Terms: AVIRIS sensor, digital image classification, high-dimensional data, remote sensing, second-order statistics.
- Published
- 1999
17. Covariance estimation with limited training samples
- Author
- Tadjudin, Saldju and Landgrebe, David A.
- Subjects
- Analysis of covariance -- Usage, Gaussian processes -- Analysis, Bayesian statistical decision theory -- Analysis, Maximum likelihood estimates (Statistics) -- Analysis, Business, Earth sciences, Electronics and electrical industries
- Abstract
This paper describes a covariance estimator formulated under an empirical Bayesian setting to mitigate the problem of limited training samples in the Gaussian maximum likelihood (ML) classification for remote sensing. The most suitable covariance mixture is selected by maximizing the average leave-one-out log likelihood. Experimental results using AVIRIS data are presented. Index Terms - Covariance estimation, Gaussian maximum likelihood, leave-one-out log likelihood, regularization.
- Published
- 1999
18. Partially supervised classification using weighted unsupervised clustering
- Author
- Jeon, Byeungwoo and Landgrebe, David A.
- Subjects
- Classification -- Research, Cluster analysis -- Research, Statistical hypothesis testing -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
This paper addresses a classification problem in which class definition through training samples or otherwise is provided a priori only for a particular class of interest. Considerable time and effort may be required to label samples necessary for defining all the classes existent in a given data set by collecting ground truth or by other means. Thus, this problem is very important in practice, because one is often interested in identifying samples belonging to only one or a small number of classes. The problem is considered as an unsupervised clustering problem with initially one known cluster. The definition and statistics of the other classes are automatically developed through a weighted unsupervised clustering procedure that keeps the known cluster from losing its identity as the 'class of interest.' Once all the classes are developed, a conventional supervised classifier such as the maximum likelihood classifier is used in the classification. Experimental results with both simulated and real data verify the effectiveness of the proposed method. Index Terms - One-class classifier, partially supervised classifier, significance testing, single hypothesis testing, unsupervised clustering.
- Published
- 1999
19. Decision boundary feature extraction for neural networks
- Author
- Lee, Chulhee and Landgrebe, David A.
- Subjects
- Neural networks -- Research, Pattern recognition -- Research, Business, Computers, Electronics, Electronics and electrical industries
- Abstract
In this paper, we propose a new feature extraction method for feedforward neural networks. The method is based on the recently published decision boundary feature extraction algorithm which is based on the fact that all the necessary features for classification can be extracted from the decision boundary. The decision boundary feature extraction algorithm can take advantage of characteristics of neural networks which can solve complex problems with arbitrary decision boundaries without assuming underlying probability distribution functions of the data. To apply the decision boundary feature extraction method, we first give a specific definition for the decision boundary in a neural network. Then, we propose a procedure for extracting all the necessary features for classification from the decision boundary. Experiments show promising results. Index Terms - Neural networks, feature extraction, decision boundary, classification, decision boundary feature matrix, pattern recognition.
- Published
- 1997
20. Covariance matrix estimation and classification with limited training data
- Author
- Hoffbeck, Joseph P. and Landgrebe, David A.
- Subjects
- Maximum likelihood estimates (Statistics) -- Research, Pattern recognition -- Research, Machine vision -- Research
- Abstract
A new covariance matrix estimator useful for designing classifiers with limited training data is developed. In experiments, this estimator achieved higher classification accuracy than the sample covariance matrix and common covariance matrix estimates. In about half of the experiments, it achieved higher accuracy than regularized discriminant analysis, but required much less computation. Index Terms - Covariance matrix, estimation, leave-one-out method, cross validation, classification, high dimensional data.
- Published
- 1996
21. Information Extraction Principles and Methods for Multispectral and Hyperspectral Image Data
- Author
- Landgrebe, David
- Published
- 1999
- Full Text
22. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon
- Author
- Shahshahani, Behzad M. and Landgrebe, David A.
- Subjects
- Pattern recognition -- Research, Remote sensing -- Research, Image processing -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, we study the use of unlabeled samples in reducing the problem of small training sample size that can severely affect the recognition rate of classifiers when the dimensionality of the multispectral data is high. We show that by using additional unlabeled samples that are available at no extra cost, the performance may be improved, and therefore the Hughes phenomenon can be mitigated. Furthermore, we show by experiments that using additional unlabeled samples yields more representative estimates. We also propose a semiparametric method for incorporating the training (i.e., labeled) and unlabeled samples simultaneously into the parameter estimation process.
- Published
- 1994
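The mechanism this abstract describes (estimating parameters from a few labeled samples plus cheap unlabeled ones) can be sketched with a bare-bones EM loop. This is a deliberately simplified two-class, unit-covariance illustration with invented data, not the paper's semiparametric method:

```python
import numpy as np

def em_with_unlabeled(Xl, yl, Xu, iters=25):
    """EM for two unit-covariance Gaussian classes: labeled samples keep hard
    memberships, unlabeled samples enter through posterior responsibilities.
    Bare-bones sketch of using unlabeled data in parameter estimation."""
    mu = np.stack([Xl[yl == 0].mean(axis=0), Xl[yl == 1].mean(axis=0)])
    pi = np.array([0.5, 0.5])
    Xall = np.vstack([Xl, Xu])
    for _ in range(iters):
        # E-step: responsibilities of the unlabeled samples
        lj = np.stack([np.log(pi[c]) - 0.5 * np.sum((Xu - mu[c]) ** 2, axis=1)
                       for c in (0, 1)], axis=1)
        lj -= lj.max(axis=1, keepdims=True)
        r = np.exp(lj)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: hard weights for labeled samples, soft for unlabeled
        for c in (0, 1):
            w = np.concatenate([(yl == c).astype(float), r[:, c]])
            mu[c] = (w[:, None] * Xall).sum(axis=0) / w.sum()
            pi[c] = w.sum() / len(Xall)
    return mu, pi

# 3 labeled samples per class plus 400 "free" unlabeled samples:
rng = np.random.default_rng(2)
true0, true1 = np.zeros(2), np.full(2, 5.0)
Xl = np.vstack([rng.normal(true0, 1, (3, 2)), rng.normal(true1, 1, (3, 2))])
yl = np.array([0, 0, 0, 1, 1, 1])
Xu = np.vstack([rng.normal(true0, 1, (200, 2)), rng.normal(true1, 1, (200, 2))])
mu, pi = em_with_unlabeled(Xl, yl, Xu)
```

With only three labeled samples per class, the labeled-only mean estimates are noisy; the 400 unlabeled samples pull the estimates much closer to the true class means, which is the effect the paper quantifies.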
23. Fast Parzen density estimation using clustering-based branch and bound
- Author
- Jeon, Byeungwoo and Landgrebe, David A.
- Subjects
- Discriminant analysis -- Research, Branch and bound algorithms -- Research
- Abstract
This correspondence proposes a fast Parzen density estimation algorithm that is especially useful in nonparametric discriminant analysis problems. By preclustering the data and applying a simple branch and bound procedure to the clusters, significant numbers of data samples that would contribute little to the density estimate can be excluded from the kernel-function evaluation without detriment to the result. This technique is especially helpful in the multivariate case and does not require a uniform sampling grid. The proposed algorithm may also be used in conjunction with the data reduction technique of Fukunaga and Hayes to further reduce the computational load. Experimental results are presented to verify the effectiveness of this algorithm.
- Published
- 1994
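The pruning idea can be shown in miniature: bound each cluster's best-case kernel contribution from its center and radius, and skip the cluster when that bound is negligible. A toy version with invented names and data, not the paper's branch-and-bound procedure:

```python
import numpy as np

def parzen_brute(x, X, h=0.5):
    """Plain Parzen estimate with a Gaussian kernel (the baseline)."""
    d = X.shape[1]
    norm = (2 * np.pi * h * h) ** (-d / 2) / len(X)
    return norm * np.sum(np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2))

def parzen_pruned(x, X, centers, assign, radius, h=0.5, tol=1e-12):
    """Parzen estimate that skips whole clusters: if even the nearest
    possible member (center distance minus cluster radius) would contribute
    below tol, the cluster's kernels are never evaluated."""
    d = X.shape[1]
    norm = (2 * np.pi * h * h) ** (-d / 2) / len(X)
    total = 0.0
    for c, mu in enumerate(centers):
        members = assign == c
        lower = max(np.linalg.norm(x - mu) - radius[c], 0.0)  # distance bound
        if norm * members.sum() * np.exp(-0.5 * (lower / h) ** 2) < tol:
            continue                                          # prune cluster
        Xc = X[members]
        total += norm * np.sum(np.exp(-0.5 * np.sum((Xc - x) ** 2, axis=1) / h ** 2))
    return total

# Two well-separated clusters; a query near one never touches the other:
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(10, 1, (200, 2))])
assign = np.array([0] * 200 + [1] * 200)
centers = np.stack([X[:200].mean(axis=0), X[200:].mean(axis=0)])
radius = np.array([np.linalg.norm(X[:200] - centers[0], axis=1).max(),
                   np.linalg.norm(X[200:] - centers[1], axis=1).max()])
q = np.array([0.3, -0.2])
```

Because the skipped contributions are bounded below the tolerance, the pruned estimate matches the brute-force one while evaluating far fewer kernels.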
24. Analyzing high-dimensional multispectral data
- Author
- Lee, Chulhee and Landgrebe, David A.
- Subjects
- Multispectral photography -- Analysis, Statistics -- Usage, Remote sensing -- Research, Business, Earth sciences, Electronics and electrical industries
- Abstract
In this paper, through a series of specific examples, we illustrate some characteristics encountered in analyzing high-dimensional multispectral data. The increased importance of the second-order statistics in analyzing high-dimensional data is illustrated, as is the shortcoming of classifiers such as the minimum distance classifier which rely on first-order variations alone. We also illustrate how inaccurate estimation of first- and second-order statistics, e.g., from use of training sets which are too small, affects the performance of a classifier. Recognizing the importance of second-order statistics on the one hand, but the increased difficulty in perceiving and comprehending information present in statistics derived from high-dimensional data on the other, we propose a method to aid visualization of high-dimensional statistics using a color coding scheme.
- Published
- 1993
25. Feature extraction based on decision boundaries
- Author
- Lee, Chulhee and Landgrebe, David A.
- Subjects
- Pattern recognition -- Research, Discriminant analysis -- Research, Decision theory -- Research
- Abstract
In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises from noting that usually only a portion of the decision boundary is effective in discriminating between classes; the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: 1) it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and 2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.
- Published
- 1993
26. Decision boundary feature extraction for nonparametric classification
- Author
- Lee, Chulhee and Landgrebe, David A.
- Subjects
- Decision theory -- Research, Pattern recognition -- Research, Boundary value problems -- Numerical solutions
- Abstract
Feature extraction has long been an important topic in pattern recognition. Although many authors have studied feature extraction for parametric classifiers, relatively few feature extraction algorithms are available for nonparametric classifiers. A new feature extraction algorithm based on decision boundaries for nonparametric classifiers is proposed. It is noted that feature extraction for pattern recognition is equivalent to retaining 'discriminantly informative features' and a discriminantly informative feature is related to the decision boundary. Since nonparametric classifiers do not define decision boundaries in analytic form, the decision boundary and normal vectors must be estimated numerically. A procedure to extract discriminantly informative features based on a decision boundary for non-parametric classification is proposed. Experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.
- Published
- 1993
27. Classification with spatio-temporal interpixel class dependency contexts
- Author
- Jeon, Byeungwoo and Landgrebe, David A.
- Subjects
- Remote sensing -- Research, Imaging systems -- Research, Image processing -- Methods, Business, Earth sciences, Electronics and electrical industries
- Abstract
A contextual classifier that can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatio-temporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement in classification accuracy over a noncontextual pixelwise classifier. This spatio-temporal contextual classifier will find use in many real applications of remote sensing, especially when classification accuracy is important.
- Published
- 1992
28. MultiSpec—a tool for multispectral–hyperspectral image data analysis
- Author
-
Biehl, Larry and Landgrebe, David
- Published
- 2002
- Full Text
- View/download PDF
29. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data
- Author
-
Hoffbeck, Joseph P and Landgrebe, David A
- Subjects
Earth Resources And Remote Sensing - Abstract
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
- Published
- 1994
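The abstract above claims that any non-singular affine transformation (y = Ax + b) leaves discriminant analysis and maximum likelihood classification unchanged. That claim can be checked numerically. The sketch below uses synthetic Gaussian "spectra" rather than the AVIRIS data of the paper, and a simple hand-rolled Gaussian maximum likelihood classifier; it verifies that class decisions before and after a random non-singular affine transform agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "spectral" classes in 4 bands (hypothetical data).
cov1 = 0.5 * np.eye(4) + 0.2
X0 = rng.multivariate_normal([0, 0, 0, 0], np.eye(4), 100)
X1 = rng.multivariate_normal([1, 2, 0, 1], cov1, 100)

def ml_predict(train0, train1, X):
    """Gaussian maximum likelihood classification with classwise mean/covariance."""
    stats = [(t.mean(0), np.cov(t.T)) for t in (train0, train1)]
    preds = []
    for x in X:
        scores = []
        for mu, cov in stats:
            d = x - mu
            # Gaussian log-likelihood up to an additive constant
            scores.append(-0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d)))
        preds.append(int(np.argmax(scores)))
    return np.array(preds)

Xtest = np.vstack([rng.multivariate_normal([0, 0, 0, 0], np.eye(4), 20),
                   rng.multivariate_normal([1, 2, 0, 1], cov1, 20)])

# Non-singular affine "radiance-to-reflectance" transform y = A x + b.
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # well-conditioned, non-singular
b = rng.normal(size=4)
t = lambda X: X @ A.T + b

same = bool((ml_predict(X0, X1, Xtest) == ml_predict(t(X0), t(X1), t(Xtest))).all())
print(same)
```

The sample covariance of the transformed data is exactly A·Σ·Aᵀ, so the Mahalanobis term is invariant, and log det(A·Σ·Aᵀ) shifts every class score by the same 2·log|det A|; the argmax, and hence the decision, is unchanged.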
30. High dimensional feature reduction via projection pursuit
- Author
-
Jimenez, Luis and Landgrebe, David
- Subjects
Earth Resources And Remote Sensing - Abstract
The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high dimensional space presents a substantial challenge, since intuitive concepts valid in a 2-3 dimensional space do not necessarily apply in higher dimensional spaces. For example, high dimensional space is mostly empty. This results from the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension on a problem specific basis in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
- Published
- 1994
31. Classification of high dimensional multispectral image data
- Author
-
Hoffbeck, Joseph P and Landgrebe, David A
- Subjects
Documentation And Information Science - Abstract
A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. The technique performed well even though the data were noisy and some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, not to map the geology of the Cuprite scene.
- Published
- 1993
32. Use of Unlabeled Samples for Mitigating the Hughes Phenomenon
- Author
-
Landgrebe, David A and Shahshahani, Behzad M
- Subjects
Statistics And Probability - Abstract
The use of unlabeled samples in improving the performance of classifiers is studied. When the number of training samples is fixed and small, additional feature measurements may reduce the performance of a statistical classifier. It is shown that by using unlabeled samples, estimates of the parameters can be improved and therefore this phenomenon may be mitigated. Various methods for using unlabeled samples are reviewed and experimental results are provided.
- Published
- 1993
33. Analyzing High-Dimensional Multispectral Data
- Author
-
Lee, Chulhee and Landgrebe, David A
- Subjects
Earth Resources And Remote Sensing - Abstract
In this paper, through a series of specific examples, we illustrate some characteristics encountered in analyzing high- dimensional multispectral data. The increased importance of the second-order statistics in analyzing high-dimensional data is illustrated, as is the shortcoming of classifiers such as the minimum distance classifier which rely on first-order variations alone. We also illustrate how inaccurate estimation of first- and second-order statistics, e.g., from use of training sets which are too small, affects the performance of a classifier. Recognizing the importance of second-order statistics on the one hand, but the increased difficulty in perceiving and comprehending information present in statistics derived from high-dimensional data on the other, we propose a method to aid visualization of high-dimensional statistics using a color coding scheme.
- Published
- 1993
- Full Text
- View/download PDF
34. Design of partially supervised classifiers for multispectral image data
- Author
-
Jeon, Byeungwoo and Landgrebe, David
- Subjects
Documentation And Information Science - Abstract
A partially supervised classification problem is addressed, in which the class definition and corresponding training samples are provided a priori for only one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible, requirement for representative prior statistical characteristics of all classes in a given data set. Considering the effort in both time and manpower required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is treated as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
- Published
- 1993
35. Feature extraction and classification algorithms for high dimensional data
- Author
-
Lee, Chulhee and Landgrebe, David
- Subjects
Computer Programming And Software - Abstract
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized. 
By investigating the characteristics of high dimensional data, the reason second-order statistics must be taken into account in high dimensional data is suggested. Given the importance of second-order statistics, there is a need to represent them effectively. A method to visualize statistics using a color code is proposed. By representing statistics with a color code, one can easily extract and compare the first- and second-order statistics.
- Published
- 1993
36. Hyperspectral data analysis procedures with reduced sensitivity to noise
- Author
-
Landgrebe, David A
- Subjects
Earth Resources And Remote Sensing - Abstract
Multispectral sensor systems have become steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus providing the possibility of enabling delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure including an example analysis of a data set is described, illustrating this effect.
- Published
- 1993
37. Decision boundary feature extraction for neural networks
- Author
-
Lee, Chulhee and Landgrebe, David A
- Subjects
Cybernetics - Abstract
We propose a new feature extraction method for neural networks. The method is based on the recently published decision boundary feature extraction algorithm. It has been shown that all the necessary features for classification can be extracted from the decision boundary. To apply the decision boundary feature extraction method, we first define the decision boundary in neural networks. Next, we propose a procedure for extracting all the necessary features for classification from the decision boundary. The proposed algorithm preserves the characteristic of neural networks that they can define arbitrary decision boundaries. Experiments show promising results.
- Published
- 1992
38. Decision fusion with reliabilities in multisource data classification
- Author
-
Jeon, Byeungwoo and Landgrebe, David A
- Subjects
Cybernetics - Abstract
In this paper, a new multisource classifier which is based on a fusion of the class decisions of each separate data set is proposed. Each data set is separately fed into the local classifier and a final classification is performed by summarizing these local class decisions. An optimum decision fusion rule based on the minimum expected cost is derived. This new decision fusion rule can handle not only data set reliabilities but also classwise reliabilities of each data set. Classification experiments with two remotely sensed Thematic Mapper (TM) data sets show promising improvement over conventional multisource classification algorithms.
- Published
- 1992
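The abstract above fuses local class decisions using both data set reliabilities and classwise reliabilities. The sketch below is not the paper's minimum-expected-cost rule; it is a simpler reliability-weighted log-linear opinion pool in the same spirit, with all posteriors and reliability weights made up for illustration:

```python
import numpy as np

# Hypothetical local class posteriors from two data sources for one pixel.
p_src1 = np.array([0.6, 0.3, 0.1])  # e.g., first TM acquisition
p_src2 = np.array([0.2, 0.5, 0.3])  # e.g., second TM acquisition

# Assumed classwise reliability weights per source (e.g., from training accuracy).
w_src1 = np.array([0.9, 0.9, 0.5])
w_src2 = np.array([0.6, 0.8, 0.8])

# Fuse by a reliability-weighted sum of log posteriors: a source that is
# unreliable for a class contributes less to that class's fused score.
fused = w_src1 * np.log(p_src1) + w_src2 * np.log(p_src2)
print(int(np.argmax(fused)))  # → 0
```

With these numbers the first source's strong, highly weighted vote for class 0 outweighs the second source's weaker preference for class 1.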
39. Classification with spatio-temporal interpixel class dependency contexts
- Author
-
Jeon, Byeungwoo and Landgrebe, David A
- Subjects
Earth Resources And Remote Sensing - Abstract
A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posterior spatiotemporal contextual classifier is derived. This contextual classifier is simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when the classification accuracy is important.
- Published
- 1992
- Full Text
- View/download PDF
40. Analyzing high dimensional data
- Author
-
Lee, Chulhee and Landgrebe, David A
- Subjects
Statistics And Probability - Abstract
Problems encountered in analyzing high dimensional data are discussed and possible solutions are proposed. The increased importance of second-order statistics in analyzing high dimensional data and the shortcoming of the minimum distance classifier in high dimensional data are recognized. By investigating characteristics of high dimensional data, it is shown that second-order statistics must be taken into account in high dimensional data. There is a need to represent second order statistics effectively. As the data dimensionality increases, it becomes more difficult to perceive and compare information present in statistics derived from data. In order to overcome this problem, a method to visualize statistics using a color code is proposed. By representing statistics with a color code, the first- and second-order statistics can be more readily compared.
- Published
- 1992
41. Using partially labeled data for normal mixture identification with application to class definition
- Author
-
Shahshahani, Behzad M and Landgrebe, David A
- Subjects
Cybernetics - Abstract
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
- Published
- 1992
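The EM scheme described in the abstract above can be illustrated with a deliberately simplified sketch: one univariate normal component per class (the paper allows each class to be a sub-mixture of several components), labeled samples hard-assigned to their class, and unlabeled samples soft-assigned via posterior responsibilities. All data values are made up:

```python
import numpy as np

def npdf(x, mu, var):
    # Univariate normal density
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

x_lab0 = np.array([-1.2, -0.8, -1.0])      # training samples known to be class 0
x_lab1 = np.array([0.9, 1.1, 1.3])         # training samples known to be class 1
x_unl  = np.array([-0.5, 0.2, 1.0, -1.1])  # unlabeled samples

mu  = np.array([-1.0, 1.0])  # initial means
var = np.array([1.0, 1.0])   # initial variances
pi  = np.array([0.5, 0.5])   # initial mixing proportions

for _ in range(25):
    # E-step: labeled samples keep their hard labels; unlabeled samples get
    # posterior responsibilities under the current mixture parameters.
    dens = np.stack([npdf(x_unl, mu[k], var[k]) for k in (0, 1)], axis=1)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)

    # M-step: combine hard-labeled counts with soft responsibilities.
    n = np.array([len(x_lab0) + r[:, 0].sum(), len(x_lab1) + r[:, 1].sum()])
    mu = np.array([(x_lab0.sum() + (r[:, 0] * x_unl).sum()) / n[0],
                   (x_lab1.sum() + (r[:, 1] * x_unl).sum()) / n[1]])
    var = np.array([(((x_lab0 - mu[0])**2).sum() + (r[:, 0] * (x_unl - mu[0])**2).sum()) / n[0],
                    (((x_lab1 - mu[1])**2).sum() + (r[:, 1] * (x_unl - mu[1])**2).sum()) / n[1]])
    pi = n / n.sum()

print(np.round(mu, 2), np.round(pi, 2))
```

The labeled samples anchor each component, which is what lets the unlabeled samples refine the estimates rather than drift to a spurious solution.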
42. Feature selection for neural networks using Parzen density estimator
- Author
-
Lee, Chulhee, Benediktsson, Jon A, and Landgrebe, David A
- Subjects
Cybernetics - Abstract
A feature selection method for neural networks is proposed using the Parzen density estimator. A new feature set is selected using the decision boundary feature selection algorithm. The selected feature set is then used to train a neural network. Using a reduced feature set, an attempt is made to reduce the training time of the neural network and obtain a simpler neural network, which further reduces the classification time for test data.
- Published
- 1992
43. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case
- Author
-
Shahshahani, Behzad M and Landgrebe, David A
- Subjects
Cybernetics - Abstract
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
- Published
- 1992
44. On the use of stochastic process-based methods for the analysis of hyperspectral data
- Author
-
Landgrebe, David A
- Subjects
Earth Resources And Remote Sensing - Abstract
Further development in remote sensing technology requires refinement of information system design aspects, i.e., the ability to specify precisely the data to collect and the means to extract increasing amounts of information from the increasingly rich and complex data stream created. One of the principal directions of advance is that data from much larger numbers of spectral bands can be collected, but with significantly increased signal-to-noise ratio. The theory of stochastic or random processes may be applied to the modeling of second-order variations. A multispectral data set with a large number of spectral bands is analyzed using standard pattern recognition techniques. The data were classified using first a single spectral feature, then two, and continuing on with greater and greater numbers of features. Three different classification schemes are used: a standard maximum likelihood Gaussian scheme; the same approach with the mean values of all classes adjusted to be the same; and the use of a minimum distance to means scheme such that mean differences are used.
- Published
- 1992
45. Fast likelihood classification
- Author
-
Lee, Chulhee and Landgrebe, David A
- Subjects
Cybernetics - Abstract
A multistage classification that reduces the processing time substantially is proposed. This classification algorithm consists of several stages, and in each stage likelihood values of classes are calculated and compared. If a class has a likelihood value less than a threshold, the class is truncated at that stage as an unlikely class, thus reducing the number of classes for which likelihood values are to be calculated at the next stage. Thus a host of classes can be truncated using a small portion of the total features at early stages, resulting in substantial reduction of computing time. Several truncation criteria are developed, and the relationship between thresholds and the error caused by the truncation is investigated. Experiments show that the proposed algorithm reduces the processing time by a factor of 3-7, depending on the number of classes and features, while maintaining essentially the same accuracies.
- Published
- 1991
- Full Text
- View/download PDF
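The stagewise truncation idea in the abstract above can be sketched briefly. The toy below (synthetic data, unit-variance Gaussian classes, and an arbitrary truncation threshold, all assumptions of this sketch) accumulates log-likelihoods over growing feature subsets and drops classes that fall too far behind the current best:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 6 classes, 10 features; class-conditional densities are
# independent unit-variance Gaussians, so log-likelihoods add feature by feature.
n_classes, n_feat = 6, 10
means = rng.normal(size=(n_classes, n_feat))
x = means[2] + 0.1 * rng.normal(size=n_feat)  # a sample drawn from class 2

stages = [3, 6, 10]  # cumulative feature counts evaluated at each stage
threshold = 5.0      # truncate classes this far below the best log-likelihood

alive = list(range(n_classes))
loglik = np.zeros(n_classes)
used = 0
for upto in stages:
    # Only surviving classes pay for likelihood evaluation on the new features.
    for k in alive:
        d = x[used:upto] - means[k, used:upto]
        loglik[k] += -0.5 * (d ** 2).sum()
    used = upto
    best = max(loglik[k] for k in alive)
    alive = [k for k in alive if loglik[k] >= best - threshold]

print(alive)
```

Most classes are eliminated after only a few features, so the full feature set is evaluated for just the few plausible candidates; this is the source of the speedup the abstract reports.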
46. Hierarchical classifier design in high-dimensional, numerous class cases
- Author
-
Kim, Byungyong and Landgrebe, David A
- Subjects
Computer Programming And Software - Abstract
As progress in new sensor technology continues, increasingly high spectral resolution sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top-down approach, a bottom-up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent on the mean difference between two classes. An autocorrelation technique is dependent on the correlation differences. The mathematical relationship among sample size, dimensionality, and risk value is derived.
- Published
- 1991
- Full Text
- View/download PDF
47. A survey of decision tree classifier methodology
- Author
-
Safavian, S. R and Landgrebe, David
- Subjects
Numerical Analysis - Abstract
Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
- Published
- 1991
- Full Text
- View/download PDF
48. Decision boundary feature selection for non-parametric classifier
- Author
-
Lee, Chulhee and Landgrebe, David A
- Subjects
Cybernetics - Abstract
Feature selection has been one of the most important topics in pattern recognition. Although many authors have studied feature selection for parametric classifiers, few algorithms are available for feature selection for nonparametric classifiers. In this paper we propose a new feature selection algorithm based on decision boundaries for nonparametric classifiers. We first note that feature selection for pattern recognition is equivalent to retaining 'discriminantly informative features', and a discriminantly informative feature is related to the decision boundary. A procedure to extract discriminantly informative features based on a decision boundary for nonparametric classification is proposed. Experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.
- Published
- 1991
49. Parameter trade-offs for imaging spectroscopy systems
- Author
-
Kerekes, John P and Landgrebe, David A
- Subjects
Spacecraft Instrumentation - Abstract
With the advent of the EOS era and of configurable sensors, users of these instruments are faced with the twin problems of specifying data acquisition parameters and extracting desired information from the voluminous data. An application of a system model is made to explore system parameter trade-offs for a model sensor based on the High Resolution Imaging Spectrometer. Radiometric performance was studied, along with the effect on classification accuracy of several system parameters. Using a model scene based on typical agricultural reflectance and atmospheric conditions, the atmosphere and sensor are seen to have significant effects on the mean received signal and noise performance. The effect of random uncorrelated errors in the radiometric calibration of the detector array is seen to degrade system performance, especially in the spectral bands below 1 micron. Accurate pixel-to-pixel relative radiometric calibration and the use of the Image Motion Compensation option are seen to improve classification accuracy, especially at high solar zenith angles. Feature sets chosen from characteristics of the scene performed best overall, but ones chosen based on signal-to-noise ratios were seen to be more robust.
- Published
- 1991
- Full Text
- View/download PDF
50. Topics in inference and decision-making with partial knowledge
- Author
-
Safavian, S. Rasoul and Landgrebe, David
- Subjects
Statistics And Probability - Abstract
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
- Published
- 1990