57 results for "Robert Burduk"
Search Results
2. Weighted Scoring in Geometric Space for Decision Tree Ensemble
- Author
-
Jedrzej Biedrzycki and Robert Burduk
- Subjects
Majority rule, General Computer Science, Computer science, Feature vector, Decision tree, Disjoint sets, ensemble classifier, General Materials Science, majority voting, multiple classifier system, General Engineering, Pattern recognition, Matthews correlation coefficient, Random forest, Electrical engineering. Electronics. Nuclear engineering, Artificial intelligence, Subspace topology - Abstract
In order to improve the classification performance of a single classification model, Multiple Classifier Systems (MCS) are used. One of the most common techniques utilizing multiple decision trees is the random forest, where diversity between base classifiers is obtained by bagging the training dataset. In this paper, we propose an algorithm that horizontally partitions the learning set and uses decision trees as base models to obtain decision regions. In the proposed approach, the feature space is divided into disjoint subspaces. Additionally, the locations of the subspace centroids, as well as the size and location of the decision regions, are used to determine the weights needed in the last stage of creating an MCS, i.e., the integration phase. The proposed algorithm was evaluated on multiple open-source benchmarking datasets and compared, using accuracy and the Matthews correlation coefficient as performance measures, with two existing MCS methods: random forest and majority voting. The statistical analysis confirms an improvement in recognition compared to the random forest. In addition, we prove that for an infinitely dense division of the space, the proposed algorithm is equivalent to majority voting.
- Published
- 2020
- Full Text
- View/download PDF
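The weighting idea described in the abstract above can be sketched as follows. This is a minimal, illustrative version in which each base classifier's vote decays with the distance from the object to the centroid of that classifier's decision region; the decay function `1/(1+d)` is an assumption here, while the paper derives its weights from subspace centroids together with decision-region size and location.

```python
import math

def weighted_vote(x, classifiers):
    """Fuse base classifiers' votes in geometric space.

    Each classifier is a pair (predict, centroid): `predict` maps a
    feature vector to a class label, `centroid` is the centre of the
    decision region that classifier carved out on its training partition.
    The vote weight decays with the distance between the object and that
    centroid (an assumed weight function, for illustration only)."""
    scores = {}
    for predict, centroid in classifiers:
        w = 1.0 / (1.0 + math.dist(x, centroid))  # closer region -> stronger vote
        label = predict(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

With such weights an object deep inside one classifier's region can overrule a plain majority, which is the qualitative difference from unweighted voting.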
3. Distance Metrics in Clustering and Weighted Scoring Algorithm
- Author
-
Jakub Klikowski and Robert Burduk
- Subjects
Computer science, Value (computer science), Score, Pattern recognition, Class (biology), Euclidean distance, Scoring algorithm, Decision boundary, Classification methods, Artificial intelligence, Cluster analysis - Abstract
One of the current challenges for supervised classification methods is to obtain acceptable values of performance measures on imbalanced datasets. In datasets with a high imbalance ratio, there is a significant disproportion in the number of objects from the different class labels. This article analyzes the clustering and weighted scoring algorithm, which estimates the number of clusters so that each cluster contains a minimum number of objects from the minority class label. The algorithm uses a distance metric when determining the value of the score function. The aim of this article is therefore to analyze the impact of the choice of distance metric on the values of six classification performance measures. The performed experiments show that the Euclidean distance yields the best classification results for imbalanced datasets.
- Published
- 2021
- Full Text
- View/download PDF
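The role the distance metric plays in such a score function can be illustrated with a small sketch. The score form below (each cluster votes with its minority-class fraction, discounted by the metric distance from the object to the cluster centre) is an assumption chosen for clarity, not the paper's exact formula; only the pluggable-metric structure matches the abstract.

```python
import math

def score(x, clusters, metric):
    """Illustrative minority-class score for object x.

    `clusters` is a list of (centre, minority_fraction) pairs; each cluster
    contributes its minority-class fraction, scaled down with the chosen
    metric distance from x to the cluster centre."""
    s = 0.0
    for centre, minority_fraction in clusters:
        s += minority_fraction / (1.0 + metric(x, centre))
    return s

euclidean = math.dist
manhattan = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
```

Swapping `euclidean` for `manhattan` changes the score surface, which is precisely the effect the paper measures across six performance criteria.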
4. Progress on Pattern Classification, Image Processing and Communications : Proceedings of the CORES and IP&C Conferences 2023
- Author
-
Robert Burduk, Michał Choraś, Rafał Kozik, Paweł Ksieniewicz, Tomasz Marciniak, Paweł Trajdos, Robert Burduk, Michał Choraś, Rafał Kozik, Paweł Ksieniewicz, Tomasz Marciniak, and Paweł Trajdos
- Subjects
- Computational intelligence, Artificial intelligence, Pattern recognition systems, Signal processing, Computer vision
- Abstract
This book presents a collection of high-quality research papers accepted to the multi-conference consisting of the 13th International Conference on Image Processing and Communications (IP&C 2023) and the 13th International Conference on Computer Recognition Systems (CORES 2023), held jointly in Wroclaw, Poland (virtually), in June 2023. The accepted papers address current computer science and computer systems-related technological challenges and solutions, as well as many practical applications and results. The first part of the book deals with advances in pattern recognition and classifiers, the second part is devoted to image processing and computer vision, while the third part addresses practical applications of computer recognition systems. We believe this book will be interesting for researchers and practitioners in many fields of computer science and IT applications.
- Published
- 2023
5. Clustering and Weighted Scoring in Geometric Space Support Vector Machine Ensemble for Highly Imbalanced Data Classification
- Author
-
Robert Burduk and Paweł Ksieniewicz
- Subjects
Computer science, Context (language use), Function (mathematics), Machine learning, Ensemble learning, Task (project management), Support vector machine, Statistical classification, Decision boundary, Artificial intelligence, Cluster analysis - Abstract
Learning from imbalanced datasets is a challenging task for standard classification algorithms. In general, there are two main approaches to the problem of imbalanced data: algorithm-level and data-level solutions. This paper deals with the second approach. In particular, it presents a new proposition for calculating the weighted score function used in the integration phase of a multiple classifier system. The presented research includes an experimental evaluation over multiple open-source, highly imbalanced datasets, comparing the proposed algorithm with three other approaches in the context of six performance measures. Comprehensive experimental results show that the proposed algorithm achieves better performance measures than the other ensemble methods on highly imbalanced datasets.
- Published
- 2020
- Full Text
- View/download PDF
6. Novel Approach to Gentle AdaBoost Algorithm with Linear Weak Classifiers
- Author
-
Szymon Zacher, Robert Burduk, and Wojciech Bożejko
- Subjects
Computer science, Score, Pattern recognition, Adaboost algorithm, Sequential structure, Decision boundary, Artificial intelligence, Classifier (UML) - Abstract
This paper addresses the problem of calculating the value of the scoring function for weak classifiers operating in a sequential structure. An example of such a structure is the Gentle AdaBoost algorithm, a modification of which we propose in this work. In the proposed approach, the distance of the object from the decision boundary is first scaled within the decision regions defined by the weak classifier and then transformed by a log-normal function. The described algorithm was tested on six publicly available data sets and compared with the Gentle AdaBoost algorithm.
- Published
- 2020
- Full Text
- View/download PDF
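The scale-then-transform step from the abstract above can be sketched as a log-normal density applied to the scaled distance. The parameter values (`mu=0`, `sigma=1`) and the scaling by a single `region_width` are assumptions made for illustration; the paper scales per decision region.

```python
import math

def lognormal_score(distance, region_width, mu=0.0, sigma=1.0):
    """Distance from the decision boundary, scaled into the decision
    region and pushed through a log-normal density (the transform shape
    named in the abstract; parameter values here are assumed defaults)."""
    t = max(distance / region_width, 1e-12)   # scale; guard against log(0)
    return (1.0 / (t * sigma * math.sqrt(2.0 * math.pi))) * \
           math.exp(-(math.log(t) - mu) ** 2 / (2.0 * sigma ** 2))
```

Note the log-normal peaks at an intermediate scaled distance, so neither objects hugging the boundary nor very distant ones dominate the score.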
7. Dynamic Ensemble Selection – Application to Classification of Cutting Tools
- Author
-
Izabela Rojek, Robert Burduk, and Paulina Heda
- Subjects
Cutting tool, Ensemble selection, Computer science, Pattern recognition, Multiple classifier, Quartile, Artificial intelligence, Classifier (UML) - Abstract
In order to improve the pattern recognition performance of an individual classifier, an ensemble of classifiers is used. One of the phases of creating a multiple classifier system is the selection of base classifiers from the original set of classifiers. In this paper, we propose a dynamic ensemble selection algorithm that uses the median and quartiles of correctly classified objects. The resulting values are used to define decision schemes, which in turn are used in the base classifier selection process. The proposed algorithm has been verified on a real dataset concerning the classification of cutting tools. The obtained results clearly indicate that the proposed algorithm improves the classification measure; the improvement is relative to the ensemble of classifiers without selection.
- Published
- 2020
- Full Text
- View/download PDF
8. Fusion of linear base classifiers in geometric space
- Author
-
Robert Burduk, Paweł Zyblewski, and Paweł Ksieniewicz
- Subjects
Information Systems and Management, Rank (linear algebra), Computer science, Stability (learning theory), Score, Pattern recognition, Function (mathematics), Base (topology), Management Information Systems, Ensembles of classifiers, Artificial intelligence, Classifier (linguistics), Limit (mathematics), Software - Abstract
Ensembles of classifiers deserve attention because their stability and accuracy are usually superior to those of a single classifier. One of the aspects of constructing multiple classifier systems is the fusion of the outputs of the base models. State-of-the-art approaches to the fusion of base classifiers use class labels, a rank array, or a score function to determine the ensemble's final decision. In this study, by contrast, we use the base classifiers' decision boundaries in the fusion process, so the integration takes place in a geometric space. In this paper, a new definition of a function that measures central tendency is proposed. This function allows any number of linear base classifiers to be integrated in the geometric space, removing the limit on the number of classifiers in the ensemble that was noticeable in our earlier works. The proposal was compared with other approaches to fusing base classifier outputs. Experiments on multiple binary datasets from the UCI and KEEL repositories demonstrate the effectiveness of the proposed fusion process in the geometric space. To discuss the results of our experiments, we analyze standard and imbalanced datasets separately.
- Published
- 2021
- Full Text
- View/download PDF
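Fusing linear classifiers through their decision boundaries, as described above, can be sketched with one simple central-tendency choice: a coordinate-wise median of the normalised hyperplane coefficients. The median is only one possible central-tendency function and the coefficient normalisation is an assumption; the paper defines its own measure.

```python
import statistics

def fuse_linear(boundaries):
    """Fuse any number of linear decision boundaries w.x + b = 0 by taking
    a coordinate-wise median of their normalised coefficients (one possible
    central-tendency function, used here for illustration)."""
    norm = []
    for w, b in boundaries:
        n = sum(v * v for v in w) ** 0.5      # normalise so scales agree
        norm.append(([v / n for v in w], b / n))
    dim = len(norm[0][0])
    w_f = [statistics.median(w[i] for w, _ in norm) for i in range(dim)]
    b_f = statistics.median(b for _, b in norm)
    return w_f, b_f

def predict(w, b, x):
    """Classify x with the fused boundary."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
```

Because the median is defined for any count of inputs, nothing limits the number of base classifiers, which mirrors the property claimed in the abstract.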
9. Combination of Linear Classifiers Using Score Function – Analysis of Possible Combination Strategies
- Author
-
Pawel Trajdos and Robert Burduk
- Subjects
Majority rule, Score, Value (computer science), Pattern recognition, Function (mathematics), Simple average, Decision boundary, Artificial intelligence, Mathematics - Abstract
In this work, we address the issue of combining linear classifiers using their score functions, whose values depend on the distance from the decision boundary. Two score functions were tested and four different combination strategies were investigated. In the experimental study, the proposed approach was applied to a heterogeneous ensemble and compared with two reference methods: majority voting and model averaging. The comparison was made in terms of seven different quality criteria. The results show that the combination strategies based on the simple average and the trimmed average are the best strategies for the geometrical combination.
- Published
- 2019
- Full Text
- View/download PDF
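The two winning combination strategies named in the abstract, simple average and trimmed average, are standard aggregations and can be written down directly. The trimming depth `k` below is an assumed parameter for illustration.

```python
def simple_average(scores):
    """Plain mean of the base classifiers' scores for one object."""
    return sum(scores) / len(scores)

def trimmed_average(scores, k=1):
    """Mean after dropping the k smallest and k largest scores
    (k is an assumed trimming depth), making the combination robust
    to a single outlying base classifier."""
    s = sorted(scores)[k:len(scores) - k]
    return sum(s) / len(s)
```

The trimmed variant discards extreme scores before averaging, which is why it can outperform the plain mean when one base classifier produces wild score values.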
10. Gentle AdaBoost Algorithm with Score Function Dependent on the Distance to Decision Boundary
- Author
-
Wojciech Bożejko and Robert Burduk
- Subjects
Boosting (machine learning), Computer science, Score, Pattern recognition, Adaboost algorithm, Decision boundary, Gaussian function, Artificial intelligence, Classifier (UML) - Abstract
This paper presents a new extension of the Gentle AdaBoost algorithm based on the distance of the object to the decision boundary defined by the weak classifier used in boosting. In the proposed approach, this distance is transformed by a Gaussian function and defines the value of the score function. The assumed form of the transforming function means that the objects located closest to or farthest from the decision boundary of the base classifier receive the lowest values of the scoring function. The described algorithm was tested on four data sets from the UCI repository and compared with the Gentle AdaBoost algorithm.
- Published
- 2019
- Full Text
- View/download PDF
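The Gaussian transform described above can be sketched in one line. Placing the Gaussian centre `mu` away from the boundary reproduces the property stated in the abstract, that both the closest and the farthest objects receive low scores; the values of `mu` and `sigma` here are assumptions.

```python
import math

def gaussian_score(distance, mu=1.0, sigma=0.5):
    """Gaussian transform of the distance to the weak classifier's
    decision boundary.  With the centre mu placed away from the boundary,
    objects that are very close or very far both receive low scores
    (mu and sigma are assumed parameter values)."""
    return math.exp(-((distance - mu) ** 2) / (2.0 * sigma ** 2))
```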
11. Linear classifier combination via multiple potential functions
- Author
-
Pawel Trajdos and Robert Burduk
- Subjects
Machine learning, Calibration (statistics), Computer science, Feature vector, Linear classifier, Artificial intelligence, Pattern recognition, Function (mathematics), Object (computer science), Signal Processing, Decision boundary, Computer Vision and Pattern Recognition, Software - Abstract
A vital aspect of the classification-based model construction process is the calibration of the scoring function. One of the weaknesses of the calibration process is that it does not take into account information about the relative positions of the recognized objects in the feature space. To alleviate this limitation, in this paper we propose a novel concept of calculating a scoring function based on the distance of the object from the decision boundary and its distance to the class centroid. An important property is that the proposed score function has the same nature for all linear base classifiers, which means that the outputs of these classifiers are equally represented and have the same meaning. The proposed approach is compared with other ensemble algorithms, and experiments on multiple KEEL datasets demonstrate the effectiveness of our method. To discuss the results of our experiments, we use multiple classification performance measures and statistical analysis.
- Published
- 2021
- Full Text
- View/download PDF
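A score that depends on both distances named in the abstract, to the decision boundary and to the class centroid, can be sketched as follows. Combining the two exponential potentials by a product is an assumption made for illustration; the paper defines its own potential functions.

```python
import math

def potential_score(x, w, b, centroid):
    """Illustrative score combining (i) the distance of x to the linear
    decision boundary w.x + b = 0 and (ii) its distance to the class
    centroid, each pushed through an exponential potential.  The product
    combination is an assumption, not the paper's exact formula."""
    norm = sum(v * v for v in w) ** 0.5
    d_boundary = abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
    d_centroid = math.dist(x, centroid)
    # far from the boundary AND near the centroid -> high score
    return (1.0 - math.exp(-d_boundary)) * math.exp(-d_centroid)
```

Because both terms depend only on geometry, the score has the same meaning for every linear base classifier, which is the key property the abstract highlights.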
12. Integration of Linear SVM Classifiers in Geometric Space Using the Median
- Author
-
Jedrzej Biedrzycki and Robert Burduk
- Subjects
Majority rule, Computer science, Linear SVM, Pattern recognition, Decision boundary, Artificial intelligence, Geometric space, Classifier (UML) - Abstract
An ensemble of classifiers can improve the performance of a pattern recognition system. The task of constructing a multiple classifier system can generally be divided into three steps: generation, selection, and integration. In this paper, we propose an integration process that takes place in the geometric space, meaning that the fusion of base classifiers is done using their decision boundaries. In our approach, we use the linear SVM model as the base classifier, the selection process is based on accuracy, and the final decision boundary is calculated as the median of the base classifiers' decision boundaries. The aim of the experiments was to compare the proposed algorithm with the majority voting method.
- Published
- 2018
- Full Text
- View/download PDF
13. The Use of Geometric Mean in the Process of Integration of Three Base Classifiers
- Author
-
Robert Burduk and Andrzej Kasprzak
- Subjects
Computer science, Process (computing), Pattern recognition, Space (commercial competition), Base (topology), Class (biology), Development (topology), Discriminant, Decision boundary, Artificial intelligence, Geometric mean - Abstract
One of the most important steps in the formation of multiple classifier systems is the integration process, also called base classifier fusion. The fusion process may be applied either to class labels or to confidence levels (discriminant functions); these are the two main methods for combining base classifiers. In this paper, we propose an integration process that takes place in the geometric space, meaning that the fusion of base classifiers is done using decision boundaries. In our approach, the final decision boundary is calculated using the geometric mean. The algorithm presented in the paper concerns the case of three base classifiers and a two-dimensional feature space. The results of experiments on several data sets show that the proposed integration algorithm is a promising method for the development of multiple classifier systems.
- Published
- 2018
- Full Text
- View/download PDF
14. Dynamic confidence values selection — Experimental studies
- Author
-
Robert Burduk
- Subjects
Decision support system, Neural network, Computer science, Decision tree, Machine learning, Support vector machine, Binary classification, Task analysis, Artificial intelligence, Medical diagnosis, Classifier (UML) - Abstract
Machine learning methods are often used in the development of effective medical decision support systems, and one of the latest trends in data mining is ensemble selection. In this paper, we present a dynamic confidence value selection algorithm dedicated to the binary classification task. In the experiments we use Support Vector Machine, k-Nearest Neighbors, Neural Network, and Decision Tree models as base classifiers. Experiments on several publicly available medical diagnosis data sets verify the effectiveness of the proposed algorithm. The results demonstrate that dynamic confidence value selection outperforms the ensemble classifier built with all base learning models.
- Published
- 2017
- Full Text
- View/download PDF
15. The AdaBoost Algorithm with Linear Modification of the Weights
- Author
-
Robert Burduk
- Subjects
Boosting (machine learning), Computer science, Pattern recognition, Linear classifier, Adaboost algorithm, Artificial intelligence, Classifier (UML) - Abstract
This paper presents a new extension of the AdaBoost algorithm concerning the weights used in the algorithm: the original weights are modified, and we propose a linear modification of the weights. In our study we use boosting by reweighting, where each weak classifier is based on a linear classifier. The described algorithm was tested on the Pima data set, and the obtained results are compared with the original AdaBoost algorithm.
- Published
- 2017
- Full Text
- View/download PDF
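A reweighting step with a linear modification bolted on can be sketched as below. The standard exponential AdaBoost update is real; the particular linear form `w <- a*w + b*mean(w)` and its coefficients are assumptions made for illustration, since the abstract does not specify the exact modification.

```python
import math

def reweight(weights, errors, alpha, a=0.5, b=0.5):
    """One AdaBoost reweighting step followed by a linear modification
    w <- a*w + b*mean(w).  The linear form and the coefficients a, b are
    assumptions; the paper specifies its own modification.
    `errors[i]` is True when object i was misclassified by the current
    weak learner, and `alpha` is that learner's vote weight."""
    new = [w * math.exp(alpha if e else -alpha) for w, e in zip(weights, errors)]
    total = sum(new)
    new = [w / total for w in new]                 # normalise to a distribution
    mean = sum(new) / len(new)
    new = [a * w + b * mean for w in new]          # linear modification
    total = sum(new)
    return [w / total for w in new]
```

The linear term pulls every weight toward the mean, so the modification damps the exponential growth of weights on repeatedly misclassified objects.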
16. Classifier Selection for Motor Imagery Brain Computer Interface
- Author
-
Robert Burduk, Izabela Rejer, West Pomeranian University of Technology Szczecin, Wroclaw University of Science and Technology, Khalid Saeed, Władysław Homenda, Rituparna Chaki, and TC 8
- Subjects
Boosting (machine learning), Computer science, Machine learning, Motor imagery, Brain–computer interface, Pattern recognition, Classification, Support vector machine, Statistical classification, Artificial intelligence, Classifier (UML) - Abstract
Part 2: Biometrics and Pattern Recognition Applications; International audience; The classification process in the domain of brain-computer interfaces (BCI) is usually carried out with simple linear classifiers, like LDA or SVM. Non-linear classifiers rarely provide a sufficient increase in classification accuracy to justify their use in BCI. However, there is one more type of classifier that could be considered when looking for a way to increase accuracy: boosting classifiers. These classification algorithms are not common in BCI practice, but they have proved very efficient in other applications.
- Published
- 2017
- Full Text
- View/download PDF
17. Drift Detection Algorithm Using the Discriminant Function of the Base Classifiers
- Author
-
Robert Burduk
- Subjects
Multiple discriminant analysis, Concept drift, Drift detection, Computer science, Pattern recognition, Random subspace method, Template, Discriminant function analysis, Artificial intelligence, Classifier (UML), Algorithm - Abstract
Recently, several approaches have been proposed to deal with concept drift detection. In this paper, we propose a new concept drift detection algorithm based on decision templates, which are obtained from the outputs of the base classifiers forming an ensemble. Experiments on several publicly available data sets verify the effectiveness of the proposed algorithm.
- Published
- 2017
- Full Text
- View/download PDF
18. Integration Base Classifiers Based on Their Decision Boundary
- Author
-
Robert Burduk
- Subjects
Process (engineering), Computer science, Space (commercial competition), Base (topology), Machine learning, Class (biology), Random subspace method, Ranking, Decision boundary, Integration algorithm, Artificial intelligence - Abstract
Multiple classifier systems are used to improve the performance of base classifiers. One of the most important steps in their formation is the integration process, in which the base classifiers' outputs are combined. The most commonly used classifier outputs are class labels, a ranking list of possible classes, or confidence levels. In this paper, we propose an integration process that takes place in the "geometry space", meaning that we use the decision boundary in the integration process. The results of experiments on several data sets show that the proposed integration algorithm is a promising method for the development of multiple classifier systems.
- Published
- 2017
- Full Text
- View/download PDF
19. Classifier fusion with interval-valued weights
- Author
-
Robert Burduk
- Subjects
Computer science, Rank (computer programming), Pattern recognition, Context (language use), Base (topology), Machine learning, Measure (mathematics), Random subspace method, Classifier fusion, Artificial intelligence, Signal Processing, Computer Vision and Pattern Recognition, Software, Cascading classifiers - Abstract
The article presents a new approach to calculating the weights of base classifiers from a committee of classifiers. The obtained weights are interpreted in the context of interval-valued sets. The work proposes four different ways of calculating weights, which consider both the correctness and incorrectness of the classification. The proposed weights have been used in algorithms that combine the outputs of the base classifiers; we use outputs represented at both the rank and measure levels. The research experiments involved several data sets available in the UCI repository and two data sets with generated distributions. The performed experiments compare algorithms that calculate the weights according to resubstitution with the algorithms proposed in the work. The ensemble of classifiers has also been compared with the base classifiers entering the committee.
- Published
- 2013
- Full Text
- View/download PDF
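One way an interval-valued weight built from a classifier's correct and incorrect decisions might look is sketched below. This particular pair of formulas (a pessimistic lower end penalising errors, an optimistic upper end from plain accuracy) is an illustrative assumption; the paper proposes four ways of computing such weights.

```python
def interval_weight(correct, incorrect):
    """Weight of a base classifier as an interval [lower, upper].

    lower: pessimistic view, errors subtract from correct decisions;
    upper: optimistic view, plain accuracy.  Both formulas are
    illustrative assumptions, not the paper's definitions."""
    n = correct + incorrect
    lower = max(0.0, (correct - incorrect) / n)
    upper = correct / n
    return lower, upper
```

A downstream fusion rule can then combine base classifier outputs using, for example, the interval midpoint or an ordering on intervals, which is where the interval interpretation pays off.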
20. Intelligent Data Engineering and Automated Learning – IDEAL 2015 : 16th International Conference, Wroclaw, Poland, October 14-16, 2015, Proceedings
- Author
-
Konrad Jackowski, Robert Burduk, Krzysztof Walkowiak, Michal Wozniak, Hujun Yin, Konrad Jackowski, Robert Burduk, Krzysztof Walkowiak, Michal Wozniak, and Hujun Yin
- Subjects
- Data mining, Pattern recognition systems, Artificial intelligence, Algorithms, Information storage and retrieval systems, Computer science
- Abstract
This book constitutes the refereed proceedings of the 16th International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2015, held in Wroclaw, Poland, in October 2015. The 64 revised full papers presented were carefully reviewed and selected from 127 submissions. These papers provide a valuable collection of recent research outcomes in data engineering and automated learning, from methodologies, frameworks, and techniques to applications. In addition to various topics such as evolutionary algorithms, neural networks, probabilistic modeling, swarm intelligence, multi-objective optimization, and practical applications in regression, classification, clustering, biological data processing, text processing, and video analysis, IDEAL 2015 also featured a number of special sessions on several emerging topics such as computational intelligence for optimization of communication networks, discovering knowledge from data, simulation-driven DES-like modeling and performance evaluation, and intelligent applications in real-world problems.
- Published
- 2015
21. Ensemble of Classifiers with Modification of Confidence Values
- Author
-
Robert Burduk, Paulina Baczyńska, Wroclaw University of Science and Technology, Khalid Saeed, Władysław Homenda, and TC 8
- Subjects
Computer science, Confidence value, Pattern recognition, Ensemble learning, Random subspace method, Multiple classifier system, Decision profile, Artificial intelligence, Classifier (UML) - Abstract
Part 7: Decisions; International audience; In the classification task, ensembles of classifiers have attracted more and more attention in the pattern recognition community. Generally, ensemble methods have the potential to significantly improve the predictions of the base classifiers included in the team. In this paper, we propose an algorithm that modifies the confidence values obtained as outputs of the base classifiers. The experimental results based on thirteen data sets show that the proposed method is promising for the development of multiple classifier systems. We compared the proposed method with other known ensembles of classifiers and with all the base classifiers.
- Published
- 2016
- Full Text
- View/download PDF
22. Discriminant Function Selection in Binary Classification Task
- Author
-
Robert Burduk
- Subjects
Normalization (statistics), Multiple discriminant analysis, Ensemble selection, Computer science, Pattern recognition, Machine learning, Binary classification, Discriminant, Discriminant function analysis, Optimal discriminant analysis, Kernel Fisher discriminant analysis, Artificial intelligence - Abstract
Ensemble selection is one of the important problems in building multiple classifier systems (MCSs). This paper presents dynamic ensemble selection based on the analysis of discriminant functions. The idea of the selection is presented on the basis of binary classification tasks. The paper presents two approaches: one takes into account the normalization of the discriminant functions, while in the second approach normalization is not performed. The reported results, based on data sets from the UCI repository, show that the proposed ensemble selection is a promising method for the development of MCSs.
- Published
- 2016
- Full Text
- View/download PDF
23. Different decision tree induction strategies for a medical decision problem
- Author
-
Michal Wozniak and Robert Burduk
- Subjects
Incremental decision tree, Computer science, Decision tree learning, Decision tree, Evidential reasoning approach, ID3 algorithm, General Medicine, Machine learning, acute abdominal pain, multistage classifier, univariate and multivariate decision trees, Alternating decision tree, Medicine, Bayes decision theory, Artificial intelligence, Decision stump, medical decision support systems, Decision analysis - Abstract
The paper presents a comparative study of selected recognition methods for a medical decision problem: acute abdominal pain diagnosis. We consider whether it is worth using expert knowledge and a learning set at the same time. The article shows two groups of decision tree approaches to the problem under consideration. The first does not use expert knowledge and generates the classifier only on the basis of the learning set. The second approach utilizes expert knowledge to specify the decision tree structure and the learning set to determine the mode of decision making in each node based on Bayes decision theory. All classifiers are evaluated on the basis of computer experiments.
- Published
- 2012
24. Imprecise information in Bayes classifier
- Author
-
Robert Burduk
- Subjects
Bayes' rule ,Mathematics::General Mathematics ,business.industry ,Fuzzy set ,Pattern recognition ,Bayes classifier ,Fuzzy logic ,Naive Bayes classifier ,Bayes' theorem ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Bayes error rate ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Classifier (UML) ,Mathematics - Abstract
The paper considers the problem of classification error in pattern recognition. This model of classification is based primarily on the Bayes rule and secondarily on the notion of intuitionistic or interval-valued fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic or interval-valued fuzzy information on object features instead of exact information. The probability of an intuitionistic or interval-valued fuzzy event is represented by a real number. Additionally, the received results are compared with the bound on the probability of error based on information energy. A numerical example concludes the work.
- Published
- 2011
- Full Text
- View/download PDF
25. Classification error in Bayes multistage recognition task with fuzzy observations
- Author
-
Robert Burduk
- Subjects
business.industry ,Pattern recognition ,Bayes classifier ,Machine learning ,computer.software_genre ,Fuzzy logic ,Hierarchical classifier ,Tree (data structure) ,Bayes' theorem ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Pattern recognition (psychology) ,Bayes error rate ,Fuzzy number ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
The paper considers the problem of classification error in multistage pattern recognition. This model of classification is based primarily on the Bayes rule and secondarily on the notion of fuzzy numbers. In adopting a probability-fuzzy model, two concepts of hierarchical rules are proposed. The first approach considers a local criterion that denotes the probabilities of misclassification for particular nodes of the tree. The second approach considers a globally optimal strategy that minimises the mean probability of misclassification over the whole multistage recognition process. The probability of misclassification is derived for a multiclass hierarchical classifier under the assumption that the features at different nodes of the tree are class-conditionally statistically independent and that we have fuzzy information on object features instead of exact information. A numerical example illustrating this difference concludes the work.
- Published
- 2009
- Full Text
- View/download PDF
26. Method of Static Classifiers Selection Using the Weights of Base Classifiers
- Author
-
Robert Burduk
- Subjects
Basis (linear algebra) ,business.industry ,Computer science ,Interval temporal logic ,Context (language use) ,Pattern recognition ,Base (topology) ,Machine learning ,computer.software_genre ,Oracle ,Random subspace method ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) ,Cascading classifiers - Abstract
The choice of a pertinent objective function is one of the most crucial elements in static ensemble selection. In this study, a new approach to calculating the weights of base classifiers is developed. The values of these weights are the basis for the process of selecting classifiers from the initial pool. The obtained weights are interpreted in the context of interval logic. A number of experiments have been carried out on several datasets available in the UCI repository. The performed experiments compare the proposed algorithms with base classifiers and with the oracle, sum, product, and mean methods.
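A weight-based static selection can be sketched in a few lines. Here the weights are validation accuracies and the cut-off is the pool's mean weight; both choices, and all names, are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def static_selection(val_preds, y_val):
    """Static-selection sketch: weight each base classifier by its
    accuracy on a validation set and keep every classifier whose
    weight reaches the pool's mean weight.

    val_preds: (n_classifiers, n_samples) validation predictions.
    Returns (indices of kept classifiers, all weights).
    """
    weights = (val_preds == y_val).mean(axis=1)   # per-classifier accuracy
    keep = np.flatnonzero(weights >= weights.mean())
    return keep, weights
```

The selected subset would then be combined with any fusion rule (sum, product, mean) mentioned in the abstract.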
- Published
- 2015
- Full Text
- View/download PDF
27. Static Classifier Selection with Interval Weights of Base Classifiers
- Author
-
Robert Burduk and Krzysztof Walkowiak
- Subjects
Random subspace method ,ComputingMethodologies_PATTERNRECOGNITION ,Classifier fusion ,Computer science ,business.industry ,Interval temporal logic ,Pattern recognition ,Artificial intelligence ,Machine learning ,computer.software_genre ,business ,computer ,Classifier (UML) - Abstract
The selection of classifiers is one of the important problems in the creation of an ensemble of classifiers. The paper presents static selection in which a new method of calculating the weights of individual classifiers is used. The obtained weights can be interpreted in the context of interval logic: particular weights are not given precisely, but by their lower and upper values. A number of experiments have been carried out on several data sets from the UCI repository.
- Published
- 2015
- Full Text
- View/download PDF
28. Two-stage binary classifier with fuzzy-valued loss function
- Author
-
Robert Burduk and Marek Kurzynski
- Subjects
Fuzzy classification ,business.industry ,Pattern recognition ,Type-2 fuzzy sets and systems ,Fuzzy logic ,Defuzzification ,Artificial Intelligence ,Fuzzy mathematics ,Fuzzy number ,Fuzzy set operations ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Membership function ,Mathematics - Abstract
In this paper we present the decision rules of a two-stage binary Bayesian classifier. The loss function in our case is fuzzy-valued and depends on the stage or the node of the decision tree. The decision rules minimize the mean risk, i.e., the mean value of the fuzzy loss function. The model is based first on the notion of a fuzzy random variable and secondly on the subjective ranking of fuzzy numbers defined by Campos and Gonzalez. The influence of the choice of the parameter λ in the selected fuzzy-number comparison method on the classification results is also presented. Finally, an example illustrating the study developed in the paper is considered.
- Published
- 2006
- Full Text
- View/download PDF
29. The AdaBoost Algorithm with the Imprecision Determine the Weights of the Observations
- Author
-
Robert Burduk
- Subjects
Boosting (machine learning) ,Computer science ,business.industry ,Learning set ,Recursive partitioning ,Pattern recognition ,Artificial intelligence ,business ,Adaboost algorithm ,Classifier (UML) ,Statistical hypothesis testing - Abstract
This paper presents the AdaBoost algorithm that accounts for imprecision in the calculation of weights. In our approach the obtained values of the weights are changed within a certain range of values. This range represents the uncertainty of the calculation of the weight of each element of the learning set. In our study we use boosting by the reweighting method, where each weak classifier is based on the recursive partitioning method. A number of experiments have been carried out on eight data sets available in the UCI repository and on two randomly generated data sets. The obtained results are compared with the original AdaBoost algorithm using appropriate statistical tests.
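The reweighting-with-imprecision idea can be sketched as standard AdaBoost whose sample weights are jittered inside an interval after each update. Decision stumps stand in here for the recursive-partitioning weak learners; the interval half-width `eps` and the uniform draw inside the interval are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def adaboost_imprecise(X, y, n_rounds=10, eps=0.05, rng=None):
    """AdaBoost with decision stumps where, after each reweighting step,
    every sample weight may vary inside [w*(1-eps), w*(1+eps)] (drawn
    uniformly here) to model imprecision. y must be in {-1, +1}.
    Returns a list of (feature, threshold, sign, alpha) stumps."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= t, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign, pred)
        err, j, t, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)       # classic reweighting
        w = w * rng.uniform(1 - eps, 1 + eps, size=n)  # imprecision step
        w = w / w.sum()
        model.append((j, t, sign, alpha))
    return model

def predict(model, X):
    """Sign of the alpha-weighted vote of all stumps."""
    agg = np.zeros(len(X))
    for j, t, sign, alpha in model:
        agg += alpha * np.where(X[:, j] <= t, sign, -sign)
    return np.sign(agg)
```

Setting `eps=0` recovers the original AdaBoost reweighting, which is the baseline the abstract compares against.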
- Published
- 2014
- Full Text
- View/download PDF
30. Proceedings of the 8th International Conference on Computer Recognition Systems CORES 2013
- Author
-
Konrad Jackowski, Robert Burduk, and Marek Kurzynski
- Subjects
Biometrics ,Sketch recognition ,business.industry ,Computer science ,Big data ,Pattern recognition (psychology) ,Image processing ,Robotics ,Artificial intelligence ,business ,Data science ,Field (computer science) ,Word (computer architecture) - Abstract
Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: biometrics; data stream classification and big data analytics; features, learning, and classifiers; image processing and computer vision; medical applications; miscellaneous applications; pattern recognition and image processing in robotics; speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.
- Published
- 2013
- Full Text
- View/download PDF
31. The Method of Improving the Structure of the Decision Tree Given by the Experts
- Author
-
Robert Burduk
- Subjects
Incremental decision tree ,Computer science ,business.industry ,Decision tree learning ,ID3 algorithm ,Decision tree ,computer.software_genre ,Machine learning ,Alternating decision tree ,Influence diagram ,Decision stump ,Artificial intelligence ,Data mining ,business ,computer ,Decision tree model - Abstract
This paper presents the problem of sequential decision making in the pattern recognition task. This task can be presented using a decision tree. In this case, it is assumed that the structure of the decision tree is determined by experts. A classification decision is made in each node of the tree. This paper proposes a way to change the structure of the decision tree in order to improve the quality of classification. The split criterion is based on the confusion matrix. The obtained results were verified on an example of computer-aided medical diagnosis.
- Published
- 2013
- Full Text
- View/download PDF
32. Construction of Sequential Classifier Based on Broken Stick Model
- Author
-
Robert Burduk and Pawel Trajdos
- Subjects
Data set ,Computer science ,business.industry ,Confusion matrix ,Pattern recognition ,Artificial intelligence ,Sequential model ,Machine learning ,computer.software_genre ,business ,computer ,Classifier (UML) - Abstract
This paper presents the problem of building a sequential model of the classification task. In our approach the structure of the model is built in the learning phase of classification. A split criterion based on the broken stick model is proposed: the broken stick distribution is created for each column of the confusion matrix, and the split criterion is associated with the analysis of the received distributions. The obtained results were verified on ten data sets: nine come from the UCI repository and one is a real-life data set.
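The broken stick expectation itself is a standard formula; how it is turned into a split criterion below is only a guess in the spirit of the abstract (compare observed confusion-matrix proportions against the broken-stick baseline), with all names hypothetical.

```python
import numpy as np

def broken_stick(n):
    """Expected ordered fragment sizes when a stick of length 1 is
    broken at random into n pieces: E_k = (1/n) * sum_{i=k}^{n} 1/i."""
    return np.array([sum(1.0 / i for i in range(k, n + 1)) / n
                     for k in range(1, n + 1)])

def classes_to_split(confusion_column):
    """Sketch of a split criterion: sort one column of the confusion
    matrix, and keep the classes whose observed proportion exceeds the
    broken-stick expectation for their rank."""
    p = confusion_column / confusion_column.sum()
    order = np.argsort(p)[::-1]             # classes by descending share
    expected = broken_stick(len(p))
    return [int(c) for rank, c in enumerate(order)
            if p[c] > expected[rank]]
```

The broken-stick values always sum to 1, so they form a natural null model for "no class dominates this column".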
- Published
- 2013
- Full Text
- View/download PDF
33. Construction of Sequential Classifier Using Confusion Matrix
- Author
-
Robert Burduk, Pawel Trajdos, Department of Systems and Computer Networks, Wroclaw University of Science and Technology, Khalid Saeed, Rituparna Chaki, Agostino Cortesi, Sławomir Wierzchoń, and TC 8
- Subjects
Incremental decision tree ,business.industry ,Computer science ,[SHS.INFO]Humanities and Social Sciences/Library and information sciences ,Decision tree learning ,Decision tree ,Confusion matrix ,sequential classifier ,Multistage classifier ,computer.software_genre ,Machine learning ,Decision scheme ,Data set ,confusion matrix ,[INFO]Computer Science [cs] ,Data mining ,Artificial intelligence ,Medical diagnosis ,business ,computer ,Classifier (UML) - Abstract
Part 7: Algorithms. This paper presents the problem of building the decision scheme in the multistage pattern recognition task. This task can be presented using a decision tree, which is built in the learning phase of classification. This paper proposes a split criterion based on the analysis of the confusion matrix. Specifically, we propose a division associated with incorrect classification. The obtained results were verified on data sets from the UCI Machine Learning Repository and on one real-life data set from computer-aided medical diagnosis.
- Published
- 2013
- Full Text
- View/download PDF
34. Comparison of Cost for Zero-One and Stage-Dependent Fuzzy Loss Function
- Author
-
Robert Burduk
- Subjects
Computer science ,Total cost ,business.industry ,Pattern recognition ,Function (mathematics) ,Fuzzy logic ,Bayes' theorem ,Tree structure ,Binary classification ,Hinge loss ,Node (circuits) ,Artificial intelligence ,business ,Algorithm - Abstract
In the paper we consider the two-stage binary classifier based on the Bayes rule. Assuming that both the tree structure and the feature used at each non-terminal node have been specified, we present the expected total cost. This cost is considered for two types of loss function: the zero-one loss function and the node-dependent fuzzy loss function. The work focuses on the difference between the expected total costs for these two cases of loss function in the two-stage binary classifier. The obtained results are presented in a numerical example.
- Published
- 2012
- Full Text
- View/download PDF
35. Recognition Task with Feature Selection and Weighted Majority Voting Based on Interval-Valued Fuzzy Sets
- Author
-
Robert Burduk
- Subjects
Majority rule ,Randomized weighted majority algorithm ,Weighted Majority Algorithm ,business.industry ,Computer science ,Fuzzy set ,Pattern recognition ,Feature selection ,Machine learning ,computer.software_genre ,Task (project management) ,Data set ,Artificial intelligence ,business ,computer ,Membership function - Abstract
This paper presents a recognition algorithm with random selection of features. In the proposed classification procedure, the choice of weights is one of the main problems. We propose a weighted majority vote rule in which the weights are represented by an interval-valued fuzzy set (IVFS), so each weight has a lower and an upper membership function. The described algorithm was tested on one data set from the UCI repository. The obtained results are compared with the most popular majority vote and with the weighted majority vote rule.
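A weighted vote with interval weights can be sketched as below. Ranking the aggregated intervals by their midpoint is an assumption made for illustration; the paper's aggregation and comparison rules may differ.

```python
import numpy as np

def interval_weighted_vote(votes, w_lo, w_hi):
    """Weighted majority vote where each base classifier's weight is an
    interval [w_lo, w_hi] (lower/upper membership in IVFS terms).

    votes: (n_classifiers,) predicted class labels for one sample.
    Each class accumulates the interval sum of its voters' weights;
    classes are ranked by the midpoint of the aggregated interval.
    """
    classes = np.unique(votes)
    best, best_mid = None, -np.inf
    for c in classes:
        mask = votes == c
        lo, hi = w_lo[mask].sum(), w_hi[mask].sum()
        mid = 0.5 * (lo + hi)
        if mid > best_mid:
            best, best_mid = int(c), mid
    return best
```

Note that the interval weights can overturn a plain majority: a class with fewer but more heavily weighted voters can win.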
- Published
- 2012
- Full Text
- View/download PDF
36. Decomposition of Classification Task with Selection of Classifiers on the Medical Diagnosis Example
- Author
-
Robert Burduk and Marcin Zmyślony
- Subjects
Incremental decision tree ,business.industry ,Computer science ,Decision tree ,Pattern recognition ,Machine learning ,computer.software_genre ,Hierarchical classifier ,Task (project management) ,Random subspace method ,Decomposition (computer science) ,Artificial intelligence ,Medical diagnosis ,business ,computer ,Selection (genetic algorithm) - Abstract
The article presents the concept of decomposition of the multidimensional classification task. The recognition procedure is divided into independent blocks, which can be interpreted as simpler classification problems. The structure of these blocks is presented as a decision tree whose structure is given by experts. The problem discussed in the work concerns the assignment of different classifiers (or their parameters) to the internal nodes of the decision tree. Experiments conducted on a selected medical diagnosis problem show that the use of different classifiers can improve the quality of classification.
- Published
- 2012
- Full Text
- View/download PDF
37. New AdaBoost Algorithm Based on Interval-Valued Fuzzy Sets
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,Boosting (machine learning) ,business.industry ,Fuzzy set ,Recursive partitioning ,Pattern recognition ,Defuzzification ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy number ,Artificial intelligence ,business ,Classifier (UML) ,Membership function ,Mathematics - Abstract
This paper presents a new extension of the AdaBoost algorithm based on interval-valued fuzzy sets. The extension concerns the weights assigned to samples of the training set. The original weights are real numbers from the interval [0, 1]. In our approach the weights are represented by an interval-valued fuzzy set, that is, each weight has a lower and an upper membership function. The weight of the appropriate weak classifier has the same value of the lower and upper membership function. In our study we use boosting by the reweighting method, where each weak classifier is based on the recursive partitioning method. The described algorithm was tested on two generated data sets and two sets from the UCI repository. The obtained results are compared with the original AdaBoost algorithm.
- Published
- 2012
- Full Text
- View/download PDF
38. Estimations of the Error in Bayes Classifier with Fuzzy Observations
- Author
-
Robert Burduk
- Subjects
Computer science ,business.industry ,Pattern recognition ,Function (mathematics) ,Bayes classifier ,Fuzzy logic ,Bayes' theorem ,Naive Bayes classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Pattern recognition (psychology) ,Bayes error rate ,Artificial intelligence ,business ,Energy (signal processing) - Abstract
The paper presents the problem of error estimation in the Bayes classifier. A model of pattern recognition with fuzzy or exact observations of features and the zero-one loss function is assumed. For this model, the difference between the probability of error for exact and for fuzzy data is demonstrated. The obtained results are compared with the bound on the probability of error based on information energy for fuzzy events. The paper shows that this bound is very inaccurate.
- Published
- 2011
- Full Text
- View/download PDF
39. Costs-Sensitive Classification in Two-Stage Binary Classifier
- Author
-
Robert Burduk and Andrzej Kasprzak
- Subjects
Bayes' theorem ,Tree structure ,Binary classification ,Total cost ,business.industry ,Pattern recognition ,Artificial intelligence ,Bayes classifier ,business ,Classifier (UML) ,Mathematics - Abstract
In the paper the problem of cost in the two-stage binary classifier is presented. Assuming that both the tree structure and the feature used at each non-terminal node have been specified, we present the expected total cost for two cases: the zero-one loss function and the stage-dependent loss function. The work focuses on the difference between the expected total costs for these two cases of loss function. The obtained results relate to the globally optimal strategy of the Bayes multistage classifier.
- Published
- 2011
- Full Text
- View/download PDF
40. Costs-Sensitive Classification in Multistage Classifier with Fuzzy Observations of Object Features
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,Neuro-fuzzy ,business.industry ,Computer science ,Pattern recognition ,computer.software_genre ,Fuzzy logic ,Defuzzification ,Hierarchical classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy set operations ,Fuzzy number ,Artificial intelligence ,Data mining ,business ,Classifier (UML) ,computer - Abstract
In the paper the problem of cost in the hierarchical classifier is presented. Assuming that both the tree structure and the feature used at each non-terminal node have been specified, we present the expected total cost for two cases: the first concerns non-fuzzy observations of object features, the second concerns fuzzy observations. At the end of the work, the difference between the expected total costs for fuzzy and non-fuzzy data is determined. The obtained results relate to the locally optimal strategy of the Bayes multistage classifier.
- Published
- 2011
- Full Text
- View/download PDF
41. Some Properties of Binary Classifier with Fuzzy-Valued Loss Function
- Author
-
Robert Burduk
- Subjects
Binary classification ,Ranking ,Computer science ,business.industry ,Decision tree ,Fuzzy number ,Pattern recognition ,Node (circuits) ,Decision rule ,Artificial intelligence ,Function (mathematics) ,business ,Fuzzy logic - Abstract
In this paper we present some properties of a binary classifier with a fuzzy-valued loss function. The loss function in our case depends on the stage or the node of the decision tree. The decision rules of the two-stage binary classifier minimize the mean risk, that is, the mean value of the fuzzy loss function. The paper presents the effect of the loss function on the value of the separation point of the decision regions. We do not study the impact of the choice of the fuzzy-number ranking method on the classification results.
- Published
- 2011
- Full Text
- View/download PDF
42. Exact classification error in bayes classifier with fuzzy observations
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,business.industry ,Fuzzy set ,Pattern recognition ,Bayes classifier ,Machine learning ,computer.software_genre ,Fuzzy logic ,Bayes' theorem ,Naive Bayes classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Bayes error rate ,Fuzzy number ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
The paper considers the problem of classification error in pattern recognition. This model of classification is based primarily on the Bayes rule and secondarily on the notion of fuzzy numbers. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have fuzzy information on object features instead of exact information. A numerical example illustrating the resulting difference in error concludes the work.
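For a discrete feature, the exact-versus-fuzzy error comparison studied in this line of work can be reproduced in a few lines using Zadeh's probability of a fuzzy event. The discrete setting, the specific fuzzy partition, and the numbers below are illustrative assumptions, not the paper's example.

```python
import numpy as np

def bayes_error_exact(priors, cond):
    """Bayes error with exact discrete observations:
    Pe = 1 - sum_x max_k p(k) p(x|k); cond has shape (n_classes, n_x)."""
    joint = priors[:, None] * cond
    return 1.0 - joint.max(axis=0).sum()

def bayes_error_fuzzy(priors, cond, memberships):
    """Bayes error when only fuzzy events A are observed, with Zadeh's
    probability of a fuzzy event: P(A|k) = sum_x mu_A(x) p(x|k).
    memberships: (n_events, n_x); columns sum to 1 (a fuzzy partition)."""
    p_event = cond @ memberships.T          # (n_classes, n_events)
    joint = priors[:, None] * p_event
    return 1.0 - joint.max(axis=0).sum()

priors = np.array([0.5, 0.5])
cond = np.array([[0.4, 0.3, 0.2, 0.1],
                 [0.1, 0.2, 0.3, 0.4]])
# two fuzzy events blur the middle feature value between them
mu = np.array([[1.0, 0.5, 0.0, 0.0],
               [0.0, 0.5, 1.0, 1.0]])
# exact observations: Pe = 0.3; fuzzy observations: Pe = 0.325
```

The fuzzy error can never be smaller than the exact error here, since the fuzzy events only coarsen the information available to the Bayes rule.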
- Published
- 2010
- Full Text
- View/download PDF
43. Some characteristics of an error in the two-class problem with fuzzy observations
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,business.industry ,Fuzzy set ,Pattern recognition ,Bayes classifier ,Type-2 fuzzy sets and systems ,Fuzzy logic ,ComputingMethodologies_PATTERNRECOGNITION ,Bayes error rate ,Fuzzy set operations ,Fuzzy number ,Artificial intelligence ,business ,Mathematics - Abstract
In this work we consider the situation with exact classes and fuzzy information on object features. The classification error is presented for the two-class Bayes classifier. The results are obtained for full probabilistic information. The new upper bound on the probability of error is twice as precise as the bound based on the information energy of fuzzy events.
- Published
- 2010
- Full Text
- View/download PDF
44. A Partition of Feature Space Based on Information Energy in Classification with Fuzzy Observations
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,business.industry ,Feature vector ,Partition problem ,Pattern recognition ,computer.software_genre ,Fuzzy logic ,Partition (database) ,Bayes' theorem ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy number ,Fuzzy set operations ,Artificial intelligence ,Data mining ,business ,computer ,Mathematics - Abstract
The paper considers the problem of partitioning the feature space in classification. The partition is based on information energy for fuzzy events. We use the Bayes rule for classification with fuzzy observations and exact classes. Additionally, the probability of misclassification is derived for fuzzy information on object features. The results show a deterioration in the quality of classification when we use fuzzy information on object features instead of exact information, and are compared with the partition of the feature space. A numerical example concludes the work.
- Published
- 2010
- Full Text
- View/download PDF
45. Probability Error in Bayes Optimal Classifier with Intuitionistic Fuzzy Observations
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,Mathematics::General Mathematics ,business.industry ,Pattern recognition ,Bayes classifier ,Bayes' theorem ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy number ,Fuzzy set operations ,Bayes error rate ,Artificial intelligence ,business ,Classifier (UML) ,Real number ,Mathematics - Abstract
The paper considers the problem of classification error in pattern recognition. This model of classification is based primarily on the Bayes rule and secondarily on the notion of intuitionistic fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic fuzzy information on object features instead of exact information. Additionally, the probability of an intuitionistic fuzzy event is represented by a real number. A numerical example concludes the work.
- Published
- 2009
- Full Text
- View/download PDF
46. Interval-Valued Fuzzy Observations in Bayes Classifier
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,business.industry ,Pattern recognition ,Bayes classifier ,Bayesian inference ,Fuzzy logic ,Bayes' theorem ,Naive Bayes classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy number ,Bayes error rate ,Artificial intelligence ,business ,Mathematics - Abstract
The paper considers the problem of pattern recognition based on the Bayes rule. In this model of classification we use interval-valued fuzzy observations. The paper focuses on the probability of error under certain assumptions. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have interval-valued fuzzy information on object features instead of exact information. Additionally, the probability of an interval-valued fuzzy event is represented by real numbers as upper and lower probabilities. A numerical example concludes the work.
- Published
- 2009
- Full Text
- View/download PDF
47. Probability Error in Global Optimal Hierarchical Classifier with Intuitionistic Fuzzy Observations
- Author
-
Robert Burduk
- Subjects
Mathematics::General Mathematics ,Computer science ,business.industry ,Intuitionistic fuzzy ,Pattern recognition ,Bayes classifier ,Hierarchical classifier ,Global optimal ,Naive Bayes classifier ,Bayes' theorem ,ComputingMethodologies_PATTERNRECOGNITION ,Bayes error rate ,Artificial intelligence ,business ,Classifier (UML) - Abstract
The paper considers the problem of classification error in pattern recognition. This model of classification is based primarily on the Bayes rule and secondarily on the notion of intuitionistic fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic fuzzy information on object features instead of exact information. Additionally, we consider the globally optimal hierarchical classifier.
- Published
- 2009
- Full Text
- View/download PDF
48. Intuitionistic Fuzzy Observations in Local Optimal Hierarchical Classifier
- Author
-
Robert Burduk
- Subjects
business.industry ,Computer science ,Mean value ,Bayesian probability ,Intuitionistic fuzzy ,Pattern recognition ,Fuzzy event ,Decision rule ,Hierarchical classifier ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial intelligence ,business ,Classifier (UML) ,Statistic - Abstract
The paper deals with the multistage recognition task, in which Bayesian statistics are applied. This model of classification is based on the notion of intuitionistic fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic fuzzy information on object features instead of exact information. The decision rules minimize the mean risk, that is, the mean value of the zero-one loss function. Additionally, we consider the locally optimal hierarchical classifier.
- Published
- 2009
- Full Text
- View/download PDF
49. Selection of Fuzzy-Valued Loss Function in Two Stage Binary Classifier
- Author
-
Robert Burduk
- Subjects
Fuzzy classification ,business.industry ,Bayesian probability ,Pattern recognition ,Fuzzy logic ,Hierarchical classifier ,Tree (data structure) ,ComputingMethodologies_PATTERNRECOGNITION ,Binary classification ,Fuzzy number ,Artificial intelligence ,business ,Selection (genetic algorithm) ,Mathematics - Abstract
In this paper, a model of the Bayesian hierarchical classifier, in which the consequences of decisions are fuzzy-valued, is introduced. The model is based on the notion of a fuzzy random variable and also on a subjective ranking method for fuzzy numbers defined by Campos and Gonzalez. The Bayesian hierarchical classifier is based on a decision-tree scheme for a given tree skeleton and the features to be used in each internal node. The influence of the selection of the fuzzy-valued loss function on the classification result is given. Finally, an example illustrating this case of Bayesian analysis is considered.
- Published
- 2008
- Full Text
- View/download PDF
50. Possibility of Use a Fuzzy Loss Function in Medical Diagnostics
- Author
-
Robert Burduk
- Subjects
Medical diagnostic ,business.industry ,Computer science ,Decision tree ,Pattern recognition ,computer.software_genre ,Fuzzy logic ,ComputingMethodologies_PATTERNRECOGNITION ,Fuzzy number ,Artificial intelligence ,Data mining ,business ,Classifier (UML) ,computer - Abstract
An application of a two-stage classifier to the prognosis of sacroiliitis is presented in the paper. The method of classification is based on a decision tree scheme, and a k-nearest neighbors classifier is applied in the pattern recognition task. In this model of classification a fuzzy loss function is used. The efficiency of this algorithm is compared with the algorithm based on the zero-one loss function. The influence of the choice of the parameter λ in the selected fuzzy-number comparison method on the classification results is also presented.
- Published
- 2008
- Full Text
- View/download PDF