89 results for "Duoqian Miao"
Search Results
2. An improved decision tree algorithm based on variable precision neighborhood similarity
- Author
- Caihui Liu, Bowen Lin, Jianying Lai, and Duoqian Miao
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2022
- Full Text
- View/download PDF
3. Selective label enhancement for multi-label classification based on three-way decisions
- Author
- Tianna Zhao, Yuanjian Zhang, Duoqian Miao, and Witold Pedrycz
- Subjects
- Artificial Intelligence, Applied Mathematics, Software, Theoretical Computer Science
- Published
- 2022
- Full Text
- View/download PDF
4. Generalized multigranulation sequential three-way decision models for hierarchical classification
- Author
- Jin Qian, Chengxin Hong, Ying Yu, Caihui Liu, and Duoqian Miao
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2022
- Full Text
- View/download PDF
5. Control Distance IoU and Control Distance IoU Loss for Better Bounding Box Regression
- Author
- Chen Dong and Duoqian Miao
- Published
- 2023
- Full Text
- View/download PDF
6. Combining attention mechanism and Retinex model to enhance low-light images
- Author
- Yong Wang, Jin Chen, Yujuan Han, and Duoqian Miao
- Subjects
- Human-Computer Interaction, General Engineering, Computer Graphics and Computer-Aided Design
- Published
- 2022
- Full Text
- View/download PDF
7. Granular-conditional-entropy-based attribute reduction for partially labeled data with proxy labels
- Author
- Duoqian Miao, Can Gao, Xiaodong Yue, Jie Zhou, and Jun Wan
- Subjects
- Conditional entropy, Reduct, Information Systems and Management, Computer science, Pattern recognition, Monotonic function, Computer Science Applications, Theoretical Computer Science, Reduction (complexity), Artificial Intelligence, Control and Systems Engineering, Granularity, Rough set, Software
- Abstract
Attribute reduction is attracting considerable attention in the theory of rough sets, and thus many rough-set-based attribute reduction methods have been presented. However, most of them are specifically designed for either labeled or unlabeled data, whereas many real-world applications involve partial supervision. In this paper, we propose a rough-set-based semi-supervised attribute reduction method for partially labeled data. Specifically, using prior class-distribution information, we first develop a simple yet effective strategy to produce proxy labels for unlabeled data. Then, the concept of information granularity is integrated into an information-theoretic measure, based on which a novel granular conditional entropy measure is proposed, and its monotonicity is theoretically proved. Furthermore, a fast heuristic algorithm is provided to generate the optimal reduct of partially labeled data, which accelerates the process of attribute reduction by removing irrelevant examples and simultaneously excluding redundant attributes. Extensive experiments conducted on UCI data sets demonstrate that the proposed semi-supervised attribute reduction method is promising and, in terms of classification performance, it even compares favorably with supervised methods on labeled and unlabeled data with true labels (Our code and experimental data are released at Mendeley Data https://doi.org/10.17632/v3byhx2v8s.1). A hedged code sketch of the proxy-label and entropy-based reduction ideas follows this entry.
- Published
- 2021
- Full Text
- View/download PDF
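The abstract above does not spell out the granular conditional entropy formula, so the sketch below only illustrates the surrounding pipeline: proxy labels drawn from a prior class distribution, plus a plain conditional-entropy greedy reduct. All function names are hypothetical, and the entropy used here is the ordinary conditional entropy, not the paper's granular variant.

```python
import numpy as np

def proxy_labels(unlabeled_size, class_prior, rng=None):
    """Assign proxy labels to unlabeled rows by sampling the prior
    class distribution (one plausible reading of the abstract)."""
    rng = rng or np.random.default_rng(0)
    classes = list(class_prior)
    probs = [class_prior[c] for c in classes]
    return rng.choice(classes, size=unlabeled_size, p=probs)

def conditional_entropy(X, y, attrs):
    """H(D | B): entropy of labels y within each equivalence class
    induced by the attribute subset `attrs`."""
    blocks = {}
    for row, label in zip(X[:, attrs], y):
        blocks.setdefault(tuple(row), []).append(label)
    n, h = len(y), 0.0
    for block in blocks.values():
        p_block = len(block) / n
        _, counts = np.unique(block, return_counts=True)
        p = counts / counts.sum()
        h -= p_block * np.sum(p * np.log2(p))
    return h

def greedy_reduct(X, y):
    """Forward greedy selection: add the attribute that lowers H(D|B) most."""
    remaining, chosen = set(range(X.shape[1])), []
    target = conditional_entropy(X, y, list(range(X.shape[1])))
    while remaining:
        best = min(remaining, key=lambda a: conditional_entropy(X, y, chosen + [a]))
        chosen.append(best)
        remaining.discard(best)
        if conditional_entropy(X, y, chosen) <= target:
            break
    return chosen
```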
8. Multi-Granularity Cross Transformer Network for Person Re-Identification
- Author
- Yanping Li, Duoqian Miao, Hongyun Zhang, Jie Zhou, and Cairong Zhao
- Published
- 2023
- Full Text
- View/download PDF
9. Class-specific information measures and attribute reducts for hierarchy and systematicness
- Author
- Xianyong Zhang, Zhiying Lv, Duoqian Miao, and Hong Yao
- Subjects
- Information Systems and Management, Theoretical computer science, Computer science, Monotonic function, Information theory, Artificial Intelligence, Entropy (information theory), Conditional entropy, Hierarchy (mathematics), Specific information, Information processing, Mutual information, Knowledge acquisition, Computer Science Applications, Control and Systems Engineering, Rough set, Isomorphism, Decision table, Software, Theoretical Computer Science
- Abstract
Attribute reduction of rough set theory underlies knowledge acquisition and has two hierarchical types (classification-based and class-specific attribute reducts) and two perspectives from algebra and information theory; thus, there are four combined modes in total. Informational class-specific reducts are fundamental but have been lacking, and they are investigated here by correspondingly constructing class-specific information measures. First, three types of information measures (i.e., information entropy, conditional entropy, and mutual information) are newly established at the class level by hierarchical decomposition to acquire their hierarchical connection, systematic relationship, uncertainty semantics, and granulation monotonicity. Second, three types of informational class-specific reducts are correspondingly proposed to acquire their internal relationship, basic properties, and heuristic algorithm. Third, the informational class-specific reducts achieve their transverse connections, including the strength feature and consistency degeneration, with the algebraic class-specific reducts, and their hierarchical connections, including the hierarchical strength and balance, with the informational classification-based reducts. Finally, relevant information measures and attribute reducts are effectively verified by decision tables and data experiments. Class-specific information measures deepen existing classification-based information measures by a hierarchical isomorphism, while the informational class-specific reducts systematically complete attribute reduction via level and viewpoint isomorphisms; these results facilitate uncertainty measurement and information processing, especially at the class level.
- Published
- 2021
- Full Text
- View/download PDF
10. Variable-precision three-way concepts in L-contexts
- Author
- Duoqian Miao, Hamido Fujita, and Xue Rong Zhao
- Subjects
- Theoretical computer science, Computer science, Applied Mathematics, Fuzzy logic, Complete lattice, Artificial Intelligence, Lattice (order), Fuzzy concept, Positive and negative parts, Variable precision, Software, Theoretical Computer Science
- Abstract
The notion of fuzzy concept is proposed to deal with object-attribute data with L-values (where L is a truth-value structure). One disadvantage of fuzzy concepts is that a fuzzy context contains a considerable number of them. This makes it very time-consuming to generate a fuzzy concept lattice, and it is very difficult to find important concepts. In addition, the fuzzy concept is overly strict when applied to crisp sets. To overcome these problems, we propose several new kinds of variable-precision concepts within L-contexts in this paper. First, we present two kinds of variable-precision two-way (VP2W) concepts: the α-positive concept and the β-negative concept. The family of each kind of VP2W concept forms a complete lattice. Next, considering both the positive and negative parts, we investigate two kinds of variable-precision three-way (VP3W) concepts: the (α, β)-object-induced three-way concept and the (α, β)-attribute-induced three-way concept. The family of each kind of VP3W concept forms a complete lattice. Then, we study the relationships between VP2W concepts and VP3W concepts. The results show that VP3W concept lattices can be directly generated from VP2W concept lattices. Finally, experiments are performed to verify the effectiveness of our model. A hedged sketch of one possible reading of the α-positive derivation follows this entry.
- Published
- 2021
- Full Text
- View/download PDF
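The sketch below shows one plausible reading of the α-positive derivation operators over an L-context with truth values in [0, 1]; the paper's exact definitions may differ, and the function names are invented for illustration.

```python
import numpy as np

def alpha_positive_intent(I, X, alpha):
    """Attributes shared, to degree at least alpha, by every object in X.
    I is an objects-by-attributes matrix with truth values in [0, 1]."""
    return {m for m in range(I.shape[1]) if all(I[g, m] >= alpha for g in X)}

def alpha_positive_extent(I, B, alpha):
    """Objects possessing every attribute in B to degree at least alpha."""
    return {g for g in range(I.shape[0]) if all(I[g, m] >= alpha for m in B)}

# A pair (X, B) with alpha_positive_intent(I, X) == B and
# alpha_positive_extent(I, B) == X would be an alpha-positive concept
# under this (assumed) reading of the definition.
I = np.array([[0.9, 0.2, 0.8],
              [0.7, 0.1, 0.9]])
B = alpha_positive_intent(I, {0, 1}, alpha=0.6)   # -> {0, 2}
X = alpha_positive_extent(I, B, alpha=0.6)        # -> {0, 1}
```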
11. Three-way decision with co-training for partially labeled data
- Author
- Can Gao, Jie Zhou, Duoqian Miao, Xiaodong Yue, and Jiajun Wen
- Subjects
- Information Systems and Management, Computer science, Machine learning, Theoretical Computer Science, Artificial Intelligence, Co-training, Training set, Uncertain data, Computer Science Applications, Data set, Control and Systems Engineering, Labeled data, Classifier, Software
- Abstract
The theory of three-way decision plays an important role in decision making and knowledge reasoning. However, little attention has been paid to the problem of learning from partially labeled data with three-way decision. In this paper, we propose a three-way co-decision model for partially labeled data. More specifically, the problem of attribute reduction for partially labeled data is first investigated, and two semi-supervised attribute reduction algorithms based on a novel confidence discernibility matrix are proposed. Then, a three-way co-decision model is introduced to classify unlabeled data into useful, useless, and uncertain data, and the model is iteratively retrained on the carefully selected useful data to improve its performance. Moreover, we theoretically analyze the effectiveness of the proposed model. The experimental results conducted on UCI data sets demonstrate that the proposed model is promising, and even compares favorably with a single supervised classifier trained on all training data with true labels. An illustrative sketch of the three-way co-decision loop follows this entry.
- Published
- 2021
- Full Text
- View/download PDF
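A minimal sketch of a three-way co-decision step: two sklearn-style classifiers trisect unlabeled data by agreement and confidence. The thresholds and the confidence rule are illustrative assumptions, not the paper's construction.

```python
def three_way_co_decision(clf_a, clf_b, X_unlabeled, alpha=0.9, beta=0.6):
    """Split unlabeled rows into useful / uncertain / useless by the agreement
    and confidence of two co-trained classifiers (thresholds are illustrative)."""
    useful, uncertain, useless = [], [], []
    for i, x in enumerate(X_unlabeled):
        pa, pb = clf_a.predict_proba([x])[0], clf_b.predict_proba([x])[0]
        ya, yb = pa.argmax(), pb.argmax()
        conf = min(pa[ya], pb[yb])
        if ya == yb and conf >= alpha:
            useful.append((i, ya))          # accept: retrain on these
        elif conf < beta:
            useless.append(i)               # reject: discard
        else:
            uncertain.append(i)             # defer to a later round
    return useful, uncertain, useless
```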
12. Granular regression with a gradient descent method
- Author
- Duoqian Miao and Yumin Chen
- Subjects
- Information Systems and Management, Computer science, Computation, Measure (mathematics), Theoretical Computer Science, Set (abstract data type), Artificial Intelligence, Convergence, Regression analysis, Regression, Computer Science Applications, Control and Systems Engineering, Gradient descent, Algorithm, Software
- Abstract
Regression is one of the classical models in machine learning. Traditional regression algorithms involve operations on real values, which makes it difficult to handle discrete or set-valued data in information systems. Granules are structural objects on which agents perform complex computations. These structural objects take the form of sets that can measure the uncertainty of data. In order to deal with uncertain and vague data in the real world, we propose a set-based regression model: granular regression. Granules are constructed by introducing a distance metric on single-atom features. Meanwhile, we establish conditional granular vectors, weight granular vectors, and decision granules. The operations among them induce a granular regression model. Furthermore, we propose a gradient descent method for the granular regression model, and the optimal solution of granular regression is achieved. We prove the convergence of granular regression and design a gradient descent algorithm. Finally, several UCI data sets are used to test and verify the granular regression model. We compare our proposed model with popular regression models from the three aspects of convergence, fitting, and prediction. The results show that the granular regression model is valid and effective. A sketch of the underlying gradient descent step follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
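The entry above builds granular regression on gradient descent. As background only, here is plain least-squares gradient descent; the paper replaces the real-valued vectors and their operations with granule operations, which this sketch does not attempt.

```python
import numpy as np

def gradient_descent_regression(X, y, lr=0.01, epochs=500):
    """Plain least-squares gradient descent; granular regression replaces
    these real-valued vectors and products with granule operations."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        residual = X @ w + b - y          # prediction error
        w -= lr * (X.T @ residual) / n    # gradient of mean squared error
        b -= lr * residual.mean()
    return w, b
```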
13. Improved general attribute reduction algorithms
- Author
- Zhihua Wei, Duoqian Miao, Lijun Sun, Chang Gong, Baizhen Li, Wen Shen, Hongyun Zhang, and Nan Zhang
- Subjects
- Information Systems and Management, Computer science, Computer Science Applications, Theoretical Computer Science, Reduction (complexity), Artificial Intelligence, Control and Systems Engineering, Rough set, Algorithm, Software
- Abstract
Attribute reduction is a critical issue in rough set theory. In recent years, many kinds of attribute reduction have been proposed, such as positive region preservation reduction, generalized decision preservation reduction, distribution preservation reduction, maximum distribution preservation reduction, and relative discernibility relation preservation reduction. General reduction approaches to obtaining various types of reducts have also been explored, but they are computationally expensive for large-scale data processing. In this study, we focus on an efficient general reduction algorithm to obtain the five typical reducts mentioned above. First, we introduce a concept called granularity space to establish a unified representation of the five typical reducts. Based on the unified representation, we construct two quick general reduction algorithms by extending the positive region approximation to the granularity space. Then, we conduct a series of comparisons with existing reduction algorithms through theoretical analysis and experiments to evaluate the performance of the proposed algorithms. The results indicate that the proposed algorithms are effective and efficient. A sketch of the positive region computation that these reducts preserve follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
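All five reducts in the entry above are built around the positive region approximation. A standard Pawlak positive region computation, which positive region preservation reduction keeps invariant, looks roughly like this:

```python
def partition(rows, attrs):
    """Equivalence classes induced by an attribute subset."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def positive_region(rows, labels, attrs):
    """Objects whose equivalence class is consistent on the decision label.
    Positive region preservation reducts keep this set unchanged."""
    pos = set()
    for block in partition(rows, attrs):
        if len({labels[i] for i in block}) == 1:
            pos |= block
    return pos
```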
14. On relationship between three-way concept lattices
- Author
- Xue Rong Zhao, Bao Qing Hu, and Duoqian Miao
- Subjects
- Pure mathematics, Information Systems and Management, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Software, Mathematics
- Abstract
By reformulating and extending the properties of three-way operators, this paper investigates the relationship between different kinds of three-way concept lattices. Three-way operators are defined through eight kinds of two-way operators, which are connected by the complement operation. To examine the interrelations systematically, we study (a) the relationship between two-way operators, (b) the relationship between two-way concepts, (c) the relationship between three-way operators, and (d) the relationship between three-way concepts. The results show that the four kinds of object-induced three-way concept lattices are order-isomorphic to each other, and the four kinds of attribute-induced three-way concept lattices are also order-isomorphic to each other. A sketch of the complement-based operator construction follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
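A small sketch of how two-way derivation operators and their complement-based variants combine into an object-induced three-way operator over a Boolean context. This follows the standard three-way concept analysis construction rather than the paper's full set of eight operators.

```python
import numpy as np

def intent(I, X):
    """Attributes possessed by every object in X (I is a Boolean context)."""
    return {m for m in range(I.shape[1]) if all(I[g, m] for g in X)}

def negative_intent(I, X):
    """Attributes possessed by no object in X, i.e. the intent computed in
    the complement context (1 - I)."""
    return {m for m in range(I.shape[1]) if all(not I[g, m] for g in X)}

def object_induced_three_way(I, X):
    """Object-induced three-way operator: the pair of positively and
    negatively shared attribute sets."""
    return intent(I, X), negative_intent(I, X)
```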
15. Novel matrix-based approaches to computing minimal and maximal descriptions in covering-based rough sets
- Author
- Jin Qian, Duoqian Miao, Kecan Cai, and Caihui Liu
- Subjects
- Information Systems and Management, Computational complexity theory, Binary relation, Computer science, Matrix (mathematics), Knowledge extraction, Artificial Intelligence, Control and Systems Engineering, Rough set, Algorithm, Software, Computer Science Applications, Theoretical Computer Science
- Abstract
Minimal and maximal descriptions of concepts are two important notions in covering-based rough sets. Many issues in covering-based rough sets (e.g., reducts, approximations, etc.) are related to them. It is well known that computing minimal and maximal descriptions from set representations is time-consuming and error-prone in a large-scale covering approximation space. To address this problem, matrix-based methods have been proposed, in which calculations can be conveniently implemented by computers. In this paper, motivated by the need for knowledge discovery from large-scale covering information systems and inspired by previous research, we present two novel matrix-based approaches to computing minimal and maximal descriptions in covering-based rough sets, which reduce the computational complexity of traditional methods. First, by introducing the operation “sum” into the matrix calculations instead of the operation “⊕”, we propose a new matrix-based approach, called approach-1, to compute minimal and maximal descriptions without comparing the elements of two matrices. Second, by using the binary relation of inclusion between elements of a covering, we propose another approach to compute minimal and maximal descriptions. Finally, we present experimental comparisons showing the computational efficiency of the proposed approaches on six UCI datasets. The experimental results show that the proposed approaches are promising and comparable with other tested methods. A set-based baseline for these two notions is sketched after this entry.
- Published
- 2020
- Full Text
- View/download PDF
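The two notions above have a simple set-based baseline, which is exactly the computation the paper's matrix approaches speed up: the minimal and maximal descriptions of x are the inclusion-minimal and inclusion-maximal covering blocks containing x.

```python
def descriptions(cover, x):
    """Minimal and maximal descriptions of x: the inclusion-minimal and
    inclusion-maximal blocks of the covering that contain x (set-based
    baseline; the paper replaces this with matrix operations)."""
    md = [K for K in cover if x in K]
    minimal = [K for K in md if not any(L < K for L in md)]
    maximal = [K for K in md if not any(K < L for L in md)]
    return minimal, maximal

cover = [frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({3})]
print(descriptions(cover, 1))  # minimal: [{1, 2}], maximal: [{1, 2, 3}]
```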
16. Three-way decisions based blocking reduction models in hierarchical classification
- Author
- Hongyun Zhang, Zhihua Wei, Duoqian Miao, Wen Shen, and Qianwen Li
- Subjects
- Topic model, Hierarchy, Information Systems and Management, Uncertain data, Computer science, Machine learning, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Classifier, Software
- Abstract
Hierarchical classification (HC) is effective when categories are organized hierarchically. However, the blocking problem greatly reduces the effect of hierarchical classification. Blocking means that samples are easily misclassified by high-level classifiers, so that they become blocked at the upper levels of the hierarchy. This issue is caused by the inconsistency between the artificially defined hierarchy and the actual hierarchy of the raw data. Another issue is that strictly processing data according to the hierarchy is too rigid; some uncertain data require special treatment. To address the first issue, we learn category relationships and modify the hierarchy. To address the second issue, we introduce three-way decisions (3WD) to deal specifically with ambiguous data. We extend the original studies and propose two HC models based on 3WD, collectively referred to as TriHC, for carefully modifying the hierarchy to alleviate the blocking problem. The proposed TriHC model learns new category hierarchies in three steps: (1) mining category relations; (2) modifying category hierarchies according to the latent category relations; and (3) using 3WD to divide observed objects into three regions (positive, boundary, and negative) and making decisions based on different strategies. Specifically, based on different category relation mining methods, there are two versions of TriHC: cross-level blocking priori knowledge based TriHC (CLPK-TriHC) and expert classifier based TriHC (EC-TriHC). The CLPK-TriHC model defines a cross-level blocking distribution matrix to mine the category relations between the higher and lower levels. To better exploit hierarchical category relations, the EC-TriHC model builds expert classifiers using a topic model to learn latent category topics. Experimental results validate that the proposed methods can simultaneously reduce blocking and improve classification accuracy.
- Published
- 2020
- Full Text
- View/download PDF
17. A neighborhood rough set model with nominal metric embedding
- Author
- Sheng Luo, Duoqian Miao, Yuanjian Zhang, Zhifei Zhang, and Shengdan Hu
- Subjects
- Information Systems and Management, Heuristic, Computer science, Feature selection, Computer Science Applications, Theoretical Computer Science, Reduction (complexity), Artificial Intelligence, Control and Systems Engineering, Metric (mathematics), Embedding, Rough set, Algorithm, Software
- Abstract
Rough set theory is an essential tool for measuring uncertainty and has been widely applied in attribute reduction algorithms. Most related research focuses on how to update the lower and upper approximation operators to match data characteristics or how to improve the efficiency of attribute reduction algorithms. However, in the nominal data environment, existing rough set models that use the Hamming metric and its variants to evaluate relations between nominal objects cannot capture the inherent ordered relationships and statistical information in nominal values, owing to the complexity of the data. The missing information affects the accuracy and validity of the data representation, thereby reducing the reliability of rough set models. To overcome this challenge, we propose a novel object dissimilarity measure, the relative object dissimilarity metric (RODM), learned from nominal data to replace the Hamming metric, and then construct a ψ-neighborhood rough set model. It extends the classical rough set model to a robust, representative, and effective model that is close to the characteristics of nominal data. Based on the ψ-neighborhood rough set model, we propose a heuristic two-stage attribute reduction algorithm (HTSAR) to perform the feature selection task. Experiments show that the ψ-neighborhood rough set model can exploit more of the potential knowledge in nominal data and achieves better attribute reduction performance than existing rough set models. A sketch of the ψ-neighborhood approximations follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
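A generic neighborhood rough set sketch with a pluggable dissimilarity. The paper's contribution is learning that dissimilarity (RODM) from nominal data; here `dist` is simply a parameter.

```python
def neighborhood(U, x, dist, psi):
    """All objects within dissimilarity psi of x; the paper learns `dist`
    (its RODM metric) from nominal data instead of using Hamming distance."""
    return {y for y in U if dist(x, y) <= psi}

def approximations(U, decision_class, dist, psi):
    """Neighborhood lower/upper approximations of a decision class (a set)."""
    lower = {x for x in U if neighborhood(U, x, dist, psi) <= decision_class}
    upper = {x for x in U if neighborhood(U, x, dist, psi) & decision_class}
    return lower, upper
```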
18. Fuzzy neighborhood covering for three-way classification
- Author
- Xiaodong Yue, Hamido Fujita, Duoqian Miao, and Yufei Chen
- Subjects
- Information Systems and Management, Uncertain data, Computer science, Nonparametric statistics, Data classification, Fuzzy logic, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Robustness (computer science), Software
- Abstract
Neighborhood Covering (NC) is the union of homogeneous neighborhoods and provides a set-level approximation of the data distribution. Because of its nonparametric nature and robustness to complex data, neighborhood covering has been widely used for data classification. Most existing methods directly classify data samples according to the nearest neighborhoods. However, such hard classification methods strictly classify uncertain data and may lead to serious classification mistakes. To tackle this problem, we extend traditional neighborhood coverings to fuzzy ones and thereby propose a Three-Way Classification method with Fuzzy Neighborhood Covering (3WC-FNC). A fuzzy neighborhood covering consists of membership functions and forms an approximate distribution of neighborhood belongingness. Based on the soft partition induced by the memberships of the fuzzy neighborhood coverings of different classes, data samples are classified into Positive (certainly belonging to a class), Negative (certainly beyond all classes), and Uncertain cases. Experiments verify that the proposed three-way classification method handles uncertain data effectively while reducing the classification risk.
- Published
- 2020
- Full Text
- View/download PDF
19. Global and local multi-view multi-label learning
- Author
- Duoqian Miao, Changming Zhu, Xiafen Zhang, Zhe Wang, Lai Wei, and Ri-Gui Zhou
- Subjects
- Computer science, Cognitive Neuroscience, Multi-label learning, Computer Science Applications, Data set, Artificial Intelligence
- Abstract
In order to process multi-view multi-label data sets, we propose global and local multi-view multi-label learning (GLMVML). This method can exploit the global and local label correlations of both the whole data set and each view simultaneously. Moreover, GLMVML introduces a consensus multi-view representation which encodes the complementary information from different views. Experiments on three multi-view data sets, fourteen multi-label data sets, and one multi-view multi-label data set have validated that (1) GLMVML achieves better average AUC and precision and is statistically superior to classical multi-view learning methods and multi-label learning methods; (2) GLMVML adds little running time; (3) GLMVML converges well and is able to process multi-view multi-label data sets; and (4) since the GLMVML model incorporates both global and local label correlations, its parameter values should be moderate rather than too large or too small.
- Published
- 2020
- Full Text
- View/download PDF
20. Sequential three-way decisions via multi-granularity
- Author
- Duoqian Miao, Xiaodong Yue, Caihui Liu, and Jin Qian
- Subjects
- Information Systems and Management, Theoretical computer science, Computer science, Rationality, Disjoint sets, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Granularity, Software
- Abstract
Three-way decisions provide a trisecting-and-acting framework for complex problem solving. For cost-sensitive decision-making problems under multiple levels of granularity, sequential three-way decisions have emerged. Within this framework, how to act upon the three pair-wise disjoint regions is the most important issue. To this end, we propose a generalized model of sequential three-way decisions via multi-granularity in this paper. Subsequently, we adopt typical aggregation strategies to implement five kinds of multigranulation sequential three-way decisions: the weighted arithmetic mean, the optimistic, the pessimistic, the pessimistic-optimistic, and the optimistic-pessimistic multigranulation sequential three-way decisions. Furthermore, we discuss the correctness and rationality of the five kinds of multigranulation sequential three-way decisions and analyze the relationships and differences between them. Finally, the experimental results demonstrate that the first four multigranulation sequential three-way decisions are effective. These models will accelerate and enrich the development of multigranulation three-way decisions. A sketch of the optimistic and pessimistic aggregation strategies follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
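A toy version of the optimistic/pessimistic aggregation across granulations. The membership values, thresholds, and exact aggregation rules are illustrative assumptions.

```python
def multigranulation_decision(memberships, alpha, beta, strategy="optimistic"):
    """Three-way decision from membership degrees under several granulations.
    Optimistic aggregates with max, pessimistic with min; the weighted
    arithmetic mean variant would use a weighted average instead."""
    agg = max(memberships) if strategy == "optimistic" else min(memberships)
    if agg >= alpha:
        return "accept"
    if agg <= beta:
        return "reject"
    return "defer"   # deferred cases move to the next, finer granularity

print(multigranulation_decision([0.85, 0.40, 0.70], 0.8, 0.3))                 # accept
print(multigranulation_decision([0.85, 0.40, 0.70], 0.8, 0.3, "pessimistic"))  # defer
```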
21. Three-way confusion matrix for classification: A measure driven view
- Author
- Duoqian Miao, Jianfeng Xu, and Yuanjian Zhang
- Subjects
- Information Systems and Management, Theoretical computer science, Gini coefficient, Computer science, Probabilistic logic, Confusion matrix, Measure (mathematics), Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Entropy (information theory), Software
- Abstract
Three-way decisions (3WD) is an important methodology for solving problems with uncertainty. A systematic analysis of three-way-based uncertainty measures is conducive to the promotion of three-way decisions. Meanwhile, the confusion matrix, with its multifaceted views, serves a fundamental role in evaluating classification performance. In this paper, the confusion matrix is endowed with the semantics of three-way decisions. A collection of measures is thus deduced and summarized into seven measure modes. We further investigate the formulation of three-way regions from a measure-driven view. To satisfy the preferences of stakeholders, two different objective functions are formulated, and each of them can include different combinations of measures. To demonstrate the effectiveness, we generate probabilistic three-way decisions for a wealth of datasets. Compared with Gini-coefficient-based and Shannon-entropy-based objective functions, our model can deduce more satisfying three-way regions. An illustrative three-way confusion matrix follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
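With a deferment column added to the usual confusion matrix, simple three-way measures can be read off as below. The specific measures shown are illustrative and do not reproduce the paper's seven modes.

```python
import numpy as np

# Rows: actual positive / negative; columns: accepted / deferred / rejected.
cm = np.array([[40,  8,  2],    # positives
               [ 5, 10, 35]])   # negatives

accept_precision = cm[0, 0] / cm[:, 0].sum()               # quality of acceptances
reject_precision = cm[1, 2] / cm[:, 2].sum()               # quality of rejections
commitment = (cm[:, 0].sum() + cm[:, 2].sum()) / cm.sum()  # non-deferred share
print(accept_precision, reject_precision, commitment)
```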
22. Causality measures and analysis: A rough set framework
- Author
- Ning Yao, Witold Pedrycz, Zhifei Zhang, Duoqian Miao, and Hongyun Zhang
- Subjects
- Counterfactual conditional, Knowledge representation and reasoning, Computer science, General Engineering, Intelligent decision support system, Directed acyclic graph, Machine learning, Causality, Expert system, Computer Science Applications, Artificial Intelligence, Rough set, Causation
- Abstract
Data and rules power expert systems and intelligent systems. Rules, as a form of knowledge representation, can be acquired from experts or learned from data. The accuracy and precision of knowledge largely determine the success of such systems, which awakens concern for causality. The ability to elicit cause–effect rules directly from data is both crucial and difficult for any expert or intelligent system. Rough set theory has succeeded in automatically transforming data into knowledge, where data are often presented as an attribute-value table. However, the existing tools of this theory are currently incapable of interpreting the counterfactuals and interventions involved in causal analysis. This paper offers an attempt to characterize the cause–effect relationships between attributes in attribute-value tables with the intent of overcoming existing limitations. First, we establish the main conditions that attributes need to satisfy in order to estimate the causal effects between them, by employing the back-door criterion and the adjustment formula for a directed acyclic graph. In particular, based on the notion of lower approximation, we extend the back-door criterion to an original data table without any graphical structure. We then identify the effects of interventions and the counterfactual interpretation of causation between attributes in such tables. Through illustrative studies on attribute-value tables, we show the procedure for identifying causation between attributes and examine whether the dependency of the attributes can describe causality between them. A sketch of the adjustment formula estimated from a table follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
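The entry above relies on Pearl's back-door criterion and adjustment formula. The snippet below estimates P(y | do(x)) from an attribute-value table, assuming a single adjustment attribute z that satisfies the back-door criterion for (x, y).

```python
from collections import Counter

def backdoor_effect(rows, x, xv, y, yv, z):
    """Adjustment formula P(y | do(x)) = sum_z P(y | x, z) P(z), estimated
    from an attribute-value table given as a list of dicts."""
    n = len(rows)
    total = 0.0
    for zv, nz in Counter(r[z] for r in rows).items():
        xz = [r for r in rows if r[x] == xv and r[z] == zv]
        if xz:
            total += (sum(r[y] == yv for r in xz) / len(xz)) * nz / n
    return total

rows = [{"x": 1, "z": 0, "y": 1}, {"x": 1, "z": 1, "y": 0},
        {"x": 0, "z": 0, "y": 0}, {"x": 0, "z": 1, "y": 1}]
print(backdoor_effect(rows, "x", 1, "y", 1, "z"))  # 0.5 on this toy table
```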
23. Constrained three-way approximations of fuzzy sets: From the perspective of minimal distance
- Author
- Duoqian Miao, Jie Zhou, Zhihui Lai, Xiaodong Yue, and Can Gao
- Subjects
- Mathematical optimization, Information Systems and Management, Computer science, Decision theory, Fuzzy set, Disjoint sets, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Software
- Abstract
Three-way approximations of fuzzy sets aim at abstracting fuzzy sets into three pair-wise disjoint categories, which facilitates semantics-oriented interpretation and reduces the computing burden. Shadowed sets are a schema of three-way approximations of fuzzy sets formed through a specific optimization mechanism. Among the different principles guiding the construction of shadowed sets, the criterion of minimum distance offers a new insight within the framework of three-way decision theory. In this paper, the essential mathematical properties of the objective function used as a criterion to construct three-way approximations of fuzzy sets based on the principle of minimal distance, as well as the characteristics of the optimal solutions, are analyzed. It is demonstrated that this optimization objective function is continuous but nonconvex with respect to the optimized variables. The nonconvexity makes the solution difficult, and different approximate region partitions are obtainable even under the same optimization model. Therefore, further criteria are required to select the final partition thresholds and make the construction process well-defined. To address this limitation, the notion of constrained three-way approximations of fuzzy sets is proposed from the perspective of minimal distance. Moreover, a constructive algorithm is provided to obtain the proposed constrained three-way approximations rather than using direct enumeration, and its performance is illustrated on some typical fuzzy sets along with data from the UCI repository. A hedged threshold-search sketch follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
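A brute-force threshold search under a stand-in minimal-distance objective. The paper's actual objective and its constrained constructive algorithm differ; the nonconvexity it proves is exactly why a naive grid search like this can land on different partitions.

```python
import numpy as np

def min_distance_thresholds(mu, grid=None):
    """Search (alpha, beta) minimizing a distance between the fuzzy set and
    its three-way approximation: |mu - 1| on elevated, |mu| on reduced, and
    |mu - 0.5| on the shadow. A stand-in objective, not the paper's exact one."""
    grid = grid if grid is not None else np.linspace(0.0, 1.0, 21)
    best = None
    for beta in grid:
        for alpha in grid:
            if beta >= 0.5 or alpha <= max(beta, 0.5):
                continue  # require beta < 0.5 < alpha
            cost = (np.abs(1 - mu[mu >= alpha]).sum()
                    + mu[mu <= beta].sum()
                    + np.abs(mu[(mu > beta) & (mu < alpha)] - 0.5).sum())
            best = min(best, (cost, alpha, beta)) if best else (cost, alpha, beta)
    return best

mu = np.array([0.05, 0.2, 0.45, 0.55, 0.8, 0.95])
print(min_distance_thresholds(mu))
```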
24. Improved adaptive image retrieval with the use of shadowed sets
- Author
- Ting Zhang, Cairong Zhao, Hongyun Zhang, Witold Pedrycz, and Duoqian Miao
- Subjects
- Computer science, Pattern recognition, Image segmentation, Edge detection, Image retrieval, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Software
- Abstract
Image retrieval algorithms based on the whole image exhibit high complexity due to background interference, weak low-level description abilities, and large storage requirements, while image retrieval algorithms based on saliency detection suffer from low accuracy because the uncertainty of the salient regions of an image causes important information to be missing from the extracted salient regions. In this paper, we propose a shadowed-set-based image retrieval algorithm and develop techniques for the automatic selection of two threshold parameters, combining saliency detection and edge detection to automatically determine the shadowed regions. The developed algorithm uses shadowed set theory to divide the image into salient regions, non-salient regions, and shadowed regions, in order to extract the useful information in the image and ignore the irrelevant parts. As a consequence, the salient regions and the shadowed regions are jointly involved in the retrieval process. The experimental results reported for several datasets show that the proposed algorithm can effectively improve retrieval accuracy compared with existing state-of-the-art algorithms.
- Published
- 2019
- Full Text
- View/download PDF
25. Identification of structures and causation in flow graphs
- Author
- Duoqian Miao and Ning Yao
- Subjects
- Information Systems and Management, Theoretical computer science, Computer science, Computation, Graph, Computer Science Applications, Theoretical Computer Science, Identification (information), Artificial Intelligence, Control and Systems Engineering, Flow graph, Markov property, Rough set, Causation, Equivalence class, Software
- Abstract
Flow graphs in rough set theory exhibit intuitive and explicit formalization, straightforward computation, parallel processing, and the Markov property. This paper focuses on extracting flow graphs directly from data in the form of attribute-value tables, as well as identifying the causation between variables in such tables. Using the equivalence classes and partitions derived directly from data tables, variables with the Markov property can be found to form structures that can be integrated into flow graphs. Based on these structures, the causation hidden in data tables can then be identified via the front-door criterion proposed by Pearl, and sometimes also via the back-door criterion. The relation between the existence of causation and the flow graph among variables is established. The illustrations show that identifying causation also depends on the selection of the data sample, in addition to flow graphs that resemble the structures underlying the front-door criterion and, in part, the back-door criterion. Pearl's front-door formula is recalled after this entry.
- Published
- 2019
- Full Text
- View/download PDF
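The front-door criterion cited in the entry above comes with Pearl's standard identification formula; for a chain X → Z → Y with an unobserved confounder between X and Y:

```latex
P\bigl(y \mid do(x)\bigr) \;=\; \sum_{z} P(z \mid x) \sum_{x'} P(y \mid x', z)\, P(x')
```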
26. Three-way enhanced convolutional neural networks for sentence-level sentiment classification
- Author
- Yuebing Zhang, Zhifei Zhang, Jiaqi Wang, and Duoqian Miao
- Subjects
- Information Systems and Management, Computer science, Machine learning, Convolutional neural network, Theoretical Computer Science, Naive Bayes classifier, Artificial Intelligence, Interpretability, Artificial neural network, Deep learning, Computer Science Applications, Support vector machine, Control and Systems Engineering, Software
- Abstract
Deep neural network models have achieved remarkable results in sentiment classification. Traditional feature-based methods perform slightly worse than deep learning methods in terms of classification accuracy, but they have their own advantages in interpretability and time complexity. To the best of our knowledge, few works study the ensemble of deep learning methods and traditional feature-based methods. Inspired by the methodology of three-way decisions, we propose a three-way enhanced convolutional neural network model named 3W-CNN. 3W-CNN can be seen as an ensemble method that uses an enhance model to optimize convolutional neural networks (CNN). The enhance model is selected according to its classification accuracy and the difference of its classification results from those of the CNN. A support vector machine with naive Bayes features (NB-SVM) is selected as the enhance model after comparison with several baseline models. However, the performance of NB-SVM is worse than that of CNN on most benchmark datasets. To address this issue, we construct a component named the confidence divider and design a confidence function to distinguish the classification quality of the CNN. NB-SVM is further utilized to reclassify the predictions with weak confidence. The experimental results validated the effectiveness of 3W-CNN and showed that three-way decisions can further improve the accuracy of sentiment classification. A sketch of the confidence-based handoff follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
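A minimal sketch of the confidence-gated handoff from the CNN to the enhance model. The max-probability confidence and the threshold are stand-ins for the paper's confidence divider and confidence function.

```python
import numpy as np

def three_way_ensemble(cnn_probs, nbsvm_pred, tau=0.8):
    """Keep CNN predictions whose confidence clears tau; hand the
    weak-confidence rest to the enhance model (NB-SVM)."""
    cnn_pred = cnn_probs.argmax(axis=1)
    confident = cnn_probs.max(axis=1) >= tau
    return np.where(confident, cnn_pred, nbsvm_pred)

cnn_probs = np.array([[0.95, 0.05], [0.55, 0.45]])
print(three_way_ensemble(cnn_probs, nbsvm_pred=np.array([0, 1])))  # [0 1]
```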
27. A self-adaptive cascade ConvNets model based on label relation mining
- Author
- Cairong Zhao, Wen Shen, Zhihua Wei, and Duoqian Miao
- Subjects
- Image classification, Computer science, Cognitive Neuroscience, Self-adaptive, Machine learning, Computer Science Applications, Artificial Intelligence, Cascade, Classifier
- Abstract
Uncertainty is a fundamental and unavoidable feature of daily life, and the same holds for a single classifier. Thus, combining the predictions of many different classifiers is a very successful way to reduce uncertainty. In this paper, we present a Correcting Reliability Level (CRL) supervised three-way decision (3WD) cascade model for image classification tasks. Our model simulates the human decision process by using 3WD to judge the "certainty" or "uncertainty" of a classification result. When a result is judged "uncertain", CRL supervises the 3WD and learns more information to make the final prediction. In addition, we introduce two Class Grouping methods to mine the relations between labels, which help us train several expert ConvNets for different types of images. Experimental results show that our model can effectively reduce the classification error rate compared with the base classifier.
- Published
- 2019
- Full Text
- View/download PDF
28. Multilevel triplet deep learning model for person re-identification
- Author
- Yipeng Chen, Cairong Zhao, Kang Chen, Wei Wang, Zhihua Wei, and Duoqian Miao
- Subjects
- Computer science, Deep learning, Feature extraction, Pattern recognition, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Software
- Abstract
Person re-identification (Re-ID) is a typical computer vision problem which matches pedestrians across different cameras. It remains challenging to cope with variations in lighting, changes in human pose, and viewpoint differences. Many existing person re-identification methods have difficulty matching pedestrians whose pictures are similar in appearance or contain object occlusions. The main problem with these existing methods is that the detail and global features of the images are not well combined. In this paper, we improve the performance of a deep CNN network with the proposed multilevel feature extraction strategy and build a novel multilevel triplet deep learning model corresponding to our method. The multilevel feature extraction strategy combines fine, shallow-layer information with coarse, deeper-layer information by extracting fused feature maps from different layers for a better representation of pedestrians. The multilevel triplet deep learning model (MT-net) provides an end-to-end training and testing pipeline for our feature extraction strategy. Experiments on the benchmark datasets validated that our multilevel triplet deep learning model performs better than many state-of-the-art person re-identification methods. The triplet loss underlying the model is sketched after this entry.
- Published
- 2019
- Full Text
- View/download PDF
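The entry above trains with a triplet objective over fused multi-level features; the standard triplet loss at its core is:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss: pull the anchor toward the positive (same person)
    and push it from the negative (different person) by at least `margin`.
    The paper applies this over fused multi-level feature maps."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```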
29. Multi granularity based label propagation with active learning for semi-supervised classification
- Author
- Shengdan Hu, Duoqian Miao, and Witold Pedrycz
- Subjects
- Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
- Full Text
- View/download PDF
30. A three-way selective ensemble model for multi-label classification
- Author
- Sheng Luo, Yuanjian Zhang, Zhifei Zhang, Duoqian Miao, and Jianfeng Xu
- Subjects
- Computer science, Machine learning, Theoretical Computer Science, Reduction (complexity), Artificial Intelligence, Multi-label classification, Applied Mathematics, Probabilistic logic, Ambiguity, Ensemble learning, Statistical classification, Rough set, Software
- Abstract
Label ambiguity and data complexity are widely recognized as major challenges in multi-label classification. Existing studies strive to find approximate representations of label semantics; however, most of them are predefined, neglecting the individuality of each instance-label pair. To circumvent this drawback, this paper proposes a three-way selective ensemble (TSEN) model. In this model, three-way decisions is responsible for minimizing uncertainty, whereas ensemble learning is in charge of optimizing label associations. Label ambiguity and data complexity are first reduced, which is realized by a modified probabilistic rough set. For reductions with shared attributes, we further improve the prediction performance through an ensemble strategy. The components of the base classifiers are label-specific, and the voting results at the instance level are utilized for tri-partition. Positive and negative decisions are determined directly, whereas the deferment region is determined by label-specific reduction. Empirical studies on a collection of benchmarks demonstrate that TSEN achieves competitive performance against state-of-the-art multi-label classification algorithms.
- Published
- 2018
- Full Text
- View/download PDF
31. Maximal granularity structure and generalized multi-view discriminant analysis for person re-identification
- Author
- Cairong Zhao, David Zhang, Duoqian Miao, Yong Xu, Wei-Shi Zheng, Xuekuan Wang, and Hanli Wang
- Subjects
- Computer science, Pattern recognition, Linear discriminant analysis, Artificial Intelligence, Signal Processing, Metric (mathematics), Computer Vision and Pattern Recognition, Software
- Abstract
This paper proposes a novel descriptor called the Maximal Granularity Structure Descriptor (MGSD) for feature representation and an effective metric learning method called Generalized Multi-view Discriminant Analysis based on representation consistency (GMDA-RC) for person re-identification (Re-ID). The MGSD descriptor captures rich local structural information from overlapping macro-pixels in an image, analyzes the horizontal occurrence of multi-granularity structures, and maximizes this occurrence to extract a representation robust to viewpoint changes. As a result, MGSD captures rich person appearance while remaining robust to varying conditions. In addition, considering multi-view information, we present GMDA-RC for different views, inspired by the observation that different views share similar data structures. GMDA-RC seeks multiple discriminant common spaces for multiple views by jointly learning multiple view-specific linear transforms. Finally, we evaluate the proposed method (MGSD+GMDA-RC) on three publicly available person Re-ID datasets: VIPeR, CUHK-01, and the Wide Area Re-ID dataset (WARD). On VIPeR and CUHK-01, the experimental results show that our method significantly outperforms the state-of-the-art methods, achieving rank-1 matching rates of 67.09% and 70.61%, improvements of 17.41% and 5.34%, respectively. On WARD, considering different pairwise camera views (cameras 1–2, 1–3, and 2–3), our method achieves rank-1 matching rates of 64.33%, 59.42%, and 70.32%, increases of 5.68%, 11.04%, and 9.06% over the state-of-the-art methods, respectively.
- Published
- 2018
- Full Text
- View/download PDF
32. Maximum decision entropy-based attribute reduction in decision-theoretic rough set model
- Author
- Zhihui Lai, Duoqian Miao, Can Gao, Cairong Zhao, and Jie Zhou
- Subjects
- Reduct, Mathematical optimization, Information Systems and Management, Computer science, Probabilistic logic, Monotonic function, Management Information Systems, Data set, Artificial Intelligence, Entropy (information theory), Rough set, Software
- Abstract
The decision-theoretic rough set model, a probabilistic generalization of the Pawlak rough set model, is an effective method for decision making from vague, uncertain, or imprecise data. Attribute reduction is one of the most important problems in the decision-theoretic rough set model, and several uncertainty measures for attribute reduction have been presented. However, the monotonicity of these uncertainty measures does not always hold. In this paper, a novel monotonic uncertainty measure is introduced for attribute reduction in the decision-theoretic rough set model. More specifically, based on the concepts of the maximum inclusion degree and the maximum decision, a new uncertainty measure, named maximum decision entropy, is first proposed, and the definitions of the positive, boundary, and negative region preservation reducts are then provided using the proposed uncertainty measure. Theoretically, it is proved that the proposed uncertainty measure is monotonic when adding or deleting condition attributes. Additionally, a heuristic attribute reduction algorithm based on the maximum decision entropy is developed, which maximizes the relevance of the reduct to the class attribute and minimizes the redundancy of the condition attributes within the reduct. The experimental results on artificial as well as real data sets demonstrate the competitive performance of our proposal in comparison with state-of-the-art algorithms.
- Published
- 2018
- Full Text
- View/download PDF
33. Kernelized random KISS metric learning for person re-identification
- Author
- Jingsheng Lei, Yipeng Chen, Xuekuan Wang, Wai Keung Wong, Cairong Zhao, and Duoqian Miao
- Subjects
- Covariance matrix, Cognitive Neuroscience, Gaussian, Pattern recognition, Covariance, Computer Science Applications, Artificial Intelligence, Metric (mathematics), Mathematics
- Abstract
Person re-identification is critical for human tracking in video surveillance and has attracted more and more attention in recent years. Various recent approaches have made great progress in re-identification performance using metric learning techniques; among them, the Keep It Simple and Straightforward (KISS) metric learning method has shown remarkable importance because of its simplicity and high efficiency. The KISS method is based on the assumption that the differences between feature pairs obey a Gaussian distribution. However, for most existing person re-identification features, the distributions of differences between feature pairs are irregular and undulant. Therefore, prior to the Gaussian-based metric learning step, it is important to augment the Gaussian distribution of the data without losing discernment. Moreover, most metric learning methods are greatly influenced by the small sample size (SSS) problem, and the KISS method is no exception: the inverses of the covariance matrices may not exist. To solve these two problems, we present the Kernelized Random KISS (KRKISS) metric learning method. By transforming the original features into kernelized features, the differences between feature pairs can better fit the Gaussian distribution and thus become more suitable for Gaussian-assumption-based models. To solve the covariance matrix inverse estimation problem, we apply a random subspace ensemble method to obtain an exact estimation of the covariance matrix by randomly selecting and combining several different subspaces. In each subspace, the influence of the SSS problem can be minimized. Experimental results on three challenging person re-identification datasets demonstrate that the KRKISS method significantly improves on the KISS method and achieves better performance than most existing metric learning approaches. A sketch of the core KISS metric follows this entry.
- Published
- 2018
- Full Text
- View/download PDF
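The entry above builds on the KISS metric M = Σ_S⁻¹ − Σ_D⁻¹, learned from difference vectors of similar and dissimilar pairs. Below is that core; a small ridge term stands in for the paper's kernelization and random-subspace remedy for singular covariances.

```python
import numpy as np

def kiss_metric(diff_similar, diff_dissimilar, eps=1e-6):
    """KISS metric: M = inv(Cov_S) - inv(Cov_D). Difference vectors are
    taken as zero-mean (they come in symmetric +/- pairs), so the covariance
    is just the scaled outer-product sum; eps regularizes near-singular cases."""
    d = diff_similar.shape[1]
    cov_s = diff_similar.T @ diff_similar / len(diff_similar) + eps * np.eye(d)
    cov_d = diff_dissimilar.T @ diff_dissimilar / len(diff_dissimilar) + eps * np.eye(d)
    return np.linalg.inv(cov_s) - np.linalg.inv(cov_d)

def kiss_distance(M, x1, x2):
    diff = x1 - x2
    return diff @ M @ diff
```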
34. Three-layer granular structures and three-way informational measures of a decision table
- Author
- Xianyong Zhang and Duoqian Miao
- Subjects
- Information Systems and Management, Granular computing, Information processing, Information theory, Computer Science Applications, Theoretical Computer Science, Bayes' theorem, Artificial Intelligence, Control and Systems Engineering, Rough set, Data mining, Decision table, Software, Mathematics
- Abstract
Attribute reduction in rough set theory serves as a fundamental topic for information processing, and its basis is usually a decision table (D-Table). D-Table attribute reduction concerns three hierarchical types, and only classification-based reduction is related to information-theoretic representation. Aiming at comprehensive D-Table attribute reduction with hierarchies and information, this paper concretely constructs a D-Table's three-layer granular structures and three-way informational measures via granular computing and Bayes' theorem. For the D-Table, the micro-bottom, meso-middle, and macro-top are hierarchically organized according to the formal structure and systematic granularity. Different layers then produce different three-way informational measures by developing Bayes' theorem. Thus, three-way weighted entropies originate from three-way probabilities at the micro-bottom and further evolve from the meso-middle to the macro-top, and their granulation monotonicity and systematic evolution are acquired. Furthermore, three-way informational measures are analyzed by three-layer granular structures to establish their hierarchical evolution, superiority, and algorithms. Finally, the structural and informational results are effectively illustrated with a D-Table example. This study establishes the D-Table's hierarchical structures to reveal the constructional mechanisms and systematic relationships of informational measures. The obtained results underlie the D-Table's hierarchical, systematic, and informational attribute reduction, and they also enrich the theory of three-way decisions.
- Published
- 2017
- Full Text
- View/download PDF
35. Incremental approaches for updating reducts in dynamic covering information systems
- Author
-
Mingjie Cai, Guangming Lang, Zhifei Zhang, and Duoqian Miao
- Subjects
Information Systems and Management ,Computer science ,02 engineering and technology ,Variation (game tree) ,computer.software_genre ,Management Information Systems ,Reduction (complexity) ,Artificial Intelligence ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Data mining ,Rough set ,computer ,Algorithm ,Software - Abstract
In various real-world situations there are a large number of dynamic covering information systems, and non-incremental learning techniques are time-consuming when updating approximations of sets in such systems. In this paper, we investigate incremental mechanisms for updating the second and sixth lower and upper approximations of sets in dynamic covering information systems under variations of attributes. In particular, we design effective algorithms for calculating the second and sixth lower and upper approximations of sets in dynamic covering information systems. The experimental results indicate that the incremental algorithms outperform non-incremental algorithms in the presence of dynamic variation of attributes. Finally, we explore several examples to illustrate that the proposed approaches are feasible for performing knowledge reduction of dynamic covering information systems.
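For concreteness, here is a set-based sketch of the two approximation pairs being updated; the formulations below follow commonly used covering-approximation definitions and are an assumption about the exact second/sixth indexing. The paper's contribution is to update such approximations incrementally rather than recomputing them from scratch.

def second_approx(cover, X):
    # Second pair: unions of covering blocks contained in / meeting X.
    lower = set().union(*(K for K in cover if K <= X))
    upper = set().union(*(K for K in cover if K & X))
    return lower, upper

def sixth_approx(cover, U, X):
    # Sixth pair: via the neighborhood of x, i.e. the intersection of
    # all covering blocks containing x.
    def nbhd(x):
        out = set(U)
        for K in cover:
            if x in K:
                out &= K
        return out
    lower = {x for x in U if nbhd(x) <= X}
    upper = {x for x in U if nbhd(x) & X}
    return lower, upper

U = {1, 2, 3, 4}
cover = [{1, 2}, {2, 3}, {3, 4}]
X = {1, 2, 3}
print(second_approx(cover, X))    # ({1, 2, 3}, {1, 2, 3, 4})
print(sixth_approx(cover, U, X))  # ({1, 2, 3}, {1, 2, 3, 4})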
- Published
- 2017
- Full Text
- View/download PDF
36. Three-way attribute reducts
- Author
-
Xianyong Zhang and Duoqian Miao
- Subjects
Dependency (UML) ,Generalization ,Applied Mathematics ,05 social sciences ,050301 education ,Monotonic function ,02 engineering and technology ,computer.software_genre ,Power set ,Measure (mathematics) ,Theoretical Computer Science ,Dual (category theory) ,Set (abstract data type) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Decision table ,0503 education ,computer ,Software ,Mathematics - Abstract
Three-way decisions are a fundamental methodology with extensive applications, while attribute reducts play an important role in data analyses. The combination of the two topics has theoretical significance and applicable prospects, but has rarely been studied directly. In this paper, three-way decisions are introduced into attribute reducts, and three-way attribute reducts are systematically investigated. Firstly, classical qualitative reducts are reviewed via the dependency degree. Then, through approximation analyses, the dependency degree is improved to a controllable measure: the relative dependency degree, which monotonically measures relative attribute dependency. Given an approximation bar, the relative dependency degree defines applicable quantitative reducts, which approach, expand, and weaken the classical qualitative reducts. This type of quantitative reduct is actually the positive quantitative reduct for three-way reducts. Thus, three-way quantitative reducts are established by the relative dependency degree and dual thresholds. The positive, boundary, and negative quantitative reducts divide the power set of the condition attribute set and thus yield acceptance, noncommitment, and rejection decisions, respectively; they exhibit the potential derivation from the higher level to the lower level. Furthermore, three-way qualitative reducts are established by degeneration to implement three-way decisions, and three-way quantitative and qualitative reducts exhibit approximation, expansion, and strengthening; by virtue of superiority analyses, three-way reducts improve the latent two-way reducts, which offer only acceptance and rejection decisions. Finally, three-way reducts are practically illustrated with an example of decision tables. By developing the relative dependency degree with controllability, three-way reducts implement both a quantitative generalization of qualitative reducts and a structural completion of attribute reducts. This study provides a new insight into both three-way decisions and attribute reducts.
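One plausible reading of the quantitative three-way construction is sketched below with assumed dual thresholds (alpha, beta): attribute subsets are accepted, rejected, or left noncommitted according to the relative dependency degree gamma(B)/gamma(C). The threshold semantics here are illustrative, not the paper's exact definitions.

from itertools import combinations

def positive_region(table, attrs, decision):
    # Count objects whose attrs-equivalence class is pure in decision.
    groups = {}
    for row in table:
        groups.setdefault(tuple(row[a] for a in attrs), []).append(row)
    return sum(len(rows) for rows in groups.values()
               if len({r[decision] for r in rows}) == 1)

table = [  # toy decision table: condition attributes a, b; decision d
    {"a": 0, "b": 0, "d": 0},
    {"a": 0, "b": 1, "d": 1},
    {"a": 1, "b": 0, "d": 1},
    {"a": 1, "b": 1, "d": 1},
]
C = ("a", "b")
gamma_C = positive_region(table, C, "d") / len(table)
alpha, beta = 0.8, 0.3
for k in range(1, len(C) + 1):
    for B in combinations(C, k):
        rel = (positive_region(table, B, "d") / len(table)) / gamma_C
        verdict = ("accept" if rel >= alpha else
                   "reject" if rel <= beta else "noncommit")
        print(B, round(rel, 2), verdict)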
- Published
- 2017
- Full Text
- View/download PDF
37. Three-way decision approaches to conflict analysis using decision-theoretic rough set theory
- Author
-
Guangming Lang, Duoqian Miao, and Mingjie Cai
- Subjects
Sequence ,Information Systems and Management ,Management science ,05 social sciences ,Dominance-based rough set approach ,Probabilistic logic ,050301 education ,02 engineering and technology ,Conflict analysis ,Constructive ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Rough set ,0503 education ,Advice (complexity) ,Software ,Mathematics - Abstract
Social progress normally occurs through a sequence of struggles and conflicts, yet there has been relatively little progress in developing effective methods for conflict analysis. Decision-theoretic rough set theory is a powerful mathematical tool for depicting ambiguous information, and it can provide constructive advice for decision making. In this paper, we first present the concepts of probabilistic conflict, neutral, and allied sets of conflicts, and then discuss the mechanism for computing the thresholds and for conflict analysis using decision-theoretic rough set theory. Next, we describe incremental algorithms for constructing the probabilistic conflict, neutral, and allied sets in dynamic information systems, whose effectiveness is illustrated by experimental results. Finally, in light of the relationship between maximal coalitions and allied sets, we provide efficient approaches to computing the maximal coalitions in dynamic information systems, which can help a government adjust its policies in response to changes in the international situation.
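A minimal sketch of the tri-partition step, assuming Pawlak-style conflict degrees between agents and externally supplied DTRS-style thresholds (the paper derives its thresholds from loss functions, which is omitted here):

def conflict_degree(u, v):
    # Fraction of issues on which attitudes (+1, 0, -1) are opposed,
    # counting one half when exactly one side is neutral.
    score = 0.0
    for a, b in zip(u, v):
        if a * b < 0:
            score += 1.0
        elif a != b:
            score += 0.5
    return score / len(u)

agents = {"A": [1, 1, -1], "B": [-1, 1, -1], "C": [-1, -1, 1]}
alpha, beta = 0.6, 0.3
names = sorted(agents)
for i, p in enumerate(names):
    for q in names[i + 1:]:
        d = conflict_degree(agents[p], agents[q])
        kind = ("conflict" if d >= alpha else
                "allied" if d <= beta else "neutral")
        print(p, q, round(d, 2), kind)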
- Published
- 2017
- Full Text
- View/download PDF
38. A three-way decisions model with probabilistic rough sets for stream computing
- Author
-
Yuanjian Zhang, Zhifei Zhang, Jianfeng Xu, and Duoqian Miao
- Subjects
Theoretical computer science ,business.industry ,Data stream mining ,Computer science ,Applied Mathematics ,Probabilistic rough sets ,Stream ,Big data ,Probabilistic logic ,Perfect information ,02 engineering and technology ,computer.software_genre ,Theoretical Computer Science ,Artificial Intelligence ,Robustness (computer science) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,business ,computer ,Software - Abstract
The stream computing paradigm, characterized by the real-time arrival and departure of data, has been recognized as a major computing paradigm in big data. Relevant theories have flourished recently with the rapid development of stream computing platforms such as Storm, Kafka, and Spark. Rough set theory is an effective tool for extracting knowledge from imperfect information; however, the synchronous immigration and emigration of objects has not been investigated. In this paper, a stream computing learning method is proposed on the basis of existing incremental learning studies. This method aims at solving the challenges resulting from the simultaneous addition and deletion of objects. Based on this novel learning method, a stream computing algorithm called single-object stream-computing-based three-way decisions (SS3WD) is developed. In this algorithm, the probabilistic rough set model is applied to approximate the dynamic variation of concepts. Three-way regions can be determined without multiple scans of the existing information granules. Extensive experiments not only demonstrate the better efficiency and robustness of SS3WD in the presence of streaming variation of objects, but also illustrate that the stream computing learning method is an effective computing strategy for big data.
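The simultaneous immigration/emigration update can be pictured with per-granule counts, as in the assumed-names sketch below: arrivals and departures adjust the counts in one pass, and the three-way region is then re-read from the conditional probability against (alpha, beta), without rescanning the granules.

counts = {"g1": [8, 10], "g2": [2, 10]}   # granule -> [in-concept, total]

def update(granule, added, added_pos, removed, removed_pos):
    # One synchronous batch of immigrating and emigrating objects.
    c = counts.setdefault(granule, [0, 0])
    c[0] += added_pos - removed_pos
    c[1] += added - removed

def region(granule, alpha=0.7, beta=0.3):
    pos, total = counts[granule]
    p = pos / total                        # Pr(X | [x])
    return "POS" if p >= alpha else "NEG" if p <= beta else "BND"

print(region("g1"))                        # POS
update("g1", added=2, added_pos=0, removed=4, removed_pos=4)
print(counts["g1"], region("g1"))          # [4, 8] -> BND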
- Published
- 2017
- Full Text
- View/download PDF
39. Granular multi-label feature selection based on mutual information
- Author
-
Witold Pedrycz, Duoqian Miao, and Feng Li
- Subjects
business.industry ,Computer science ,Dimensionality reduction ,Feature selection ,Pattern recognition ,02 engineering and technology ,Mutual information ,computer.software_genre ,Redundancy (information theory) ,Artificial Intelligence ,Feature (computer vision) ,020204 information systems ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Minimum redundancy feature selection ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Data mining ,business ,computer ,Software ,Curse of dimensionality - Abstract
We granulate the label space into information granules to exploit label dependency. We present a multi-label maximal correlation minimal redundancy criterion. The proposed method can select compact and specific feature subsets. The proposed method can significantly improve algorithm performance. Like traditional machine learning, multi-label learning faces the curse of dimensionality. Some feature selection algorithms have been proposed for multi-label learning, which either convert the multi-label feature selection problem into numerous single-label feature selection problems, or directly select features from the multi-label data set. However, the former omit the label dependency or produce too many new labels, making learning significantly more difficult; the latter, taking the global label dependency into consideration, usually select redundant or irrelevant features, because not all labels actually depend on each other, which may confuse the algorithm and degrade its classification performance. To select a more relevant and compact feature subset as well as to exploit the label dependency, a granular feature selection method for multi-label learning is proposed with a maximal correlation minimal redundancy criterion based on mutual information. The maximal correlation minimal redundancy criterion ensures that the selected feature subset contains the most class-discriminative information while exhibiting the least intra-redundancy. Granulation helps exploit the label dependency. We study the relation between label granularity and performance on four data sets, and compare the proposed method with three other multi-label feature selection methods. The experimental results demonstrate that the proposed method can select compact and specific feature subsets, improve the classification performance, and perform better than the three other methods on widely used multi-label learning evaluation criteria.
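The maximal correlation minimal redundancy criterion itself admits a compact sketch (the label-granulation stage is abstracted into a single label column, and the helper names are mine): each candidate feature is scored by its mutual information with the label minus its average mutual information with the already selected features.

from math import log2
from collections import Counter

def mutual_info(xs, ys):
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, label, k):
    chosen, rest = [], list(features)
    while rest and len(chosen) < k:
        def score(f):
            rel = mutual_info(features[f], label)           # correlation
            red = (sum(mutual_info(features[f], features[g])
                       for g in chosen) / len(chosen)) if chosen else 0.0
            return rel - red                                # minus redundancy
        best = max(rest, key=score)
        chosen.append(best)
        rest.remove(best)
    return chosen

features = {"f1": [0, 0, 1, 1], "f2": [0, 1, 0, 1], "f3": [0, 0, 1, 1]}
label = [0, 0, 1, 1]
print(mrmr(features, label, k=2))  # ['f1', 'f2']: f3 is fully redundant with f1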
- Published
- 2017
- Full Text
- View/download PDF
40. Tri-partition neighborhood covering reduction for robust classification
- Author
-
Duoqian Miao, Yufei Chen, Jin Qian, and Xiaodong Yue
- Subjects
Mathematical optimization ,Applied Mathematics ,Decision theory ,05 social sciences ,050301 education ,Data space ,02 engineering and technology ,Theoretical Computer Science ,Artificial Intelligence ,Homogeneous ,Outlier ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0503 education ,Classifier (UML) ,Noisy data ,Software ,Mathematics - Abstract
Neighborhood Covering Reduction extracts rules for classification by formulating the covering of the data space with neighborhoods. The covering of neighborhoods is constructed based on a distance measure and strictly constrained to be homogeneous. However, this strategy over-focuses on individual samples and thus makes the neighborhood covering model sensitive to noise and outliers. To tackle this problem, we construct a flexible Tri-partition Neighborhood for robust classification. This novel neighborhood originates from Three-way Decision theory and is partitioned into the regions of certain neighborhood, neighborhood boundary, and non-neighborhood. The neighborhood boundary consists of uncertain neighbors and helps tolerate noise. Besides the neighborhood construction, we also propose complete and partial strategies to reduce redundant neighborhoods and thereby optimize the neighborhood covering for classification. The reduction process preserves the lower and upper approximations of the neighborhood covering and thus provides a flexible way to handle uncertain samples and noise. Experiments verify that classification based on tri-partition neighborhood covering is robust and achieves precise, stable results on noisy data. Propose Tri-partition Neighborhood Covering Reduction for robust classification. Extend neighborhoods to form covering approximations of the data space. Propose three reduction strategies for tri-partition neighborhood covering. Investigate the properties of tri-partition neighborhood covering reduction. Design a classifier based on neighborhood covering models.
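A minimal sketch of the tri-partition idea under an assumed two-radius construction (the radii r_in and r_out are illustrative): samples around x fall into the certain neighborhood, the noise-tolerant neighborhood boundary, or the non-neighborhood.

import numpy as np

def tri_partition(x, samples, r_in, r_out):
    # Distances split samples into certain / boundary / outside regions.
    d = np.linalg.norm(samples - x, axis=1)
    certain = samples[d <= r_in]
    boundary = samples[(d > r_in) & (d <= r_out)]
    outside = samples[d > r_out]
    return certain, boundary, outside

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 2))
certain, boundary, outside = tri_partition(np.zeros(2), pts, 0.5, 1.0)
print(len(certain), len(boundary), len(outside))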
- Published
- 2017
- Full Text
- View/download PDF
41. Double-quantitative distance measurement and classification learning based on the tri-level granular structure of neighborhood system
- Author
-
Hongyuan Gou, Xianyong Zhang, Duoqian Miao, and Zhiying Lv
- Subjects
Information Systems and Management ,business.industry ,Computer science ,Granular computing ,Structure (category theory) ,Swarm behaviour ,Pattern recognition ,02 engineering and technology ,Extension (predicate logic) ,Management Information Systems ,Distance measurement ,Artificial Intelligence ,020204 information systems ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Rough set ,business ,Software ,Valuation (algebra) - Abstract
In terms of neighborhood rough sets, the tri-level granular structure of a neighborhood system (carrying the neighborhood granule, swarm, and library) establishes a granular computing mechanism for knowledge-based learning. However, its hierarchical exploration has been inadequate, and its measurement can be extended for robust applications. Regarding this tri-level granular structure, the double-quantification technique is newly introduced to make a thorough investigation, especially of double-quantitative distance measurement and classification learning. Firstly, size valuation and logical operations are hierarchically supplemented at the higher levels. Secondly, the relative and absolute distances of bottom neighborhood granules are linearly combined into a double-quantitative distance, and all three types of distances are promoted to both the middle swarm level and the top library level. Finally, the double-quantitative distance, which powerfully characterizes the difference between neighborhood granules, is utilized to generate a double-quantitative classifier, KNGD; relevant experiments show that this new classifier outperforms or balances two existing classifiers, i.e., the relative classifier KNGR and the absolute classifier KNGA. By theory, example, and experiment, this study hierarchically perfects the tri-level granular structure of the neighborhood system, and the corresponding double-quantification integration and extension offer robust knowledge measurement and effective classification learning.
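At the bottom granule level, the linear double-quantitative combination can be pictured as follows (the scaling and weighting choices are assumptions of this sketch): an absolute distance counts differing objects, a relative distance normalizes by the union, and the two are blended linearly.

def granule_distances(A, B, lam=0.5, n_universe=10):
    # A, B are neighborhood granules given as object sets.
    sym = A ^ B                                       # symmetric difference
    d_abs = len(sym) / n_universe                     # absolute, scaled to [0, 1]
    d_rel = len(sym) / len(A | B) if A | B else 0.0   # relative (Jaccard-style)
    return d_abs, d_rel, lam * d_rel + (1 - lam) * d_abs

A, B = {1, 2, 3, 4}, {3, 4, 5}
print(granule_distances(A, B))  # (0.3, 0.6, 0.45)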
- Published
- 2021
- Full Text
- View/download PDF
42. Fusion of multiple channel features for person re-identification
- Author
-
Zhihua Wei, Duoqian Miao, Renxian Zhang, Xuekuan Wang, Cairong Zhao, and Tingfei Ye
- Subjects
0209 industrial biotechnology ,Similarity (geometry) ,Channel (digital image) ,business.industry ,Cognitive Neuroscience ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Representation (systemics) ,Pattern recognition ,02 engineering and technology ,HSL and HSV ,Visual appearance ,Computer Science Applications ,Image (mathematics) ,020901 industrial engineering & automation ,Artificial Intelligence ,Feature (computer vision) ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
Person re-identification plays an important role in the automatic search for a person's presence in a surveillance video, and feature representation is a critical and fundamental problem for person re-identification. Moreover, a reliable feature representation should adapt effectively to changes in illumination, pose, viewpoint, etc. In this paper, we propose an effective feature representation called fusion of multiple channel features (FMCF), which captures different low-level features from multiple channels of the HSV color space, considering the characteristics of different color channels and fusing color, texture, and the correlation of spatial structure. Furthermore, it takes advantage of an overlapping strategy to eliminate the contrast of local cells in an image. In addition, we apply a simple weighted distance metric to measure the similarity of different images, rather than metric learning, which relies on a specific feature and requires more computing resources. Finally, we apply the proposed FMCF method on the i-LIDS Multiple-Camera Tracking Scenario (MCTS) and CUHK-01 person re-identification datasets, and the experimental results demonstrate that it is more robust to variations in visual appearance.
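The channel-wise, overlapping-cell flavor of the representation can be sketched as follows (stripe count, bin count, and the 50% overlap are illustrative choices, not the paper's exact configuration): histograms are taken per HSV channel over overlapping horizontal stripes and concatenated into one descriptor.

import numpy as np

def stripe_histograms(hsv, n_stripes=6, bins=16):
    h, w, _ = hsv.shape
    step = h // n_stripes
    feats = []
    for top in range(0, h - step, step // 2):   # 50% stripe overlap
        stripe = hsv[top:top + step]
        for ch in range(3):                     # H, S, V channels
            hist, _ = np.histogram(stripe[:, :, ch], bins=bins,
                                   range=(0.0, 1.0), density=True)
            feats.append(hist)
    return np.concatenate(feats)

img = np.random.rand(128, 48, 3)   # stand-in for an HSV pedestrian image
print(stripe_histograms(img).shape)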
- Published
- 2016
- Full Text
- View/download PDF
43. A study on information granularity in formal concept analysis based on concept-bases
- Author
-
Duoqian Miao and Xiangping Kang
- Subjects
Information Systems and Management ,Computer science ,business.industry ,Granular computing ,02 engineering and technology ,Knowledge acquisition ,Management Information Systems ,Artificial Intelligence ,020204 information systems ,Lattice (order) ,0202 electrical engineering, electronic engineering, information engineering ,Formal concept analysis ,020201 artificial intelligence & image processing ,Artificial intelligence ,Granularity ,business ,Time complexity ,Software - Abstract
As a mature theory, formal concept analysis (FCA) possesses remarkable mathematical properties, but it may generate massive numbers of concepts and a complicated lattice structure when dealing with large-scale data. Given that granular computing (GrC) can significantly lower this difficulty by selecting larger, appropriate granulations when processing large-scale data or solving complicated problems, this paper introduces GrC into FCA; doing so not only helps to expand the extent and intent of the classical concept, but also effectively reduces, to some degree, the time and space complexity of FCA in knowledge acquisition. In the modeling, the concept-base, as a kind of low-level knowledge, plays an important role in the whole process of information granulation. Based on concept-bases, attribute granules, object granules, and relation granules in formal contexts are studied. Meanwhile, supremum and infimum operations are introduced into the process of information granulation, whose biggest distinction from traditional models is the integration of the structural information of the concept lattice. In addition, the paper also probes into reduction, core, and implication rules in granularity formal contexts. Theory and examples verify the reasonability and effectiveness of the conclusions drawn in the paper. In short, the paper can not only be viewed as an effective means of expanding FCA, but is also an attempt at a fused study of the two theories.
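For readers new to FCA, the derivation operators from which concepts and concept-bases are built look like this in miniature (toy context; the names are mine):

I = {  # formal context: object -> set of attributes it has
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b"},
}

def common_attrs(objs):
    # Attributes shared by all given objects (the extent's derivation).
    return set.intersection(*(I[o] for o in objs)) if objs else set()

def objects_with(attrs):
    # Objects possessing all given attributes (the intent's derivation).
    return {o for o, s in I.items() if attrs <= s}

A = {"o1", "o3"}
B = common_attrs(A)         # {'a', 'b'}
print(B, objects_with(B))   # ({o1, o3}, {a, b}) is a formal concept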
- Published
- 2016
- Full Text
- View/download PDF
44. Knowledge reduction of dynamic covering decision information systems when varying covering cardinalities
- Author
-
Tian Yang, Duoqian Miao, Mingjie Cai, and Guangming Lang
- Subjects
Mathematical optimization ,Information Systems and Management ,Approximations of π ,Computation ,05 social sciences ,Set approximation ,050301 education ,02 engineering and technology ,Object (computer science) ,Computer Science Applications ,Theoretical Computer Science ,Reduction (complexity) ,Artificial Intelligence ,Control and Systems Engineering ,Upper set ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Rough set ,0503 education ,Software ,Mathematics - Abstract
In covering-based rough set theory, non-incremental approaches are time-consuming for performing knowledge reduction of dynamic covering decision information systems when the cardinalities of coverings change as a result of object immigration and emigration. Because computing approximations of sets is an important step for knowledge reduction of dynamic covering decision information systems, efficient approaches to calculating the second and sixth lower and upper approximations of sets using the type-1 and type-2 characteristic matrices, respectively, are essential. In this paper, we provide incremental approaches to computing the type-1 and type-2 characteristic matrices of dynamic coverings whose cardinalities vary with the immigration and emigration of objects. We also design incremental algorithms to compute the second and sixth lower and upper set approximations. Experimental results demonstrate that the incremental approaches effectively improve the efficiency of set approximation computation. Finally, we employ several examples to illustrate the feasibility of the incremental approaches for knowledge reduction of dynamic covering decision information systems when increasing the cardinalities of coverings.
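The matrix route can be sketched in boolean linear algebra (a simplified stand-in for the paper's type-1/type-2 characteristic matrices, which are not reproduced exactly here): with the object-block incidence matrix, one product tells which blocks meet or sit inside X, and the second approximations are unions of those blocks; incremental updating then amounts to revising rows and columns of such matrices.

import numpy as np

U = [1, 2, 3, 4]
cover = [{1, 2}, {2, 3}, {3, 4}]
M = np.array([[x in K for K in cover] for x in U], dtype=int)  # n x m incidence

X = np.array([x in {1, 2, 3} for x in U], dtype=int)  # characteristic vector
hits = (M.T @ X) > 0                       # block j meets X?
inside = (M.T @ (1 - X)) == 0              # block j contained in X?
upper = (M @ hits.astype(int)) > 0         # union of all blocks meeting X
lower = (M @ inside.astype(int)) > 0       # union of all blocks inside X
print([u for u, b in zip(U, lower) if b],  # [1, 2, 3]
      [u for u, b in zip(U, upper) if b])  # [1, 2, 3, 4]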
- Published
- 2016
- Full Text
- View/download PDF
45. A variable precision rough set model based on the granularity of tolerance relation
- Author
-
Duoqian Miao and Xiangping Kang
- Subjects
Strongly connected component ,Information Systems and Management ,Computer science ,Algebraic structure ,05 social sciences ,Dominance-based rough set approach ,050301 education ,02 engineering and technology ,Management Information Systems ,Artificial Intelligence ,Robustness (computer science) ,Lattice (order) ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Rough set ,Granularity ,0503 education ,Algorithm ,Software - Abstract
As one of the core problems in rough set theory, classification analysis normally requires that "all" rather than "most" elements in one class be similar to each other. Nevertheless, in many actual applications the situation is just the opposite: users actually require only that "most" rather than "all" elements in a class be similar to each other. In this case, to further enhance the robustness and generalization ability of rough sets based on tolerance relations, this paper, with the concept lattice as its theoretical foundation, presents a variable precision rough set model based on the granularity of the tolerance relation, in which users can flexibly adjust parameters to meet actual needs. The so-called relation granularity means that the tolerance relation can be decomposed into several strongly connected sub-relations and several weakly connected sub-relations. In essence, classes defined by people usually correspond to strongly connected sub-relations, whereas classes defined in this paper always correspond to weakly connected sub-relations. In the paper, an algebraic structure is inferred from an information system, which organizes all hidden covers or partitions in the form of a lattice structure. In addition, solutions to problems such as reduction, core, and dependency are studied. In short, the paper offers a new idea for the expansion of classical rough set models from the perspective of the concept lattice.
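A compact sketch of the "most rather than all" relaxation over a tolerance relation (the threshold name beta and the concrete tolerance are illustrative): x enters the beta-lower approximation when at least a fraction beta of its tolerance class lies in X.

def vprs_lower(U, X, tol, beta=0.8):
    out = set()
    for x in U:
        T = {y for y in U if tol(x, y)}       # tolerance class of x
        if len(T & X) / len(T) >= beta:       # "most" of T is inside X
            out.add(x)
    return out

U = set(range(10))
tol = lambda a, b: abs(a - b) <= 1            # reflexive, symmetric tolerance
X = {0, 1, 2, 3, 4}
print(sorted(vprs_lower(U, X, tol)))          # [0, 1, 2, 3]: 4 drops out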
- Published
- 2016
- Full Text
- View/download PDF
46. Quantitative/qualitative region-change uncertainty/certainty in attribute reduction: Comparative region-change analyses based on granular computing
- Author
-
Duoqian Miao and Xianyong Zhang
- Subjects
Information Systems and Management ,media_common.quotation_subject ,05 social sciences ,Granular computing ,050301 education ,Monotonic function ,02 engineering and technology ,Certainty ,computer.software_genre ,Computer Science Applications ,Theoretical Computer Science ,Reduction (complexity) ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,0503 education ,computer ,Software ,Mathematics ,media_common - Abstract
Knowledge-coarsening is investigated to describe attribute deletion. Granule-merging and its region-distribution are used to obtain region-change functions. Region-change with certainty/monotonicity is analyzed in the qualitative Pawlak-Model. Region-change with uncertainty/drifting is analyzed in the quantitative DTRS-Model. Attribute reduction is a fundamental research theme in rough sets and granular computing (GrC). Its scientific construction originally depends on the region-change law. At present, only region-change non-monotonicity/monotonicity has been mined in the quantitative/qualitative model. The in-depth truth of region-change and its GrC mechanism are significant, especially for follow-up attribute reduction. This paper begins to probe the essence of region-change, mainly from a novel uncertainty/certainty viewpoint. Concretely, we make comparative region-change analyses based on GrC by resorting to the qualitative Pawlak-Model and the quantitative DTRS-Model (the decision-theoretic rough set model). (1) Knowledge-coarsening is investigated to describe attribute deletion. (2) Granule-merging and its region-distribution are studied to probe region-change functions. (3) Region-change is analyzed in the Pawlak-Model to mine qualitative region-change certainty and its relevant properties. (4) Region-change is analyzed in the DTRS-Model to mine quantitative region-change uncertainty and its relevant properties. (5) The comparative region-change analyses are summarized, and further experimental verification is provided. Knowledge-coarsening and granule-merging establish GrC mechanisms for extensive region-change analyses. Quantitative/qualitative region-change uncertainty/certainty and the relevant principles are discovered via the DTRS-Model/Pawlak-Model. By virtue of the GrC technology and the comparative strategy, this study reveals region-change uncertainty/certainty to deepen region-change non-monotonicity/monotonicity; furthermore, it underlies attribute reduction, especially with regard to quantitative models.
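The granule-merging mechanism admits a one-line worked check: a merged granule's conditional probability is the size-weighted mean of its parts, so a quantitative (DTRS) region can drift under knowledge-coarsening, whereas the qualitative Pawlak regions change with certainty. A small demo with assumed numbers:

def merged_prob(n1, p1, n2, p2):
    # Pr(X | g1 ∪ g2) is the size-weighted mean of Pr(X | g1), Pr(X | g2).
    return (n1 * p1 + n2 * p2) / (n1 + n2)

alpha, beta = 0.75, 0.25
p = merged_prob(6, 0.9, 4, 0.5)   # a POS-leaning granule merged with a BND one
print(round(p, 3),
      "POS" if p >= alpha else "NEG" if p <= beta else "BND")
# 0.74 -> BND: the merge drifts a quantitative POS granule into BND,
# while in the Pawlak model merging a pure granule with an impure one
# certainly removes it from the positive region.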
- Published
- 2016
- Full Text
- View/download PDF
47. Granular structure-based incremental updating for multi-label classification
- Author
-
Yuanjian Zhang, Duoqian Miao, Witold Pedrycz, Ying Yu, Jianfeng Xu, and Tianna Zhao
- Subjects
Multi-label classification ,Information Systems and Management ,Computer science ,02 engineering and technology ,Management Information Systems ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,020204 information systems ,Incremental learning ,0202 electrical engineering, electronic engineering, information engineering ,Structure based ,020201 artificial intelligence & image processing ,Algorithm ,Software - Abstract
Incremental learning is an efficient computational paradigm for acquiring approximate knowledge of data in dynamic environments. Most research focuses on knowledge updating for single-label classification, whereas incremental mechanisms for multi-label classification remain preliminary. This leads to considerable computational complexity in maintaining the desired performance. To address this challenge, we formulate a granular structure system (GSS). The proposed granular structure system provides, in a bottom-up way, a systematic view of label-specific classification. We demonstrate that the three-way selective ensemble (TSEN) model, a state-of-the-art solution for multi-label classification, is compatible with GSS in granulation. An incremental mechanism of GSS is introduced for both label-specific feature generation and optimization, and an incremental three-way selective ensemble algorithm for multiple-instance immigration (IMOTSEN) is presented. Experiments on six datasets show that the proposed algorithm can maintain considerable classification performance while significantly accelerating the knowledge (GSS) updating.
- Published
- 2020
- Full Text
- View/download PDF
48. Double-quantitative fusion of accuracy and importance: Systematic measure mining, benign integration construction, hierarchical attribute reduction
- Author
-
Duoqian Miao and Xianyong Zhang
- Subjects
Reduct ,Information Systems and Management ,Computer science ,Heuristic (computer science) ,Monotonic function ,02 engineering and technology ,computer.software_genre ,Machine learning ,Measure (mathematics) ,Management Information Systems ,Causality (physics) ,Reduction (complexity) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Hierarchy ,business.industry ,05 social sciences ,Granular computing ,050301 education ,020201 artificial intelligence & image processing ,Rough set ,Data mining ,Artificial intelligence ,Decision table ,business ,0503 education ,computer ,Software - Abstract
IP-Accuracy is mined by systematic double-quantitative fusion of causality measures. IP-Accuracy GrC integration is constructed to gain benign granulation monotonicity. IP-Accuracy attribute reduction is studied to establish a hierarchical reduct system. Uncertainty measure mining and applications are fundamental, and it is possible for double-quantitative fusion to acquire benign measures via heterogeneity and complementarity. This paper investigates the double-quantitative fusion of relative accuracy and absolute importance to provide systematic measure mining, benign integration construction, and hierarchical attribute reduction. (1) First, three-way probabilities and measures are analyzed. Thus, the accuracy and importance are systematically extracted, and both are further fused into importance-accuracy (IP-Accuracy), a synthetic causality measure. (2) By sum integration, IP-Accuracy gains a bottom-top granulation construction and granular hierarchical structure. IP-Accuracy holds benign granulation monotonicity at both the knowledge concept and classification levels. (3) IP-Accuracy attribute reduction is explored based on decision tables. A hierarchical reduct system is thereby established, including qualitative/quantitative reducts, tolerant/approximate reducts, reduct hierarchies, and heuristic algorithms. Herein, the innovative tolerant and approximate reducts quantitatively approach/expand/weaken the ideal qualitative reduct. (4) Finally, a decision table example is provided for illustration. This paper performs double-quantitative fusion of causality measures to systematically mine IP-Accuracy, and this measure benignly constructs a granular computing platform and hierarchical reduct system. By resorting to a monotonous uncertainty measure, this study provides an integration-evolution strategy of granular construction for attribute reduction.
- Published
- 2016
- Full Text
- View/download PDF
49. Constructive methods of rough approximation operators and multigranulation rough sets
- Author
-
Duoqian Miao, Xiaohong Zhang, Meilong Le, and Caihui Liu
- Subjects
0209 industrial biotechnology ,Information Systems and Management ,Important conclusion ,Computer science ,02 engineering and technology ,Constructive ,Management Information Systems ,Algebra ,020901 industrial engineering & automation ,Artificial Intelligence ,Approximation operators ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Software - Abstract
Four kinds of constructive methods of rough approximation operators from existing rough sets are established, and the important conclusion is obtained: some rough sets are essentially direct applications of these constructive methods. Moreover, the new notions of non-dual multigranulation rough sets and hybrid multigranulation rough sets are introduced, and some properties are investigated.
- Published
- 2016
- Full Text
- View/download PDF
50. Multi-view attribute reduction model for traffic bottleneck analysis
- Author
-
Yumin Chen, Duoqian Miao, Longbing Cao, Xiaodong Yue, and B. Xu
- Subjects
Information Systems and Management ,business.industry ,Process (engineering) ,Computer science ,computer.software_genre ,Machine learning ,Field (computer science) ,Management Information Systems ,Task (project management) ,Workflow ,Traffic congestion ,Artificial Intelligence ,Data pre-processing ,Artificial intelligence ,Data mining ,Representation (mathematics) ,business ,Traffic bottleneck ,computer ,Software - Abstract
In the field of traffic bottleneck analysis, it is expected that traffic congestion patterns can be discovered from reports of road conditions. However, data patterns mined by existing KDD algorithms may not coincide with real application requirements. Unlike academic researchers, traffic management officers do not pursue the most frequent patterns but always hold multiple views of the mining task to facilitate traffic planning. They expect to study the correlation between traffic congestion and various kinds of road properties, especially the road properties that are easy to improve. In this multi-view analysis, each view denotes a kind of user preference over road properties. It is therefore necessary to integrate user-defined attribute preferences into the pattern mining process. To tackle this problem, we propose a multi-view attribute reduction model to discover the patterns of user interest. In this model, user views are expressed as attribute preferences and formally represented by attribute orders. Based on this, we implement a workflow for multi-view traffic bottleneck analysis, which consists of data preprocessing, preference representation, and congestion pattern mining. We validate our approach on reports of road conditions from Shanghai. Experimental results show that the resulting multi-view mining outcomes are effective for analyzing congestion causes and traffic management.
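One way to picture preference-aware reduction (a hedged greedy sketch; the paper's attribute-order formalism is richer than this): attributes are dropped in reverse order of user preference whenever the decision dependency is preserved, so different views yield different reducts.

def gamma(table, attrs, decision="d"):
    # Fraction of objects whose attrs-equivalence class is decision-pure.
    groups = {}
    for row in table:
        groups.setdefault(tuple(row[a] for a in attrs), set()).add(row[decision])
    return sum(1 for row in table
               if len(groups[tuple(row[a] for a in attrs)]) == 1) / len(table)

def view_reduct(table, preference, decision="d"):
    attrs = list(preference)
    full = gamma(table, attrs, decision)
    for a in reversed(preference):          # least-preferred dropped first
        trial = [b for b in attrs if b != a]
        if trial and gamma(table, trial, decision) >= full:
            attrs = trial
    return attrs

table = [{"a": 0, "b": 0, "c": 0, "d": 0},
         {"a": 0, "b": 1, "c": 1, "d": 1},
         {"a": 1, "b": 0, "c": 1, "d": 1},
         {"a": 1, "b": 1, "c": 0, "d": 0}]
print(view_reduct(table, ["a", "b", "c"]))  # ['a', 'b'] under this view
print(view_reduct(table, ["c", "a", "b"]))  # ['c'] under another view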
- Published
- 2015
- Full Text
- View/download PDF