27 results for "Jiye Liang"
Search Results
2. Multiple metric learning via local metric fusion
- Author
- Xinyao Guo, Lin Li, Chuangyin Dang, Jiye Liang, and Wei Wei
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2023
3. Centroids-guided deep multi-view K-means clustering
- Author
- Jing Liu, Fuyuan Cao, and Jiye Liang
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2022
4. Weak multi-label learning with missing labels via instance granular discrimination
- Author
- Anhui Tan, Xiaowan Ji, Jiye Liang, Yuzhi Tao, Wei-Zhi Wu, and Witold Pedrycz
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2022
5. Group-wise interactive region learning for zero-shot recognition
- Author
- Ting Guo, Jiye Liang, and Guo-Sen Xie
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2023
6. Semi-supervised learning with mixed-order graph convolutional networks
- Author
- Junbiao Cui, Jiye Liang, Jie Wang, and Jianqing Liang
- Subjects
- Information Systems and Management, Exploit, Computer science, Semi-supervised learning, Machine learning, Theoretical Computer Science, Artificial Intelligence, Simple (abstract algebra), Node (computer science), Feature (machine learning), Adjacency matrix, Computer Science Applications, Control and Systems Engineering, Benchmark (computing), Graph (abstract data type), Software
- Abstract
Recently, graph convolutional networks (GCN) have made substantial progress in semi-supervised learning (SSL). However, established GCN-based methods have two major limitations. First, GCN-based methods are restricted by the oversmoothing issue that limits their ability to extract knowledge from distant but informative nodes. Second, most available GCN-based methods exploit only the feature information of unlabeled nodes, and the pseudo-labels of unlabeled nodes, which contain important information about the data distribution, are not fully utilized. To address these issues, we propose a novel end-to-end ensemble framework, which is named mixed-order graph convolutional networks (MOGCN). MOGCN consists of two modules. (1) It constructs multiple simple GCN learners with multi-order adjacency matrices, which can directly capture the high-order connectivity among the nodes to alleviate the problem of oversmoothing. (2) To efficiently combine the results from multiple GCN learners, MOGCN employs a novel ensemble module, in which the pseudo-labels of unlabeled nodes from various GCN learners are used to augment the diversity among the learners. We conduct experiments on three public benchmark datasets to evaluate the performance of MOGCN on semi-supervised node classification tasks. The experimental results demonstrate that MOGCN consistently outperforms state-of-the-art methods.
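A minimal sketch of the multi-order propagation idea described in the abstract above, assuming the standard symmetric renormalization used by plain GCNs; the function names and the power-based construction are illustrative, not the paper's exact formulation.

```python
import numpy as np

def normalized_adjacency(A: np.ndarray) -> np.ndarray:
    """Symmetrically normalize the adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def multi_order_adjacencies(A: np.ndarray, max_order: int = 3) -> list:
    """Return [S, S^2, ..., S^max_order]; the k-th power lets one simple GCN learner
    aggregate information from nodes up to k hops away in a single propagation step."""
    S = normalized_adjacency(A)
    powers, current = [], np.eye(A.shape[0])
    for _ in range(max_order):
        current = current @ S
        powers.append(current)
    return powers
```

Each matrix in the returned list would feed one of the "simple GCN learners" whose pseudo-label outputs the ensemble module then combines.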
- Published
- 2021
7. k-Mnv-Rep: A k-type clustering algorithm for matrix-object data
- Author
- Liqin Yu, Jiye Liang, Xiao-Zhi Gao, Fuyuan Cao, and Jing Liu
- Subjects
- Information Systems and Management, Heuristic (computer science), Computer science, Feature vector, Theoretical Computer Science, Set (abstract data type), Artificial Intelligence, Cluster (physics), Cluster analysis, Measure (data warehouse), Pattern recognition, Object (computer science), Computer Science Applications, Data set, Task (computing), Control and Systems Engineering, Software
- Abstract
In matrix-object data, an object (or a sample) is described by more than one feature vector (record), and all of these feature vectors jointly determine the observed classification of the object. A central task for matrix-object data is to cluster the objects into a set of groups by analyzing and utilizing the information in their feature vectors. Matrix-object data are widespread in many real applications. Previous studies typically address data sets in which an object is represented by a single feature vector, an assumption that is violated in many real-world tasks. In this paper, we propose a k-multi-numeric-values-representatives (abbr. k-Mnv-Rep) algorithm to cluster numeric matrix-object data. In this algorithm, a new dissimilarity measure between two numeric matrix-objects is defined and a new heuristic method for updating cluster centers is given. Furthermore, we also propose a k-multi-values-representatives (abbr. k-Mv-Rep) algorithm to cluster hybrid matrix-object data. The two proposed algorithms overcome the limitations of previous studies and can be applied to the matrix-object data sets that arise widely in real-world tasks. The benefits and effectiveness of the two algorithms are demonstrated by experiments on real and synthetic data sets.
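The paper's dissimilarity measure and center-update heuristic are not reproduced in the abstract, so the following is only a hedged illustration of what clustering matrix-objects means, using the mean of pairwise Euclidean distances as a placeholder dissimilarity.

```python
import numpy as np

def matrix_object_dissimilarity(X: np.ndarray, Y: np.ndarray) -> float:
    """Placeholder dissimilarity between two matrix-objects of shapes (n_x, d) and (n_y, d):
    the mean of all pairwise Euclidean distances between their records."""
    diffs = X[:, None, :] - Y[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).mean())

def assign_objects(objects, centers):
    """Assign each matrix-object to its nearest center (a center is itself a matrix-object)."""
    return [int(np.argmin([matrix_object_dissimilarity(obj, c) for c in centers]))
            for obj in objects]
```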
- Published
- 2021
8. A fusion collaborative filtering method for sparse data in recommender systems
- Author
- Jiye Liang, Chenjiao Feng, Zhiqiang Wang, and Peng Song
- Subjects
- Information Systems and Management, Computer science, Similarity measure, Recommender system, Computer Science Applications, Theoretical Computer Science, Matrix decomposition, Factorization, Artificial Intelligence, Control and Systems Engineering, Robustness (computer science), Collaborative filtering, Data mining, Software, Sparse matrix
- Abstract
Collaborative filtering is a fundamental technique in recommender systems, and memory-based and matrix-factorization-based collaborative filtering are the two widely used families of methods. However, the performance of both is limited on sparse data, particularly on extremely sparse data. To improve their effectiveness in sparse scenarios, this paper proposes a multi-factor similarity measure that captures the linear and nonlinear correlations between users resulting from extreme behavior. Subsequently, a fusion method that simultaneously considers the multi-factor similarity and the global rating information within a probabilistic matrix factorization framework is proposed. In our framework, users' local relations are integrated into the global rating optimization process, so that prediction accuracy and robustness are improved on sparse data, particularly on extremely sparse data. To verify the performance of the proposed methods, we conduct experiments on four public datasets. The experimental results show that the fusion method is superior to the typical matrix factorization models used in collaborative filtering and significantly improves both prediction results and robustness on sparse data.
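A minimal sketch of the fusion idea as described: a probabilistic-matrix-factorization update with an extra regularizer that pulls similar users' latent factors together. The user-user similarity matrix `S` stands in for the paper's multi-factor similarity, and the objective and update step are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def fused_pmf_step(R, mask, U, V, S, lr=0.005, lam=0.02, beta=0.01):
    """One gradient-descent step on
       sum_{(i,j) observed} (R_ij - U_i.V_j)^2 + lam*(||U||^2 + ||V||^2)
       + beta * sum_{i,k} S_ik * ||U_i - U_k||^2   (S assumed symmetric).
    R: ratings, mask: 1 where a rating is observed, S: user-user similarity."""
    E = mask * (R - U @ V.T)                 # errors on observed entries only
    grad_U = -2 * E @ V + 2 * lam * U
    grad_V = -2 * E.T @ U + 2 * lam * V
    D = np.diag(S.sum(axis=1))
    grad_U += 4 * beta * (D - S) @ U         # Laplacian-style similarity regularizer
    return U - lr * grad_U, V - lr * grad_V
```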
- Published
- 2020
9. Interval-valued hesitant fuzzy multi-granularity three-way decisions in consensus processes with applications to multi-attribute group decision making
- Author
- Chao Zhang, Jiye Liang, and Deyu Li
- Subjects
- Information Systems and Management, Operations research, Computer science, Context (language use), Space (commercial competition), Fuzzy logic, Computer Science Applications, Theoretical Computer Science, Group decision-making, Artificial Intelligence, Control and Systems Engineering, Selection (linguistics), Collective wisdom, Rough set, Software
- Abstract
Multi-attribute group decision making (MAGDM) addresses complicated multi-variable decision situations by integrating collective wisdom. By fusing granular computing with three-way decisions (3WD) to study scheme synthesis and solution-space analysis, multi-granularity three-way decisions (MG-3WD) provide multi-dimensional problem-solving methods for MAGDM. Using MG-3WD frameworks, this paper studies viable strategies for processing the consensus and conflicting opinions provided by different decision makers in interval-valued hesitant fuzzy (IVHF) MAGDM problems. More specifically, after reviewing the relevant literature, four kinds of IVHF multigranulation decision-theoretic rough sets (MG-DTRSs) over two universes are first proposed according to different risk appetites of experts. Then, we explore some fundamental propositions of the newly proposed models. Afterwards, solutions to MAGDM problems in the context of mergers and acquisitions (M&A) target selection are constructed using the presented IVHF MG-DTRSs over two universes. Finally, an M&A target selection case study, together with a sensitivity analysis and a comparative analysis, illustrates the established decision-making approaches.
- Published
- 2020
10. Multi-granularity three-way decisions with adjustable hesitant fuzzy linguistic multigranulation decision-theoretic rough sets over two universes
- Author
- Deyu Li, Chao Zhang, and Jiye Liang
- Subjects
- Information Systems and Management, Basis (linear algebra), Computer science, Context (language use), Rationality, Term (logic), Computer Science Applications, Theoretical Computer Science, Group decision-making, Decision-theoretic rough sets, Artificial Intelligence, Control and Systems Engineering, Granularity, Rough set, Software
- Abstract
The notion of hesitant fuzzy linguistic term sets (HFLTSs), which enables experts to use a few possible linguistic terms to evaluate common qualitative information, plays a significant role in situations where experts are hesitant to offer linguistic expressions. To address the challenges of information analysis and information fusion in hesitant fuzzy linguistic (HFL) group decision making, and in accordance with the multi-granularity three-way decisions paradigm, the primary purpose of this study is to extend the notion of multigranulation decision-theoretic rough sets (MG-DTRSs) to the HFL setting within the two-universe framework. Having revisited the relevant literature, we first propose a hybrid model, named adjustable HFL MG-DTRSs over two universes, by introducing an adjustable parameter for the expected risk appetite of experts; both the optimistic and pessimistic versions of HFL MG-DTRSs over two universes are special cases of the adjustable version. Second, some fundamental properties of the proposed model are discussed. Then, on the basis of the presented hybrid model, a group decision making approach within the HFL context is constructed. Finally, a practical example, a comparative analysis, and a validity test concerning person-job fit problems are explored to show the rationality and practicability of the constructed decision making approach.
- Published
- 2020
11. Corrigendum to 'Weak multi-label learning with missing labels via instance granular discrimination' [Inform. Sci. 594 (2022) 200–216]
- Author
- Anhui Tan, Xiaowan Ji, Jiye Liang, Yuzhi Tao, Wei-Zhi Wu, and Witold Pedrycz
- Subjects
- Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science
- Published
- 2023
12. Protein complex detection algorithm based on multiple topological characteristics in PPI networks
- Author
- Xingwang Zhao, Wenping Zheng, Jiye Liang, Junfang Mu, and Jie Wang
- Subjects
- Information Systems and Management, Degree (graph theory), Heuristic (computer science), Computer science, Node (networking), Topology, Computer Science Applications, Theoretical Computer Science, Task (computing), Pattern recognition, Artificial Intelligence, Control and Systems Engineering, Metric (mathematics), Cluster (physics), Cluster analysis, Algorithm, Software, Clustering coefficient
- Abstract
Detecting protein complexes from available protein–protein interaction (PPI) networks is an important task, and several related algorithms have been proposed. These algorithms usually consider a single topological metric and ignore the rich topological characteristics and inherent organization information of protein complexes. However, the effective use of such information is crucial to protein complex detection. To overcome this deficiency, this study presents a heuristic clustering algorithm to identify protein complexes by fully exploiting the topological information of PPI networks. By considering the clustering coefficient and the node degree, a new nodal metric is proposed to quantify the importance of each node within a local subgraph. An iterative paradigm is used to incrementally identify seed proteins and expand each seed to a cluster. First, among the unclustered nodes, the node with the highest nodal metric is selected as a new seed. Then, the seed is expanded to a cluster by adding candidate nodes recursively from its neighbors according to both the density of the cluster and the connection between a candidate node and the cluster. The experimental results demonstrate that the proposed algorithm outperforms other competing algorithms in terms of F-measure and accuracy.
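A minimal sketch of the seed-and-expand pattern described in the abstract, written with networkx; the nodal score (clustering coefficient times degree) and the density threshold are illustrative assumptions, not the paper's exact metric or expansion rule.

```python
import networkx as nx

def detect_complexes(G: nx.Graph, density_threshold: float = 0.5):
    """Greedy seed expansion: pick the unclustered node with the highest score
    (clustering coefficient * degree), then absorb neighbors while the induced
    subgraph stays dense enough."""
    score = {v: nx.clustering(G, v) * G.degree(v) for v in G}
    unclustered, complexes = set(G.nodes), []
    while unclustered:
        seed = max(unclustered, key=score.get)
        cluster = {seed}
        frontier = set(G.neighbors(seed)) & unclustered
        for cand in sorted(frontier, key=score.get, reverse=True):
            if nx.density(G.subgraph(cluster | {cand})) >= density_threshold:
                cluster.add(cand)
        complexes.append(cluster)
        unclustered -= cluster
    return complexes
```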
- Published
- 2019
13. Fast graph clustering with a new description model for community detection
- Author
- Xueqi Cheng, Yike Guo, Liang Bai, and Jiye Liang
- Subjects
- Information Systems and Management, Theoretical Computer Science, Iterative method, Community structure, Network data, Partition (database), Computer Science Applications, Description model, Important research, Artificial Intelligence, Control and Systems Engineering, Iterative search, Data mining, Software, Mathematics, Clustering coefficient
- Abstract
Efficiently describing and discovering communities in a network is an important research problem in graph clustering. In this paper, we present a community description model that evaluates the local importance of a node within a community and the concentration of its importance across all communities, in order to reflect how well the node represents the community. Based on the description model, we propose a new evaluation criterion and an iterative search algorithm for community detection (ISCD). The new algorithm can quickly discover communities in a large-scale network because its average time complexity is linear in the number of edges. Furthermore, we provide an initialization method for the input parameters, including the number of communities and the initial partition, which can enhance the local-search quality of the iterative algorithm. The proposed algorithm with this initialization is called ISCD+. Finally, we compare the effectiveness and efficiency of the ISCD+ algorithm with six representative algorithms on several real network data sets. The experimental results illustrate that the proposed algorithm is well suited to large-scale networks.
- Published
- 2017
14. Grouping granular structures in human granulation intelligence
- Author
- Jiye Liang, Chuangyin Dang, Jieting Wang, Yuhua Qian, Witold Pedrycz, and Honghong Cheng
- Subjects
- Structure (mathematical logic), Information Systems and Management, Knowledge representation and reasoning, Human intelligence, Granular computing, Partition (database), Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Feature (computer vision), Scalability, Convergence (routing), Software, Mathematics
- Abstract
Human granulation intelligence means that people can observe and analyze the same problem from various granulation points of view, which is generally acknowledged as an essential feature of human intelligence. Each granulation view can generate a granular structure by dividing a cognitive target into meaningful information granules, which means that a large number of granular structures can be generated from the same cognitive target. However, people can group these granular structures and select some representative ones for problem solving. This leads to an interesting research topic: how to efficiently and effectively group a family of granular structures. To address this issue, we first introduce a granular structure distance to measure the difference between two granular structures within a unified knowledge representation. Then, we propose a framework for grouping granular structures, called the GGS algorithm, which partitions them efficiently. Moreover, two indices, denoted DIS and APD, are designed to evaluate the quality of a grouping of granular structures. Finally, experiments on nine data sets show that the GGS algorithm is a sound solution in terms of convergence, effectiveness and scalability. In this way, we propose and experiment with a general framework for discovering the structure inherent in a family of granular structures, which can subsequently be used to simulate the human ability to select granular structures.
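A minimal sketch of one plausible granular structure distance of the kind the abstract refers to, for partitions of the same universe; the paper's own definition and the GGS grouping procedure may differ.

```python
def granule_of(structure, x):
    """Return the block of the partition `structure` (a list of sets) that contains x."""
    return next(block for block in structure if x in block)

def granular_structure_distance(K1, K2, universe) -> float:
    """Average normalized symmetric difference between the granules containing each object."""
    n = len(universe)
    total = sum(len(granule_of(K1, x) ^ granule_of(K2, x)) for x in universe)
    return total / (n * n)
```

Grouping a family of granular structures would then amount to clustering them under such a distance, e.g. with a k-medoids-style loop.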
- Published
- 2017
15. Multigranulation information fusion: A Dempster-Shafer evidence theory-based clustering ensemble method
- Author
- Yuhua Qian, Feijiang Li, Jiye Liang, and Jieting Wang
- Subjects
- Clustering high-dimensional data, Information Systems and Management, Fuzzy clustering, Correlation clustering, Theoretical Computer Science, Artificial Intelligence, CURE data clustering algorithm, Consensus clustering, Cluster analysis, Mathematics, Constrained clustering, Pattern recognition, Computer Science Applications, Control and Systems Engineering, Canopy clustering algorithm, Data mining, Software
- Abstract
Clustering analysis is a fundamental technique in machine learning and is also widely used in information granulation. Multiple clustering systems granulate a data set into multiple granular structures; therefore, clustering ensemble can serve as an important branch of multigranulation information fusion. Many approaches have been proposed to solve the clustering ensemble problem. This paper focuses on the direct approaches, which involve two steps: finding cluster correspondence and applying a fusion strategy to produce a final result. Existing direct approaches mainly discuss the process of finding cluster correspondence, while the fusion step is simply done by voting. In this paper, we focus on the fusion step and propose a Dempster-Shafer (DS) evidence theory-based clustering ensemble algorithm. The advantage of the algorithm is that it takes the cluster structure surrounding each object into consideration by using the object's neighbors to describe it. First, we find the neighbors of each object and generate its label probability outputs in every base partition. Second, these label probability outputs are integrated based on DS theory. Theoretically, our method is superior to other voting methods. In addition, several experiments show that the proposed algorithm is statistically better than seven other clustering ensemble methods.
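A minimal sketch of Dempster's rule of combination, the kind of fusion step the abstract describes for integrating label evidence from different base partitions; how the neighbor-based label probabilities are produced is not shown.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over sets of cluster labels by Dempster's rule:
    m(C) is proportional to the sum of m1(A)*m2(B) over all A ∩ B = C,
    with the mass assigned to empty intersections (conflict) renormalized away."""
    combined, conflict = {}, 0.0
    for A, w1 in m1.items():
        for B, w2 in m2.items():
            C = frozenset(A) & frozenset(B)
            if C:
                combined[C] = combined.get(C, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence cannot be combined.")
    return {C: w / (1.0 - conflict) for C, w in combined.items()}

# Example: fusing two pieces of label evidence for one object
# dempster_combine({frozenset({"c1"}): 0.7, frozenset({"c1", "c2"}): 0.3},
#                  {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.4})
```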
- Published
- 2017
16. Fuzzy rough approximations for set-valued data
- Author
- Junbiao Cui, Junhong Wang, Wei Wei, and Jiye Liang
- Subjects
- Soft computing, Information Systems and Management, Dominance-based rough set approach, Fuzzy logic, Computer Science Applications, Theoretical Computer Science, Set (abstract data type), Similarity relation, Artificial Intelligence, Control and Systems Engineering, Redundancy (engineering), Rough set, Fuzzy rough sets, Algorithm, Software, Mathematics
- Abstract
Rough set theory is one of the important tools of soft computing, and rough approximations are the essential elements of rough set models. However, the existing fuzzy rough set model for set-valued data, which is constructed directly from a kind of similarity relation, fails to explicitly define fuzzy rough approximations. To solve this issue, in this paper we propose two types of fuzzy rough approximations and define two corresponding relative positive region reducts. Furthermore, two discernibility matrices and two discernibility functions are introduced to acquire the newly proposed reducts, and the relationships between the new reducts and the existing ones are also provided. Theoretical analyses demonstrate that the new types of reducts have less redundancy and are more diverse (no smaller number of reducts) than those obtained by means of the existing matrices, and experimental results illustrate that the reducts found by our methods outperform those obtained by the existing method.
- Published
- 2016
17. An information fusion approach by combining multigranulation rough sets and evidence theory
- Author
- Yuhua Qian, Guoping Lin, and Jiye Liang
- Subjects
- Information Systems and Management, Uncertain data, Pooling, Granular computing, Computer Science Applications, Theoretical Computer Science, Information fusion, Artificial Intelligence, Control and Systems Engineering, Robustness (computer science), Rough set, Data mining, Software, Mathematics
- Abstract
Multigranulation rough set (MGRS) theory provides two kinds of qualitative combination rules generated by optimistic and pessimistic multigranulation fusion functions. They are used to aggregate multiple granular structures from a set-theoretic standpoint. However, the two combination rules seem to lack robustness because one is too relaxed and the other too restrictive to solve some practical problems. Dempster's combination rule in evidence theory has been employed to aggregate information coming from multiple sources, but it fails to deal with conflicting evidence. To overcome these limitations, we focus on the combination of granular structures with both reliability and conflict from multiple sources, which has been a challenging task in the field of granular computing. We first address the connection between multigranulation rough set theory and evidence theory. Then, a two-grade fusion approach combining evidence theory and multigranulation rough set theory is proposed, based on a well-defined distance function among granulation structures. Finally, an illustrative example is given to show the effectiveness of the proposed fusion method. The results of this study will be useful for pooling uncertain data from different sources and significant for establishing a new direction in granular computing.
- Published
- 2015
18. Trend analysis of categorical data streams with a concept change method
- Author
- Jiye Liang, Joshua Zhexue Huang, and Fuyuan Cao
- Subjects
- Data stream, Sequence, Information Systems and Management, Computer science, Carry (arithmetic), Window (computing), Expression (mathematics), Computer Science Applications, Theoretical Computer Science, Data set, Trend analysis, Artificial Intelligence, Control and Systems Engineering, Outlier, Data mining, Categorical variable, Software
- Abstract
This paper proposes a new method for trend analysis of categorical data streams. A data stream is partitioned into a sequence of time windows, and the records in each window are assumed to carry a number of concepts represented as clusters. A data labeling algorithm is proposed to identify the concepts, or clusters, of a window from the concepts of the preceding window. A representation of a concept is presented, and the distance between two concepts in consecutive windows is defined to analyze concept change across windows. Finally, a trend analysis algorithm is proposed to compute the trend of concept change in a data stream over the sequence of consecutive time windows. Methods for measuring the significance of an attribute that causes concept change and the outlier degrees of objects are presented to reveal the causes of the change. Experiments on real data sets demonstrate the benefits of the trend analysis method.
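A minimal sketch, under the assumption that a concept (cluster) in a window can be summarized by per-attribute value frequencies, of measuring how much a concept changes between consecutive windows; the paper's labeling algorithm and concept distance are not reproduced here.

```python
from collections import Counter

def concept_summary(records, attributes):
    """Summarize a cluster (list of dicts) by the value distribution of each attribute."""
    n = len(records)
    return {a: {v: c / n for v, c in Counter(r[a] for r in records).items()}
            for a in attributes}

def concept_distance(s1, s2, attributes) -> float:
    """Average total-variation distance between the attribute distributions of the
    same concept in two consecutive windows (0 = unchanged, 1 = completely disjoint)."""
    def tv(p, q):
        keys = set(p) | set(q)
        return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
    return sum(tv(s1[a], s2[a]) for a in attributes) / len(attributes)
```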
- Published
- 2014
19. Pessimistic rough set based decisions: A multigranulation fusion strategy
- Author
- Zhongzhi Shi, Jiye Liang, Shunyong Li, Yuhua Qian, and Feng Wang
- Subjects
- Information Systems and Management, Reduction (recursion theory), Binary relation, Dominance-based rough set approach, Granular computing, Decision rule, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Point (geometry), Rough set, Decision model, Software, Mathematics
- Abstract
Multigranulation rough sets (MGRS) are one of the desirable directions in rough set theory, in which lower/upper approximations are defined by granular structures induced by multiple binary relations. They provide a new perspective for decision making analysis based on rough set theory. In decision making analysis, people often adopt the strategy "seeking common ground while eliminating differences" (SCED), which implies that one retains common decisions while discarding inconsistent ones. From this point of view, the objective of this study is to develop a new multigranulation rough set based decision model built on the SCED strategy, called pessimistic multigranulation rough sets. In this paper, we study this model from three aspects: lower/upper approximations and their properties, decision rules, and attribute reduction.
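A minimal sketch of the SCED idea in approximation form: in the pessimistic multigranulation lower approximation, an object is kept only when its equivalence class under every relation is contained in the target set. The data representation is illustrative.

```python
def pessimistic_lower_approx(universe, target, partitions):
    """partitions: list of dicts mapping each object to its equivalence class (a frozenset).
    Keep x only if [x]_R is a subset of the target under ALL relations R
    ('seeking common ground while eliminating differences')."""
    target = set(target)
    return {x for x in universe if all(cls[x] <= target for cls in partitions)}
```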
- Published
- 2014
20. Fast global k-means clustering based on local geometrical information
- Author
- Chuangyin Dang, Jiye Liang, Liang Bai, and Chao Sui
- Subjects
- Information Systems and Management, Computational complexity theory, Basis (linear algebra), k-means clustering, A* search algorithm, Function (mathematics), Computer Science Applications, Theoretical Computer Science, Local convergence, Set (abstract data type), Artificial Intelligence, Control and Systems Engineering, Data mining, Cluster analysis, Algorithm, Software, Mathematics
- Abstract
The fast global k-means (FGKM) clustering algorithm is one of the most effective approaches for resolving the local convergence of the k-means clustering algorithm. Numerical experiments show that it can effectively determine a global or near global minimizer of the cost function. However, the FGKM algorithm needs a large amount of computational time or storage space when handling large data sets. To overcome this deficiency, a more efficient FGKM algorithm, namely FGKM+A, is developed in this paper. In the development, we first apply local geometrical information to describe approximately the set of objects represented by a candidate cluster center. On the basis of the approximate description, we then propose an acceleration mechanism for the production of new cluster centers. As a result of the acceleration, the FGKM+A algorithm not only yields the same clustering results as that of the FGKM algorithm but also requires less computational time and fewer distance calculations than the FGKM algorithm and its existing modifications. The efficiency of the FGKM+A algorithm is further confirmed by experimental studies on several UCI data sets.
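A minimal sketch of the candidate-selection step of the fast global k-means algorithm that the paper builds on: given the current k-1 centers, the next center is seeded at the point with the largest guaranteed reduction b_n of the cost function. The FGKM+A geometric acceleration itself is not shown.

```python
import numpy as np

def next_center_candidate(X: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """X: (n, d) data, centers: (k-1, d) current centers.
    For each candidate x_n, b_n = sum_j max(d_j - ||x_n - x_j||^2, 0), where d_j is the
    squared distance of x_j to its closest current center; return the argmax candidate."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)   # (n,)
    pairwise = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)              # (n, n)
    b = np.maximum(d[None, :] - pairwise, 0.0).sum(axis=1)                 # b_n per candidate
    return X[int(np.argmax(b))]
```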
- Published
- 2013
21. Multigranulation rough sets: From partition to covering
- Author
- Guoping Lin, Yuhua Qian, and Jiye Liang
- Subjects
- Discrete mathematics, Information Systems and Management, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Granular computing, Approximation operators, Partition (number theory), Rough set, Software, Computer Science Applications, Mathematics
- Abstract
The classical multigranulation rough set (MGRS) theory offers a formal theoretical framework for solving complex problems in a multigranulation environment. However, MGRS theory cannot be applied to real-world multi-source information systems with a covering environment. To address this issue, we first present three types of covering-based multigranulation rough sets, in which set approximations are defined by different covering approximation operators. Then, by using two different approximation strategies, i.e., seeking common ground while reserving differences and seeking common ground while rejecting differences, two kinds of covering-based multigranulation rough sets are presented, namely, covering-based optimistic multigranulation rough sets and covering-based pessimistic multigranulation rough sets. Finally, we develop some properties and several uncertainty measures of the covering-based multigranulation rough sets. These results enrich MGRS theory and enlarge its application scope.
- Published
- 2013
22. Can fuzzy entropies be effective measures for evaluating the roughness of a rough set?
- Author
- Yuhua Qian, Wei Wei, Jiye Liang, and Chuangyin Dang
- Subjects
- Information Systems and Management, Fuzzy classification, Fuzzy set, Dominance-based rough set approach, Fuzzy logic, Measure (mathematics), Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Applied mathematics, Fuzzy number, Data mining, Rough set, Software, Membership function, Mathematics
- Abstract
The roughness of a rough set arises from the existence of its boundary region. In such a boundary region, each object has a non-zero rough membership degree. When an object's rough membership degree is regarded as its fuzzy membership degree, a rough set induces a fuzzy set. This relationship motivates us to assert that there may exist some inherent relations between the roughness of a rough set and the fuzziness of the fuzzy set induced from it. This assertion leads us to the question: can the existing fuzzy entropies be used to evaluate the roughness of a rough set? To answer this question, we first analyze how the boundary region varies when the partition of the universe becomes coarser, and then exploit this analysis to introduce a more appropriate definition of the roughness of a rough set. To determine whether a fuzzy entropy can be used to evaluate the roughness of a rough set, we develop three methods for estimating the ability of a fuzzy entropy to measure roughness. The experiments show that these methods are effective and can be applied to select a fuzzy entropy as a measure of the roughness of a rough set.
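A minimal sketch of the link the abstract relies on: rough membership degrees of a set form a fuzzy set, to which a fuzzy entropy can be applied. The De Luca-Termini form is used here as one possible choice, not necessarily one of the entropies studied in the paper.

```python
import math

def rough_membership(x, target, partition):
    """mu_X(x) = |[x]_R ∩ X| / |[x]_R|, with [x]_R the block of `partition` containing x."""
    block = next(b for b in partition if x in b)
    return len(block & target) / len(block)

def fuzzy_entropy(memberships):
    """Normalized De Luca-Termini entropy of the induced fuzzy set
    (0 for a crisp set, maximal when every membership equals 0.5)."""
    def h(mu):
        return 0.0 if mu in (0.0, 1.0) else -(mu * math.log(mu) + (1 - mu) * math.log(1 - mu))
    return sum(h(mu) for mu in memberships) / (len(memberships) * math.log(2))
```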
- Published
- 2013
23. A comparative study of rough sets for hybrid data
- Author
- Yuhua Qian, Wei Wei, and Jiye Liang
- Subjects
- Information Systems and Management, Approximations of π, Granular computing, Dominance-based rough set approach, Fuzzy logic, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Fuzzy set operations, Data mining, Rough set, Fuzzy rough sets, Software, Hybrid data, Mathematics
- Abstract
To discover knowledge from hybrid data using rough sets, researchers have developed several fuzzy rough set models and a neighborhood rough set model. These models have been applied to many hybrid data processing applications for a particular purpose, thus neglecting the issue of selecting an appropriate model. To address this issue, this paper mainly concerns the relationships among these rough set models. Investigating fuzzy and neighborhood hybrid granules reveals an important relationship between these two granules. Analyzing the relationships among rough approximations of these models shows that Hu's fuzzy rough approximations are special cases of neighborhood and Wang's fuzzy rough approximations, respectively. Furthermore, one-to-one correspondence relationships exist between Wang's fuzzy and neighborhood rough approximations. This study also finds that Wang's fuzzy and neighborhood rough approximations are cut sets of Dubois' fuzzy rough approximations and Radzikowska and Kerre's fuzzy rough approximations, respectively.
- Published
- 2012
24. MGRS: A multi-granulation rough set
- Author
- Jiye Liang, Chuangyin Dang, Yuhua Qian, and Yiyu Yao
- Subjects
- Reduct, Discrete mathematics, Information Systems and Management, Binary relation, Dominance-based rough set approach, Decision rule, Computer Science Applications, Theoretical Computer Science, Set (abstract data type), Algebra, Artificial Intelligence, Control and Systems Engineering, Metric (mathematics), Equivalence relation, Rough set, Software, Mathematics
- Abstract
The original rough set model was developed by Pawlak and is mainly concerned with the approximation of sets described by a single binary relation on the universe. From the viewpoint of granular computing, classical rough set theory is established on a single granulation. This paper extends Pawlak's rough set model to a multi-granulation rough set model (MGRS), in which the set approximations are defined using multiple equivalence relations on the universe. A number of important properties of MGRS are obtained, and it is shown that some of the properties of Pawlak's rough set theory are special instances of those of MGRS. Moreover, several important measures, such as the accuracy, the quality of approximation and the precision of approximation, are presented and re-interpreted in terms of a classic set-based measure, the Marczewski-Steinhaus metric and the inclusion degree measure. A concept of approximation reduct is introduced to describe the smallest attribute subset that preserves the lower and upper approximations of all decision classes in MGRS. Finally, we discuss how to extract decision rules using MGRS. Unlike the decision rules ("AND" rules) derived from Pawlak's rough set model, the decision rules in MGRS take the form of "OR" rules. Several pivotal algorithms are also designed, which are helpful for applying this theory to practical issues. The multi-granulation rough set model provides an effective approach for problem solving in the context of multiple granulations.
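A minimal sketch of the optimistic multigranulation approximations described above: an object belongs to the lower approximation if its equivalence class under at least one relation is contained in the target set, which is also where the "OR"-form decision rules come from. The data representation is illustrative.

```python
def mgrs_approximations(universe, target, partitions):
    """partitions: list of dicts mapping each object to its equivalence class (a frozenset).
    Lower: [x]_R ⊆ X under AT LEAST ONE relation R; upper: the complement of the lower
    approximation of the complement, i.e. [x]_R ∩ X is non-empty under EVERY relation."""
    target = set(target)
    lower = {x for x in universe if any(cls[x] <= target for cls in partitions)}
    upper = {x for x in universe if all(cls[x] & target for cls in partitions)}
    return lower, upper
```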
- Published
- 2010
25. Set-valued ordered information systems
- Author
- Dawei Tang, Jiye Liang, Chuangyin Dang, and Yuhua Qian
- Subjects
- Special ordered set, Information Systems and Management, Reduction (recursion theory), Relation (database), Dominance-based rough set approach, Decision rule, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Information system, Rough set, Data mining, Decision table, Software, Mathematics
- Abstract
Set-valued ordered information systems can be classified into two categories: disjunctive and conjunctive systems. By introducing two new dominance relations into set-valued information systems, we first define conjunctive and disjunctive set-valued ordered information systems and develop an approach to queuing problems for objects in the presence of multiple attributes and criteria. Then, we present a dominance-based rough set approach for these two types of set-valued ordered information systems, which is mainly based on substituting a dominance relation for the indiscernibility relation. Through the lower/upper approximation of a decision, certain/possible decision rules can be extracted from a so-called set-valued ordered decision table. Finally, we present attribute reduction (also called criteria reduction in ordered information systems) approaches for these two types of ordered information systems and ordered decision tables, which can be used to simplify a set-valued ordered information system and to find decision rules directly from a set-valued ordered decision table. These criteria reduction approaches can eliminate criteria that are not essential from the viewpoint of the ordering of objects or decision rules.
- Published
- 2009
26. A new measure of uncertainty based on knowledge granulation for rough sets
- Author
- Yuhua Qian, Jiye Liang, and Junhong Wang
- Subjects
- Information Systems and Management, Dominance-based rough set approach, Measure (mathematics), Computer Science Applications, Theoretical Computer Science, Set (abstract data type), Granulation, Artificial Intelligence, Control and Systems Engineering, Information system, Rough set, Data mining, Decision table, Software, Axiom, Mathematics
- Abstract
In rough set theory, accuracy and roughness are used to characterize uncertainty of a set and approximation accuracy is employed to depict accuracy of a rough classification. Although these measures are effective, they have some limitations when the lower/upper approximation of a set under one knowledge is equal to that under another knowledge. To overcome these limitations, we address in this paper the issues of uncertainty of a set in an information system and approximation accuracy of a rough classification in a decision table. An axiomatic definition of knowledge granulation for an information system is given, under which these three measures are modified. Theoretical studies and experimental results show that the modified measures are effective and suitable for evaluating the roughness and accuracy of a set in an information system and the approximation accuracy of a rough classification in a decision table, respectively, and have a much simpler and more comprehensive form than the existing ones.
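A minimal sketch of knowledge granulation in the form commonly used in this line of work, GK(R) = (1/|U|^2) * sum_i |X_i|^2 over the blocks X_i of the partition induced by R; the paper's axiomatic definition is more general, so treat this as one admissible instance.

```python
def knowledge_granulation(partition, universe_size: int) -> float:
    """GK(R) = sum(|X_i|^2) / |U|^2; ranges from 1/|U| (finest partition, every block
    a singleton) to 1 (coarsest partition, the whole universe as a single block)."""
    return sum(len(block) ** 2 for block in partition) / (universe_size ** 2)
```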
- Published
- 2009
27. Measures for evaluating the decision performance of a decision table in rough set theory
- Author
- Deyu Li, Haiyun Zhang, Jiye Liang, Chuangyin Dang, and Yuhua Qian
- Subjects
- Weighted sum model, Information Systems and Management, Dominance-based rough set approach, Evidential reasoning approach, Decision rule, Machine learning, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Decision matrix, Data mining, Decision table, Software, Mathematics, Optimal decision, Decision analysis
- Abstract
As two classical measures, approximation accuracy and consistency degree can be employed to evaluate the decision performance of a decision table. However, these two measures cannot give elaborate depictions of the certainty and consistency of a decision table when their values are equal to zero. To overcome this shortcoming, we first classify decision tables in rough set theory into three types according to their consistency and introduce three new measures for evaluating the decision performance of a decision-rule set extracted from a decision table. We then analyze how each of these three measures depends on the condition granulation and decision granulation of each of the three types of decision tables. Experimental analyses on three practical data sets show that the three new measures appear to be well suited for evaluating the decision performance of a decision-rule set and are much better than the two classical measures.
- Published
- 2008