91 results
Search Results
2. Security-preserving social data sharing methods in modern social big knowledge systems.
- Author
-
Chen, Xuan
- Subjects
- *
SOCIAL computing, *COMPUTER systems, *COMPUTER science, *DATA privacy, *INFORMATION sharing, *DATA protection, *DATA security failures
- Abstract
In recent decades, the development of social computing systems has enabled efficient information exchange among large groups of people. Today, social computing systems are complex platforms supported not only by traditional sociological theory but also by computer science and big-data applications. As the complexity of these systems has increased, serious social digital security and privacy issues have emerged, with more and more social data leakage incidents occurring in recent years. The threats arise from many different sources, since a complex social computing system exposes social data to attack on many fronts. In this paper, we improve traditional social data protection schemes by combining information fragmentation concepts with distributed system architectures to build a novel social data protection scheme. We use social photo protection as the fundamental scenario and deploy our scheme to illustrate the improvement in protection level, with a detailed protection analysis. A security analysis of practically realizing such a scheme is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
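The information-fragmentation idea in the abstract above can be sketched minimally: split a file's bytes round-robin across several independent stores so that no single store holds contiguous content. This is an illustrative assumption for intuition only, not the paper's actual scheme (which combines fragmentation with a distributed architecture and a full protection analysis).

```python
def fragment(data: bytes, n: int):
    """Distribute bytes round-robin across n shards; no shard alone
    contains a contiguous run of the original data."""
    shards = [bytearray() for _ in range(n)]
    for i, b in enumerate(data):
        shards[i % n].append(b)
    return [bytes(s) for s in shards]

def reassemble(shards):
    """Invert fragment(): byte i lives in shards[i % n] at offset i // n."""
    n = len(shards)
    total = sum(len(s) for s in shards)
    return bytes(shards[i % n][i // n] for i in range(total))

shards = fragment(b"secret photo bytes", 3)
```

Only a party holding all shards (or enough of them, in an erasure-coded variant) can reconstruct the original.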
3. A novel algorithmic construction for deductions of categorical polysyllogisms by Carroll's diagrams.
- Author
-
Senturk, Ibrahim, Gursoy, Necla Kircali, Oner, Tahsin, and Gursoy, Arif
- Subjects
- *
ARTIFICIAL intelligence, *ALGORITHMS, *AUTHORSHIP in literature, *COMPUTER science, *SYLLOGISM
- Abstract
In this work, with the help of a calculus system, syllogistic logic with Carroll's diagrams (SLCD), we construct a useful algorithm for the possible deductions of polysyllogisms (soriteses). This algorithm makes a general deduction in categorical syllogisms with the help of diagrams that depict each proposition of a polysyllogism. The developed calculus system PolySLCD (PSLCD) allows formal deduction from a premise set, combining biliteral and triliteral diagrammatic representations with a simple algorithmic structure. The algorithm deduces new conclusions, step by step, through recursive conclusion sets obtained from the premises of categorical polysyllogisms. The fundamental contributions of this paper are accurately deducing conclusions from the sets corresponding to given premises, as in exact human reasoning, using a single algorithm, and designing this algorithm based on SLCD; it is therefore well suited to computer-aided solutions. Since the algorithm is set-based, it is novel in the literature and can readily benefit researchers using polysyllogisms in different scientific branches, such as computer science, decision-making systems and artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
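The step-by-step, set-based deduction the abstract describes can be illustrated in a much simplified form, for universal affirmative premises only (the actual PSLCD algorithm handles all categorical forms via Carroll's diagrams):

```python
from collections import defaultdict

def sorites_closure(premises):
    """Transitive closure of 'All S are P' premises: repeatedly derive
    new conclusions from the current conclusion set until it is stable."""
    succ = defaultdict(set)
    for s, p in premises:
        succ[s].add(p)
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for (s, p) in list(derived):
            for q in succ[p]:
                if (s, q) not in derived:
                    derived.add((s, q))
                    succ[s].add(q)
                    changed = True
    return derived

# "All a are b", "All b are c", "All c are d" entails "All a are d".
conclusions = sorites_closure([("a", "b"), ("b", "c"), ("c", "d")])
```

Each pass plays the role of one recursive conclusion set: conclusions derived in pass k become premises for pass k+1.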
4. Collaborative linear manifold learning for link prediction in heterogeneous networks.
- Author
-
Liu, JiaHui, Jin, Xu, Hong, YuXiang, Liu, Fan, Chen, QiXiang, Huang, YaLou, Liu, MingMing, Xie, MaoQiang, and Sun, FengChi
- Subjects
- *
ALGORITHMS, *COMPUTER science, *MANIFOLDS (Mathematics), *TOPOLOGY
- Abstract
Link prediction in heterogeneous networks aims to predict missing interactions between pairs of nodes with the help of the topology of the target network and interconnected auxiliary networks. It has attracted considerable attention from both the computer science and bioinformatics communities in recent years. In this paper, we introduce a novel Collaborative Linear Manifold Learning (CLML) algorithm. It optimizes the consistency of node similarities by collaboratively using the manifolds embedded between the target network and the auxiliary network. Experiments on four benchmark datasets demonstrate the advantages of CLML, not only in its high prediction performance compared to baseline methods, but also in its capability to predict unknown interactions in the target networks accurately and effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. Conditional importance sampling for particle filters.
- Author
-
Zhang, Qingming, Shi, Buhai, and Zhang, Yuhao
- Subjects
- *
DIGITAL filters (Mathematics), *MONTE Carlo method, *COMPUTER simulation, *STATISTICAL bootstrapping, *COMPUTER science
- Abstract
In this paper, we present a new importance sampling method, namely the conditional importance sampling (CIS). This new method uses a conditional density as a proposal density and exploits rejection sampling, adaptively neglecting samples whose importance weights are relatively low. The CIS improves the efficiency of estimation without creating bias. We apply the CIS to the bootstrap filter to obtain a new algorithm, named the conditional bootstrap filter, which achieves higher estimation efficiency than the bootstrap filter and shows advantages over some other filters in our simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
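For context on the abstract above, one step of the standard bootstrap filter (the baseline the conditional bootstrap filter improves on) can be sketched as follows; the conditional variant would replace the prior proposal with a conditional density and reject low-weight draws, which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter_step(particles, weights, transition, likelihood, y):
    """One bootstrap-filter step: propagate particles through the state
    transition (the proposal is the prior), reweight by the observation
    likelihood, then resample to equal weights."""
    particles = transition(particles)
    weights = weights * likelihood(y, particles)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Random-walk state with a Gaussian observation model (illustrative).
particles = rng.normal(size=500)
weights = np.full(500, 1.0 / 500)
particles, weights = bootstrap_filter_step(
    particles, weights,
    transition=lambda x: x + 0.1 * rng.normal(size=x.size),
    likelihood=lambda y, x: np.exp(-0.5 * (y - x) ** 2),
    y=0.8,
)
```

After the update, the particle cloud shifts toward the observation, and resampling concentrates particles in high-likelihood regions.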
6. Niching particle swarm optimization with equilibrium factor for multi-modal optimization.
- Author
-
Li, Yikai, Chen, Yongliang, Zhong, Jinghui, and Huang, Zhixing
- Subjects
- *
PARTICLE swarm optimization, *EVOLUTIONARY computation, *ALGORITHMS, *PROBLEM solving, *COMPUTER science
- Abstract
Multi-modal optimization is an active research topic that has attracted increasing attention from the evolutionary computation community. Particle swarm optimization (PSO) with a niching technique is one of the most effective approaches for multi-modal optimization. However, in existing niching PSO methods, the number of particles around different niches varies distinctly, which makes it difficult for the algorithm to find high-quality solutions in all niches. To address this issue, this paper proposes a new niching PSO with an equilibrium factor, named E-SPSO. Unlike existing niching PSOs, E-SPSO keeps the numbers of particles in different niches in balance. The velocity of each particle is influenced not only by the personal best particle and the global best particle, but also by an equilibrium factor (EF). By using the equilibrium factor to update particle velocities, particles can be allocated uniformly among the niches. In this way, computation resources are assigned to the niches in a more balanced manner, so that the algorithm gains more population diversity and finds high-quality solutions in all niches. Experimental results on eleven benchmark problems show that the proposed mechanism not only increases the number of optima found, but also improves search efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
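The velocity update described above can be sketched as a standard PSO update with one extra term. The exact form of the equilibrium factor is not given in the abstract; here `ef` is assumed to be a point that pulls particles toward an under-populated niche, which is an illustrative assumption rather than the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(0)

def espso_velocity(v, x, pbest, gbest, ef, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """One velocity update: inertia + cognitive (pbest) + social (gbest)
    terms, plus an equilibrium-factor term (assumed attractor `ef`)."""
    r1, r2, r3 = rng.random(3)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (ef - x))

v_new = espso_velocity(v=0.2, x=0.5, pbest=0.8, gbest=1.0, ef=0.3)
```

The third attraction term is what lets the swarm rebalance particle counts across niches instead of collapsing onto the global best.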
7. Target-aware convolutional neural network for target-level sentiment analysis.
- Author
-
Hyun, Dongmin, Park, Chanyoung, Yang, Min-Chul, Song, Ilhyeon, Lee, Jung-Tae, and Yu, Hwanjo
- Subjects
- *
ARTIFICIAL neural networks, *SENTIMENT analysis, *TASK performance, *QUALITATIVE research, *COMPUTER science
- Abstract
Target-level sentiment analysis (TLSA) is a classification task to extract sentiments from targets in text. In this paper, we propose a target-dependent convolutional neural network (TCNN) tailored to the task of TLSA. The TCNN leverages the distance information between the target word and its neighboring words to learn the importance of each word to the target. Experimental results show that the TCNN achieves state-of-the-art performance on both single- and multi-target datasets. Qualitative evaluations were conducted to demonstrate the limitations of previous TLSA methods and also to verify that distance information is crucial for TLSA. Furthermore, by exploiting a convolutional neural network (CNN), the TCNN trains six times faster per epoch than other baselines based on recurrent neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
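The distance information the abstract mentions can be made concrete with a toy weighting scheme. The linear decay below is an illustrative assumption; the TCNN itself learns how to use distance rather than fixing a decay rate:

```python
def distance_weights(tokens, target_index, decay=0.15):
    """Per-word importance that decays linearly with distance to the
    target word (hypothetical fixed-decay stand-in for learned weights)."""
    return [max(0.0, 1.0 - decay * abs(i - target_index))
            for i in range(len(tokens))]

tokens = "the battery life is great but the screen is dim".split()
weights = distance_weights(tokens, tokens.index("battery"))
```

With "battery" as the target, "great" contributes more than "dim", which is exactly the disambiguation a target-level model needs when one sentence mentions several targets.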
8. Multi-Layer Stochastic Block Interaction driven by Logistic Regression (MLSBI-LR) for Efficient Link Recommendation in Intra-Layer Linkage Graphs.
- Author
-
Bolorunduro, Janet Oluwasola and Zou, Zhaonian
- Subjects
- *
LOGISTIC regression analysis, *RECOMMENDER systems, *ONLINE social networks, *COMPUTER science, *SOCIAL networks, *SOCIAL systems, *MACHINE learning
- Abstract
Link Recommendation (LR) in complex networks has attracted huge interest in the social and computer science communities. Numerous networks, such as recommendation systems and social networks (which facilitate user contact), are probabilistic rather than deterministic due to the uncertainty surrounding the presence of links. Evaluating the various characteristics of such networks has frequently been tricky, as the intra-layer linkage graph requires at least two nodes to be in the same layer. Moreover, many existing LR methods operate well mainly on Single-Layer Graphs (SLGs) rather than Multi-Layer Graphs (MLGs) when nodes traverse multiple layers in a network of intra-layer linkages. Considering this drawback, this paper proposes a Multi-Layer Stochastic Block Interaction method driven by Logistic Regression (MLSBI-LR) to exploit the bi-directional resources associated with intra-layer linkages. Its inherent dependence on knowledge-based systems uses multi-criteria recommender systems to accommodate additional criteria and can modify neighborhood-based approaches. A multi-criteria network with relations over the same set of nodes is used, since the modified neighbor-based method can exhibit rich dependence between entities and has available experimental data sets in MLGs to recommend links that efficiently enrich users' experience. The accuracy and robustness of the proposed MLSBI-LR method compared to existing LR methods were extensively investigated using three distinct benchmark data sets and four evaluation metrics. Across the databases and metrics, the proposed MLSBI-LR method performed significantly better (recording up to a 17% increase in accuracy) at recommending potential links in MLGs. Consequently, the proposed method may revolutionize link recommendation tasks in social networks by improving users' overall experience.
• A Multi-Layer Stochastic Block Interaction method driven by Logistic Regression is proposed.
• The recommendation uses bi-directional resources associated with intra-layer linkages.
• An uncertainty graph and machine learning enhance the intra-layer linkage graphs.
• Structural properties reveal important information about interactions in the system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Sparsity measure of a network graph: Gini index.
- Author
-
Goswami, Swati, Murthy, C.A., and Das, Asit K.
- Subjects
- *
GRAPH theory, *SPARSE graphs, *POWER law (Mathematics), *APPROXIMATION theory, *COMPUTER science
- Abstract
This article explores the problem of formulating a general measure of sparsity for network graphs. Based on an available definition of sparsity for a dataset, namely the Gini index, it provides a way to define a sparsity measure for a network graph, which we name the sparsity index. Sparsity measures are commonly associated with six properties: Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates and Babies. The sparsity index directly satisfies four of these six properties; it does not satisfy Cloning and satisfies Scaling only in some specific cases. The proposed index is compared with Edge Density (the proportion of the sum of degrees of all nodes in a graph relative to the total possible degrees in the corresponding fully connected graph) by showing mathematically that as the edge density of an undirected graph increases, its sparsity index decreases. The paper highlights how the proposed sparsity measure can reveal important properties of a network graph. Further, an analytical relationship is drawn between the sparsity index and the exponent of a power law distribution (a distribution known to approximate the degree distribution of a wide variety of network graphs). To illustrate the application of the proposed index, a community detection algorithm for network graphs is presented. The algorithm produces overlapping communities with no input requirement on the number or size of the communities and has computational complexity O(n^2), where n is the number of nodes of the graph. Results validated on artificial and real networks show its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
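The two quantities this abstract relates can both be computed directly. The sketch below uses the textbook Gini index of a degree sequence alongside edge density; the paper's sparsity index is built on the Gini index but is not necessarily identical to this plain form:

```python
import numpy as np

def edge_density(n_nodes, n_edges):
    # Fraction of realized edges in an undirected simple graph.
    return 2.0 * n_edges / (n_nodes * (n_nodes - 1))

def gini_index(degrees):
    """Gini index of a non-negative degree sequence: 0 for a perfectly
    uniform sequence, approaching 1 for hub-dominated sequences."""
    d = np.sort(np.asarray(degrees, dtype=float))
    n = d.size
    if d.sum() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    # Standard formula: G = 2 * sum(i * d_i) / (n * sum(d)) - (n + 1) / n
    return 2.0 * np.sum(i * d) / (n * d.sum()) - (n + 1.0) / n

# A star graph on 5 nodes: one hub of degree 4, four leaves of degree 1.
star_density = edge_density(5, 4)
star_gini = gini_index([4, 1, 1, 1, 1])
```

A regular graph has Gini 0 regardless of density, which is why the degree-based index and edge density capture different notions of "sparseness".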
10. Fifty years of Information Sciences: A bibliometric overview.
- Author
-
Merigó, José M., Pedrycz, Witold, Weber, Richard, and de la Sotta, Catalina
- Subjects
- *
INFORMATION science, *BIBLIOMETRICS, *COMPUTER science, *DATA visualization, *COMPUTER software, *TWENTIETH century, *HISTORY
- Abstract
Information Sciences is a leading international journal in computer science launched in 1968, and thus fifty years old in 2018. To celebrate its anniversary, this study presents a bibliometric overview of the leading publication and citation trends occurring in the journal. The aim of the work is to identify the most relevant authors, institutions and countries, and to analyze their evolution through time. The paper uses the Web of Science Core Collection to search for the bibliographic information. Our study also develops a graphical mapping of the bibliometric material by using the visualization of similarities (VOS) viewer. With this software, the work analyzes bibliographic coupling, citation and co-citation analysis, co-authorship, and co-occurrence of keywords. The results underline the significant growth of the journal through time and its international diversity, with publications from countries all over the world. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. Entailment and symmetry in confirmation measures of interestingness.
- Author
-
Glass, David H.
- Subjects
- *
ENTAILMENT (Logic), *MATHEMATICAL symmetry, *BAYESIAN analysis, *COMPUTER science, *INFORMATION theory, *INFORMATION science
- Abstract
Abstract: In a recent paper, Greco et al. (2012) propose a number of properties for measures of rule interestingness. The most fundamental of these properties is that such measures should be Bayesian confirmation measures, and this criterion provides the context for the current paper as well. They also propose a number of properties relating to entailment and symmetry in order to discriminate between various confirmation measures which have been proposed in the literature. Working within the same framework of confirmation measures, several limitations of their proposed properties are discussed and a motivation provided for alternative properties. Two new measures of interestingness are proposed and then compared with two other recently proposed measures which also satisfy these properties. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
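For context on the framework this abstract works within: a Bayesian confirmation measure is any function that is positive when the evidence raises the probability of the hypothesis, negative when it lowers it, and zero when it is neutral. The simplest textbook instance is the difference measure d(H, E) = P(H|E) − P(H); it is shown here for illustration and is not one of the measures proposed in the paper:

```python
def confirmation_d(p_e_given_h, p_e, p_h):
    """Difference confirmation measure d(H, E) = P(H|E) - P(H),
    with P(H|E) obtained via Bayes' theorem."""
    p_h_given_e = p_e_given_h * p_h / p_e
    return p_h_given_e - p_h
```

Evidence more likely under H than overall (P(E|H) > P(E)) confirms H; equally likely evidence is neutral; less likely evidence disconfirms.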
12. On-line assurance of interpretability criteria in evolving fuzzy systems – Achievements, new concepts and open issues.
- Author
-
Lughofer, Edwin
- Subjects
- *
FUZZY systems, *COMPUTATIONAL complexity, *COMPUTER systems, *DATA analysis, *ELECTRONIC data processing, *COMPUTER science
- Abstract
Abstract: In this position paper, we are discussing achievements and open issues in the interpretability of evolving fuzzy systems (EFS). In addition to pure on-line complexity reduction approaches, which can be an important direction for increasing the transparency of the evolved fuzzy systems, we examine the state-of-the-art and provide further investigations and concepts regarding the following interpretability aspects: distinguishability, simplicity, consistency, coverage and completeness, feature importance levels, rule importance levels and interpretation of consequents. These are well-known and widely accepted criteria for the interpretability of expert-based and standard data-driven fuzzy systems in batch mode. So far, most have been investigated only rudimentarily in the context of evolving fuzzy systems, trained incrementally from data streams: EFS have focussed mainly on precise modeling, aiming for models of high predictive quality. Only in a few cases, the integration of complexity reduction steps has been handled. This paper thus seeks to close this gap by pointing out new ways of making EFS more transparent and interpretable within the scope of the criteria mentioned above. The role of knowledge expansion, a peculiar concept in EFS, will be also addressed. One key requirement in our investigations is the availability of all concepts for on-line usage, which means they should be incremental or at least allow fast processing. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
13. Distributed networked control systems: A brief overview.
- Author
-
Ge, Xiaohua, Yang, Fuwen, and Han, Qing-Long
- Subjects
- *
DISTRIBUTED network protocols, *TIME-varying systems, *ELECTRIC network topology, *SYSTEMS theory, *COMPUTER science, *CONTROL theory (Engineering)
- Abstract
Distributed networked control systems have attracted intense attention from both academia and industry due to the multidisciplinary nature among the areas of communication networks, computer science and control. With ever-increasing research trends in these areas, it is desirable to review recent advances and to identify methodologies for distributed networked control systems. This paper presents a brief overview of such systems regarding system configurations, challenging issues and methodologies. First, networked control systems are introduced and their prevalent configurations including centralized, decentralized and distributed structures are outlined. Second, an emphasis is laid on a number of challenging issues from the analysis and synthesis of distributed networked control systems. More specifically, these challenging issues are identified through three integrated aspects: communication, computation and control. Third, different methodologies in the literature for distributed networked control systems are reviewed and categorized based on three pairs: undirected and directed graphs, fixed and time-varying topologies, and time-triggered and event-triggered mechanisms. Finally, concluding remarks are drawn and some potential research directions are suggested. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Extended rough set-based attribute reduction in inconsistent incomplete decision systems
- Author
-
Meng, Zuqiang and Shi, Zhongzhi
- Subjects
- *
DECISION support systems, *ROUGH sets, *INFORMATION processing, *ALGORITHMS, *NUMERICAL analysis, *COMPUTER science, *ARTIFICIAL intelligence, *INFORMATION science
- Abstract
Abstract: A systematic study of attribute reduction in inconsistent incomplete decision systems (IIDSs) has not yet been performed, and no complete methodology of attribute reduction has been developed for IIDSs to date. In an IIDS, there are various ways to handle missing values. In this paper, a missing attribute value may be replaced with any known value of a corresponding attribute (such a missing attribute value is called a “do not care” condition). In this way, this paper establishes reduction concepts specifically for IIDSs, mainly by extending related reduction concepts from other types of decision systems into IIDSs, and then derives their relationships and properties. With these derived properties, the extended reducts are divided into two distinct types: heritable reducts and nonheritable reducts, and algorithms for computing them are presented. Using the relationships derived here, the eight types of extended reducts established for IIDSs can be converted to five equivalent types. Then five discernibility function-based approaches are proposed, each for a particular kind of reduct. Each approach can find all reducts of its associated type. The theoretical analysis of the proposed approaches is described in detail. Finally, numerical experiments have shown that the proposed approaches are effective and suitable for handling both numerical and categorical attributes, but that they have different application conditions. The proposed approaches can provide a solution to the reduction problem for IIDSs. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
15. A novel approach to probability distribution aggregation
- Author
-
Liu, X., Ghorpade, Amol, Tu, Y.L., and Zhang, W.J.
- Subjects
- *
DISTRIBUTION (Probability theory), *DECISION making, *ERRORS, *STATISTICS, *MATHEMATICAL optimization, *EXPERT systems, *PROBABILITY theory, *COMPUTER science
- Abstract
Abstract: Today’s business world is highly competitive and unpredictable, so effective decision-making is of primary importance. However, it is difficult to make effective decisions when sufficient information is not available, and decision-making in such situations involves a high risk of error. Conventional statistics-based approaches to such problems are not effective, because in such situations decision-making is usually in the hands of a small panel of experts. The expert opinions, however, can be represented by probability distribution functions. Thus, the problem reduces to the aggregation of a set of probability distribution functions into an aggregated or consensus distribution. In this paper, we propose a new approach to address this problem. The novelties of the proposed approach are: (1) the problem is formulated as an optimization problem, and (2) the overlapping area between an individual expert’s distribution and the aggregated distribution is taken to measure the expertise level of that expert and subsequently to determine the expert’s weight. The proposed approach is illustrated by an example reported in the literature and previously handled with the Delphi method, which also shows the effectiveness of our approach. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
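The overlap-area weighting idea in the abstract above can be sketched as a simple fixed-point iteration: weight each expert's density by its overlapping area with the current consensus, then re-aggregate. The paper formulates this as an optimization problem; the iteration scheme below is an illustrative assumption, not the paper's solver:

```python
import numpy as np

def aggregate_pdfs(pdfs, dx, n_iter=50):
    """Iteratively reweight experts by the overlap area between each
    expert's pdf (on a common grid) and the weighted consensus pdf."""
    pdfs = np.asarray(pdfs, dtype=float)
    k = len(pdfs)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        consensus = (w[:, None] * pdfs).sum(axis=0)
        # Overlap area between expert i and consensus: integral of the min.
        overlap = np.minimum(pdfs, consensus).sum(axis=1) * dx
        w = overlap / overlap.sum()
    return w, consensus

# Three experts: two roughly agree, one is an outlier.
x = np.linspace(-5, 10, 1501)
dx = x[1] - x[0]
gauss = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
weights, consensus = aggregate_pdfs([gauss(0, 1), gauss(0.2, 1), gauss(6, 1)], dx)
```

The outlier expert's density overlaps the consensus the least, so its weight shrinks with each iteration, which is the intended "expertise level" effect.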
16. A lightweight anonymous routing protocol without public key en/decryptions for wireless ad hoc networks
- Author
-
Li, Chun-Ta and Hwang, Min-Shiang
- Subjects
- *
NETWORK routing protocols, *PUBLIC key infrastructure (Computer security), *AD hoc computer networks, *COMPUTER security, *COMPUTER science, *INFORMATION science, *WIRELESS communications
- Abstract
Abstract: More attention should be paid to anonymous routing protocols in secure wireless ad hoc networks. However, as far as we know, only a few papers on secure routing protocols have addressed both anonymity and efficiency. Most recent protocols adopt public key infrastructure (PKI) solutions to ensure the anonymity and security of route-constructing mechanisms. Since PKI solutions require a huge and expensive infrastructure with complex computations, ill-suited to the resource constraints of small ad hoc devices, a two-layer authentication protocol with anonymous routing (TAPAR) is proposed in this paper. TAPAR does not adopt public key computations to provide secure and anonymous communications between source and destination nodes over wireless ad hoc networks. Moreover, TAPAR accomplishes mutual authentication, session key agreement, and forward secrecy among communicating nodes, while the integration of non-PKI techniques into the routing protocol allows the source node to interact anonymously with the destination node through a number of intermediate nodes. Without adopting PKI en/decryptions, our proposed TAPAR can be efficiently implemented on small ad hoc devices while reducing the computational overhead of participating nodes by at least 21.75%. Our protocol compares favorably with other related protocols. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
17. Embedding meshes into locally twisted cubes
- Author
-
Han, Yuejuan, Fan, Jianxi, Zhang, Shukui, Yang, Jiwen, and Qian, Peide
- Subjects
- *
PARALLEL computers, *COMPUTER systems, *COMPUTER science, *COMPUTER engineering, *COMPUTER networks, *NUMERICAL grid generation (Numerical analysis), *OPERATOR theory, *CUBES
- Abstract
Abstract: As a newly introduced interconnection network for parallel computing, the locally twisted cube possesses many desirable properties. In this paper, mesh embeddings in locally twisted cubes are studied. Let LTQ_n(V, E) denote the n-dimensional locally twisted cube. We present three major results in this paper: (1) For any integer n ⩾ 1, a 2 × 2^(n−1) mesh can be embedded in LTQ_n with dilation 1 and expansion 1. (2) For any integer n ⩾ 4, two node-disjoint 4 × 2^(n−3) meshes can be embedded in LTQ_n with dilation 1 and expansion 2. (3) For any integer n ⩾ 3, a 4 × (2^(n−2) − 1) mesh can be embedded in LTQ_n with dilation 2. The first two results are optimal in the sense that the dilations of all embeddings are 1. The embedding of the 2 × 2^(n−1) mesh is also optimal in terms of expansion. We also present the analysis of 2^p × 2^q mesh embedding in locally twisted cubes. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
18. Design of a multi-dimensional query expression for document warehouses.
- Author
-
Tseng, Frank S. C.
- Subjects
- *
DATA warehousing, *MANAGEMENT information systems, *RECORDS management, *QUERY languages (Computer science), *COMPUTER science, *DATABASE management, *INFORMATION science, *INFORMATION resources management
- Abstract
During the past decade, data warehousing has been widely adopted in the business community. It provides multi-dimensional analyses of accumulated historical business data to help contemporary administrative decision-making. However, most current data warehousing query languages only provide on-line analytical processing (OLAP) for numeric data. For example, MDX (Multi-Dimensional eXpressions) has been proposed as a query language for describing multi-dimensional queries over databases with OLAP capabilities. Nevertheless, it is believed that only about 20% of information can be extracted from a data warehouse when numeric data alone are concerned; the other 80% is hidden in non-numeric data or even in documents. Therefore, many researchers now advocate that it is time to conduct research on document warehousing to capture complete business intelligence. Document warehouses, unlike traditional document management systems, include extensive semantic information about documents, cross-document feature relations, and document grouping or clustering to provide more accurate and more efficient access to text-oriented business intelligence. In this paper, we extend the structure of MDX into a new one containing complete constructs for querying document warehouses. Traditional MDX only contains SELECT, FROM, and WHERE clauses, which is not rich enough for document warehousing. We present how to extend the language constructs to include GROUP BY, HAVING, and ORDER BY to design an SQL-like query language for document warehousing. This work is essential for establishing an infrastructure that combines text processing with numeric OLAP processing technologies. Hopefully, the combination of data warehousing and document warehousing will become one of the most important kernels of knowledge management and customer relationship management applications. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
19. Adapting the CBA algorithm by means of intensity of implication.
- Author
-
Janssens, Davy, Wets, Geert, Brijs, Tom, and Vanhoof, Koen
- Subjects
- *
ALGORITHMS, *INFORMATION science, *COMPUTER science, *ELECTRONIC data processing
- Abstract
In recent years, extensive research has been carried out on using association rules to build more accurate classifiers. The idea behind these integrated approaches is to focus on a limited subset of association rules. This paper contributes to this integrated framework by adapting the Classification Based on Associations (CBA) algorithm. CBA was adapted by coupling it with another measure of the quality of association rules: intensity of implication. The new algorithm has been implemented and empirically tested on an authentic financial dataset for purposes of bankruptcy prediction. We validated our results against an association ruleset, C4.5, the original CBA and CART by statistically comparing performance via the area under the ROC curve. The adapted CBA algorithm presented in this paper proved to generate significantly better results than the other classifiers at the 5% level of significance. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
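The rule-quality measure named in the abstract above can be sketched under the common Poisson approximation. Definitions of intensity of implication vary in the literature; the formulation below is one common variant and is an assumption, not necessarily the exact form used in the paper:

```python
import math

def poisson_cdf(k, lam):
    # P(N <= k) for N ~ Poisson(lam).
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def intensity_of_implication(n, n_a, n_not_b, n_a_not_b):
    """Intensity of implication for a rule a -> b (Poisson approximation,
    assumed formulation): probability that the number of counterexamples
    expected under independence would exceed the number observed."""
    lam = n_a * n_not_b / n          # expected counterexamples if a, b independent
    return 1.0 - poisson_cdf(n_a_not_b, lam)
```

The fewer counterexamples `n_a_not_b` observed relative to the independence expectation, the higher the intensity, which is the property that makes it useful for pruning weak rules in a CBA-style classifier.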
20. Architecture design of grid GIS and its applications on image processing based on LAN.
- Author
-
Zhanfeng Shen, Jiancheng Luo, Chenghu Zhou, Shaohua Cai, Jiang Zheng, Qiuxiao Chen, Dongping Ming, and Qinghui Sun
- Subjects
- *
GEOGRAPHIC information systems, *COMPUTER networks, *IMAGE processing, *INTERNET industry, *IMAGING systems, *INFORMATION resources, *COMPUTER science, *INFORMATION science
- Abstract
Computer technology and its related disciplines have developed at very high speed in recent years, and so has geo-information science, including geographic information systems (GIS), remote sensing (RS) and global positioning systems (GPS). But with the growth of data, much of it cannot be used efficiently because of the tremendous volume of data and information and the difficulty of processing and transferring it over the network. How to apply internet technology to solve these problems has therefore become a difficult question for computer and geoscience experts. Fortunately, grid computing provides a method to solve this problem effectively. Grid computing is a resource-sharing model proposed by computer experts to address the current imbalance of network resources. Based on the application of grid computing to geographic information systems (GIS), this paper analyzes the weaknesses and problems of traditional GIS, and then presents methods to solve these problems with the technology provided by grid computing and web services. After analyzing the characteristics of grid computing, the paper expounds on the current status of grid computing applications in GIS and the problems they face. Using middleware technology, it presents the architecture of grid GIS and lists the techniques it requires. The paper concludes that a distributed middleware architecture based on grid geographic markup language (GridGML) and web services techniques is a good solution to the current problems; this architecture can also address effective resource sharing over the internet and improve application efficiency. Finally, we discuss its implementation process based on a LAN. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
21. Multilevel decision-making: A survey.
- Author
-
Lu, Jie, Han, Jialin, Hu, Yaoguang, and Zhang, Guangquan
- Subjects
- *
DECISION making, *MULTILEVEL models, *DECENTRALIZED control systems, *ISSUES management (Public relations), *COMPUTER science
- Abstract
Multilevel decision-making techniques aim to deal with decentralized management problems that feature interactive decision entities distributed throughout a multiple-level hierarchy. Significant efforts have been devoted to understanding the fundamental concepts and developing diverse solution algorithms associated with multilevel decision-making by researchers in both mathematics/computer science and business. Researchers have emphasized the importance of developing a range of multilevel decision-making techniques to handle a wide variety of management and optimization problems in real-world applications, and have successfully gained experience in this area. It is thus vital that a high-quality, instructive review of current trends be conducted, covering not only theoretical research results but also practical developments in multilevel decision-making in business. This paper systematically reviews up-to-date multilevel decision-making techniques and clusters related technique developments into four main categories: bi-level decision-making (including multi-objective and multi-follower situations), tri-level decision-making, fuzzy multilevel decision-making, and the applications of these techniques in different domains. By providing state-of-the-art knowledge, this survey will directly support researchers and practical professionals in their understanding of developments in theoretical research results and applications in relation to multilevel decision-making techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
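The bi-level setting surveyed above can be made concrete with a tiny sketch (my own illustration, not from the survey): a leader anticipates a follower's best response and optimizes accordingly. The objectives and the small discrete search space below are hypothetical.

```python
# Toy bi-level decision problem solved by enumeration: the leader picks x,
# the follower reacts with its own best y given x, and the leader chooses
# the x whose induced reaction serves the leader best. Objectives are made up.

def follower_obj(x, y):
    return -(y - x) ** 2                  # the follower prefers to match the leader

def leader_obj(x, y):
    return 10 - (x - 3) ** 2 - 0.5 * y    # the leader pays for large reactions

def follower_best_response(x, ys):
    return max(ys, key=lambda y: follower_obj(x, y))

def solve_bilevel(xs, ys):
    best_x = max(xs, key=lambda x: leader_obj(x, follower_best_response(x, ys)))
    return best_x, follower_best_response(best_x, ys)
```

Enumeration only works for tiny discrete spaces; the survey's point is precisely that realistic bi-level and tri-level problems need dedicated algorithms.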
22. Using minimal generators for composite isolated point extraction and conceptual binary relation coverage: Application for extracting relevant textual features.
- Author
-
Elloumi, S., Ferjani, F., and Jaoua, A.
- Subjects
- *
BINARY number system , *FEATURE extraction , *COMPUTER science , *INFORMATION retrieval , *DATA mining - Abstract
In recent years, several mathematical concepts have been successfully explored in the computer science domain as a basis for finding original solutions for complex problems related to knowledge engineering, data mining, and information retrieval. Hence, relational algebra (RA) and formal concept analysis (FCA) may be considered as useful mathematical foundations that unify data and knowledge into information retrieval systems. For example, some elements in a fringe relation (related to the (RA) domain) called isolated points have been successfully used in FCA as formal concept labels or composite labels. Once associated with words in a textual document, these labels constitute relevant features of a text. This paper proposes the MinGenCoverage algorithm for covering a Formal Context (as a formal representation of a text) based on isolated labels and using these labels (or text features) for categorization, corpus structuring, and micro–macro browsing as an advanced information retrieval functionality. The main thrust of the approach introduced here relies heavily on the close connection between isolated points and minimal generators (MGs). MGs stand at the antipodes of the closures within their respective equivalence classes. By using the fact that the minimal generators are the smallest elements within an equivalence class, their detection and traversal is greatly eased and the coverage can be swiftly built. Extensive experiments provide empirical evidence for the performance of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
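The key objects in the abstract above, closures and minimal generators in a formal context, can be sketched directly. This is a toy illustration over a made-up context, not the MinGenCoverage algorithm itself.

```python
from itertools import combinations

# Hypothetical formal context: objects mapped to the attributes they have.
CONTEXT = {"o1": {"a", "b"}, "o2": {"a", "b", "c"}, "o3": {"b", "c"}}
ALL_ATTRS = set().union(*CONTEXT.values())

def extent(attrs):
    """Objects possessing every attribute in `attrs`."""
    return {o for o, has in CONTEXT.items() if attrs <= has}

def intent(objs):
    """Attributes shared by every object in `objs`."""
    shared = [CONTEXT[o] for o in objs]
    return set.intersection(*shared) if shared else set(ALL_ATTRS)

def closure(attrs):
    """Closure of an attribute set: intent of its extent."""
    return intent(extent(attrs))

def is_minimal_generator(attrs):
    """`attrs` is minimal within its equivalence class of the closure."""
    c = closure(attrs)
    return all(closure(set(sub)) != c
               for r in range(len(attrs))
               for sub in combinations(attrs, r))
```

Minimal generators are the smallest members of their closure's equivalence class, which is exactly the property the paper exploits to ease detection and traversal.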
23. A fast algorithm for predicting links to nodes of interest.
- Author
-
Chen, Bolun, Chen, Ling, and Li, Bin
- Subjects
- *
ALGORITHMS , *INFORMATION science , *COMPUTER science , *ESTIMATION theory , *GRAPH theory - Abstract
The problem of link prediction has recently attracted considerable attention in various domains, such as sociology, anthropology, information science, and computer science. In many real world applications, we must predict similarity scores only between pairs of vertices in which users are interested, rather than predicting the scores of all pairs of vertices in the network. In this paper, we propose a fast similarity-based method to predict links related to nodes of interest. In the method, we first construct a sub-graph centered at the node of interest. By choosing the proper size for such a sub-graph, we can restrict the error of the estimated similarities within a given threshold. Because the similarity score is computed within a small sub-graph, the algorithm can greatly reduce computation time. The method is also extended to predict potential links in the whole network to achieve high process speed and accuracy. Experimental results on real networks demonstrate that our algorithm can obtain high accuracy results in less time than other methods can. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
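The core idea, scoring candidate links for one node of interest inside a small sub-graph rather than over the whole network, can be sketched as follows (a minimal illustration using common-neighbour similarity; the details are mine, not the paper's exact algorithm):

```python
from collections import deque

def ego_subgraph(adj, center, radius):
    """Nodes within `radius` hops of `center` (plain BFS)."""
    seen, frontier = {center}, deque([(center, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == radius:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return seen

def local_scores(adj, center, radius=2):
    """Common-neighbour scores between `center` and its non-neighbours,
    computed only inside the ego sub-graph, so cost stays local."""
    nodes = ego_subgraph(adj, center, radius)
    return {v: len(adj[center] & adj[v] & nodes)
            for v in nodes - {center} - adj[center]}
```

Restricting the computation to the sub-graph is what bounds the running time; choosing `radius` trades accuracy of the scores against that cost, mirroring the error threshold discussed in the abstract.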
24. Common influence region problems.
- Author
-
Fort, M. and Sellarès, J.A.
- Subjects
- *
PROBLEM solving , *DECISION support systems , *GRAPHICS processing units , *CENTRAL processing units , *ALGORITHMS - Abstract
In this paper we propose and solve common influence region problems. These problems are related to the simultaneous influence, or the capacity to attract customers, of two sets of facilities of different types. For instance, while a facility of the first type competes with the other facilities of the first type, it cooperates with several facilities of the second type. The problems studied can be applied, for example, to decision-making support systems for marketing and/or locating facilities. We present parallel algorithms, to be run on a Graphics Processing Unit, for approximately solving the problems considered here. We also provide experimental results and discuss the efficiency and scalability of our approach. Finally, we present the speedup ratios obtained when the running times of the parallel proposed algorithms using a GPU are compared with those obtained from their respective efficient sequential CPU versions. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
25. Model checking temporal properties of reaction systems.
- Author
-
Męski, Artur, Penczek, Wojciech, and Rozenberg, Grzegorz
- Subjects
- *
MATHEMATICAL sequences , *BENCHMARK problems (Computer science) , *BOOLEAN functions , *COMPUTER science , *MATHEMATICAL proofs - Abstract
This paper defines a temporal logic for reaction systems (rsCTL). The logic is interpreted over the models for context-restricted reaction systems, which generalise standard reaction systems by controlling context sequences. Moreover, a translation from context-restricted reaction systems into boolean functions is defined for use in symbolic model checking of rsCTL over these systems. Model checking for rsCTL is proved to be PSPACE-complete. The proposed approach to model checking was implemented and experimentally evaluated using four benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
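For readers unfamiliar with reaction systems, one computation step under the standard semantics is easy to state in code (a background sketch, not the rsCTL model checker): a reaction (R, I, P) is enabled on a state W when R is a subset of W and I is disjoint from W, and the next state is the union of the products of all enabled reactions.

```python
def step(reactions, state):
    """One step of a basic reaction system.
    Each reaction is a triple (reactants, inhibitors, products) of sets;
    the result is the union of products of all enabled reactions."""
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result
```

Context-restricted systems, as studied in the paper, additionally add a (controlled) context set into the state before each step.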
26. Multi-granularity distance metric learning via neighborhood granule margin maximization.
- Author
-
Zhu, Pengfei, Hu, Qinghua, Zuo, Wangmeng, and Yang, Meng
- Subjects
- *
SUPPORT vector machines , *DECISION making , *MATHEMATICAL optimization , *COMPACT spaces (Topology) , *DISTANCE education , *COMPUTER science - Abstract
Learning a distance metric from training samples is often a crucial step in machine learning and pattern recognition. Locality, compactness and consistency are considered as the key principles in distance metric learning. However, the existing metric learning methods just consider one or two of them. In this paper, we develop a multi-granularity distance learning technique. First, a new index, neighborhood granule margin, which simultaneously considers locality, compactness and consistency of neighborhood, is introduced to evaluate a distance metric. By maximizing neighborhood granule margin, we formulate the distance metric learning problem as a sample pair classification problem, which can be solved by standard support vector machine solvers. Then a set of distance metrics are learned in different granular spaces. The weights of the granular spaces are learned through optimizing the margin distribution. Finally, the decisions from different granular spaces are combined with weighted voting. Experiments on UCI datasets, gender classification and object categorization tasks show that the proposed method is superior to the state-of-the-art distance metric learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
27. Partial order reduction for checking soundness of time workflow nets.
- Author
-
Boucheneb, Hanifa and Barkaoui, Kamel
- Subjects
- *
WORKFLOW , *SEMANTICS , *ABSTRACT thought , *COMPUTER science , *CONFIRMATION (Logic) - Abstract
Due to the critical role of workflows in organizations, their design must be assisted by automatic formal verification approaches. The aim is to prove formally, before implementation, their correctness w.r.t. the required properties, such as safely achieving the expected services (the soundness property). In this perspective, time workflow nets (TWF-nets for short) are proposed as a framework to specify and verify the soundness of workflows. The verification process is based on state space abstractions and takes into account the time constraints of workflows. However, it suffers from the state explosion problem due to the interleaving semantics of TWF-nets. To attenuate this problem, this paper investigates the combination of a state space abstraction with a partial order reduction technique. Firstly, it shows that to verify soundness of a TWF-net, it suffices to explore its non-equivalent firing sequences. Then, it establishes a selection procedure for the subset of transitions to explore from each abstract state and proves that it covers all its non-equivalent firing sequences. Finally, the effectiveness of the proposed approach is assessed by some experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
28. A review of microarray datasets and applied feature selection methods.
- Author
-
Bolón-Canedo, V., Sánchez-Maroño, N., Alonso-Betanzos, A., Benítez, J. M., and Herrera, F.
- Subjects
- *
MICROARRAY technology , *BIG data , *MACHINE learning , *SAMPLE size (Statistics) , *COMPARATIVE studies , *COMPUTER science - Abstract
Microarray data classification is a difficult challenge for machine learning researchers due to its high number of features and small sample sizes. Feature selection was soon considered a de facto standard in this field after its introduction, and a huge number of feature selection methods have been applied in an attempt to reduce the input dimensionality while improving classification performance. This paper is devoted to reviewing the most up-to-date feature selection methods developed in this field and the microarray databases most frequently used in the literature. We also make the interested reader aware of the problems posed by data characteristics in this domain, such as the imbalance of the data, their complexity, or the so-called dataset shift. Finally, an experimental evaluation on the most representative datasets using well-known feature selection methods is presented, bearing in mind that the aim is not to identify the best feature selection method, but to facilitate comparative study by the research community. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
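As a minimal concrete example of the filter-style selection the review covers (my own toy ranking, not one of the reviewed methods), here is a univariate filter that keeps the k highest-variance features:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def top_k_features(samples, k):
    """samples: list of equal-length feature vectors.
    Returns indices of the k highest-variance features, a crude
    univariate filter that ignores feature interactions."""
    n_feat = len(samples[0])
    scores = [(variance([s[j] for s in samples]), j) for j in range(n_feat)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

Univariate filters like this are cheap for the tens of thousands of features typical of microarrays, which is why filter methods dominate the field; their blindness to feature interactions is one of the limitations the review discusses.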
29. Weight evaluation for features via constrained data-pairs.
- Author
-
Liu, Ming, Wu, Chong, and Liu, Yuanchao
- Subjects
- *
ALGORITHMS , *PROBABILITY theory , *INCONSISTENCY (Logic) , *DISTRIBUTION (Probability theory) , *COMPUTER science - Abstract
Facing the massive amount of data appearing on the web, automatic analysis tools have become essential for web users to discover valuable information online. Precise similarity measurement plays a decisive role in enabling analysis tools to achieve high-quality performance. Because different features contribute diversely to similarity calculation, it is necessary to use weights to measure each feature's contribution and import them into similarity measurement. To assign feature weights accurately, constrained data-pairs provided by users are usually imported into the weight evaluation procedure, whereas conventional approaches all fail to consider two challenges: (a) the asymmetrical distribution of constrained data-pairs, and (b) the inconsistency contained in constrained data-pairs. When these two issues occur, conventional approaches are incompetent at addressing them or are even unable to work. Thus, this paper proposes a novel constraint-based weight evaluation to address these two issues. For the former, constrained data-pairs are partitioned into several equivalent classes, and distributing parameters are assigned to constrained data-pairs to balance their distributions. For the latter, constrained data-pairs are connected one after another, and belief values are thereby formed to indicate their probability of being inconsistent. Experimental results demonstrate that this type of evaluation is independent of any algorithm. With this evaluation, similarities can be calculated more accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. Some properties of limit inferior and limit superior for sequences of fuzzy real numbers.
- Author
-
Talo, Özer
- Subjects
- *
MATHEMATICAL sequences , *FUZZY systems , *REAL numbers , *COMPUTER science , *INFORMATION theory , *INFORMATION science - Abstract
The limit inferior and limit superior of a bounded sequence of fuzzy real numbers were introduced by Aytar et al. (2008) [1]. In this paper we give simplified expressions for the limit inferior and limit superior. The expressions are concise and convenient for use. As a straightforward corollary of these expressions, we can easily prove some properties of the limit inferior and limit superior. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
31. Evolutionary membrane computing: A comprehensive survey and new results.
- Author
-
Zhang, Gexiang, Gheorghe, Marian, Pan, Linqiang, and Pérez-Jiménez, Mario J.
- Subjects
- *
EVOLUTIONARY computation , *COMPUTER programming , *EVOLUTIONARY algorithms , *COMPUTER science , *INFORMATION theory , *INFORMATION science - Abstract
Evolutionary membrane computing is an important research direction of membrane computing that aims to explore the complex interactions between membrane computing and evolutionary computation. These disciplines are receiving increasing attention. In this paper, an overview of the evolutionary membrane computing state-of-the-art and new results on two established topics with well-defined scopes (membrane-inspired evolutionary algorithms and automated design of membrane computing models) are presented. We survey their theoretical developments and applications, sketch the differences between them, and compare their advantages and limitations. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
32. Exact formulas for fixation probabilities on a complete oriented star.
- Author
-
Yang, Xiaofan, Zhang, Chunming, Liu, Jiming, and Li, Hongwei
- Subjects
- *
PROBABILITY theory , *MATHEMATICAL formulas , *ELECTRONIC amplifiers , *DYNAMICAL systems , *COMPUTER science - Abstract
It is already known that large complete oriented stars (COSs) with intrinsic weights are amplifiers of selection. This paper addresses the evolutionary dynamics on COSs. First, we give the exact formulas for the fixation probabilities on a COS. We then apply these formulas to study some properties of COSs. The obtained results partially reveal the way in which the fixation probabilities are affected by COSs. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
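The baseline against which amplifiers of selection such as COSs are judged is the classical fixation probability of a single mutant in a well-mixed Moran process; the paper's formulas refine this for stars. A sketch of the classical formula (standard background, not the paper's COS-specific results):

```python
def fixation_probability(r, n):
    """Fixation probability of one mutant of relative fitness r in a
    well-mixed Moran process with population size n (classical formula):
    rho = (1 - 1/r) / (1 - r**(-n)), with the neutral limit 1/n at r = 1."""
    if r == 1:
        return 1 / n          # neutral drift
    return (1 - 1 / r) / (1 - r ** (-n))
```

An amplifier of selection is a population structure whose fixation probability for advantageous mutants (r > 1) exceeds this well-mixed value.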
33. A general framework of hierarchical clustering and its applications.
- Author
-
Cai, Ruichu, Zhang, Zhenjie, Tung, Anthony K.H., Dai, Chenyun, and Hao, Zhifeng
- Subjects
- *
APPLICATION software , *COMPUTER science , *ALGORITHMS , *DATA analysis , *MATHEMATICAL inequalities , *COMPUTER user identification - Abstract
The hierarchical clustering problem is a traditional topic in computer science, which aims to discover a consistent hierarchy of clusters with different granularities. One of the most important open questions on hierarchical clustering is the identification of the meaningful clustering levels in the hierarchical structure. In this paper, we answer this question from an algorithmic point of view. In particular, we derive a quantitative analysis of the impact of the low-level clustering costs on high-level clusters, when agglomerative algorithms are run to construct the hierarchy. This analysis enables us to find meaningful clustering levels, which are independent of the clusters hierarchically beneath them. We thus propose a general agglomerative hierarchical clustering framework, which automatically constructs meaningful clustering levels. This framework is proven to be generally applicable to any k-clustering problem in any relaxed metric space, in which the strict triangle inequality is relaxed within some constant factor. To fully utilize the hierarchical clustering framework, we conduct case studies on the k-median and k-means clustering problems, in both of which our framework achieves a better approximation factor than the state-of-the-art methods. We also extend our framework to handle the data stream clustering problem, which allows only one scan of the whole data set. By incorporating our framework into Guha's data stream clustering algorithm, the clustering quality is greatly enhanced with only a small extra computation cost incurred. Extensive experiments show that our proposal is superior to distance-based agglomerative hierarchical clustering and data stream clustering algorithms on a variety of data sets. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
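The per-merge costs that the framework analyses can be made concrete with a bare-bones agglomerative pass (single linkage on 1-D points; a toy illustration of the generic algorithm, not the paper's framework):

```python
def dist(a, b):
    return abs(a - b)                     # 1-D points for brevity

def single_linkage(c1, c2):
    """Single-linkage distance: closest pair across the two clusters."""
    return min(dist(a, b) for a in c1 for b in c2)

def agglomerate(points):
    """Repeatedly merge the closest pair of clusters, recording the cost
    of each merge; large jumps in cost hint at meaningful levels."""
    clusters = [[p] for p in points]
    history = []                          # (merge_cost, clusters_after_merge)
    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]))
        cost = single_linkage(clusters[i], clusters[j])
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        history.append((cost, [sorted(c) for c in clusters]))
    return history
```

Inspecting the sequence of merge costs is the informal version of the paper's quantitative question: which levels of the hierarchy are independent of the low-level merges beneath them.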
34. Solving the k-influence region problem with the GPU.
- Author
-
Fort, M. and Sellarès, J.A.
- Subjects
- *
GRAPHICS processing units , *DECISION making , *EUCLIDEAN domains , *PROBLEM solving , *INFORMATION processing , *ALGORITHMS - Abstract
In this paper we study a problem that arises in the competitive facility location field. Facilities and customers are represented by points of a planar Euclidean domain. We associate a weighted distance to each facility to reflect that customers select facilities depending on distance and importance. We define, by considering weighted distances, the k-influence region of a facility as the set of points of the domain that have the given facility among their k-nearest/farthest neighbors. On the other hand, we partition the domain into subregions so that each subregion has a non-negative weight associated to it which measures a characteristic related to the area of the subregion. Given a weighted partition of the domain, the k-influence region problem finds the points of the domain where a new facility should be opened. This is done considering the known weight associated to the new facility and ensuring a minimum weighted area of its k-influence region. We present a GPU parallel approach, designed under the CUDA architecture, for approximately solving the k-influence region problem. In addition, we describe how to visualize the solutions, which improves the understanding of the problem and reveals complicated structures that would be hard to capture otherwise. The integration of computation and visualization provides decision makers with an iterative what-if analysis process to acquire more information and obtain an approximately optimal location. Finally, we provide and discuss experimental results showing the efficiency and scalability of our approach. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
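The core geometric test behind the problem is simple to state sequentially (a CPU sketch of the membership test as I read the abstract; the paper's contribution is evaluating it in parallel on the GPU over the whole domain): a facility f influences a point p at level k when f is among p's k nearest facilities under the weighted distance dist(p, f) / w_f.

```python
import math

def weighted_dist(p, facility):
    """Weighted Euclidean distance; a larger weight makes the facility
    effectively 'closer', i.e. more attractive."""
    (fx, fy), w = facility
    return math.dist(p, (fx, fy)) / w

def in_k_influence(p, f_index, facilities, k):
    """True iff facility `f_index` is among the k nearest facilities of p
    under weighted distance, i.e. p lies in its k-influence region."""
    d = [weighted_dist(p, f) for f in facilities]
    rank = sorted(range(len(facilities)), key=lambda i: d[i])
    return f_index in rank[:k]
```

Running this test over a dense grid of domain points, weighting each point by its subregion, gives the (approximate) weighted area of a candidate facility's k-influence region.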
35. SQBC: An efficient subgraph matching method over large and dense graphs.
- Author
-
Zheng, Weiguo, Zou, Lei, Lian, Xiang, Zhang, Huaming, Wang, Wei, and Zhao, Dongyan
- Subjects
- *
SUBGRAPHS , *GRAPH theory , *MATCHING theory , *DENSE graphs , *COMPUTER science , *COMPUTER networks , *COMPUTER performance - Abstract
Recent progress in biology and computer science has generated many complicated networks, most of which can be modeled as large and dense graphs. Developing effective and efficient subgraph matching methods over these graphs is urgent, meaningful and necessary. Although some excellent exploratory approaches have been proposed in recent years, they show poor performance when the graphs are large and dense. This paper presents a novel Subgraph Query technique Based on Clique features, called SQBC, which integrates a carefully designed clique encoding with the existing vertex encoding [40] as the basic index unit to reduce the search space. Furthermore, SQBC optimizes the subgraph isomorphism test based on clique features. Extensive experiments over biological networks, an RDF dataset and synthetic graphs have shown that SQBC outperforms the most popular competitors in both effectiveness and efficiency, especially when the data graphs are large and dense. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
36. Incremental causal network construction over event streams.
- Author
-
Acharya, Saurav and Lee, Byung Suk
- Subjects
- *
COMPUTER networks , *DATA analysis , *ALGORITHMS , *BAYESIAN analysis , *MONTE Carlo method , *COMPUTER science - Abstract
This paper addresses modeling causal relationships over event streams where data are unbounded and hence incremental modeling is required. There is no existing work on incremental causal modeling over event streams. Our approach is based on Popper's three conditions, which are generally accepted for inferring causality: temporal precedence of cause over effect, dependency between cause and effect, and elimination of plausible alternatives. We meet these conditions by proposing a novel incremental causal network construction algorithm. This algorithm infers causality by learning the temporal precedence relationships using our own new incremental temporal network construction algorithm, and the dependency by adopting a state-of-the-art incremental Bayesian network construction algorithm called the Incremental Hill-Climbing Monte Carlo. Moreover, we provide a mechanism to infer only strong causality, which provides a way to eliminate weak alternatives. This research benefits causal analysis over event streams by providing a novel two-layered causal network without the need for prior knowledge. Experiments using synthetic and real datasets demonstrate the efficacy of the proposed algorithm. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
37. On rough approximations via ideal.
- Author
-
Tantawy, O.A.E. and Mustafa, Heba. I.
- Subjects
- *
ROUGH sets , *APPROXIMATION theory , *OPERATOR theory , *LATTICE theory , *INFINITY (Mathematics) , *COMPUTER science - Abstract
In this paper, we generalize the rough set model by defining new approximation operators in the more general setting of a complete atomic Boolean lattice by using an ideal. An ideal on a set X is a nonempty collection of subsets of X with the heredity property which is also closed under finite unions. We introduce the concept of lower and upper approximations via an ideal in a lattice-theoretical setting. These decrease the upper approximation and increase the lower approximation, and hence increase the accuracy. Properties of these approximations are studied. Properties of the ordered sets of the lower and upper approximations of an element of a complete atomic Boolean lattice via an ideal are also investigated. We also study the connections between the rough approximations defined by Järvinen [10,11] and our new approximations. Various examples are given. Finally we give a new approach for defining the rough approximations w.r.t. the induced map by using an ideal. We study the connections between the rough approximations defined with respect to the induced map by using an ideal and the rough approximations defined with respect to the considered map, under certain conditions on the map. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
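The classical Pawlak approximations that the paper generalises (to complete atomic Boolean lattices equipped with an ideal) fit in a few lines; this sketch shows only the classical set-based case, not the lattice/ideal construction:

```python
def approximations(classes, target):
    """Pawlak lower/upper approximations of `target` w.r.t. a partition
    `classes` (the equivalence classes of an indiscernibility relation)."""
    lower, upper = set(), set()
    for cls in classes:
        if cls <= target:
            lower |= cls      # class certainly inside the target
        if cls & target:
            upper |= cls      # class possibly inside the target
    return lower, upper
```

The ideal-based operators of the paper shrink the upper approximation and enlarge the lower one, tightening the accuracy ratio |lower| / |upper|.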
38. The relationship among different covering approximations.
- Author
-
Liu, Guilong
- Subjects
- *
APPROXIMATION theory , *ROUGH sets , *AXIOMATIC set theory , *TOPOLOGY , *COMPUTER science , *COMPUTER systems - Abstract
This paper studies covering-based rough sets by means of characteristic functions of sets. We consider covering-based rough sets from both constructive and axiomatic approaches. The relationship among four types of covering-based rough sets is discussed. We also outline the topologies induced by different covering approximations. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
39. Hukuhara differentiability of interval-valued functions and interval differential equations on time scales.
- Author
-
Lupulescu, Vasile
- Subjects
- *
INTERVAL functions , *DIFFERENTIAL equations , *TIMESCALE number , *INTEGRABLE functions , *INFORMATION science , *COMPUTER science - Abstract
Using the concept of the generalized Hukuhara difference, in this paper we introduce and study the differentiability and the integrability of interval-valued functions on time scales. Some illustrative examples of interval differential equations on time scales are presented. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
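The generalized Hukuhara difference of two compact intervals, the starting point of the paper, has a simple closed form worth recording (the standard interval formula; the paper builds calculus on time scales on top of it):

```python
def gh_difference(a, b):
    """Generalized Hukuhara difference of intervals A = [a1, a2] and
    B = [b1, b2]:  A gH- B = [min(a1-b1, a2-b2), max(a1-b1, a2-b2)]."""
    (a1, a2), (b1, b2) = a, b
    lo, hi = a1 - b1, a2 - b2
    return (min(lo, hi), max(lo, hi))
```

Unlike the classical Hukuhara difference, this difference exists for every pair of intervals, which is what makes interval differentiability on time scales workable.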
40. Weighted fuzzy interpolative reasoning systems based on interval type-2 fuzzy sets.
- Author
-
Chen, Shyi-Ming, Lee, Li-Wei, and Shen, Victor R.L.
- Subjects
- *
FUZZY sets , *INTERPOLATION , *REASONING , *RULE-based programming , *COMPUTER science , *INFORMATION science - Abstract
In this paper, we present a weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on interval type-2 fuzzy sets. We also apply the proposed weighted fuzzy interpolative reasoning method to deal with the truck backer-upper control problem. The proposed method satisfies the seven evaluation indices for fuzzy interpolative reasoning. The experimental results show that the proposed method outperforms the existing methods. It provides us with a useful way of dealing with fuzzy interpolative reasoning in sparse fuzzy rule-based systems. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
41. Introducing the Discriminative Paraconsistent Machine (DPM)
- Author
-
Guido, Rodrigo Capobianco, Barbon, Sylvio, Solgon, Regiane Denise, Silva Paulo, Kátia Cristina, Rodrigues, Luciene Cavalcanti, da Silva, Ivan Nunes, and Lemos Escola, João Paulo
- Subjects
- *
PATTERN recognition systems , *ARTIFICIAL intelligence , *MACHINE theory , *INTELLIGENT agents , *COMPUTER science , *INFORMATION technology - Abstract
This paper introduces a new tool for pattern recognition. Called the Discriminative Paraconsistent Machine (DPM), it is based on supervised discriminative model training that incorporates paraconsistency criteria and allows an intelligent treatment of contradictions and uncertainties. DPMs can be applied to solve problems in many fields of science, as the tests and discussions presented here demonstrate their efficacy and usefulness. The major difficulty and challenge that was overcome consisted basically in establishing the proper model with which to represent the concept of paraconsistency. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
42. Intuitionistic fuzzy hypergraphs with applications
- Author
-
Akram, Muhammad and Dudek, Wieslaw A.
- Subjects
- *
FUZZY hypergraphs , *INTUITIONISTIC mathematics , *COMPUTER simulation , *CIRCUIT complexity , *PARTITIONS (Mathematics) , *COMPUTER science , *DATA structures , *ELECTRONIC circuits , *FUZZY sets - Abstract
Hypergraphs are considered a useful tool for modeling system architectures and data structures, and for representing partitions, coverings and clusterings in the area of circuit design. In this paper, we apply concepts of intuitionistic fuzzy set theory to generalize results concerning hypergraphs. For each intuitionistic fuzzy structure defined, we use cut-level sets to define an associated sequence of crisp structures. We determine what properties of the sequence of crisp structures characterize a given property of the intuitionistic fuzzy structure. We also present applications of intuitionistic fuzzy hypergraphs. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
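The cut-level device mentioned above reduces an intuitionistic fuzzy set to crisp sets; a minimal sketch with toy membership/non-membership values of my own choosing:

```python
def cut(ifs, alpha, beta):
    """(alpha, beta)-cut of an intuitionistic fuzzy set: keep elements with
    membership mu >= alpha and non-membership nu <= beta.
    `ifs` maps each element to a pair (mu, nu) with mu + nu <= 1."""
    return {x for x, (mu, nu) in ifs.items() if mu >= alpha and nu <= beta}
```

Sweeping (alpha, beta) over a grid yields the associated sequence of crisp structures whose properties the paper relates back to the intuitionistic fuzzy structure.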
43. 3SEPIAS: A Semi-Structured Search Engine for Personal Information in dAtaspace System
- Author
-
Zhong, Ming, Liu, Mengchi, and He, Yanxiang
- Subjects
- *
PERSONAL information management , *WEB search engines , *QUERY (Information retrieval system) , *OBJECT monitors (Computer software) , *COMPUTER science , *DATA integration , *DATA acquisition systems , *SOFTWARE engineering - Abstract
Nowadays, personal information is distributed across more and more heterogeneous sources, which presents a huge obstacle to the management and retrieval of personal information. To address this problem, this paper presents the blueprint of a novel Personal Information Management (PIM) system named 3SEPIAS (short for Semi-Structured Search Engine for Personal Information in dAtaspace System). 3SEPIAS has three main features: data integration without upfront semantic reconciliation, a flexible query model for data having a sparse and evolving schema, and an efficient best-effort proximity search approach on graphs. For that, we first propose a semi-structured graph data model called the Interpreted Object Model (IOM) to uniformly represent a user's heterogeneous personal information and loosely integrate it into a dataspace in a schema-later way. Then, a Semi-Structured Search Engine (3SE) can be used to search over the personal dataspaces. We propose an intuitive 3SE Query Language (3SQL) that enables users to query with a varying degree of structural constraint according to their knowledge of the underlying schemas. Moreover, a best-effort top-k proximity search optimization strategy and corresponding graph index structures are proposed to improve the efficiency of query processing. We perform comprehensive experiments to test both the effectiveness and efficiency of our proximity search approach. The results reveal that 3SE can beat previous proximity search systems by a large margin with little or even no loss of result quality, especially for large graphs. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
44. Relationships among generalized rough sets in six coverings and pure reflexive neighborhood system
- Author
-
Wang, Lijuan, Yang, Xibei, Yang, Jingyu, and Wu, Chen
- Subjects
- *
ROUGH sets , *SET theory , *SEMANTICS , *COMPUTER science , *BINARY number system , *MATHEMATICAL models , *APPROXIMATION theory - Abstract
Rough set theory is a useful tool to deal with partition-related uncertainty, granularity, and incompleteness of knowledge. Although the classical rough set is constructed on the basis of an indiscernibility relation, it can also be generalized by using some weaker binary relations. In this paper, a systematic approach is used to study the generalized rough sets in six coverings and the pure reflexive neighborhood system. In two steps, relationships among generalized rough sets in the six coverings and the pure reflexive neighborhood system are obtained. The first step is to study the generalized rough sets in the six coverings, and to get the relationships between every two covering rough set models. The second step is to study the relationships between the generalized rough sets in each covering and in the pure reflexive neighborhood system. The inclusion relations or equivalence relations among the seven upper/lower approximations can thereby be acquired. Finally, the accuracy measures of the generalized rough sets in the six coverings and that in the pure reflexive neighborhood system are compared. The relationships among the seven accuracy measures are also obtained. Some illustrative examples are employed to demonstrate our arguments. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
45. From cluster ensemble to structure ensemble
- Author
-
Yu, Zhiwen, You, Jane, Wong, Hau-San, and Han, Guoqiang
- Subjects
- *
CLUSTER analysis (Statistics) , *DATA analysis , *GENE expression , *STATISTICAL correlation , *GRAPH theory , *ALGORITHMS , *PERFORMANCE evaluation , *COMPUTER science - Abstract
This paper investigates the problem of integrating multiple structures which are extracted from different sets of data points into a single unified structure. We first propose a new generalized concept called the structure ensemble for the fusion of multiple structures. Unlike traditional cluster ensemble approaches, whose main objective is to align individual labels obtained from different clustering solutions, the structure ensemble approach focuses on how to unify the structures obtained from different data sources. Based on this framework, a new structure ensemble approach called the probabilistic bagging based structure ensemble approach (BSEA) is designed, which integrates the bagging technique, the force based self-organizing map (FBSOM) and the normalized cut algorithm into the proposed framework. BSEA views structures obtained from different datasets generated by the bagging technique as nodes in a graph, and adopts graph theory to find the most representative structure. In addition, the force based self-organizing map (FBSOM), which is a generalized form of the SOM, is proposed to serve as the basic clustering algorithm in the structure ensemble framework. Finally, a new external index called the correlation index (CI), which considers the correlation relationship of both the similarity and dissimilarity between the predicted solution and the true solution, is proposed to evaluate the performance of BSEA. The experiments show that (i) BSEA outperforms most of the state-of-the-art clustering approaches, and (ii) BSEA performs well on datasets from the UCI repository and real cancer gene expression profiles. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
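The graph-based step of the abstract above, picking the most representative structure among several clusterings, can be illustrated with a minimal sketch. This is not the authors' BSEA: it substitutes plain co-association matrices and total pairwise similarity for FBSOM and the normalized cut, and all function names are illustrative.

```python
def coassociation(labels, n):
    """n x n co-association matrix of one clustering: 1 if i, j share a cluster."""
    return [[1 if labels[i] == labels[j] else 0 for j in range(n)]
            for i in range(n)]

def similarity(a, b):
    """Fraction of entries on which two co-association matrices agree."""
    n = len(a)
    match = sum(1 for i in range(n) for j in range(n) if a[i][j] == b[i][j])
    return match / (n * n)

def most_representative(clusterings, n):
    """Treat each structure as a graph node; return the index of the one
    with the largest total similarity to all others."""
    mats = [coassociation(c, n) for c in clusterings]
    scores = [sum(similarity(m, other) for other in mats) for m in mats]
    return scores.index(max(scores))
```

With three clusterings of four points where two agree, the agreeing structure wins the vote.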
46. Approximations and uncertainty measures in incomplete information systems
- Author
-
Dai, Jianhua and Xu, Qing
- Subjects
- *
APPROXIMATION theory , *UNCERTAINTY (Information theory) , *INFORMATION storage & retrieval systems , *ENTROPY (Information theory) , *INFORMATION resources , *COMPUTER science , *INFORMATION theory , *DATA analysis - Abstract
Abstract: There are mainly two methodologies dealing with the uncertainty measurement issue in rough set theory: the pure rough set approach and the information theory approach. The pure rough set approach is based on the concepts of accuracy, roughness and approximation accuracy proposed by Pawlak. The information theory approach is based on Shannon's entropy or its variants. Several authors have extended the information theory approach to incomplete information systems. However, there are few studies on extending the pure rough set approach to incomplete information systems. This paper focuses on constructing uncertainty measures in incomplete information systems by the pure rough set approach. Three types of definitions of lower and upper approximations and the corresponding uncertainty measurement concepts, including accuracy, roughness and approximation accuracy, are investigated. Theoretical analysis indicates that two of the three types can be used to evaluate the uncertainty in incomplete information systems. Experiments on incomplete real-life data sets have been conducted to test the two selected types (the first type and the third type) of uncertainty measures. Results show that both types of uncertainty measures are effective. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
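The abstract above builds on lower/upper approximations in incomplete systems. A minimal sketch of one standard construction, the tolerance relation over objects with missing values ('*') and Pawlak-style accuracy, is shown below; the paper studies three definitions, and this sketch illustrates only the general idea, not the authors' specific variants.

```python
def tolerant(x, y):
    """Two objects are tolerant if they agree on every attribute
    where neither value is missing ('*')."""
    return all(a == b or a == '*' or b == '*' for a, b in zip(x, y))

def tolerance_class(objs, i):
    """Indices of all objects tolerant with object i."""
    return {j for j in range(len(objs)) if tolerant(objs[i], objs[j])}

def approximations(objs, target):
    """Lower approximation: tolerance class fully inside the target set.
    Upper approximation: tolerance class overlapping the target set."""
    lower = {i for i in range(len(objs)) if tolerance_class(objs, i) <= target}
    upper = {i for i in range(len(objs)) if tolerance_class(objs, i) & target}
    return lower, upper

def accuracy(lower, upper):
    """Pawlak accuracy: |lower| / |upper|."""
    return len(lower) / len(upper)
```

A missing value enlarges tolerance classes, which shrinks the lower approximation and grows the upper one, i.e. it increases measured uncertainty.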
47. TempoXML: Nested bitemporal relationship modeling and conversion tool for fuzzy XML
- Author
-
Işıkman, Ömer Özgün, Özyer, Tansel, Zarour, Omar, Alhajj, Reda, and Polat, Faruk
- Subjects
- *
XML (Extensible Markup Language) , *THEORY of knowledge , *TREND analysis , *PREDICTION theory , *DATABASE design , *COMPUTER science , *DATABASE management , *FUZZY mathematics - Abstract
Abstract: The importance of incorporating time in databases has been well realized by the research community. Accordingly, temporal databases have been extensively studied by researchers. The main idea is to add a time or temporal dimension to the model and then tag data elements with time in order to keep all values instead of only the last one, and hence allow time-driven queries. This way it becomes possible to retrieve various values of the same element. This allows better knowledge discovery and trend analysis by looking back into the history to predict the future. Unfortunately, one disadvantage of temporal database management systems is that they have not been commercialized. The work described in this paper reflects our effort to demonstrate the power and effectiveness of the temporal dimension once it is well integrated into databases. We decided on XML (eXtensible Markup Language) as the underlying data model. The motivation is twofold. First, XML is a de facto standard for data exchange; we have already demonstrated the power of XML in our other work described in the literature. Second, nested bitemporal databases form one interesting type of temporal databases. Thus, our purpose is to suggest an automated system that converts a nested bitemporal database to a corresponding fuzzy XML database. A fuzzy query model has been implemented as part of the proposed framework in order to provide flexibility to a wide range of end users willing to access the database. The implemented temporal operators are database content independent. Fuzzy elements are capable of having different membership functions and varying numbers of linguistic variables. We have proposed a scheme for determining membership function parameters. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
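Two ingredients of the abstract above can be sketched compactly: bitemporal versioning (each value tagged with a valid-time and a transaction-time interval) and a fuzzy membership function for linguistic variables. This is a generic illustration, not TempoXML's schema or its parameter-determination scheme; the record layout and function names are assumptions.

```python
def current_value(versions, valid_t, tx_t):
    """Bitemporal lookup: each version is (value, valid_from, valid_to,
    tx_from, tx_to); return the value whose two intervals cover the query."""
    return next(v for v, vf, vt, tf, tt in versions
                if vf <= valid_t < vt and tf <= tx_t < tt)

def triangular(a, b, c):
    """Triangular membership function with peak at b on support (a, c),
    a common choice for fuzzy linguistic variables."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu
```

Keeping both time axes is what lets a query distinguish "what was true at time t" from "what the database believed at time t".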
48. Minimizing the ripple effect of web-centric software by using the pheromone extension
- Author
-
Banerjee, Soumya and Al-Qaheri, Hameed
- Subjects
- *
COMPUTER software , *DATA analysis , *PROGRAMMING languages , *SOFTWARE development tools , *ANT algorithms , *PHEROMONES , *COMPUTER science , *SOFTWARE engineering - Abstract
Abstract: The ripple effect metric shows what impact changes to software will likely have on the rest of the system. In web-based data analysis, it has become a widespread practice to deploy both the script and the program developed with a high-level programming language software tool. Due to different vendors and the diversified philosophies behind different software tools, it may be difficult to cope with the ripple effect across them. This paper initiates an experimental idea to minimize the wrapper interface ripple for web-based script tools and high-level programming environments, and also indicates many potential research directions for the development of computationally intelligent tools in the software engineering domain that demand lower cost and less complexity. This work incorporates Ant Colony Optimization (ACO) and its prime artifact, the pheromone, which has been modified as a pheromone extension module to minimize ripples when cross-coding. A standard benchmark data set has been taken to validate the performance of the proposed algorithm. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
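The pheromone mechanism the abstract above borrows from ACO can be sketched in a few lines: trails evaporate, the chosen option is reinforced inversely to its cost, and future choices favor strong trails. This is the generic ACO update, not the authors' pheromone extension module; the evaporation rate and deposit constant are illustrative defaults.

```python
def update_pheromone(tau, chosen, cost, rho=0.5, q=1.0):
    """Evaporate every trail by factor (1 - rho), then deposit q / cost
    on the option that was chosen (lower cost => stronger reinforcement)."""
    for k in tau:
        tau[k] *= (1 - rho)
    tau[chosen] += q / cost
    return tau

def best_option(tau):
    """Greedy read-out: the option with the strongest pheromone trail."""
    return max(tau, key=tau.get)
```

Repeated over many iterations, low-ripple wrapper mappings accumulate pheromone and crowd out high-ripple ones.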
49. Learning data structure from classes: A case study applied to population genetics
- Author
-
del Coz, J.J., Díez, J., Bahamonde, A., and Goyache, F.
- Subjects
- *
DATA structures , *CASE studies , *POPULATION genetics , *MACHINE learning , *DATA mining , *HYPOTHESIS , *COMPUTER science , *ANIMAL breeds - Abstract
Abstract: In most cases, the main goal of machine learning and data mining applications is to obtain good classifiers. However, final users, for instance researchers in other fields, sometimes prefer to infer new knowledge about their domain that may be useful to confirm or reject their hypotheses. This paper presents a learning method that works along these lines, in addition to reporting three interesting applications in the field of population genetics in which the aim is to discover relationships between species or breeds according to their genotypes. The proposed method has two steps: first, it builds a hierarchical clustering of the set of classes, and then a hierarchical classifier is learned. Both models can be analyzed by experts to extract useful information about their domain. In addition, we propose a new method for learning the hierarchical classifier. By means of a voting scheme employing pairwise binary models constrained by the hierarchical structure, the proposed classifier is computationally more efficient than previous approaches while improving on their performance. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
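The two-step scheme in the abstract above, cluster the classes into a hierarchy, then classify by descending it, can be sketched minimally. This is not the authors' pairwise-voting method: it uses simple agglomerative merging of class centroids and nearest-centroid descent, and every function name is illustrative.

```python
def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_hierarchy(class_centroids):
    """Agglomerative clustering of classes: repeatedly merge the two closest
    nodes into a parent. Leaf = (centroid, label); internal = (centroid, (a, b))."""
    nodes = [(c, label) for label, c in class_centroids.items()]
    while len(nodes) > 1:
        i, j = min(((i, j) for i in range(len(nodes))
                    for j in range(i + 1, len(nodes))),
                   key=lambda p: dist2(nodes[p[0]][0], nodes[p[1]][0]))
        a, b = nodes[i], nodes[j]
        merged = (tuple((x + y) / 2 for x, y in zip(a[0], b[0])), (a, b))
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0]

def classify(node, x):
    """Descend the class hierarchy, always taking the nearer child's branch."""
    while not isinstance(node[1], str):
        a, b = node[1]
        node = a if dist2(a[0], x) <= dist2(b[0], x) else b
    return node[1]
```

The learned tree itself is the interpretable artifact: which classes merge first reveals which species or breeds the data considers closest.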
50. Diagnosability of star graphs with missing edges
- Author
-
Chiang, Chieh-Feng, Hsu, Guo-Huang, Shih, Lun-Min, and Tan, Jimmy J.M.
- Subjects
- *
GRAPH theory , *SYSTEMS engineering , *COMBINATORICS , *COMPUTER science , *PATHS & cycles in graph theory , *MATHEMATICAL models , *COMPARATIVE studies - Abstract
Abstract: In this paper, we study the system diagnosis on an n-dimensional star under the comparison model. Following the concept of local diagnosability, the strong local diagnosability property is discussed; this property describes the equivalence of the local diagnosability of a node and its degree. We prove that an n-dimensional star has this property, and it keeps this strong property even if there exist n − 3 missing edges in it. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
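The result in the abstract above rests on the structure of the n-dimensional star graph S_n: vertices are the permutations of 1..n, and each vertex has exactly n − 1 neighbors (swap the first symbol with any other position), so local diagnosability equal to degree means n − 1 per node. A small sketch constructs S_n and confirms its (n − 1)-regularity; the diagnosis argument itself is not reproduced here.

```python
from itertools import permutations

def star_graph(n):
    """n-dimensional star graph S_n: vertices are permutations of 1..n;
    u ~ v iff v is u with its first symbol swapped into position i (2 <= i <= n)."""
    verts = list(permutations(range(1, n + 1)))
    adj = {v: set() for v in verts}
    for v in verts:
        for i in range(1, n):  # swap position 0 with positions 1..n-1
            u = list(v)
            u[0], u[i] = u[i], u[0]
            adj[v].add(tuple(u))
    return adj
```

For S_3 this yields a 6-cycle: 3! = 6 vertices, each of degree 2. Removing up to n − 3 edges can lower a node's degree by at most n − 3, which is the slack the paper's strong local diagnosability result exploits.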