6,630 results for "Tuple"
Search Results
2. Multiplicative functions on shifted primes.
- Author
-
Sachpazis, Stelios
- Subjects
- *INTEGERS, *HYPERCUBES, *REAL numbers
- Abstract
Let f be a positive multiplicative function and let k ≥ 2 be an integer. We prove that if the prime values f(p) converge to 1 sufficiently slowly as p → +∞, in the sense that ∑_p |f(p) − 1| = ∞, there exists a real number c > 0 such that the k-tuples (f(p+1), ..., f(p+k)) are dense in the hypercube [0, c]^k or in [c, +∞)^k. In particular, the values f(p+1), ..., f(p+k) can be put in any increasing order infinitely often. Our work generalises previous results of De Koninck and Luca. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Neighborhood-based bridge node centrality tuple for complex network analysis
- Author
-
Natarajan Meghanathan
- Subjects
Bridge node, Centrality, Tuple, Complex networks, Clusters, Neighborhood, Applied mathematics. Quantitative methods, T57-57.97
- Abstract
Abstract We define a bridge node to be a node whose neighbor nodes are sparsely connected to each other and are likely to belong to different components if the node is removed from the network. We propose a computationally light neighborhood-based bridge node centrality (NBNC) tuple that can be used to identify the bridge nodes of a network as well as rank the nodes in a network on the basis of their topological position to function as bridge nodes. The NBNC tuple for a node is asynchronously computed on the basis of the neighborhood graph of the node, which comprises the node's neighbors as vertices and the links connecting those neighbors as edges. The NBNC tuple for a node has three entries: the number of components in the neighborhood graph of the node, the algebraic connectivity ratio of the neighborhood graph of the node, and the number of neighbors of the node. We analyze a suite of 60 complex real-world networks and evaluate the computational lightness, effectiveness, efficiency/accuracy, and uniqueness of the NBNC tuple vis-à-vis the existing bridgeness-related centrality metrics and the Louvain community detection algorithm.
- Published
- 2021
- Full Text
- View/download PDF
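The first and third entries of the NBNC tuple above are cheap to compute directly. A minimal Python sketch (the graph and all names are invented for illustration; the middle entry, the algebraic connectivity ratio, needs a Laplacian eigen-solver and is omitted here):

```python
from collections import deque

def neighborhood_components(adj, v):
    """Number of connected components in the neighborhood graph of v.

    The neighborhood graph has v's neighbors as vertices and an edge
    between two neighbors iff they are adjacent in the original graph.
    """
    nbrs = set(adj[v])
    seen, comps = set(), 0
    for start in nbrs:
        if start in seen:
            continue
        comps += 1
        q = deque([start])
        seen.add(start)
        while q:
            u = q.popleft()
            for w in adj[u] & nbrs:   # stay inside the neighborhood graph
                if w not in seen:
                    seen.add(w)
                    q.append(w)
    return comps

def nbnc_partial(adj, v):
    # (components, #neighbors); the paper's middle entry, the algebraic
    # connectivity ratio, is omitted in this sketch.
    return (neighborhood_components(adj, v), len(adj[v]))

# A "barbell": node 2 bridges two triangles {0,1,2} and {2,3,4}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(nbnc_partial(adj, 2))  # bridge node: 2 components among its 4 neighbors
print(nbnc_partial(adj, 0))  # non-bridge: its neighbors {1, 2} are connected
```

A high component count relative to the neighbor count is exactly the bridge-node signature the abstract describes: removing the node would leave its neighborhood fragmented.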
4. Toolkit for data reduction to tuples for the ATLAS experiment
- Author
-
Krasznahorkay, A
- Published
- 2012
5. Possibilistic Data Cleaning
- Author
-
Henning Köhler and Sebastian Link
- Subjects
Degree (graph theory), Computer science, Relational database, Vertex cover, 02 engineering and technology, Computer Science Applications, Data modeling, Set (abstract data type), Computational Theory and Mathematics, 020204 information systems, Data integrity, 0202 electrical engineering, electronic engineering, information engineering, Tuple, Algorithm, Information Systems, Possibility theory
- Abstract
Classical data cleaning performs a minimal set of operations on the data to satisfy the given integrity constraints. Often, this minimization is equivalent to vertex cover, for example when tuples can be removed due to the violation of functional dependencies. Classically, the uncertainty of tuples and constraints is ignored. We propose to view not the data as dirty, but rather the uncertainty information about the data. Since probabilities are often unavailable and their treatment is limited due to correlations in the data, we investigate a qualitative approach to uncertainty. Tuples are assigned degrees of possibility with which they occur, and constraints are assigned degrees of certainty that say to which tuples they apply. Our approach is non-invasive to the data, as we lower the possibility degree of tuples as little as possible. The resulting qualitative version of vertex cover remains NP-hard. We establish an algorithm that is fixed-parameter tractable in the size of the qualitative vertex cover. Experiments with synthetic and real-world data show that our algorithm outperforms the classical algorithm proportionally to the available number of uncertainty degrees. Based on the novel mining of the certainty degrees with which constraints hold, our framework becomes applicable even when uncertainty information is unavailable.
- Published
- 2022
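The non-invasive idea above — lower possibility degrees instead of deleting tuples — can be sketched as a toy, assuming an illustrative possibility scale of 1..3 and a single functional dependency. The greedy pass below is a simplification for intuition, not the paper's fixed-parameter-tractable algorithm:

```python
# Toy possibilistic cleaning for one functional dependency X -> Y.
# Possibility degrees run from 3 (fully possible) down to 1; lowering a
# tuple to the floor excludes it from the "most possible" world.
# (The scale and the greedy rule are illustrative assumptions.)

def conflicts(rows, X, Y):
    """Pairs of rows agreeing on X but disagreeing on Y (FD violations)."""
    bad = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            a, b = rows[i], rows[j]
            if all(a[x] == b[x] for x in X) and any(a[y] != b[y] for y in Y):
                bad.append((i, j))
    return bad

def clean(rows, poss, X, Y, floor):
    """Greedy 'qualitative vertex cover': for each conflicting pair still
    above the floor, lower the lower-possibility tuple to the floor."""
    poss = list(poss)
    for i, j in conflicts(rows, X, Y):
        if poss[i] > floor and poss[j] > floor:
            k = i if poss[i] <= poss[j] else j
            poss[k] = floor
    return poss

rows = [{"emp": "ann", "dept": "A", "mgr": "bo"},
        {"emp": "cal", "dept": "A", "mgr": "di"},   # violates dept -> mgr
        {"emp": "eve", "dept": "B", "mgr": "fay"}]
poss = [3, 2, 3]
print(clean(rows, poss, ["dept"], ["mgr"], floor=1))  # → [3, 1, 3]
```

No tuple is deleted; the second tuple merely becomes less possible, which is the qualitative analogue of removing a vertex-cover node.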
6. On the ordinal sum of fuzzy implications: New results and the distributivity over a class of overlap and grouping functions
- Author
-
Bao Qing Hu and Meng Cao
- Subjects
Discrete mathematics, Class (set theory), Property (philosophy), Distributive property, Artificial Intelligence, Logic, Distributivity, Boundary value problem, Function (mathematics), Tuple, Fuzzy logic, Mathematics
- Abstract
Similar to the construction of ordinal sums of overlap functions, Baczynski et al. introduced two new kinds of ordinal sums of fuzzy implications, without additional restrictions on summands, in 2017. In this paper, based on the first method of forming ordinal sums of fuzzy implications, we discuss its basic properties, such as the iterative Boolean law, the right ordering property, and the strong boundary condition. Meanwhile, we characterize when such an ordinal sum of fuzzy implications is a QL-implication constructed from tuples (O, G, N⊤) or a D-implication derived from a grouping function G, and present some conclusions about its relations with (G, N)- and R_O-implications. Moreover, we study the distributivity of such ordinal sums of fuzzy implications over a class of overlap and grouping functions. More specifically, we give necessary and sufficient conditions under which this ordinal sum of fuzzy implications is distributive over overlap and grouping functions satisfying certain conditions.
- Published
- 2022
7. An Extended Nonstrict Partially Ordered Set-Based Configurable Linear Sorter on FPGAs.
- Author
-
Li, Dalin, Huang, Lan, Gao, Teng, Feng, Yang, Tavares, Adriano, and Wang, Kangping
- Subjects
- *GATE array circuits, *FIELD programmable gate arrays
- Abstract
Sorting is essential for many scientific and data processing problems, so improving its efficiency is significant. Taking advantage of specialized hardware, parallel sorting, e.g., sorting networks and linear sorters, implements sorting with lower time complexity. However, most such designs are based on the parallelization of algorithms, lacking consideration of specialized hardware structures. In this article, we propose an extended nonstrict partially ordered set-based configurable linear sorter on field-programmable gate arrays (FPGAs). First, we extend nonstrict partial order to the binary-tuple and n-tuple nonstrict partial orders. Then, the linear sorting algorithm is defined based on them, with consideration of hardware performance. It has 4N/n time complexity, varying from 4 to 2N as the tuple size varies. The number of comparisons reduces to N/2 in binary-tuple-based sorting, half that of state-of-the-art insertion linear sorting. Finally, we implement the linear sorter on FPGAs. It consists of multiple customizable micro-cores, named sorting units (SUs). The SU packages the storage and comparison of the tuple. All the SUs are connected into a chain with simple communication, which makes the sorter fully configurable in length, bandwidth, and throughput. The SUs also behave identically in each clock cycle, which improves the achievable frequency of the sorter. In our experiment, the sorter achieves at most 660-MHz frequency, 5.6-Gb/s throughput, and an 87-times speed-up compared with the quick sort algorithm on general processors. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. A relational database model and algebra integrating fuzzy attributes and probabilistic tuples
- Author
-
T.H. Cao
- Subjects
Theoretical computer science, Selection (relational algebra), Artificial Intelligence, Logic, Relational database, Algebraic operation, Fuzzy set, Relational model, Probabilistic logic, Tuple, Fuzzy logic, Mathematics
- Abstract
Although there have been many fuzzy or probabilistic relational database models proposed for representing and handling imprecise and uncertain information of objects in real-world applications, models combining the relevance and strength of both fuzzy set theory and probability theory appear sporadic. In this paper, we propose a new fuzzy and probabilistic relational database model where the imprecision of an attribute value is represented by a fuzzy set and the uncertainty of a relational tuple is represented by a probability interval. The mass assignment theory is employed to deal with the challenge of integration and computation of both fuzzy sets and probabilities in the same model. The conjunction and disjunction strategies to combine imprecise and uncertain information are introduced. Then the fundamental concepts of the classical relational database model are extended and generalized in this new model. The syntax and semantics of the selection operation are formally defined. Finally, the other important algebraic operations on imprecise attributes and uncertain tuples are developed.
- Published
- 2022
9. Finding out Noisy Patterns for Relation Extraction of Bangla Sentences
- Author
-
Rukaiya Habib and Md. Musfique Anwar
- Subjects
Relation (database), Computer science, Process (engineering), business.industry, Collaborative knowledge, computer.software_genre, Base (topology), Relationship extraction, language.human_language, Relation Extraction, Bengali, language, Bangla, Artificial intelligence, Tuple, business, Conflict Score, computer, Natural language processing, Noisy Pattern, Natural Language Processing
- Abstract
Relation extraction is one of the most important parts of natural language processing: the process of extracting relationships from a text. Extracted relationships occur between two or more entities of a certain type, and these relations may follow different patterns. The goal of this paper is to find the noisy patterns for relation extraction in Bangla sentences. The work requires seed tuples containing two entities and the relation between them, which are normally obtained from Freebase, a large collaborative knowledge base and database of general, structured information for public use. Since no Freebase is available for Bangla, we built a Bangla Freebase, which was the main challenge; it can be reused for other NLP-based work. We then identified the noisy patterns for relation extraction by measuring a conflict score.
- Published
- 2023
- Full Text
- View/download PDF
10. An Efficient Confidence Measure-Based Evaluation Metric for Breast Cancer Screening Using Bayesian Neural Networks
- Author
-
Naimul Mefraz Khan and Anika Tabassum
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Source code, Computer science, media_common.quotation_subject, 010501 environmental sciences, Machine learning, computer.software_genre, 01 natural sciences, 030218 nuclear medicine & medical imaging, Machine Learning (cs.LG), 03 medical and health sciences, 0302 clinical medicine, Feature (machine learning), FOS: Electrical engineering, electronic engineering, information engineering, 0105 earth and related environmental sciences, media_common, Hyperparameter, Network architecture, Artificial neural network, business.industry, Image and Video Processing (eess.IV), Electrical Engineering and Systems Science - Image and Video Processing, Metric (mathematics), Artificial intelligence, Tuple, Transfer of learning, business, computer
- Abstract
Screening mammography is the gold standard for detecting breast cancer early. While a good amount of work has been performed on mammography image classification, especially with deep neural networks, there has not been much exploration into the confidence or uncertainty measurement of the classification. In this paper, we propose a confidence measure-based evaluation metric for breast cancer screening. We propose a modular network architecture, where a traditional neural network is used as a feature extractor with transfer learning, followed by a simple Bayesian neural network. Utilizing a two-stage approach helps reduce the computational complexity, making the proposed framework attractive for wider deployment. We show that by providing medical practitioners with a tool to tune two hyperparameters of the Bayesian neural network, namely the fraction of sampled networks and the minimum probability, the framework can be adapted as needed by the domain expert. Finally, we argue that instead of just a single number such as accuracy, a tuple (accuracy, coverage, sampled number of networks, and minimum probability) can be utilized as an evaluation metric of our framework. We provide experimental results on the CBIS-DDSM dataset, where we show the trends in the accuracy-coverage tradeoff while tuning the two hyperparameters. We also show that our confidence tuning results in increased accuracy on a reduced set of high-confidence images when compared to the baseline transfer learning. To make the proposed framework readily deployable, we provide (anonymized) source code with reproducible results at https://git.io/JvRqE. (Comment: to be presented at the IEEE ICHI 2020.)
- Published
- 2023
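The accuracy and coverage entries of the evaluation tuple above are straightforward to illustrate: keep only predictions whose confidence reaches a minimum probability, then score what remains. The predictions and confidences below are invented:

```python
# Toy accuracy/coverage tradeoff under a minimum-probability threshold,
# as in the (accuracy, coverage, ...) evaluation tuple. Raising the
# threshold typically raises accuracy and lowers coverage.

def evaluate(preds, labels, confs, min_prob):
    kept = [(p, y) for p, y, c in zip(preds, labels, confs) if c >= min_prob]
    coverage = len(kept) / len(preds)
    accuracy = sum(p == y for p, y in kept) / len(kept) if kept else 0.0
    return accuracy, coverage

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 0]
confs  = [0.95, 0.90, 0.55, 0.80, 0.99, 0.50]

print(evaluate(preds, labels, confs, min_prob=0.75))  # high accuracy, partial coverage
print(evaluate(preds, labels, confs, min_prob=0.0))   # full coverage, lower accuracy
```

At threshold 0.75 the two low-confidence mistakes are filtered out, so accuracy rises to 1.0 while coverage drops to 4/6, mirroring the tradeoff curves the paper reports.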
11. Problem Solving Using Branches and Loops
- Author
-
Zhang, Yue
- Published
- 2015
- Full Text
- View/download PDF
12. Topic-Based Data Merging and Routing Scheme in Many-to-Many Communication for WSNs
- Author
-
Hosen, A. S. M. Sanwar, Cho, Gi Hwan, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Kobsa, Alfred, editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Weikum, Gerhard, editor, Murgante, Beniamino, editor, Misra, Sanjay, editor, Rocha, Ana Maria A. C., editor, Torre, Carmelo, editor, Rocha, Jorge Gustavo, editor, Falcão, Maria Irene, editor, Taniar, David, editor, Apduhan, Bernady O., editor, and Gervasi, Osvaldo, editor
- Published
- 2014
- Full Text
- View/download PDF
13. Intersection joins under updates
- Author
-
Yufei Tao and Ke Yi
- Subjects
Amortized analysis, General Computer Science, Computer Networks and Communications, Applied Mathematics, Structure (category theory), Joins, Join (topology), Theoretical Computer Science, Combinatorics, Computational Theory and Mathematics, Intersection, Tuple, Undirected graph, Mathematics
- Abstract
In an intersection join, we are given t sets R_1, ..., R_t of axis-parallel rectangles in ℝ^d, where d ≥ 1 and t ≥ 2 are constants, and a join topology, which is a connected undirected graph G on vertices 1, ..., t. The result consists of tuples (r_1, ..., r_t) ∈ R_1 × ... × R_t where r_i ∩ r_j ≠ ∅ for all i, j connected in G. A structure is feasible if it stores Õ(n) words, supports an update in Õ(1) amortized time, and can enumerate the join result with an Õ(1) delay, where n = ∑_i |R_i| and Õ(·) hides a polylog n factor. We provide a dichotomy as to when feasible structures exist: they do when t = 2 or d = 1; subject to the OMv conjecture, they do not exist when t ≥ 3 and d ≥ 2, regardless of the join topology.
- Published
- 2022
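For intuition about what the join above computes, here is a brute-force (static) enumeration for t = 2, d = 2, the regime where feasible dynamic structures exist. The rectangles are invented; the paper's contribution is the Õ(1)-update structure, not this enumeration:

```python
from itertools import product

# A d-dimensional axis-parallel rectangle is a tuple of per-axis
# intervals, e.g. ((x1, x2), (y1, y2)). Two rectangles intersect iff
# their intervals overlap on every axis.

def intersects(r, s):
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(r, s))

def join(R1, R2):
    """All result tuples (r, s) in R1 x R2 with r ∩ s ≠ ∅ (t = 2 case)."""
    return [(r, s) for r, s in product(R1, R2) if intersects(r, s)]

R1 = [((0, 2), (0, 2)), ((5, 6), (5, 6))]
R2 = [((1, 3), (1, 3)), ((7, 8), (0, 1))]
print(len(join(R1, R2)))  # → 1  (only the two unit-origin boxes overlap)
```

This takes Θ(|R1|·|R2|) time per query; the feasibility question in the abstract is whether the same result can be maintained and enumerated with Õ(1) updates and delay.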
14. Synchronization of Finite-Field Networks With Time Delays
- Author
-
Leszek Rutkowski, Xinli Shi, Jinde Cao, and Wanjie Zhu
- Subjects
Matrix (mathematics), Polynomial, Finite field, Consensus, Computer Networks and Communications, Control and Systems Engineering, Computer science, Synchronization (computer science), Topology (electrical circuits), Tuple, Topology, Tree (graph theory), Computer Science Applications
- Abstract
In this paper, finite-field networks (FFNs) with time delays are investigated. We introduce the characteristic matrix polynomial for delayed FFNs and utilize it to analyze the dynamics in view of linear recursion theory. It is shown that delayed FFNs behave in a pattern similar to non-delayed FFNs, in the sense that any state sequence reaches a periodic behavior within finitely many steps. In addition, a sufficient condition assuring purely periodic behaviors is given. For delayed FFNs, an algebra-theoretic criterion for synchronization is proposed for the first time, which imposes constraints on the network matrix tuple and the interaction weights. As an application, we study the synchronization problem of delayed FFNs with a tree-structured interaction topology and accordingly provide a criterion of a simple form. Inspired by the existing results, for the tree case, we prove that a given delayed FFN solves a consensus problem if and only if the corresponding non-delayed FFN does. As a comparison, examples show that a similar conclusion is invalid for the general synchronization problems, since consensus is a special case of synchronization. In the end, we identify the specific periodic behavior of synchronized FFNs satisfying the proposed sufficient condition.
- Published
- 2022
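The eventual-periodicity claim above has a simple pigeonhole explanation: a delayed recursion over a finite field has only finitely many (delayed, current) state pairs, so every trajectory must cycle. A scalar sketch with one delay (the coefficients and field are illustrative choices, not from the paper):

```python
# x(t+1) = a*x(t) + b*x(t-1)  over GF(p): detect the eventual period by
# walking states until one repeats. With a = b = 1 this is the Fibonacci
# recursion, whose period mod p is the Pisano period.

def eventual_period(a, b, x0, x1, p):
    seen = {}
    state, t = (x0, x1), 0          # state = (x(t-1), x(t))
    while state not in seen:
        seen[state] = t
        state = (state[1], (b * state[0] + a * state[1]) % p)
        t += 1
    return t - seen[state]          # length of the periodic part

print(eventual_period(a=1, b=1, x0=0, x1=1, p=5))  # → 20 (Pisano period of 5)
```

When b is invertible mod p the map on states is a bijection, so the orbit is purely periodic from the start, matching the abstract's "purely periodic" sufficient-condition discussion in spirit.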
15. An interval 2-Tuple linguistic Fine-Kinney model for risk analysis based on extended ORESTE method with cumulative prospect theory
- Author
-
Xinwang Liu, Weizhong Wang, Shuli Liu, and Ling Ding
- Subjects
Risk analysis, Cumulative prospect theory, Computer science, Rationality, Interval (mathematics), Linguistics, Operator (computer programming), Risk analysis (engineering), Hardware and Architecture, Signal Processing, Sensitivity (control systems), Tuple, Risk assessment, Software, Information Systems
- Abstract
Risk assessment is one of the most significant procedures for identifying, preventing, and controlling Occupational Health and Safety (OHS) risks. One of the many techniques for OHS risk assessment is based on the Fine-Kinney model. Most Fine-Kinney-based risk assessment approaches can consider the relative importance of risk parameters. Nevertheless, the current Fine-Kinney-based approaches cannot capture the reference dependence effects and detailed relationships among hazards. In addition, these approaches overlook the influence of the deviation of risk evaluation information. To overcome these limitations, in this paper, an improved Fine-Kinney model is proposed for OHS risk assessment by integrating the weighted power average (WPA) operator, the ORESTE (Organisation, rangement et synthèse de données relationnelles) method, and cumulative prospect theory. First, interval 2-tuple linguistic variables are adopted to transform linguistic risk information into quantitative risk rating information. Then, an extended WPA operator is proposed to fuse the risk evaluation information from decision-makers, in which an optimization model is constructed to determine the weights of the decision-makers. Next, an extended ORESTE method based on cumulative prospect theory and interval 2-tuple linguistic variables is incorporated into the Fine-Kinney model to prioritize OHS risks. After that, the OHS risk assessment of an automobile components manufacturing process is presented to test the applicability and rationality of the improved Fine-Kinney model, and a sensitivity analysis is conducted to further illustrate the proposed model. Finally, comparative analyses between the proposed risk assessment approach and other Fine-Kinney models are conducted to illustrate its effectiveness and advantages.
- Published
- 2022
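The interval 2-tuple machinery above builds on the standard 2-tuple linguistic representation: a numeric score on a scale of g+1 labels becomes a pair (label, symbolic translation), so aggregated ratings keep their fractional part instead of being rounded away. A minimal sketch with an invented label set (this is the basic 2-tuple model, not the paper's interval extension):

```python
# Herrera-Martinez style 2-tuple representation: beta in [0, g] maps to
# (s_i, alpha) with i = round(beta) and alpha = beta - i in [-0.5, 0.5).

def to_two_tuple(beta, labels):
    i = min(round(beta), len(labels) - 1)
    return labels[i], round(beta - i, 6)

def from_two_tuple(label, alpha, labels):
    return labels.index(label) + alpha

labels = ["none", "low", "medium", "high", "extreme"]   # s_0 .. s_4

aggregated = (3 + 2 + 3 + 4) / 4          # mean of four expert ratings
print(to_two_tuple(aggregated, labels))   # exact label, no information loss
print(to_two_tuple(2.4, labels))          # 'medium' plus translation 0.4
```

The symbolic translation is what lets the extended ORESTE/WPA steps rank alternatives whose aggregated scores fall between labels.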
16. Obscure: Information-Theoretically Secure, Oblivious, and Verifiable Aggregation Queries on Secret-Shared Outsourced Data
- Author
-
Peeyush Gupta, Shantanu Sharma, Yin Li, Sharad Mehrotra, Nisha Panwar, and Sumaya Almanee
- Subjects
Theoretical computer science, Computer science, business.industry, Computation, Cryptography, Adversary, Computer Science Applications, Adversarial system, Computational Theory and Mathematics, Server, Identity (object-oriented programming), Verifiable secret sharing, Tuple, business, Information Systems
- Abstract
Despite exciting progress on cryptography, secure and efficient query processing over outsourced data remains an open challenge. We develop a communication-efficient and information-theoretically secure system, entitled Obscure, for aggregation queries with conjunctive or disjunctive predicates, using secret-sharing. Obscure is strongly secure (i.e., secure regardless of the computational capabilities of an adversary) and prevents the network, as well as the (adversarial) servers, from learning the user's queries, results, or the database. In addition, Obscure provides additional security features, such as hiding access patterns (i.e., hiding the identity of the tuple satisfying a query) and hiding query patterns (i.e., hiding which two queries are identical). Also, Obscure does not require any communication between any two servers that store the secret-shared data before/during/after the query execution. Moreover, our techniques deal with secret-shared data outsourced by a single or multiple database owners, and allow a user, who may not be the database owner, to execute queries over the secret-shared data. We further develop (non-mandatory) privacy-preserving result verification algorithms that detect malicious behaviors, and experimentally validate the efficiency of Obscure on large datasets, of sizes that prior secret-sharing or multi-party computation systems have not scaled to.
- Published
- 2022
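Obscure itself relies on verifiable secret-sharing; the simpler additive-sharing sketch below only illustrates the underlying information-theoretic idea of why a SUM aggregate can be computed server-side without any single server seeing a tuple. The modulus and data are invented:

```python
import random

# Additive secret sharing over Z_p: each value is split into n random
# shares that sum to it mod p. Each server sums its own shares locally;
# reconstructing the per-server sums yields SUM(values), while any n-1
# share columns are jointly uniform (reveal nothing).

P = 2**31 - 1  # a public prime modulus

def share(value, n_servers):
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate_sum(rows, n_servers=3):
    per_server = [[] for _ in range(n_servers)]
    for v in rows:
        for col, sh in zip(per_server, share(v, n_servers)):
            col.append(sh)
    local = [sum(col) % P for col in per_server]   # each server, independently
    return sum(local) % P                          # reconstruction step

print(aggregate_sum([10, 20, 12]))  # → 42, with no server-to-server traffic
```

Note the servers never exchange messages, matching the no-inter-server-communication property the abstract emphasizes; the verification and pattern-hiding layers of Obscure are beyond this sketch.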
17. Beyond Triplet Loss: Meta Prototypical N-Tuple Loss for Person Re-identification
- Author
-
Zhibo Chen, Zhizheng Zhang, Cuiling Lan, Shih-Fu Chang, and Wenjun Zeng
- Subjects
FOS: Computer and information sciences, Matching (statistics), business.industry, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Process (computing), Inference, Machine learning, computer.software_genre, Convolutional neural network, Computer Science Applications, Ranking (information retrieval), Signal Processing, Media Technology, Benchmark (computing), Identity (object-oriented programming), Artificial intelligence, Electrical and Electronic Engineering, Tuple, business, computer
- Abstract
Person Re-identification (ReID) aims at matching a person of interest across images. In convolutional neural network (CNN)-based approaches, loss design plays a vital role in pulling closer features of the same identity and pushing far apart features of different identities. In recent years, triplet loss has achieved superior performance and is predominant in ReID. However, triplet loss considers only three instances of two classes in per-query optimization (with an anchor sample as query), and it is actually equivalent to a two-class classification. There is a lack of loss designs that enable the joint optimization of multiple instances (of multiple classes) within per-query optimization for person ReID. In this paper, we introduce a multi-class classification loss, i.e., N-tuple loss, to jointly consider multiple (N) instances for per-query optimization. This in fact aligns better with the ReID test/inference process, which conducts ranking/comparisons among multiple instances. Furthermore, for more efficient multi-class classification, we propose a new meta prototypical N-tuple loss. With the multi-class classification incorporated, our model achieves state-of-the-art performance on the benchmark person ReID datasets. (Comment: accepted by IEEE Transactions on Multimedia.)
- Published
- 2022
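The core N-tuple idea above — treating per-query optimization as an N-way softmax classification instead of a triplet comparison — can be sketched directly. The similarity scores are invented, and this is the plain N-tuple loss, not the meta prototypical variant:

```python
import math

# N-tuple loss for one query (anchor): cross-entropy of a softmax over
# the anchor's similarities to N candidates, where exactly one candidate
# shares the anchor's identity. The triplet loss is the N = 2 special
# case with a margin instead of a softmax.

def n_tuple_loss(sims, positive_idx):
    m = max(sims)                                  # stabilized softmax
    exps = [math.exp(s - m) for s in sims]
    return -math.log(exps[positive_idx] / sum(exps))

# One positive and three negatives; cosine-like similarity scores.
loss_good = n_tuple_loss([0.9, 0.1, -0.2, 0.0], positive_idx=0)  # positive ranked 1st
loss_bad  = n_tuple_loss([0.1, 0.9, -0.2, 0.0], positive_idx=0)  # a negative ranked 1st
print(loss_good < loss_bad)  # → True
```

Because every candidate contributes to the normalizer, one query simultaneously pushes down all N-1 negatives, which is the multi-instance joint optimization the abstract contrasts with triplet loss.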
18. Multi-Access Filtering for Privacy-Preserving Fog Computing
- Author
-
Meikang Qiu, Liehuang Zhu, Kim-Kwang Raymond Choo, Keke Gai, and Kai Xu
- Subjects
Computer Networks and Communications, business.industry, Computer science, Distributed computing, Initialization, Cloud computing, Filter (signal processing), Interconnectivity, Computer Science Applications, Reduction (complexity), Hardware and Architecture, Software deployment, Key (cryptography), Tuple, business, Software, Information Systems
- Abstract
The interest in fog computing is growing, including in traditionally conservative and sensitive areas such as military and government. This is partly driven by the interconnectivity of our society and advances in technologies such as the Internet-of-Things (IoT). However, protecting against privacy leakage is one of several key considerations in fog computing deployment. Therefore, in this paper, we present a privacy-preserving multi-layer access filtering model designed for a fog computing environment, hence coined fog-based access filter (FAF). FAF comprises three key algorithms, namely: an access filter initialization algorithm, an optimal privacy-energy-time algorithm, and a tuple reduction algorithm. Also, a hierarchical classification is used to distinguish the protection objectives. Findings from our experimental evaluation demonstrate that FAF achieves an optimal balance between privacy protection and computational costs.
- Published
- 2022
19. A graph-theoretical approach to DNA similarity analysis
- Author
-
Lizhen Lin, Dong Quan Ngoc Nguyen, Phuong Dong Tan Le, and Lin Xing
- Subjects
Structure (mathematical logic), Set (abstract data type), symbols.namesake, Metric space, Theoretical computer science, Computer science, symbols, Cartesian product, Tuple, Representation (mathematics), Time complexity, DNA sequencing
- Abstract
One of the very active research areas in bioinformatics is DNA similarity analysis. There are several approaches using alignment-based or alignment-free methods to analyze similarities/dissimilarities between DNA sequences. In this work, we introduce a novel representation of DNA sequences, using n-ary Cartesian products of graphs for arbitrary positive integers n. Each of the component graphs in the representing Cartesian product of a DNA sequence contains combinatorial information about certain tuples of nucleotides appearing in the sequence. We further introduce a metric space structure on the set of all Cartesian products of graphs that represent a given collection of DNA sequences, in order to compare different Cartesian products of graphs, which in turn signifies similarities/dissimilarities between DNA sequences. We test our proposed method on several datasets, including Human Papillomavirus, Human rhinovirus, Influenza A virus, and Mammals. Comparison with other methods in the literature indicates that our results are comparable in terms of time complexity while achieving high accuracy, and on one dataset our method performs best.
- Published
- 2022
20. Network Coding With Myopic Adversaries
- Author
-
Sidharth Jaggi, Rawad Bitar, Yihan Zhang, and Sijie Li
- Subjects
FOS: Computer and information sciences, Reliability theory, Theoretical computer science, Computer science, Information Theory (cs.IT), Computer Science - Information Theory, Singleton bound, 020206 networking & telecommunications, Eavesdropping, 02 engineering and technology, Characterization (mathematics), Topology, Corollary, Encoding (memory), Linear network coding, Secrecy, 0202 electrical engineering, electronic engineering, information engineering, Tuple, Computer Science::Cryptography and Security, Coding (social sciences)
- Abstract
We consider the problem of reliable communication over a network containing a hidden myopic adversary who can eavesdrop on some z_ro links, jam some z_wo links, and do both on some z_rw links. We provide the first information-theoretically tight characterization of the optimal rate of communication possible under all possible settings of the tuple (z_ro, z_wo, z_rw) by providing a novel coding scheme/analysis for a subset of parameter regimes. In particular, our vanishing-error schemes bypass the Network Singleton Bound (which requires a zero-error recovery criterion) in a certain parameter regime where the capacity had been heretofore open. As a direct corollary, we also obtain the capacity of the corresponding problem where information-theoretic secrecy against eavesdropping is required in addition to reliable communication. (Extended version of an IEEE ISIT submission. Short video explaining the result: https://cuhk.zoom.us/rec/play/ZL93f6ool7K0T48aoRy5FS_sjme9DjLRDvaQeDLsR7IfV2aQGWrJVbqTTJ12Fg9qpXXvWVM4twAmIH-W.I88Ef9sVrWNS4Eog?startTime=1612281070000&_x_zm_rtaid=qXNfoabtTQK94kBK0YE-6A.1613735660905.fd77db9ef1a673afd9cd4ac45303144d&_x_zm_rhtaid=338)
- Published
- 2021
21. A two-universe model of three-way decision with ranking and reference tuple
- Author
-
Xiaonan Li, Bing Jia, and Wenyan Xu
- Subjects
Matching (statistics), Information Systems and Management, Theoretical computer science, Computer science, Computer Science Applications, Theoretical Computer Science, Set (abstract data type), Perspective (geometry), Ranking, Artificial Intelligence, Control and Systems Engineering, Point (geometry), Rough set, Tuple, Construct (philosophy), Software
- Abstract
The theory of three-way decision, introduced for the needs of explaining the three regions of rough sets, has developed into a more general theory of three regions in recent years. For different types of problems, we should have different types of intentions of trisecting, and only by considering the specific intentions of trisecting can we get the most accurate three regions. This is the starting point of this article. From a new perspective of trisecting, we propose two concepts on two universes, namely rankings of a set of attributes and reference tuples. These two concepts are combined together to express the original intention of trisecting in a new general meaning. At the same time, an evaluation of matching degree is proposed to formulate the trisecting. Based on the above two concepts and one evaluation method, we construct a two-universe model of three-way decision with concrete formulations, and show that the rough-set-based model proposed by Yan et al. is only equivalent to one of the eight cases of our model, with the eight cases corresponding to eight different types of intentions and hence to eight different types of problems. Therefore, the present paper extends classical rough-set-based models to a more general level on two universes. Two algorithms are provided to compute the three regions of our model, with the second one also computing the ordering of objects and hence the optimal ones.
- Published
- 2021
22. Fast Online Packet Classification With Convolutional Neural Network
- Author
-
Yanbiao Li, Penghao Zhang, Xinyi Zhang, Gaogang Xie, Xin Wang, and Kave Salamatian
- Subjects
Computer Networks and Communications, Computer science, Network packet, Hash function, Throughput, computer.software_genre, Convolutional neural network, Hash table, Computer Science Applications, Tuple space, Data mining, Electrical and Electronic Engineering, Tuple, Software-defined networking, computer, Software
- Abstract
Packet classification is a critical component in network appliances. Software-Defined Networking and cloud computing update rulesets frequently for flexible policy configuration. Tuple Space Search (TSS), implemented in Open vSwitch (OVS), achieves fast rule updating at the cost of the classification rate. In TSS, each tuple is managed by a hash table, and classifying a packet requires probing all hash tables. Merging tuples can reduce the number of hash tables, but inevitably increases hash conflicts, which may even worsen classification performance in some cases. No existing algorithm meets the need for both fast packet classification and online rule updating. In this paper, we propose Convolutional Neural Network (CNN)-based Range Partition (CRP) to achieve fast packet classification and online update simultaneously. CRP exploits CNN-based image recognition to quickly partition tuples into range spaces upon a change of ruleset distribution, which reduces hash operations while avoiding the rule overlapping caused by hashing many rules to the same location of a hash table. Experimental results demonstrate that CRP achieves 3.2x classification speed and 4.2x update speed on average compared with state-of-the-art algorithms. We also implement CRP in OVS; the throughput of CRP-OVS is 10x that of native OVS.
- Published
- 2021
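The Tuple Space Search lookup described in the abstract of entry 22 can be sketched as follows. This is a minimal illustration of the general TSS idea (one hash table per wildcard mask, every table probed per packet), not OVS's actual implementation; the two-field packet layout, masks, and function names are hypothetical:

```python
# Minimal Tuple Space Search (TSS) sketch: rules with the same wildcard
# mask share one hash table ("tuple"); classification probes every table
# and keeps the highest-priority hit. Illustrative only, not the OVS code.

def add_rule(tuple_space, mask, value, priority, action):
    table = tuple_space.setdefault(mask, {})          # one hash table per tuple
    key = tuple(v & m for v, m in zip(value, mask))   # store the masked key
    table[key] = (priority, action)

def classify(tuple_space, packet):
    best = None
    for mask, table in tuple_space.items():           # must visit every table
        key = tuple(f & m for f, m in zip(packet, mask))
        hit = table.get(key)
        if hit is not None and (best is None or hit[0] > best[0]):
            best = hit
    return best[1] if best else None

space = {}
# Hypothetical 2-field rules (src, dst); mask 0xFF = exact, 0x00 = wildcard.
add_rule(space, (0xFF, 0x00), (10, 0), priority=2, action="drop")
add_rule(space, (0xFF, 0xFF), (10, 20), priority=5, action="allow")
```

Merging tuples, as the abstract notes, shrinks the loop over tables but crowds more rules into each hash table, which is exactly the trade-off CRP's range partitioning targets.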
23. DG‐based SPO tuple recognition using self‐attention M‐Bi‐LSTM
- Author
-
Joon-Young Jung
- Subjects
General Computer Science ,Computer science ,Speech recognition ,Self attention ,Electrical and Electronic Engineering ,Tuple ,Electronic, Optical and Magnetic Materials - Published
- 2021
24. Deep truth discovery for pattern-based fact extraction
- Author
-
Jing Gao, Guojun Dai, Wenbo Lu, Chen Ye, and Hongzhi Wang
- Subjects
Text corpus ,Information Systems and Management ,Dependency (UML) ,Computer science ,business.industry ,Deep learning ,computer.software_genre ,Computer Science Applications ,Theoretical Computer Science ,Information extraction ,Artificial Intelligence ,Control and Systems Engineering ,Embedding ,Artificial intelligence ,Dimension (data warehouse) ,Tuple ,Representation (mathematics) ,business ,computer ,Software ,Natural language processing - Abstract
Fact extraction, which aims to extract (entity, attribute, value)-tuples from massive text corpora, is crucial in the area of text data mining. Recent approaches have focused on extracting facts by mining textual patterns with semantic types, where the quality of a pattern is evaluated based on content-based criteria such as frequency. However, these approaches overlook the dimension of pattern reliability, which reflects how likely the extracted facts are to be correct. As a result, a pattern of good content quality (e.g., high frequency) may still extract incorrect facts. In this study, we consider both pattern reliability and fact trustworthiness in addressing the pattern-based fact extraction problem. To learn the complex relationship between pattern reliability and fact trustworthiness, we propose a novel deep learning model using a hybrid of the CNN and LSTM architectures. For fact embedding, we adopt a CNN to extract a fixed-size representation of each component of the fact, i.e., entity, attribute, and value. For pattern embedding, we represent the pattern as a semantic composition of its extracted fact representations. To de-emphasize noisy facts, we consider fact trustworthiness and frequency during the process of pattern embedding, where the features of the tuple trustworthiness information are extracted by a long short-term memory (LSTM) model. To learn the pattern-fact relational dependency, we train the model with both pattern and tuple labels. Extensive experiments involving three real-world datasets demonstrate that the proposed model significantly improves the quality of the patterns and the extracted facts in pattern-based information extraction.
- Published
- 2021
25. Comparative Analysis of Constraint Handling Techniques for Constrained Combinatorial Testing
- Author
-
Mark Harman, Huayao Wu, Changhai Nie, Yue Jia, and Justyna Petke
- Subjects
Mathematical optimization ,Dependency (UML) ,Computer science ,020207 software engineering ,02 engineering and technology ,Construct (python library) ,Constraint (information theory) ,Test case ,Software deployment ,0202 electrical engineering, electronic engineering, information engineering ,Test suite ,Software system ,Tuple ,Software - Abstract
Constraints depict the dependency relationships between parameters in a software system under test. Because almost all systems are constrained in some way, techniques that adequately cater for constraints have become a crucial factor for the adoption, deployment and exploitation of Combinatorial Testing (CT). Currently, despite a variety of available constraint handling techniques, the relationship between these techniques and the generation algorithms that use them remains unknown, an important gap and pressing concern in the literature on constrained combinatorial testing. In this article, we present a comparative empirical study investigating the impact of four common constraint handling techniques on the efficiency of six representative (greedy and search-based) test suite generation algorithms. The results reveal that the Verify technique implemented with the Minimal Forbidden Tuple (MFT) approach is the fastest, while the Replace technique is promising for producing the smallest constrained covering arrays, especially for algorithms that construct test cases one at a time. The results also show that there is an interplay between the efficiency of the constraint handler and the test suite generation algorithm into which it is integrated.
- Published
- 2021
26. A method for enumerating pairwise compatibility graphs with a given number of vertices
- Author
-
Hiroshi Nagamochi, Naveed Ahmed Azam, and Aleksandar Shurbevski
- Subjects
Applied Mathematics ,0211 other engineering and technologies ,021107 urban & regional planning ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Graph ,Vertex (geometry) ,Combinatorics ,010201 computation theory & mathematics ,Bijection ,Discrete Mathematics and Combinatorics ,Pairwise comparison ,Tuple ,Mathematics - Abstract
Azam et al. (2018) proposed a method to enumerate all pairwise compatibility graphs (PCGs) with a given number n of vertices. For a tuple (G, T, σ, λ) of a graph G with n vertices, a tree T with n leaves, a bijection σ between the vertices of G and the leaves of T, and a bi-partition λ of the set of non-adjacent vertex pairs in G, they formulated two linear programs, LP(G, T, σ, λ) and DLP(G, T, σ, λ), such that exactly one of them is feasible, and G is a PCG if and only if LP(G, T, σ, λ) is feasible for some tuple (G, T, σ, λ). To reduce the number of graphs G with n vertices (resp., tuples) for which the LPs are solved, they excluded PCGs by heuristically generating PCGs (resp., some tuples that contain a sub-tuple (G′, T′, σ′, λ′) for n = 4 whose LP(G′, T′, σ′, λ′) is infeasible). This paper proposes two improvements to the method: we derive a sufficient condition for a graph to be a PCG for a given tree in order to exclude more PCGs; and we characterize all sub-tuples (G′, T′, σ′, λ′) for n = 4 for which LP(G′, T′, σ′, λ′) is infeasible, and enumerate tuples that contain no such sub-tuples by a branch-and-bound algorithm. Experimental results show that our method enumerates all PCGs for n = 8 more efficiently.
- Published
- 2021
27. The profinite topology of free groups and weakly generic tuples of automorphisms
- Author
-
Gábor Sági
- Subjects
Combinatorics ,Logic ,Tuple ,Automorphism ,Topology (chemistry) ,Mathematics - Published
- 2021
28. Achieving Safe Deep Reinforcement Learning via Environment Comprehension Mechanism
- Author
-
Liu Quan, Wu Wen, Zhao Peiyao, Zhu Fei, and Peng Pai
- Subjects
Markov chain ,Process (engineering) ,business.industry ,Computer science ,Applied Mathematics ,Deep learning ,Stability (learning theory) ,Task (project management) ,SAFER ,Reinforcement learning ,Artificial intelligence ,Electrical and Electronic Engineering ,Tuple ,business - Abstract
Deep reinforcement learning (DRL), which combines deep learning with reinforcement learning, has achieved great success recently. In some cases, however, during the learning process agents may reach states that are worthless or dangerous, where the task fails. To address this problem, we propose an algorithm, referred to as the Environment Comprehension Mechanism (ECM), for deep reinforcement learning to attain safer decisions. ECM perceives hidden dangerous situations by analyzing objects and comprehending the environment, so that the agent systematically bypasses inappropriate actions by setting up constraints dynamically according to states. ECM, which calculates the gradient of the states in a Markov tuple, sets up boundary conditions and generates a rule to steer the agent away from unsafe states. ECM can be applied to basic deep reinforcement learning algorithms to guide the selection of actions. Experimental results show that the algorithm improves the safety and stability of the control tasks.
- Published
- 2021
29. On Read-Once Functions over $\mathbb{Z}_3$
- Author
-
A. D. Yashunsky
- Subjects
Combinatorics ,Multiplicative group of integers modulo n ,General Mathematics ,Sigma ,Function (mathematics) ,Algebra over a field ,Tuple ,Mathematics - Abstract
We consider read-once functions over $\mathbb{Z}_3$, i.e., functions expressed by a formula with addition and multiplication modulo $3$ and constants 0, 1, 2 that contains every variable at most once. For a function $F(x_1,\dots,x_n)\colon\{0,1,2\}^n\to\{0,1,2\}$, let $w_i$, $i=0,1,2$, be the number of tuples $(\sigma_1,\dots,\sigma_n)\in\{0,1,2\}^n$ such that $F(\sigma_1,\dots,\sigma_n)=i$, and let $p_i=w_i/3^n$. We prove that for every read-once function the values $p_0$, $p_1$, $p_2$ satisfy the inequality $\max p_i-\min p_i\leq(\max p_i+\min p_i)^3$, and that this bound is in a certain sense sharp for read-once functions over $\mathbb{Z}_3$.
- Published
- 2021
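The bound stated in entry 29 is easy to sanity-check by brute force: enumerate a small read-once function over $\mathbb{Z}_3$, tally the value distribution $p_0, p_1, p_2$, and test the inequality. The formula below is an arbitrary read-once example chosen for illustration, not one from the paper:

```python
from itertools import product
from fractions import Fraction

def distribution(F, n):
    # p_i = fraction of tuples in {0,1,2}^n mapped by F to value i
    counts = [0, 0, 0]
    for xs in product(range(3), repeat=n):
        counts[F(*xs) % 3] += 1
    return [Fraction(c, 3**n) for c in counts]

# A read-once formula over Z_3: every variable appears exactly once.
F = lambda x1, x2, x3: (x1 * (x2 + x3)) % 3

p = distribution(F, 3)                 # here: [5/9, 2/9, 2/9]
gap, s = max(p) - min(p), max(p) + min(p)
assert gap <= s**3                     # the bound from the abstract above
```

For this formula the gap is $1/3$ while $(\max p_i + \min p_i)^3 = (7/9)^3 \approx 0.47$, so the inequality holds with room to spare.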
30. The Shortcomings of the Existing Economic Theory and Their Elimination
- Subjects
Computer science ,Probabilistic logic ,Complex system ,Logical data model ,Function (mathematics) ,Tuple ,Orthogonalization ,Measure (mathematics) ,Mathematical economics ,Associative property - Abstract
The paper analyzes the state and management of a country's economy. A tuple of event-driven optimal management is developed as a method of artificial intelligence. The characteristics of event-driven quality management of associative and structurally complex systems and processes are given. The events and probabilities in the management of the economy and the state are considered. A measure of invalidation has been introduced for parameters. The method of synthesizing the probability of an event based on expert information is presented. The necessity of orthogonalization of the logical function and the transition to the probabilistic function is substantiated. The effect of repeated initiating events is considered. One-dimensional optimization of the system on a logical model, instead of arithmetic multiparameter optimization, is presented. Schemes for managing the development of the economy and its exit from stagnation are given. The tools for event-driven quality management of systems and processes are described. The shortcomings of the existing economic theory and the possibility of their elimination are analyzed.
- Published
- 2021
31. Proportional hesitant 2‐tuple linguistic distance measurements and extended VIKOR method: Case study of evaluation and selection of green airport plans
- Author
-
Francisco Chiclana, Kwai-Sang Chin, Zhen-Song Chen, Sheng-Hua Xiong, and Miroslaw J. Skibniewski
- Subjects
VIKOR method ,Computer science ,Linguistic distance ,computer.software_genre ,Proportional hesitant 2-tuple linguistic term set ,Theoretical Computer Science ,Human-Computer Interaction ,Artificial Intelligence ,Green airport plan selection ,MAGDM ,Data mining ,Tuple ,Distance measurement ,computer ,VIKOR ,Software ,Selection (genetic algorithm) - Abstract
Building green airports can be regarded as among the most promising routes to sustainable development of ecosystems and human health. This study aims at addressing the problem of green airport plan selection under an uncertain context by developing an uncertain multiattribute group decision making (MAGDM) model. In the proposed model, the assessment information is characterized in the form of a proportional hesitant 2-tuple linguistic term set (PH2TLTS), which incorporates in binary form linguistic information that can accurately quantify subjective assessment information provided under uncertainty. The weights of assessment attributes of green airport plans are obtained automatically through a nonlinear programming model, which enhances the robustness of the decision-making method. Subsequently, on the basis of PH2TLTSs, three distance measures are proposed: the proportional hesitant 2-tuple linguistic Jaccard distance (PH2TLJD), the supplementary proportional hesitant 2-tuple linguistic normalized Minkowski distance (SPH2TLNMD) and the cluster-based proportional hesitant 2-tuple linguistic normalized Minkowski distance (CBPH2TLNMD). The TOPSIS-based comparison method proposed here can better determine the priorities of PH2TLTSs. The ranking and selection of green airport plans are derived using the PH2TL-VIKOR model. Finally, a case study accompanied by sensitivity and comparative analyses is performed to verify the rationality and feasibility of the proposed model.
- Published
- 2021
32. Achromatic number and facial achromatic number of connected locally-connected graphs
- Author
-
Yumiko Ohno and Naoki Matsumoto
- Subjects
Surface (mathematics) ,Vertex (graph theory) ,Relation (database) ,Applied Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,Triangulation (social science) ,021107 urban & regional planning ,0102 computer and information sciences ,02 engineering and technology ,Complete coloring ,01 natural sciences ,law.invention ,Combinatorics ,010201 computation theory & mathematics ,Achromatic lens ,law ,Discrete Mathematics and Combinatorics ,Tuple ,Connectivity ,MathematicsofComputing_DISCRETEMATHEMATICS ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
A graph is locally-connected if the neighborhood of each vertex induces a connected graph. It is well known that a triangulation on a closed surface is locally-connected, and some results for triangulations were generalized to those for connected locally-connected graphs. In this paper, we extend two characterization theorems of triangulations for a complete coloring and a facial complete coloring, which are vertex colorings with constraints on the appearance of color tuples, to those of connected locally-connected graphs. Moreover, we also investigate the relation between the corresponding invariants and the number of independent elements.
- Published
- 2021
33. Diagnostic tests under shifts with fixed filling tuple
- Author
-
Grigorii V. Antiufeev
- Subjects
Applied Mathematics ,Discrete Mathematics and Combinatorics ,Diagnostic test ,Tuple ,Arithmetic ,Mathematics - Abstract
We consider a fault source under which the fault functions are obtained from the original function $f(\tilde{x}^n) \in P_2^n$ by a left shift of the values of the Boolean variables by at most n. For the vacant positions of the variables, the values are selected from a given filling tuple $\tilde{\gamma} = (\gamma_1, \gamma_2, \dots, \gamma_n) \in E_2^n$, which also moves to the left by the number of positions corresponding to a specific fault function. The problem of diagnosing faults of this kind is considered. We show that the Shannon function $L_{\tilde{\gamma}}^{\mathrm{shifts,\,diagn}}(n)$, which is equal to the smallest test length sufficient for diagnosing any n-place Boolean function with respect to the described fault source, satisfies the inequality $\lceil n/2 \rceil \leq L_{\tilde{\gamma}}^{\mathrm{shifts,\,diagn}}(n) \leq n$.
- Published
- 2021
34. On mean sensitive tuples
- Author
-
Jie Li and Tao Yu
- Subjects
Applied Mathematics ,010102 general mathematics ,Equicontinuity ,01 natural sciences ,010101 applied mathematics ,Combinatorics ,Integer ,Equivalence relation ,Ergodic theory ,0101 mathematics ,Tuple ,Invariant (mathematics) ,Dynamical system (definition) ,Analysis ,Mixing (physics) ,Mathematics - Abstract
In this paper we introduce and study several mean forms of sensitive tuples. It is shown that topological or measure-theoretic entropy tuples are correspondingly mean sensitive tuples under certain conditions (minimality in the topological setting, ergodicity in the measure-theoretic setting). Characterizations of when every non-diagonal tuple is mean sensitive are presented. Among other results, we show that under the minimality assumption a topological dynamical system is weakly mixing if and only if every non-diagonal tuple is mean sensitive; consequently, every minimal weakly mixing topological dynamical system is mean n-sensitive for every integer n ≥ 2. Moreover, the notion of a weakly sensitive in the mean tuple is introduced, and this notion turns out to enjoy a special lifting property. As an application, we obtain that the maximal mean equicontinuous factor of any topological dynamical system can be induced by the smallest closed invariant equivalence relation containing all weakly sensitive in the mean pairs.
- Published
- 2021
35. Evaluation and Development Perspectives of Stream Data Processing Systems
- Author
-
Gorawski, Marcin, Gorawska, Anna, Pasterak, Krzysztof, Kwiecień, Andrzej, editor, Gaj, Piotr, editor, and Stera, Piotr, editor
- Published
- 2013
- Full Text
- View/download PDF
36. A Design of WSN Model to Minimize Data-Centric Routing Cost for Many-to-Many Communication
- Author
-
Sanwar Hosen, A. S. M., Cho, Gi-hwan, SAE-China, FISITA, Kim, Kuinam J., editor, and Chung, Kyung-Yong, editor
- Published
- 2013
- Full Text
- View/download PDF
37. SkinnerDB: Regret-bounded Query Evaluation via Reinforcement Learning
- Author
-
Junxiong Wang, Joseph Antonakakis, Deepak Maram, Immanuel Trummer, Samuel Moseley, Ziyun Wei, Saehan Jo, and Ankush Rayabhari
- Subjects
Theoretical computer science ,Computer science ,Preemption ,Benchmark (computing) ,InformationSystems_DATABASEMANAGEMENT ,Reinforcement learning ,Join (sigma algebra) ,Cardinality (SQL statements) ,Tuple ,Data structure ,Query optimization ,Information Systems - Abstract
SkinnerDB uses reinforcement learning for reliable join ordering, exploiting an adaptive processing engine with specialized join algorithms and data structures. It maintains no data statistics and uses no cost or cardinality models. Also, it uses no training workloads nor does it try to link the current query to seemingly similar queries in the past. Instead, it uses reinforcement learning to learn optimal join orders from scratch during the execution of the current query. To that purpose, it divides the execution of a query into many small time slices. Different join orders are tried in different time slices. SkinnerDB merges result tuples generated according to different join orders until a complete query result is obtained. By measuring execution progress per time slice, it identifies promising join orders as execution proceeds. Along with SkinnerDB, we introduce a new quality criterion for query execution strategies. We upper-bound expected execution cost regret, i.e., the expected amount of execution cost wasted due to sub-optimal join order choices. SkinnerDB features multiple execution strategies that are optimized for that criterion. Some of them can be executed on top of existing database systems. For maximal performance, we introduce a customized execution engine, facilitating fast join order switching via specialized multi-way join algorithms and tuple representations. We experimentally compare SkinnerDB’s performance against various baselines, including MonetDB, Postgres, and adaptive processing methods. We consider various benchmarks, including the join order benchmark, TPC-H, and JCC-H, as well as benchmark variants with user-defined functions. Overall, the overheads of reliable join ordering are negligible compared to the performance impact of the occasional, catastrophic join order choice.
- Published
- 2021
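The slice-based idea in the SkinnerDB abstract of entry 37 — try join orders in small time slices, measure progress per slice, and steer toward promising orders — resembles a multi-armed bandit. The toy below illustrates only that high-level idea with a standard UCB1 selector over simulated join orders; it is not SkinnerDB's actual regret-bounded engine, and the order names, speeds, and reward model are all made up:

```python
import math
import random

# Toy slice scheduler: each join order is a bandit arm; each "time slice"
# runs one order and records simulated progress. UCB1 picks the next order.
def choose_order(progress, pulls):
    total = sum(pulls.values())
    def ucb(o):
        if pulls[o] == 0:
            return float("inf")                      # try every order once
        mean = progress[o] / pulls[o]
        return mean + math.sqrt(2 * math.log(total) / pulls[o])
    return max(pulls, key=ucb)

orders = ["R-S-T", "S-R-T", "T-R-S"]
progress = {o: 0.0 for o in orders}
pulls = {o: 0 for o in orders}

random.seed(0)
true_speed = {"R-S-T": 0.9, "S-R-T": 0.4, "T-R-S": 0.1}   # hidden quality
for _ in range(200):                                 # 200 time slices
    o = choose_order(progress, pulls)
    progress[o] += random.random() * true_speed[o]   # simulated progress
    pulls[o] += 1
```

After a couple hundred slices the fastest order accumulates by far the most pulls, mirroring how measuring per-slice progress lets execution concentrate on good join orders without any cost or cardinality model.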
38. A collaborative emergency decision making approach based on BWM and TODIM under interval 2-tuple linguistic environment
- Author
-
K.M. Liew, Hua Chai, Qiangling Duan, Qingsong Wang, Kaixuan Qi, Jinhua Sun, and Yongjian Du
- Subjects
Computer science ,Emergency decision making ,Interval 2-tuple linguistic ,Computational intelligence ,Interval (mathematics) ,Best–worst method (BWM) ,CEDM ,Linguistics ,Multiple criteria decision making ,Group decision-making ,Artificial Intelligence ,TODIM ,Pattern recognition (psychology) ,Original Article ,Computer Vision and Pattern Recognition ,Sensitivity (control systems) ,Acronym ,Tuple ,Software - Abstract
Emergencies require various emergency departments to collaborate to achieve timely and effective emergency responses. Thus, the overall performance of emergency response is influenced not only by the efficiency of each department alternative but also by the coordination effect among different department alternatives. This paper proposes a collaborative emergency decision making (CEDM) approach that considers the synergy among different department alternatives, based on the best-worst method (BWM) and the TODIM method (an acronym in Portuguese for interactive and multiple attribute decision making) within an interval 2-tuple linguistic environment. First, the evaluation information provided by decision makers (DMs) is represented by interval 2-tuple linguistic variables to reflect and model the underlying diversity and uncertainty. On the basis of the DMs' evaluations, individual and collaborative performance evaluations of multi-alternative combinations composed of different department alternatives are constructed. Then, the BWM is extended to the interval 2-tuple linguistic environment to obtain the weights of the evaluation criteria, where group decision making is taken into account in an interval fuzzy mathematical programming model. Furthermore, to derive more practical and accurate decision results, an interval 2-tuple linguistic TODIM (ITL-TODIM) method is proposed that considers the DMs' psychological behaviours. In the developed ITL-TODIM method, both the gain and loss degrees of one alternative relative to another are computed simultaneously. Finally, a numerical example is presented to illustrate the applicability of the proposed method. Sensitivity and comparative analyses are also provided to demonstrate the effectiveness and advantages of the proposed approach.
- Published
- 2021
39. GIBWM-MABAC approach for MAGDM under multi-granularity intuitionistic 2-tuple linguistic information model
- Author
-
Yi Liu, Fang Liu, Ya Qin, and Yuan Rong
- Subjects
General Computer Science ,Extant taxon ,Rule-based machine translation ,Computer science ,Copula (linguistics) ,Linguistic model ,Computational intelligence ,Data mining ,Granularity ,Tuple ,computer.software_genre ,computer ,Preference (economics) - Abstract
Knowledge plays a vital role in multi-attribute group decision-making (MAGDM), where experts from different fields present their knowledge to support decision-making by employing multi-granularity linguistic models. The main goal of the current work is to present a novel MAGDM approach that integrates the extended Archimedean Copulas (EACs), the group-individual best-worst method (GIBWM) and the multi-attributive border approximation area comparison (MABAC) approach to fuse multi-source knowledge expressed as multi-granularity intuitionistic 2-tuple linguistic information (I2TLI) with unknown weight information for attributes and experts. First, to model the relationships between attributes (experts), Copula-based aggregation operators with I2TLI are proposed and some of their variations are discussed. Second, exploiting the merits of GIBWM, an algorithm for deriving the weight information of experts and attributes is designed. Third, taking the decision maker's behavioural preferences and psychology into account, a modified MABAC method is proposed based on a modified prospect matrix. Simultaneously, an algorithm for MAGDM based on I2TLI of different granularities is designed by integrating GIBWM and the modified MABAC approach. Finally, an example is furnished to manifest the significance of the proposed method, along with related discussions; the advantages of the method are analyzed by comparison with extant decision-making approaches.
- Published
- 2021
40. Core equivalence in collective-choice bargaining under minimal assumptions
- Author
-
Tomohiko Kawamori
- Subjects
TheoryofComputation_MISCELLANEOUS ,Computer Science::Computer Science and Game Theory ,Computer science ,Existential quantification ,ComputingMilieux_PERSONALCOMPUTING ,TheoryofComputation_GENERAL ,Monotonic function ,Function (mathematics) ,Decision rule ,Subgame perfect equilibrium ,Core (game theory) ,Tuple ,Mathematical economics ,Equivalence (measure theory) - Abstract
We investigate a collective-choice bargaining model under minimal assumptions. In this model, the set of alternatives is arbitrary; each player’s utility function is nonnegative-valued; the decision rule is monotonic; the probability of each player’s being recognized as a proposer depends only on the tuple of actions in the previous round; any player is perfectly patient. We show that for any alternative, it is in the core if and only if there exists a stationary subgame perfect equilibrium (SSPE) such that it is proposed by every player and implemented with certainty.
- Published
- 2021
41. Leveraging range joins for the computation of overlap joins
- Author
-
Michael H. Böhlen, Peter Moser, Johann Gamper, Anton Dignös, Christian S. Jensen, University of Zurich, and Dignös, Anton
- Subjects
Generality ,Theoretical computer science ,Exploit ,10009 Department of Informatics ,Computer science ,1708 Hardware and Architecture ,Computation ,Search engine indexing ,Joins ,000 Computer science, knowledge & systems ,1710 Information Systems ,Temporal databases ,Range (mathematics) ,Empirical research ,Hardware and Architecture ,Range join ,Interval join ,Tuple ,Overlap join ,Temporal join ,Information Systems - Abstract
Joins are essential and potentially expensive operations in database management systems. When data is associated with time periods, joins commonly include predicates that require pairs of argument tuples to overlap in order to qualify for the result. Our goal is to enable built-in system support for such joins. In particular, we present an approach where overlap joins are formulated as unions of range joins, which are more general-purpose joins compared to overlap joins, i.e., are useful in their own right, and are supported well by B+-trees. The approach is sufficiently flexible that it also supports joins with additional equality predicates, as well as open, closed, and half-open time periods over discrete and continuous domains, thus offering both generality and simplicity, which is important in a system setting. We provide both a stand-alone solution that performs on par with the state-of-the-art and a DBMS-embedded solution that is able to exploit standard indexing and clearly outperforms existing DBMS solutions that depend on specialized indexing techniques. We offer both analytical and empirical evaluations of the proposals. The empirical study includes comparisons with pertinent existing proposals and offers detailed insight into the performance characteristics of the proposals.
- Published
- 2021
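The decomposition behind entry 41 can be illustrated concretely for half-open periods [start, end): two periods overlap iff each starts before the other ends, and every overlapping pair falls into exactly one of two disjoint cases — s starts inside r, or r starts strictly inside s — each of which is a range join. The sketch below uses nested loops and illustrative tuple schemas; a real system would evaluate each range join with B+-tree indexes:

```python
# Overlap join over half-open periods [start, end), rewritten as the
# union of two disjoint range joins on start points. Illustrative sketch.

def overlap_join(R, S):
    out = []
    # Range join 1: s starts inside r  (r.start <= s.start < r.end)
    for r in R:
        for s in S:
            if r[0] <= s[0] < r[1]:
                out.append((r, s))
    # Range join 2: r starts strictly inside s  (s.start < r.start < s.end)
    for r in R:
        for s in S:
            if s[0] < r[0] < s[1]:
                out.append((r, s))
    return out

def naive_overlap(R, S):
    # Direct overlap predicate, for comparison.
    return [(r, s) for r in R for s in S if r[0] < s[1] and s[0] < r[1]]
```

Because the two cases partition the overlapping pairs, the union needs no duplicate elimination — one reason the rewrite stays simple enough for a system setting.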
42. An efficient privacy-preserving approach for data publishing
- Author
-
Xinyu Qian, Zhiping Zhou, and Xinning Li
- Subjects
Information privacy ,Sorting algorithm ,General Computer Science ,Computational complexity theory ,Computer science ,Computational intelligence ,Data mining ,Data publishing ,Tuple ,Cluster analysis ,computer.software_genre ,computer ,Time complexity - Abstract
Privacy-preserving algorithms based on k-anonymity play an outstanding role in real-world data mining applications, such as medical records, bioinformatics, markets, and social networks. How to maximize the availability of published data without sacrificing users' privacy is the focus of privacy-preserving research. In this paper, we propose a mixed-feature weighted clustering algorithm for k-anonymity (MWCK) to study the trade-off between efficiency and information loss for utility-oriented anonymization. First, we propose the concept of a natural equivalence group, so that tuples with the same attributes in the dataset can be pre-extracted to reduce time complexity and information loss. Second, a sorting algorithm based on the shortest distance is proposed, which selects the optimal initial cluster centers at a lower computational cost to reduce the number of iterations. Finally, MWCK not only considers intra-cluster homogeneity to reduce generalization information loss and inter-cluster heterogeneity to avoid local optima, but also applies to both numerical and categorical datasets. Extensive experiments show that our algorithm can effectively protect data privacy and has better overall performance in terms of information loss and computational complexity than state-of-the-art methods.
- Published
- 2021
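The "natural equivalence group" pre-extraction step described in entry 42 can be sketched as a simple group-by on the quasi-identifier attributes: groups that already contain at least k identical tuples need no further generalization, and only the remainder is fed to the clustering stage. This is a minimal illustration under assumed record conventions, not the MWCK algorithm itself:

```python
from collections import defaultdict

# Pre-extract natural equivalence groups: records sharing all
# quasi-identifier (QI) attributes are grouped; groups of size >= k are
# already k-anonymous, the rest go on to clustering/generalization.
def natural_equivalence_groups(records, qi_indices, k):
    groups = defaultdict(list)
    for rec in records:
        key = tuple(rec[i] for i in qi_indices)   # QI projection of the record
        groups[key].append(rec)
    done = {q: g for q, g in groups.items() if len(g) >= k}
    pending = [r for q, g in groups.items() if len(g) < k for r in g]
    return done, pending

# Hypothetical records: (zip-like, age-like, sensitive value); QI = fields 0, 1.
recs = [("a", 1, "x"), ("a", 1, "y"), ("b", 2, "z")]
done, pending = natural_equivalence_groups(recs, (0, 1), k=2)
```

Since the extracted groups incur zero generalization loss, every record they absorb both shrinks the clustering input and lowers total information loss — the two gains the abstract claims for this step.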
43. On n-polygonal interval-valued fuzzy sets
- Author
-
Yongming Li, Zhi-Hui Li, and Chunfeng Suo
- Subjects
Structure (mathematical logic) ,0209 industrial biotechnology ,Logic ,Fuzzy set ,02 engineering and technology ,Computer Science::Computational Geometry ,Topological space ,Separable space ,020901 industrial engineering & automation ,Compact space ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,020201 artificial intelligence & image processing ,Arithmetic ,Tuple ,Mathematics ,Real number - Abstract
The structure of interval-valued fuzzy sets makes their arithmetic operations complex. To reduce the computational complexity of arithmetic operations on interval-valued fuzzy sets, we propose the notion of an n-polygonal interval-valued fuzzy set, determined by two (2n+2)-tuples of ordered real numbers. A complete representation of n-polygonal interval-valued fuzzy sets and numbers is provided. Moreover, we demonstrate that n-polygonal interval-valued fuzzy numbers can approximate general interval-valued fuzzy numbers to any precision. Next, we introduce arithmetic operations on n-polygonal interval-valued fuzzy numbers, capitalizing on their good characteristics, and address the properties of these operations. In addition, with the aid of a concrete example, we verify the effectiveness of the approximation ability of the n-polygonal interval-valued fuzzy set. Furthermore, we study the properties of the topological space of n-polygonal interval-valued fuzzy numbers, proving that this space is a complete, separable and locally compact metric space when endowed with the newly defined distance between two n-polygonal interval-valued fuzzy numbers. As a by-product, this shows that the arithmetic operations introduced here on n-polygonal interval-valued fuzzy numbers are continuous. Finally, the practicability of n-polygonal interval-valued fuzzy numbers is verified by an example.
- Published
- 2021
44. A New 2-Tuple Linguistic Approach for Unbalanced Linguistic Term Sets
- Author
-
Tanya Malhotra and Anjana Gupta
- Subjects
Computer science ,Applied Mathematics ,Computation ,02 engineering and technology ,Semantics ,Measure (mathematics) ,Linguistics ,Term (time) ,Reduction (complexity) ,Computational Theory and Mathematics ,Rule-based machine translation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Tuple ,Set (psychology) - Abstract
Several real-world problems employ linguistic approaches to handle qualitative data. The sets of linguistic terms utilized in these problems are mostly assumed to be symmetrically distributed. However, as the complexity of a problem increases, an equidistant linguistic term set becomes inappropriate. Consequently, in such cases, experts often prefer a set of unbalanced linguistic terms to guide the appraisal of the problem. In this article, we propose a newly designed method to deal with a set of unbalanced linguistic terms. In this direction, we first propose an algorithm to represent unbalanced linguistic information via a multiplicative linguistic label set that has a globally inconsistent linguistic term distribution. Furthermore, in light of the Herrera and Martinez "2-tuple linguistic model," we develop a novel 2-tuple approach for the unbalanced linguistic set based on the notion of a minimum distance measure. Finally, to validate the proposed model in the physical realm and demonstrate how the method works, a numerical example is elucidated. The proposed methodology reduces computation time and also enhances decision-makers' evaluations.
- Published
- 2021
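The classical Herrera-Martinez 2-tuple translation that this work builds on can be sketched in a few lines; the term names below are illustrative, and the unbalanced variant's distance-based mapping is not reproduced here.

```python
# Sketch of the balanced 2-tuple linguistic model: an aggregation value
# beta in [0, g] is encoded as (s_i, alpha), where i = round(beta) and
# alpha = beta - i is the symbolic translation in [-0.5, 0.5).
# The term set below is an illustrative assumption.

TERMS = ["none", "low", "medium", "high", "perfect"]  # s_0 .. s_4

def to_2tuple(beta):
    """Map an aggregation value beta in [0, len(TERMS)-1] to (term, alpha)."""
    i = round(beta)
    return TERMS[i], beta - i

def from_2tuple(term, alpha):
    """Inverse translation back to a numeric value."""
    return TERMS.index(term) + alpha

term, alpha = to_2tuple(3.4)
print(term, round(alpha, 2))  # high 0.4
```

The 2-tuple keeps aggregation lossless: `from_2tuple` recovers exactly the numeric value that produced the pair, which is the property the unbalanced extension must preserve.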
45. Hone: Mitigating Stragglers in Distributed Stream Processing With Tuple Scheduling
- Author
-
Heng Qi, Keqiu Li, Kai Chen, Wenxin Li, and Duowen Liu
- Subjects
020203 distributed computing ,Competitive analysis ,Computer science ,Distributed computing ,02 engineering and technology ,Scheduling (computing) ,Instruction set ,Stream processing ,Computational Theory and Mathematics ,Hardware and Architecture ,Server ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Latency (engineering) ,Tuple ,Queue - Abstract
Low-latency stream processing on large clusters consisting of hundreds to thousands of servers is an increasingly important challenge. A crucial barrier to tackling this challenge is stragglers, i.e., tasks that lag significantly behind others in processing the stream data. However, prior straggler mitigation solutions have significant limitations: they balance streaming workloads among tasks but may incur imbalanced backlogs when the workloads exhibit variance, which also causes stragglers. Fortunately, we observe that carefully scheduling the outgoing tuples of different tasks helps balance backlogs and thus avoids stragglers. To this end, we present Hone, a tuple scheduler that aims to minimize the maximum queue backlog across all tasks over time. Hone leverages an online Largest-Backlog-First (LBF) algorithm with a provably good competitive ratio to perform efficient tuple scheduling. We have implemented Hone on Apache Storm and evaluated it extensively via both simulations and testbed experiments. Our results show that under the same workload-balancing strategy (shuffle grouping), Hone outperforms the original Storm significantly, reducing end-to-end tuple processing latency by 78.7 percent on average.
- Published
- 2021
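The Largest-Backlog-First idea can be sketched with a max-heap. The semantics assumed below (each scheduling slot drains one tuple from the task whose queue backlog is currently largest) are a simplification for illustration, not Hone's actual implementation.

```python
# Illustrative LBF sketch: greedily drain the largest backlog first,
# which keeps the maximum queue backlog as small as possible over time.
import heapq

def lbf_schedule(backlogs, slots):
    """Return the max remaining backlog after `slots` LBF drain steps."""
    heap = [-b for b in backlogs]  # negate for a max-heap via heapq
    heapq.heapify(heap)
    for _ in range(slots):
        largest = -heapq.heappop(heap)
        heapq.heappush(heap, -max(largest - 1, 0))
    return -heap[0]

print(lbf_schedule([5, 1, 3], 4))  # 2
```

Draining the largest backlog first is what bounds the objective the paper targets, the maximum backlog across tasks, rather than the total backlog.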
46. Slanted Canonicity of Analytic Inductive Inequalities
- Author
-
Alessandra Palmigiano, Laurent De Rudder, and Ethics, Governance and Society
- Subjects
Subordination (linguistics) ,Pure mathematics ,General Computer Science ,Logic ,03B45, 03B47, 03B60, 06D50, 06D10, 03G10, 06E15 ,0102 computer and information sciences ,algorithmic correspondence and canonicity ,Lattice (discrete subgroup) ,01 natural sciences ,Theoretical Computer Science ,FOS: Mathematics ,0101 mathematics ,Algebraic number ,Mathematics ,analytic inductive inequalities ,transfer results via Gödel-McKinsey-Tarski translations ,010102 general mathematics ,non-distributive lattices ,SDG 10 - Reduced Inequalities ,Mathematics - Logic ,Extension (predicate logic) ,Sahlqvist canonicity ,Computational Mathematics ,Transfer (group theory) ,subordination algebras ,010201 computation theory & mathematics ,Tuple ,Logic (math.LO) ,Signature (topology) - Abstract
We prove an algebraic canonicity theorem for normal LE-logics of arbitrary signature, in a generalized setting in which the non-lattice connectives are interpreted as operations mapping tuples of elements of the given lattice to closed or open elements of its canonical extension. Interestingly, the syntactic shape of LE-inequalities which guarantees their canonicity in this generalized setting turns out to coincide with the syntactic shape of analytic inductive inequalities, which guarantees LE-inequalities to be equivalently captured by analytic structural rules of a proper display calculus. We show that this canonicity result connects and strengthens a number of recent canonicity results in two different areas: subordination algebras, and transfer results via Gödel-McKinsey-Tarski translations.
- Published
- 2021
47. Data privacy preservation algorithm with k-anonymity
- Author
-
Waranya Mahanan, W. Art Chaovalitwongse, and Juggapong Natwichai
- Subjects
Information privacy ,Hierarchy (mathematics) ,Computer Networks and Communications ,Computer science ,Heuristic (computer science) ,Generalization ,02 engineering and technology ,k-anonymity ,Data type ,Set (abstract data type) ,Hardware and Architecture ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Tuple ,Algorithm ,Software - Abstract
With growing concern over data privacy violations, privacy preservation processes have become more intensive. The k-anonymity method, a widely applied technique, transforms the data such that, in the published dataset, at least k tuples share the same values of the linkable attributes, known as quasi-identifiers. We observe that, in certain domains, all quasi-identifiers of a dataset can have the same data type; such attributes are considered Identical Generalization Hierarchy (IGH) data. IGH data has a particular set of characteristics that can be utilized to enhance the efficiency of heuristic privacy preservation algorithms. In this paper, we propose a heuristic data privacy preservation algorithm for IGH data. The algorithm is developed from observations on the anonymity properties of the problem structure, which allow some privacy constraints to be eliminated from consideration. Experimental results show that the proposed algorithm effectively preserves data privacy and also reduces the number of visited nodes required to ensure privacy protection, the most time-consuming part of the process, by up to 21% compared with the most efficient existing algorithm.
- Published
- 2021
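The k-anonymity property the abstract describes (at least k tuples sharing each quasi-identifier combination) can be checked with a short sketch. This is a generic verifier, not the paper's IGH algorithm, and the column names and generalized values are illustrative.

```python
# Minimal k-anonymity check: group tuples by their quasi-identifier values
# and require every group to contain at least k tuples.
from collections import Counter

def is_k_anonymous(rows, qi_cols, k):
    """True iff every quasi-identifier combination occurs in >= k tuples."""
    counts = Counter(tuple(r[c] for c in qi_cols) for r in rows)
    return all(c >= k for c in counts.values())

rows = [
    {"zip": "537**", "age": "2*", "disease": "flu"},
    {"zip": "537**", "age": "2*", "disease": "cold"},
    {"zip": "476**", "age": "3*", "disease": "flu"},
]
print(is_k_anonymous(rows, ["zip", "age"], 2))  # False: one group has 1 tuple
```

Generalization algorithms such as the one proposed here search for the coarsening of quasi-identifier values (e.g., masking digits of a zip code) that makes this predicate true at minimal information loss.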
48. Uniform k-Tuple Partially Rank-Ordered Set Sampling
- Author
-
Marvin Javier and Kaushik Ghosh
- Subjects
Statistics and Probability ,Combinatorics ,Set (abstract data type) ,Ranking ,Rank (linear algebra) ,Ranked set sampling ,RSS ,Ordered set ,Sampling (statistics) ,computer.file_format ,Tuple ,computer ,Mathematics - Abstract
Ranked Set Sampling (RSS), introduced by McIntyre, and other related methods, such as Partially Rank-Ordered Set Sampling (PROSS), have shown that inclusion of a ranking mechanism produces estimato...
- Published
- 2021
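The classical balanced RSS scheme that this line of work generalizes can be sketched briefly: draw k sets of k units each, rank within each set, and measure only the i-th ranked unit of the i-th set. This is the textbook McIntyre design under perfect ranking, not the paper's uniform k-tuple PROSS variant.

```python
# Sketch of balanced Ranked Set Sampling with perfect ranking (an assumed
# simplification: real RSS ranks units cheaply without measuring them all).
import random

def ranked_set_sample(population, k, rng=random):
    """Measure k units: the i-th order statistic of the i-th set of size k."""
    sample = []
    for i in range(k):
        candidates = rng.sample(population, k)  # draw a set of k units
        sample.append(sorted(candidates)[i])    # keep the i-th ranked unit
    return sample

rng = random.Random(0)
print(ranked_set_sample(list(range(100)), 3, rng))
```

Because each measured unit is an order statistic from an independent set, the resulting sample spreads over the distribution more evenly than simple random sampling, which is the source of the efficiency gain.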
49. Empirical Analysis of Machine Learning Algorithms on Imbalance Electrocardiogram Based Arrhythmia Dataset for Heart Disease Detection
- Author
-
Pramod Kumar Mishra and Shwet Ketu
- Subjects
Structure (mathematical logic) ,Class (computer programming) ,Multidisciplinary ,Boosting (machine learning) ,Computer science ,business.industry ,Decision tree ,Machine learning ,computer.software_genre ,Random forest ,Support vector machine ,Tree (data structure) ,Artificial intelligence ,Tuple ,business ,computer ,Algorithm - Abstract
Living beings face many hazards over the course of their lives. Owing to its high mortality rate, heart disease (HD) is among the leading hazards; it is one of the world's most critical diseases due to its complex diagnosis and expensive treatment, and it has predominantly affected the health care sectors of developing as well as developed countries. Inadequate preventive measures, diagnostic shortcomings, inefficient medical support, and a lack of medical staff and technological advancement have severely impacted developing countries. This paper surveys the state of the art in intelligent solutions for HD detection and presents an empirical analysis of machine learning algorithms on an electrocardiogram-based arrhythmia dataset. A critical investigation is performed using eight machine learning algorithms, Support Vector Machine, K-Nearest Neighbors, Random Forest, Extra Tree, Bagging, Decision Tree, Linear Regression, and Adaptive Boosting, under imbalanced and balanced class paradigms. The performance of these algorithms is tested with four metrics: precision, recall, accuracy, and f1-score. The empirical analysis presents an interesting insight into the structure of the dataset. In the imbalanced binary-class setting, the majority class achieves higher accuracy than the minority class because the model's training dataset contains far more majority-class tuples than minority-class tuples. The paper uses the Synthetic Minority Over-sampling Technique for data balancing, which increases not only the overall accuracy of the algorithms but also the individual accuracy of each class. Hence, the accuracy of the minority class is not sacrificed.
- Published
- 2021
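The core interpolation step of SMOTE-style oversampling can be sketched in a few lines. This is an assumed simplification of the technique the paper applies: a synthetic minority point is placed on the line segment between a minority sample and one of its minority neighbors.

```python
# Minimal SMOTE-style sketch: interpolate between a minority sample and a
# (pre-selected) minority neighbor; real SMOTE first finds k nearest
# minority neighbors and repeats this until the classes are balanced.
import random

def smote_point(x, neighbor, rng=random):
    """Synthesize a point on the segment between x and neighbor."""
    lam = rng.random()  # random gap in [0, 1)
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbor)]

rng = random.Random(1)
synthetic = smote_point([1.0, 2.0], [3.0, 4.0], rng)
print(synthetic)  # lies on the segment between the two input points
```

Interpolating rather than duplicating minority tuples is what lets the classifier learn a broader minority region instead of memorizing repeated points.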
50. High-Parallelism Hash-Merge Architecture for Accelerating Join Operation on FPGA
- Author
-
Wen-Qi Wu, Mei-Ting Xue, Feng Yu, and Qian-Jian Xing
- Subjects
Hash join ,Computer science ,Hash function ,Data_FILES ,Joins ,Join (sigma algebra) ,Linked list ,Parallel computing ,Electrical and Electronic Engineering ,Tuple ,Field-programmable gate array ,Hash table - Abstract
Join is a data-intensive and compute-intensive operation in database systems. As most existing solutions for accelerating the hash join operation on field-programmable gate arrays (FPGAs) focus on N-to-1 join relationships, their performance rapidly declines on N-to-M joins. To resolve this shortcoming, this brief proposes a novel architecture combining hash and sort-merge algorithms for join acceleration. In the build phase, the architecture uses a single hash function to build hash tables for both data tables, and hash collisions are handled by building linked lists ordered by join attribute. In the merge phase, mapped buckets in the two hash tables are merged one-to-one to find matching tuples. The architecture lends itself to high parallelism, which improves its performance. Experimental results show that the design achieved a high join throughput of 194.0 million tuples per second on an FPGA, which is better than previously reported FPGA implementations. Moreover, the architecture is fully compatible with both N-to-1 and N-to-M join relationships.
- Published
- 2021
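The hash-then-merge idea can be illustrated in software. The sketch below only mirrors the structure (hash both tables into buckets, keep bucket entries ordered by join key, merge mapped buckets pairwise); the paper's contribution is the parallel FPGA realization, and the 8-bucket table is an illustrative assumption.

```python
# Software sketch of a hash-merge equi-join that handles N-to-M matches:
# build a bucketed hash table for each input, then merge paired buckets.
from collections import defaultdict

NUM_BUCKETS = 8  # illustrative; hardware would fix this in the design

def hash_merge_join(left, right, key):
    buckets_l, buckets_r = defaultdict(list), defaultdict(list)
    for t in left:
        buckets_l[hash(t[key]) % NUM_BUCKETS].append(t)
    for t in right:
        buckets_r[hash(t[key]) % NUM_BUCKETS].append(t)
    out = []
    for b in buckets_l:  # merge mapped buckets one-to-one
        for l in sorted(buckets_l[b], key=lambda t: t[key]):
            for r in buckets_r.get(b, []):
                if l[key] == r[key]:  # resolve collisions inside a bucket
                    out.append((l, r))
    return out

left = [{"k": 1, "a": "x"}, {"k": 1, "a": "y"}]
right = [{"k": 1, "b": "p"}, {"k": 2, "b": "q"}]
print(len(hash_merge_join(left, right, "k")))  # 2 (N-to-M: 2 left x 1 right)
```

Because every left tuple in a bucket is compared against every matching right tuple, duplicate join keys on both sides (the N-to-M case) are handled naturally, which is exactly where N-to-1 hash-join designs break down.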