106 results for '"APPROXIMATE reasoning"'
Search Results
2. Three-way approximation of decision granules based on the rough set approach.
- Author
- Stepaniuk, Jaroslaw and Skowron, Andrzej
- Subjects
- *ROUGH sets, *ARTIFICIAL intelligence, *PROCESS optimization, *APPROXIMATE reasoning, *DECISION making, *GRANULAR computing
- Abstract
We discuss the three-way rough set based approach for the approximation of decision granules in Intelligent Systems (ISs). The novelty of the approach lies in a new concept of approximation space based on advanced reasoning tools. Many generalisations of the rough set approach developed over the years concentrate mainly on reasoning about (partial) inclusion of sets. However, such approximation spaces are not sufficient to deal with important aspects of approximate reasoning by ISs aiming to construct high-quality approximations of compound decision granules. We present a number of examples supporting this claim. In particular, solving the problems considered in the paper involves complex algorithmic optimization processes directed by reasoning tools that support the search for (semi-)optimal approximations of decision granules in huge spaces. This paper is a step toward developing tools for the derivation of granules that support ISs in perceiving situations to a degree sufficient for making the right decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
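For orientation, the sketch below shows the classical Pawlak lower and upper approximations that three-way rough set approaches refine; the toy universe, attribute, and target granule are illustrative assumptions, not the authors' construction.

```python
# A minimal sketch (not the authors' algorithm): Pawlak lower and upper
# approximations of a decision granule X under the equivalence relation
# induced by a condition attribute. Data below are illustrative.

def blocks(universe, key):
    """Partition `universe` into indiscernibility classes by `key`."""
    classes = {}
    for obj in universe:
        classes.setdefault(key(obj), set()).add(obj)
    return list(classes.values())

def approximations(universe, key, X):
    lower, upper = set(), set()
    for block in blocks(universe, key):
        if block <= X:          # block entirely inside X -> certainly in X
            lower |= block
        if block & X:           # block overlaps X -> possibly in X
            upper |= block
    return lower, upper

# Six objects described by one condition attribute (their parity).
U = {0, 1, 2, 3, 4, 5}
X = {0, 1, 2, 4}                # target decision granule
low, up = approximations(U, lambda o: o % 2, X)
print(low, up)                  # lower = {0,2,4}; boundary = upper - lower
```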
3. Label of a linguistic value in a universe of discourse and the truth values of fuzzy propositions.
- Author
- Pei, Zheng, Liu, Qiong, Yan, Li, and Wang, Lu
- Subjects
- *APPROXIMATE reasoning, *MEMBERSHIP functions (Fuzzy logic), *ACADEMIC achievement, *VALUE proposition, *COMPARATIVE studies
- Abstract
Because the objects described by a linguistic value have unsharp or fuzzy boundaries, and the meaning of a linguistic value differs from person to person, the relation between a linguistic value and the membership functions representing its meaning is one-to-many rather than one-to-one. How to precisiate the meaning of a linguistic value, and how to determine the truth values of fuzzy propositions, remain open problems in computing with words (CW). In this paper, the label of a linguistic value in its universe of discourse is proposed to formalize the possible positions of objects described by the linguistic value; it is determined, via a group of users' commonsense knowledge, by the objects that can and cannot be described by the linguistic value. Then the confidence degree of a membership function relative to a linguistic value is presented by measuring how closely "the membership function is close to the label of the linguistic value"; it can be used to precisiate the meaning of a linguistic value and to determine the truth values of fuzzy propositions. Finally, truth-value approximate reasoning for fuzzy propositions in the framework of the generalized modus ponens is provided and employed to evaluate students' educational achievement. Comparative analyses with educational achievement evaluation based on Aliev's Z-interpolation approach and Zadeh's compositional rule of inference show that the label of a linguistic value and the confidence degree of a membership function relative to the linguistic value are effective and useful tools for CW. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. Rough sets, modal logic and approximate reasoning.
- Author
- Chakraborty, Mihir Kr., Majumder, Sandip, and Kar, Samarjit
- Subjects
- *APPROXIMATE reasoning, *ROUGH sets, *MODAL logic, *COMPARATIVE studies
- Abstract
• This paper introduces an approximate reasoning method based on rough sets and modal logic. • Different Approximate Modus Ponens (AMP) rules are presented and validated. • A comparative analysis is provided for fuzzy- and rough-based approaches. • A real case analysis is provided to logically model some issues of legal interest. This paper introduces an approximate reasoning method based on rough sets and modal logic. Various Approximate Modus Ponens rules are investigated and defined in modal logic systems interpreted in the rough set language. Although this is primarily theoretical work, we expect natural applications of the technique in real-life scenarios. An attempt in this direction is made in a real case analysis to logically model some issues of legal interest. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
5. Generalised graded interpolation.
- Author
- Diaconescu, Răzvan
- Subjects
- *MODEL theory, *FUZZY logic, *APPROXIMATE reasoning
- Abstract
We develop an initial study of interpolation for graded consequence relations, which are many-valued consequence relations that arise in connection with many-valued/fuzzy logics. While consequence relations in general represent a foundational structure supporting the mathematical study of reasoning, their graded/many-valued upgrade provides mathematical foundations specifically for approximate reasoning. Our study is developed at the abstract level of the institution theory of Goguen and Burstall, an axiomatic and categorical approach to model theory. It consists of the development of new, many-valued concepts of interpolation, as well as of Robinson joint-consistency and Beth definability, together with results that recover, at the many-valued truth level, the causality relations between these notions and interpolation known from the classical binary context. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. New Personalized Medicine Findings from University of Warmia and Mazury Described (Medical Decision Support In the Light of Interactive Granular Computing: Lessons From the Ovufriend Project).
- Subjects
- ARTIFICIAL intelligence, DECISION support systems, APPROXIMATE reasoning, GRANULAR computing, INDIVIDUALIZED medicine
- Abstract
The University of Warmia and Mazury in Olsztyn, Poland, conducted research on personalized medicine, focusing on the development of Intelligent Systems and Decision Support Systems for medical decision-making. The study explored the use of Interactive Granular Computing to enhance the design of systems for diagnosis and therapy. Financial support for the research came from the EU Smart Growth Operational Program, and the findings were published in the International Journal of Approximate Reasoning. Researchers aimed to improve the existing OvuFriend platform by incorporating AI algorithms and personalized medicine concepts. [Extracted from the article]
- Published
- 2024
7. Google Is Working on Reasoning AI, Chasing OpenAI's Efforts.
- Author
- Love, Julia and Metz, Rachel
- Subjects
- ARTIFICIAL intelligence, MATHEMATICS contests, COMPUTER programming, CHATBOTS, APPROXIMATE reasoning
- Abstract
Google is developing artificial intelligence (AI) software that can reason like a human, similar to OpenAI's o1 model. Multiple teams at Google have been making progress on AI reasoning software, which is better at solving complex problems in fields like math and computer programming. Google is using a technique called chain-of-thought prompting, where the software considers related prompts before responding. This development is part of the ongoing rivalry between Google and OpenAI in the AI field. While Google has been more cautious in releasing AI products, it remains a strong player in the industry. [Extracted from the article]
- Published
- 2024
8. Three-way approximate reduct based on information-theoretic measure.
- Author
- Gao, Can, Wang, Zhicheng, and Zhou, Jie
- Subjects
- *APPROXIMATE reasoning, *ENTROPY (Information theory)
- Abstract
Three-way decision is a typical and popular methodology for decision-making and approximate reasoning, and attribute reduction is an important research topic within it. However, most attribute reduction methods based on three-way decision rely strictly on the preservation of a measure criterion, which not only explicitly limits the efficiency of attribute reduction but also implicitly confines the generalization ability of the resulting reduct. In this study, we present a new three-way approximate attribute reduction method based on an information-theoretic measure. More specifically, a unified framework for approximate attribute reduction is first provided. The attribute reduction process then assigns each attribute to the positive, boundary, or negative region according to its correlation with the decision attribute. The negative attributes can be removed while preserving the information-theoretic measure, and some boundary attributes are further iteratively eliminated by relaxing the measure criterion. An approximate reduct is finally formed by the positive attributes and the remaining boundary attributes. On several public UCI data sets, the proposed method achieves a much better attribute reduction rate and simultaneously gains an improvement in performance compared with other attribute reduction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
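The following is a hedged sketch of the three-way idea in this abstract only: each condition attribute is assigned to the positive, boundary, or negative region by its empirical mutual information with the decision. The toy table and the thresholds alpha/beta are assumptions; the paper's actual reduct algorithm is not reproduced.

```python
# Sketch: three-way partition of attributes by mutual information (MI).
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

table = {  # attribute name -> column of values (assumed toy data)
    "a1": [0, 0, 1, 1, 0, 0, 1, 1],
    "a2": [0, 1, 0, 1, 0, 1, 0, 1],
    "a3": [0, 0, 0, 1, 0, 0, 1, 1],
}
decision = [0, 0, 1, 1, 0, 0, 1, 1]
alpha, beta = 0.8, 0.1   # assumed acceptance / rejection thresholds

regions = {"positive": [], "boundary": [], "negative": []}
for name, col in table.items():
    mi = mutual_information(col, decision)
    region = ("positive" if mi >= alpha else
              "negative" if mi <= beta else "boundary")
    regions[region].append((name, round(mi, 3)))
print(regions)   # a1 positive, a3 boundary, a2 negative; the positive
                 # and surviving boundary attributes would form the reduct
```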
9. Pairwise comparisons matrix decomposition into approximation and orthogonal component using Lie theory.
- Author
- Koczkodaj, W.W., Marek, V.W., and Yayli, Y.
- Subjects
- *MATRIX decomposition, *LIE algebras, *GROUP algebras, *LIE groups, *DECOMPOSITION method, *ORTHOGONAL decompositions, *LIE theory
- Abstract
This paper examines the use of Lie group and Lie algebra theory to construct the geometry of pairwise comparisons matrices. The Hadamard product (also known as the coordinatewise or elementwise product) is analyzed in the context of inconsistency and inaccuracy by the decomposition method. The two designed components are the approximation and orthogonal components. The decomposition constitutes the theoretical foundation for multiplicative pairwise comparisons. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
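A small numeric sketch of the decomposition discussed above, using only elementary Hadamard (elementwise) operations rather than the paper's Lie-theoretic machinery; the matrix and the geometric-mean approximation are standard illustrative choices, not the authors' derivation.

```python
# Sketch: split a multiplicative pairwise comparisons matrix into a
# consistent approximation (from geometric means) and a residual
# component; their Hadamard product recomposes the original matrix.
import numpy as np

A = np.array([[1.0,   2.0,   8.0],
              [0.5,   1.0,   3.0],
              [0.125, 1 / 3, 1.0]])   # reciprocal PC matrix, a12*a23 != a13

w = np.exp(np.log(A).mean(axis=1))   # geometric-mean weight vector
C = np.outer(w, 1.0 / w)             # consistent part: c_ij = w_i / w_j
R = A / C                            # Hadamard quotient: residual part

assert np.allclose(C * R, A)         # elementwise product recomposes A
print(np.round(C, 3))
print(np.round(R, 3))                # R is all ones iff A is consistent
```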
10. Rough set reasoning using answer set programs.
- Author
- Doherty, Patrick and Szalas, Andrzej
- Subjects
- *PRAGMATICS, *ROUGH sets, *KNOWLEDGE representation (Information theory), *APPROXIMATE reasoning, *INFORMATION storage & retrieval systems, *KNOWLEDGE base
- Abstract
Reasoning about uncertainty is one of the main cornerstones of Knowledge Representation. Formal representations of uncertainty are numerous and highly varied, owing to the different types of uncertainty intended to be modeled, such as vagueness, imprecision and incompleteness. There is a rich body of theoretical results that has been generated for many of these approaches. It is often the case, though, that pragmatic tools for reasoning with uncertainty lag behind this rich body of theoretical results. Rough set theory is one such approach for modeling incompleteness and imprecision based on indiscernibility and its generalizations. In this paper, we provide a pragmatic tool for constructively reasoning with generalized rough set approximations that is based on the use of Answer Set Programming (Asp). We provide an interpretation of answer sets as (generalized) approximations of crisp sets (when possible) and show how to use Asp solvers as a tool for reasoning about (generalized) rough set approximations situated in realistic knowledge bases. The paper includes generic Asp templates for doing this and also provides a case study showing how these techniques can be used to generate reducts for incomplete information systems. Complete, ready-to-run clingo Asp code is provided in the Appendix for all programs considered. These can be executed for validation purposes in the clingo Asp solver. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. A logical reasoning based decision making method for handling qualitative knowledge.
- Author
- Chen, Shuwei, Liu, Jun, and Xu, Yang
- Subjects
- *DECISION making, *APPROXIMATE reasoning
- Abstract
Successful decision-making analysis needs to exploit the advantages of both human analysts and computers, and human knowledge is usually expressed in a qualitative way. Computer-based approaches are good at handling quantitative data, but it remains challenging to structure qualitative knowledge well and incorporate it into decision analytics. This paper develops a logical reasoning based decision-making framework for handling qualitative human knowledge. In this framework, an algebraic structure is adopted for modelling qualitative human knowledge in a systematic way, and a logic-based approximate reasoning method is then proposed for inferring the final decision from the structured qualitative knowledge. By taking a non-classical logic as its formal foundation, the proposed logical reasoning based decision-making method is able to model and infer with qualitative human knowledge directly and rigorously, without numerical approximation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Modus tollens with respect to uninorms: U-Modus Tollens.
- Author
- Aguiló, Isabel, Riera, Juan Vicente, Suñer, Jaume, and Torrens, Joan
- Subjects
- *APPROXIMATE reasoning, *FUZZY logic
- Abstract
In fuzzy logic and approximate reasoning the inference rule given by the Modus Tollens usually derives into an inequality involving three logical operators: a conjunction, an implication function and a negation. Until now, in this scenario the conjunction has been commonly modeled by a t-norm, but recently the possibility of using a more general conjunction has been pointed out. In this work, we want to generalize the Modus Tollens inequality by using a conjunctive uninorm instead of a t-norm, leading to the so-called U -Modus Tollens. First, we give a study of this new property for implication functions in general and then we specially focus on residual implications derived from uninorms. In all cases, we prove that there are a lot of solutions of the U -Modus Tollens and we give a characterization of all the solutions in some particular cases. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
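For context, a compact restatement of the inequality the abstract refers to (my paraphrase in standard notation, not the paper's exact formulation):

```latex
% The classical fuzzy Modus Tollens inequality uses a t-norm T:
%   T(I(x,y), N(y)) <= N(x)   for all x, y in [0,1].
% The generalisation replaces T with a conjunctive uninorm U:
\[
  U\bigl(I(x,y),\,N(y)\bigr) \;\le\; N(x)
  \qquad \text{for all } x, y \in [0,1],
\]
% where I is an implication function and N a fuzzy negation; a triple
% (U, I, N) satisfying this is what the abstract calls U-Modus Tollens.
```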
13. n-Dimensional (S,N)-implications.
- Author
- Zanotelli, Rosana, Reiser, Renata, and Bedregal, Benjamin
- Subjects
- *APPROXIMATE reasoning, *STATISTICAL decision making, *FUZZY systems, *DECISION making, *TRIANGULAR norms, *FUZZY logic, *FUZZY sets
- Abstract
n-Dimensional fuzzy logic (n-DFL) has contributed to overcoming the insufficiency of traditional fuzzy logic in modeling imperfect and imprecise information coming from the differing opinions of many experts, by making it possible to model not only ordered but also repeated membership degrees. Thus, n-DFL provides a consolidated logical strategy for applied technologies, since the ordered evaluations provided by decision makers matter not only for selecting the best solutions to a decision-making problem but also for enabling their comparison. In this context, this paper studies n-dimensional fuzzy implications (n-DI) following distinct approaches: (i) analytical studies, presenting the most desirable properties, such as the neutrality, ordering, (contra-)symmetry, exchange and identity principles, and discussing their interrelations with examples; (ii) algebraic aspects, mainly related to left- and right-continuity of representable n-dimensional fuzzy t-conorms; and (iii) the generation of n-DI from existing fuzzy implications. As the most relevant contribution, prospective studies of the class of n-dimensional interval (S,N)-implications include results obtained from t-representable n-dimensional conorms and involutive n-dimensional fuzzy negations. These theoretical results are applied to model the approximate reasoning of inference schemes dealing with rule bases in n-dimensional interval fuzzy systems. A synthetic case study illustrates the solution of a decision-making problem in medical diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
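For orientation, the standard one-dimensional (S,N)-implication definition that the paper lifts to n dimensions (a textbook fact, not the paper's notation):

```latex
% An (S,N)-implication is built from a t-conorm S and a fuzzy negation N:
\[
  I_{S,N}(x, y) \;=\; S\bigl(N(x),\, y\bigr), \qquad x, y \in [0,1].
\]
% Example: S = max and N(x) = 1 - x give the Kleene-Dienes implication
% I(x,y) = max(1-x, y). The n-DI case applies such constructions to
% ordered tuples (x_1 <= ... <= x_n) of membership degrees.
```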
14. The Cubic Dynamic Uncertain Causality Graph: A Methodology for Temporal Process Modeling and Diagnostic Logic Inference.
- Author
- Dong, Chunling and Zhang, Qin
- Subjects
- *FAULT diagnosis, *LOGIC, *ALGORITHMS, *VECTOR error-correction models, *APPROXIMATE reasoning, *DYNAMICAL systems
- Abstract
To meet the demand for dynamic and highly reliable real-time fault diagnosis for complex systems, we extend the dynamic uncertain causality graph (DUCG) by proposing novel temporal causality modeling and reasoning methods. A new methodology, the Cubic DUCG, is therefore developed. It exploits an efficient scheme for compactly representing and accurately reasoning about the dynamic causalities in the system fault-spreading process. The Cubic DUCG is characterized by: 1) continuous generation of a causality graph that allows for causal connections penetrating among any number of time slices and discards the restrictive assumptions (about the underlying graph structure) upon which the existing research commonly relies; 2) a modeling scheme of complex causalities that includes dynamic negative feedback loops in a natural and intuitive manner; 3) a rigorous and reliable inference algorithm based on complete causalities that reflect real-time fault situations rather than on the cumulative aggregation of static time slices; and 4) some solutions to causality simplification and reduction, graphical transformation, and logical reasoning, for the sake of reducing the reasoning complexity. A series of fault diagnosis experiments on a nuclear power plant simulator verifies the accuracy, robustness, and efficiency of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. On the use of group theory to generalize elements of pairwise comparisons matrix: A cautionary note.
- Author
- Koczkodaj, W.W., Liu, F., Marek, V.W., Mazurek, J., Mazurek, M., Mikhailov, L., Özel, C., Pedrycz, W., Przelaskowski, A., Schumann, A., Smarzewski, R., Strzalka, D., Szybowski, J., and Yayli, Y.
- Subjects
- *APPROXIMATE reasoning, *SCIENTIFIC community, *GROUP theory
- Abstract
This paper examines the constricted use of group theory in the study of pairwise comparisons. The presented approach is based on the application of the celebrated Levi theorems of 1942 and 1943 for orderable groups. The theoretical foundation for multiplicative (ratio) pairwise comparisons is provided, and counterexamples are given to support the theory. In our opinion, the scientific community must be made aware of the limitations of using group theory in pairwise comparisons: by Levi's theorems, groups that are not torsion-free cannot be used for ratios. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Fuzzy implications: alpha migrativity and generalised laws of importation.
- Author
- Baczyński, Michał, Jayaram, Balasubramaniam, and Mesiar, Radko
- Subjects
- *FUNCTIONAL equations, *APPROXIMATE reasoning, *GENERALIZATION, *SATISFACTION
- Abstract
In this work, we discuss the law of α-migrativity as applied to fuzzy implication functions in a meaningful way. A generalisation of this law leads us to Pexider-type functional equations connected with the law of importation, viz., the generalised law of importation I(C(x, α), y) = I(x, J(α, y)) (GLI) and the generalised cross-law of importation I(C(x, α), y) = J(x, I(α, y)) (CLI), where C is a generalised conjunction. In this article we investigate only (GLI). We begin by showing that satisfaction of the law of importation by the pairs (C, I) and/or (C, J) does not necessarily lead to satisfaction of (GLI). Hence, we study the conditions under which these three laws are related. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
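For reference, the classical law of importation and the paper's generalised form (GLI), restated in clean notation; the closing example is a well-known textbook fact, not taken from the paper.

```latex
% Classical law of importation for a conjunction C and implication I:
\[
  I\bigl(C(x,\alpha),\,y\bigr) \;=\; I\bigl(x,\,I(\alpha,y)\bigr),
\]
% and the generalised law (GLI) allows a second implication J on the right:
\[
  I\bigl(C(x,\alpha),\,y\bigr) \;=\; I\bigl(x,\,J(\alpha,y)\bigr).
\]
% Example: the Lukasiewicz pair C(x,y) = max(0, x+y-1) and
% I(x,y) = min(1, 1-x+y) satisfies the classical law (the case J = I).
```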
17. Three-way decisions of rough vague sets from the perspective of fuzziness.
- Author
- Zhang, Qinghua, Zhao, Fan, Yang, Jie, and Wang, Guoyin
- Subjects
- *ROUGH sets, *FUZZY sets, *APPROXIMATE reasoning, *SEARCH algorithms, *DECISION making, *INFORMATION processing, *PROBABILISTIC databases
- Abstract
Vague sets, like intuitionistic fuzzy sets, are an extended model of fuzzy sets. Building on fuzzy sets, vague sets describe the membership degree of a vague concept by an interval value instead of a single value, and thus have, to a certain degree, a more powerful ability to process fuzzy information. Hence, when a target concept is characterized by vague sets, finding methods for making scientific and reasonable decisions becomes an essential issue. However, existing decision methods focus on decisions based on fuzzy concepts, and research on how to make three-way decisions based on vague concepts is still lacking. Therefore, in this paper, the concept of rough vague sets is proposed to construct a rough approximation framework for vague concepts. Then, the fuzziness of the existing approximation approaches is analyzed. Next, the improved step-vague set model, a better approximation approach than the existing ones, and an algorithm for searching for an improved step-vague set are proposed. Furthermore, based on improved step-vague sets, probabilistic rough vague sets and a three-way approximation model with shadowed sets are introduced. Finally, several illustrative examples and a related experiment verify the effectiveness and significance of the proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
18. Dual incremental fuzzy schemes for frequent itemsets discovery in streaming numeric data.
- Author
- Zheng, Hui, Li, Peng, Liu, Qing, Chen, Jinjun, Huang, Guangli, Wu, Junfeng, Xue, Yun, and He, Jing
- Subjects
- *INTEGER approximations, *DATA distribution, *DISCRETIZATION methods, *ASSOCIATION rule mining, *APPROXIMATE reasoning, *BIG data, *CACHE memory, *SEQUENTIAL pattern mining
- Abstract
• There is no need to re-visit previous batches of numeric data. • The time consumed and the estimation error of the two proposed schemes are much lower than for the traditional method, and remain stable as the amount of data increases. • Approximate support values of itemsets are proved in this paper, and confirmed on synthetic and real datasets, to converge as the amount of streaming data increases. • The errors of the approximate support values are likewise shown to converge to zero, meaning the approximate support values converge to their corresponding real support values. Discovering frequent itemsets is essential for finding association rules, yet too computationally expensive with existing algorithms. It is even more challenging to find frequent itemsets in streaming numeric data. The streaming characteristic means that the data cannot be scanned repetitively. The numeric characteristic requires that streaming numeric data be pre-processed into itemsets; e.g., fuzzy-set methods can transform numeric data into itemsets with non-integer membership values. This leads to the challenge that the frequencies of itemsets are usually not integers. To overcome such challenges, fast methods and stream-processing methods have been applied. However, existing algorithms usually either still need to re-visit some previous data multiple times or cannot count non-integer frequencies. Algorithms that re-visit previous data must sacrifice large memory spaces to cache those data and avoid repetitive scanning; with today's big streaming data, such a memory requirement often exceeds the capacity of many computers. Algorithms unable to count non-integer frequencies would estimate the non-integer frequencies of frequent itemsets very inaccurately if used with integer approximations of frequency counting. To solve these issues, in this paper we propose two incremental schemes for frequent itemset discovery that work efficiently with streaming numeric data. In particular, they are able to count non-integer frequencies without re-visiting any previous data. The key to their efficiency is extracting essential statistics that occupy much less memory than the raw data of the ongoing stream. This grants our schemes the advantages of 1) allowing non-integer counting, and thus natural integration with a fuzzy-set discretization method, to boost robustness and anti-noise capability for numeric data, 2) enabling the design of a decay ratio for different data distributions, which can be adapted to three general stream models: landmark, damped and sliding windows, and 3) achieving highly accurate fuzzy-itemset discovery with efficient stream processing. Experimental studies demonstrate the efficiency and effectiveness of our dual schemes on both synthetic and real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
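The sketch below illustrates only the core idea of non-integer (fuzzy) incremental support counting in one pass with a decay ratio; the fuzzifier, stream, decay value, and support threshold are all illustrative assumptions, not the paper's dual schemes.

```python
# Sketch: one-pass fuzzy itemset support with a damped (decayed) window.
from itertools import combinations

def fuzzify(temp):
    """Toy fuzzifier: map a temperature reading to fuzzy items."""
    return {
        "cold": max(0.0, min(1.0, (20 - temp) / 10)),
        "warm": max(0.0, 1 - abs(temp - 22) / 8),
        "hot":  max(0.0, min(1.0, (temp - 24) / 10)),
    }

decay = 0.98            # damped-window model; 1.0 gives the landmark model
weight, support = 0.0, {}

for reading in [12.0, 18.5, 23.0, 30.0, 27.5]:    # the numeric stream
    items = {k: v for k, v in fuzzify(reading).items() if v > 0}
    weight = decay * weight + 1.0
    for key in support:
        support[key] *= decay                      # age previous evidence
    for r in (1, 2):
        for combo in combinations(sorted(items), r):
            m = min(items[i] for i in combo)       # fuzzy itemset membership
            support[combo] = support.get(combo, 0.0) + m

frequent = {k: round(v / weight, 3) for k, v in support.items()
            if v / weight >= 0.2}                  # assumed minimum support
print(frequent)   # no previous record is ever revisited
```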
19. The three-way-in and three-way-out framework to treat and exploit ambiguity in data.
- Author
- Campagner, Andrea, Cabitza, Federico, and Ciucci, Davide
- Subjects
- *APPROXIMATE reasoning, *DECISION theory, *MACHINE learning, *AMBIGUITY, *PARTIALLY ordered sets, *DATA
- Abstract
In this paper, we address ambiguity, intended as a characteristic of any data expression for which the computational agent cannot associate a unique meaning, owing either to lack of information or to multiple interpretations of the same configuration. In particular, we propose and discuss ways in which a decision-support classifier can accept ambiguous data and make some (informative) value out of them for the decision maker. Towards this goal we propose a set of learning algorithms within what we call the three-way-in and three-way-out approach, that is, respectively, learning from partially labeled data and learning classifiers that can abstain. This approach is based on orthopartitions as a common representation framework, and on three-way decisions and evidence theory as tools to enable uncertain and approximate reasoning and inference. For both of the above learning settings, we provide experimental results and comparisons with standard Machine Learning techniques, and show the advantages and promising performance of the proposed approaches on a collection of benchmarks, including a real-world medical dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
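A minimal sketch of the "three-way-out" (abstaining) side of the framework, implemented here simply by thresholding posterior probabilities; the thresholds and model are assumptions, and the paper's orthopartition and evidence-theoretic machinery is not reproduced.

```python
# Sketch: a classifier that accepts, rejects, or abstains per instance.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(Xtr, ytr)

alpha, beta = 0.75, 0.25          # assumed accept / reject thresholds
p = clf.predict_proba(Xte)[:, 1]  # posterior of the positive class
decision = ["accept" if pi >= alpha else
            "reject" if pi <= beta else
            "abstain" for pi in p]

covered = [(d, t) for d, t in zip(decision, yte) if d != "abstain"]
acc = sum((d == "accept") == (t == 1) for d, t in covered) / len(covered)
print(f"abstained on {decision.count('abstain')}/{len(decision)}, "
      f"accuracy on covered cases: {acc:.3f}")
```

Abstaining trades coverage for accuracy on the committed cases, which is exactly the informative value the abstract attributes to non-commitment.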
20. Z-number based neural network structured inference system.
- Author
- Aliev, Rafik A., Babanli, M.B., and Guirimov, Babek G.
- Subjects
- *OPTIMIZATION algorithms, *APPROXIMATE reasoning, *DIFFERENTIAL evolution, *INFERENCE (Logic), *SYSTEM identification, *PARKINSON'S disease
- Abstract
A Z-number based Neural Network structured Inference System (ZNIS), whose rule base consists of linguistic Z-terms and is trainable with the Differential Evolution with Constraints (DEC) optimization algorithm, is suggested. The inference mechanism of the multi-layered ZNIS consists of a fuzzifier, a fuzzy rule base, an inference engine, and an output processor. Due to the use of extended fuzzy terms, each processing layer implements appropriate extended fuzzy operations, including the computation of fuzzy-valued rule firing strengths, fuzzy Level-2 aggregate outputs, and two consecutive Center of Gravity (COG) defuzzification procedures. Experiments with different versions of ZNIS have demonstrated that it is a universal modeling tool suitable for both approximate reasoning and functional mapping tasks. Random experiments on benchmark examples (among them simple functional mapping, Parkinson's disease, and non-linear system identification) have shown that ZNIS performance is equivalent to or better than FLS Type 2 and far superior to FLS Type 1, showing on average 2–3 times lower MSE. The main advantages of ZNIS over other inference systems are its better semantic expressive power, the higher degree of perception and interpretability of the linguistic rules by humans, and a higher confidence in the reliability of the achieved decision due to the transparency of the underlying decision-making mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Multiple-rules reasoning based on Triple I method on Atanassov's intuitionistic fuzzy sets.
- Author
- Zheng, Mucong and Liu, Yan
- Subjects
- *APPROXIMATE reasoning, *FUZZY sets, *FUZZY numbers
- Abstract
The Triple I method, proposed by Wang, is an important method for solving the FMP (fuzzy modus ponens) problem in fuzzy reasoning. In this paper, we extend the Triple I method to multiple-rules approximate reasoning on Atanassov's intuitionistic fuzzy sets. Firstly, we adopt the FITA (First Inference Then Aggregation) and FATI (First Aggregation Then Inference) patterns to solve the multiple-rules IFMP (intuitionistic fuzzy modus ponens) model and prove that the Triple I solutions of the model based on the two patterns are equivalent. Secondly, we view the multiple-rules IFMP problem as a reasoning problem based on the Triple I method and present a Multiple I method to solve the model. Moreover, we convert the multiple-rules IFMP model into two multiple-rules FMP models and propose the disassembled Multiple I method for the multiple-rules model. Finally, we provide the relationship between the two solutions of the Multiple I method for the multiple-rules IFMP model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
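Background for readers: the standard single-rule Triple I solution for FMP (Wang's formulation, a textbook fact that the paper extends to the intuitionistic multiple-rules setting):

```latex
% Given the rule A -> B and the premise A*, the Triple I principle asks
% for the smallest B* such that
%   (A(x) -> B(y)) -> (A*(x) -> B*(y)) = 1  for all x, y.
% For a residuated pair (T, ->), the solution has the closed form
\[
  B^{*}(y) \;=\; \sup_{x \in X}\, T\bigl(A^{*}(x),\; A(x) \rightarrow B(y)\bigr).
\]
% With n rules, FITA infers with each rule and then aggregates the n
% outputs; FATI first aggregates the rules and infers once.
```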
22. Interval-valued fuzzy inference based on aggregation functions.
- Author
- Li, Dechao and Zhu, Min
- Subjects
- *APPROXIMATE reasoning, *AGGREGATION operators, *FUZZY sets
- Abstract
Generalized modus ponens (GMP) and generalized modus tollens (GMT), the two basic patterns of approximate reasoning, aim to acquire reasonable imprecise conclusions from a collection of imprecise premises using inference rules. To solve the GMP and GMT problems in the interval-valued fuzzy setting, this paper presents an interval-valued A-compositional rule of inference (ACRI) method and a quintuple implication principle (QIP) method with interval-valued implications generated by A under any partial order, where A is an interval-valued aggregation function. To develop these methods, we first discuss interval-valued negations generated by an interval-valued aggregation function with any partial order. Some properties of interval-valued implications generated by interval-valued aggregation functions with an arbitrary order are then analyzed. We further investigate the ACRI and QIP methods with interval-valued implications generated by interval-valued aggregations to solve interval-valued fuzzy modus ponens (IFMP) and interval-valued fuzzy modus tollens (IFMT). Finally, two examples illustrate our proposed approaches using some special interval-valued aggregation functions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Dicovering approximation spaces and definability.
- Author
- Diker, Murat and Uğur, Ayşegül Altay
- Subjects
- *DEFINITION (Logic), *DEFINABILITY theory (Mathematical logic), *APPROXIMATION theory, *APPROXIMATE reasoning, *MATHEMATICAL functions
- Abstract
In this work, we define a category Cap of covering approximation spaces whose morphisms are functions satisfying a refinement property. We give the relations among Cap, the category Top of topological spaces and continuous functions, and the category Rere of reflexive approximation spaces and relation-preserving functions. Further, we discuss the textural versions diCap, dfDitop and diRere of these categories. Then we study definability in Cap with respect to five covering-based approximation operators. In particular, we observe that via the morphisms of Cap we may obtain more information about the subsets of the universe. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. A quantitative approach to reasoning about incomplete knowledge.
- Author
- She, Yanhong, He, Xiaoli, Qian, Yuhua, Xu, Weihua, and Li, Jinhai
- Subjects
- *EPISTEMIC logic, *REASONING, *APPROXIMATE reasoning, *ARTIFICIAL intelligence, *SEMANTICS
- Abstract
In this paper, we present a quantitative approach to reasoning about incomplete information. The study is conducted in MEL, a minimal epistemic logic relating modal languages to uncertainty theories. The proposed approach leads to two types of epistemic truth degrees of a proposition, and some related properties are derived. By means of a more general probability distribution on the set of epistemic states, two randomized versions of epistemic truth degrees are obtained. The connection between the notion of local probabilistic epistemic truth degree and belief functions is also established. Based upon the fundamental notion of the global epistemic truth degree, the notion of epistemic similarity degree is proposed, and a kind of pseudo-metric for approximate reasoning in MEL is thus derived. The obtained results provide a useful supplement to the existing studies in the sense that they offer a quantitative approach instead of the qualitative manner found in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. An empirically supported approach to the treatment of imprecision in vague reasoning.
- Author
- Velasco Benito, Gael, Sobrino Cerdeiriña, Alejandro, and Bugarín-Diz, Alberto
- Subjects
- *APPROXIMATE reasoning, *NATURAL languages, *LINGUISTIC models, *SPANISH language, *USER interfaces, *SEMANTICS
- Abstract
The aim of this paper is to propose a new approach to the automatic treatment of linguistic vagueness. Our motivation is the observation that most existing approaches to linguistic information convert vague meaning into crisp meaning through some mapping to precise measurements. As a result, existing approaches are adequate and easy to implement, but do not closely model the human thought process. To help alleviate this deficiency, we propose the use of linguistic relations to provide a natural language interface to an end user. We show a possible linguistic Prolog model based on an extension of the syntactic unification algorithm using synonymy and antonymy, as well as an extension of the resolution principle. Our approach does not aim to provide a well-founded formal semantics for such a linguistic Prolog, but a simple model supported by two experiments focused on the use of vague language, both executed in Spanish (an analysis of the data of the first experiment is also available in that language at [1]). Thus, the purpose of this paper is to contribute to the mechanization of approximate reasoning while being respectful of the semantics of the vague terms involved; i.e., by paying attention to how they are evaluated by linguistic users under experimentation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. A method to construct fuzzy implications–rotation construction.
- Author
- Su, Yong, Liu, Hua-Wen, and Pedrycz, Witold
- Subjects
- *FUZZY sets, *APPROXIMATE reasoning, *INFINITY (Mathematics), *FUZZY logic, *MATHEMATICAL functions
- Abstract
In this paper, an algebraic construction called rotation is introduced, which produces a fuzzy implication from a fuzzy implication. This construction method is similar to the rotation construction for triangular norms. An infinite number of new families of such fuzzy implications can be constructed in this way, which provides a broad spectrum of choices for, e.g., fuzzy connectives in fuzzy set theory. The preservation of the logical properties of the initial implication in the final one is investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
27. Approximate classification with web ontologies through evidential terminological trees and forests.
- Author
- Rizzo, Giuseppe, Fanizzi, Nicola, d'Amato, Claudia, and Esposito, Floriana
- Subjects
- *SEMANTIC Web, *DECISION trees, *ONTOLOGIES (Information retrieval), *DEMPSTER-Shafer theory, *RANDOM forest algorithms, *APPROXIMATE reasoning
- Abstract
In the context of the Semantic Web, assigning individuals to their respective classes is a fundamental reasoning service. It has been shown that, when purely deductive reasoning falls short, this problem can be solved as a prediction task accomplished through inductive classification models built upon the statistical evidence elicited from ontological knowledge bases. However, these data-driven alternative classification models may also turn out to be inadequate when instances are unevenly distributed over the various targeted classes. To cope with this issue, a framework based on logic decision trees and ensemble learning is proposed. The new models integrate the Dempster–Shafer theory with learning methods for terminological decision trees and forests. These enhanced classification models explicitly take into account the underlying uncertainty due to the variety of branches to be followed up to the classification leaves (in the context of a single tree) and/or to the different trees within the ensemble model (the forest). In this extended paper, we propose revised versions of the algorithms for learning Evidential Terminological Decision Trees and Random Forests, considering alternative heuristics and additional evidence combination rules with respect to our earlier preliminary works. A comprehensive and comparative empirical evaluation proves the effectiveness and stability of the classification models, especially in the form of ensembles. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
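Since the models above fuse evidence from multiple trees, a minimal sketch of Dempster's rule of combination may help; the frame of discernment and the two mass functions are illustrative assumptions, not the paper's combination rules.

```python
# Sketch: Dempster's rule combining two basic belief assignments whose
# focal elements are frozensets of class labels.
def dempster(m1, m2):
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass on empty intersections
    if conflict == 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

frame = frozenset({"A", "B", "C"})
# Two trees' (hypothetical) outputs as basic belief assignments:
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.3, frame: 0.1}
m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.2, frame: 0.3}
print(dempster(m1, m2))   # mass concentrates on {"A"}
```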
28. From propositional logic to plausible reasoning: A uniqueness theorem.
- Author
- Van Horn, Kevin S.
- Subjects
- *PROPOSITIONAL calculus, *PLAUSIBILITY (Logic), *APPROXIMATE reasoning, *UNIQUENESS (Mathematics), *NUMERICAL analysis
- Abstract
We consider the question of extending propositional logic to a logic of plausible reasoning, and posit four requirements that any such extension should satisfy. Each is a requirement that some property of classical propositional logic be preserved in the extended logic; as such, the requirements are simpler and less problematic than those used in Cox's Theorem and its variants. As with Cox's Theorem, our requirements imply that the extended logic must be isomorphic to (finite-set) probability theory. We also obtain specific numerical values for the probabilities, recovering the classical definition of probability as a theorem, with truth assignments that satisfy the premise playing the role of the "possible cases." [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
29. Three-way attribute reducts.
- Author
- Zhang, Xianyong and Miao, Duoqian
- Subjects
- *DECISION making, *APPLICATION software, *APPROXIMATE reasoning, *MATHEMATICAL combinations, *GENERALIZATION
- Abstract
Three-way decisions are a fundamental methodology with extensive applications, and attribute reducts play an important role in data analyses. The combination of the two topics has theoretical significance and application prospects, but has rarely been studied directly so far. In this paper, three-way decisions are introduced into attribute reducts, and three-way attribute reducts are systematically investigated. Firstly, classical qualitative reducts are reviewed via the dependency degree. The dependency degree is then refined, through approximation analyses, into a controllable measure: the relative dependency degree, which is monotonic and measures attribute dependency in relative terms. Given an approximation bar, the relative dependency degree defines applicable quantitative reducts, which approach, expand, and weaken the classical qualitative reducts. This type of quantitative reduct is in fact the positive quantitative reduct of the three-way reducts. Three-way quantitative reducts are thus established by the relative dependency degree and dual thresholds. The positive, boundary, and negative quantitative reducts divide the power set of the condition attribute set and thus yield acceptance, non-commitment, and rejection decisions, respectively; they exhibit a potential derivation from the higher level to the lower level. Furthermore, three-way qualitative reducts are established by degeneration to implement three-way decisions, and three-way quantitative and qualitative reducts exhibit approximation, expansion, and strength; superiority analyses show that three-way reducts improve on the latent two-way reducts with only acceptance and rejection decisions. Finally, three-way reducts are practically illustrated on an example of decision tables. By developing the controllable relative dependency degree, three-way reducts implement both a quantitative generalization of qualitative reducts and a structural completion of attribute reducts. The relevant study provides new insight into both three-way decisions and attribute reducts. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
30. Approximation enhancement for stochastic Bayesian inference.
- Author
- Friedman, Joseph S., Droulez, Jacques, Bessière, Pierre, Lobo, Jorge, and Querlioz, Damien
- Subjects
- *BAYESIAN analysis, *APPROXIMATE reasoning, *STOCHASTIC approximation, *REAL-time computing, *AUTOCORRELATION (Statistics)
- Abstract
Advancements in autonomous robotic systems have been impeded by the lack of a specialized computational hardware that makes real-time decisions based on sensory inputs. We have developed a novel circuit structure that efficiently approximates naïve Bayesian inference with simple Muller C-elements. Using a stochastic computing paradigm, this system enables real-time approximate decision-making with an area-energy-delay product nearly one billion times smaller than a conventional general-purpose computer. In this paper, we propose several techniques to improve the approximation of Bayesian inference by reducing stochastic bitstream autocorrelation. We also evaluate the effectiveness of these techniques for various naïve inference tasks and discuss hardware considerations, concluding that these circuits enable approximate Bayesian inferences while retaining orders-of-magnitude hardware advantages compared to conventional general-purpose computers. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
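A hedged simulation of the core stochastic-computing effect the abstract relies on: a Muller C-element driven by independent Bernoulli bitstreams settles to an output probability that performs a normalized two-hypothesis Bayesian fusion. The stream lengths and input probabilities are assumptions for illustration.

```python
# Sketch: a Muller C-element switches its output only when both inputs
# agree; fed independent bitstreams with probabilities p1 and p2, its
# output probability approaches p1*p2 / (p1*p2 + (1-p1)*(1-p2)).
import random

def c_element(bits1, bits2):
    out, state = [], 0
    for a, b in zip(bits1, bits2):
        if a == b:            # inputs agree: output follows them
            state = int(a)
        out.append(state)     # inputs disagree: output holds its state
    return out

random.seed(1)
n, p1, p2 = 200_000, 0.8, 0.7
s1 = [random.random() < p1 for _ in range(n)]
s2 = [random.random() < p2 for _ in range(n)]
observed = sum(c_element(s1, s2)) / n
expected = p1 * p2 / (p1 * p2 + (1 - p1) * (1 - p2))
print(round(observed, 3), round(expected, 3))   # both close to 0.903
```

Bitstream autocorrelation, which the paper works to reduce, degrades this approximation when the streams are not independent.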
31. An argumentation system for defeasible reasoning.
- Author
- Amgoud, Leila and Nouioua, Farid
- Subjects
- *DEFEASIBLE reasoning, *NONMONOTONIC logic, *SEMANTICS, *APPROXIMATE reasoning, *APPROXIMATION theory
- Abstract
Rule-based argumentation systems are developed for reasoning about defeasible information. They take as input a theory made of a set of facts , a set of strict rules , which encode strict information, and a set of defeasible rules which describe general behavior with exceptional cases. They build arguments by chaining such rules, define attacks between them, use a semantics for evaluating the arguments, and finally identify the plausible conclusions that follow from the theory. Undercutting is one of the main attack relations of such systems. It consists of blocking the application of defeasible rules when their exceptional cases hold. In this paper, we consider this relation for capturing all the different conflicts in a theory. We present the first argumentation system that uses only undercutting, and show that it satisfies the rationality postulates proposed in the literature. Finally, we fully characterize both its extensions and its plausible conclusions under various acceptability semantics. Indeed, we show full correspondences between extensions and sub-theories of the theory under which the argumentation system is built. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
32. Learning Gaussian graphical models with fractional marginal pseudo-likelihood.
- Author
- Leppä-aho, Janne, Pensar, Johan, Roos, Teemu, and Corander, Jukka
- Subjects
- *GAUSSIAN processes, *APPROXIMATE reasoning, *FRACTIONAL programming, *GRAPH theory, *ANALYTICAL solutions, *STRUCTURAL learning theory
- Abstract
We propose a Bayesian approximate inference method for learning the dependence structure of a Gaussian graphical model. Using pseudo-likelihood, we derive an analytical expression to approximate the marginal likelihood for an arbitrary graph structure without invoking any assumptions about decomposability. The majority of the existing methods for learning Gaussian graphical models are either restricted to decomposable graphs or require specification of a tuning parameter that may have a substantial impact on learned structures. By combining a simple sparsity inducing prior for the graph structures with a default reference prior for the model parameters, we obtain a fast and easily applicable scoring function that works well for even high-dimensional data. We demonstrate the favourable performance of our approach by large-scale comparisons against the leading methods for learning non-decomposable Gaussian graphical models. A theoretical justification for our method is provided by showing that it yields a consistent estimator of the graph structure. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Improved spectral clustering using three-way decisions.
- Author
- Khan, Shahzad, Khan, Omar, Azam, Nouman, and Ullah, Ihsan
- Subjects
- *MACHINE learning, *NOISE control, *TERNARY system, *MEASUREMENT errors, *APPROXIMATE reasoning
- Abstract
Spectral clustering is an unsupervised machine learning algorithm that groups similar data points into clusters. The method generally works by modeling pairwise data points as input similarity matrices and then performing their eigen-decomposition. Clustering is then carried out from this high-dimensional representation by utilizing spectral properties: several eigen-points are iteratively mapped and merged into a lower-dimensional subspace. In contrast to traditional methods, spectral clustering is well poised to solve problems involving complex patterns. However, the approach is sensitive to outliers, measurement errors, or perturbations in the original data, which appear as increased levels of spectral noise, especially in the higher-ordered eigenvectors. Consequently, the application of pre-processing and noise reduction techniques is important for its performance. In this article, we address this issue by introducing a three-way decision based approach to spectral clustering in order to make it insensitive to noise. Three-way decisions are classically applied to problems involving uncertainty and follow a ternary classification system with actions of acceptance, rejection, and non-commitment. The proposed approach is tested on various standard datasets for verification and validation purposes. Results on these datasets demonstrate that the proposed approach outperforms classical spectral clustering by an average of 30%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
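A hedged sketch of grafting three-way (accept/reject/defer) decisions onto spectral clustering, not the authors' algorithm: cluster in a spectral embedding, then defer the points whose two nearest centroids are almost equally close. The dataset, embedding settings, and deferral threshold are assumptions.

```python
# Sketch: spectral embedding + k-means, with non-commitment on ambiguous points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)
Z = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                      n_neighbors=10, random_state=0).fit_transform(X)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
d = np.linalg.norm(Z[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
d.sort(axis=1)
margin = d[:, 0] / d[:, 1]           # near 1 means ambiguous, near 0 clear-cut

labels = km.labels_.astype(object)
labels[margin > 0.8] = "defer"       # non-commitment region (assumed threshold)
print(sum(l == "defer" for l in labels), "points deferred of", len(labels))
```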
34. Special issue on "Uncertainty in Cloud Computing: Concepts, Challenges and Current Solutions".
- Author
- Mezni, Haithem, Aridhi, Sabeur, Tchernykh, Andrei, and Hadjali, Allel
- Subjects
- *SYSTEMS on a chip, *CLOUD computing, *UNCERTAINTY, *APPROXIMATE reasoning, *DATA security failures, *CONCEPTS
- Published
- 2019
- Full Text
- View/download PDF
35. An inquiry into approximate operations on fuzzy numbers.
- Author
- Brunelli, Matteo and Mezei, József
- Subjects
- *APPROXIMATE reasoning, *FUZZY numbers, *COMPUTER simulation, *FUZZY decision making, *FUZZY arithmetic
- Abstract
Operations on fuzzy numbers have been a cornerstone in the development of fuzzy modeling and computing with words. Although exact operations are commonly defined by the extension principle, many applications employ approximate operations. At present, despite their wide use, there is no evidence on the goodness of approximate operations. By means of both numerical simulations and theoretical results, in this paper we present an analysis of approximate operations on fuzzy numbers. By focusing on the ranking and defuzzification procedures as essential tools in fuzzy decision making problems, we are going to study the errors produced by the application of approximate operations. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
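The toy computation below illustrates the kind of gap the paper studies, using standard facts about triangular fuzzy numbers (not the authors' analysis): the common parameter-wise approximate product agrees with the exact extension-principle product only at alpha = 0 and alpha = 1.

```python
# Sketch: approximate vs. exact product of positive triangular fuzzy numbers.
def approx_product(a, b):
    """Common triangular approximation: multiply parameters pointwise."""
    return tuple(x * y for x, y in zip(a, b))

def exact_cut(a, alpha):
    """Alpha-cut [left, right] of a triangular number a = (l, m, r)."""
    l, m, r = a
    return (l + alpha * (m - l), r - alpha * (r - m))

A, B = (1.0, 2.0, 3.0), (2.0, 3.0, 4.0)
P = approx_product(A, B)                       # (2, 6, 12)

for alpha in (0.0, 0.5, 1.0):
    (al, ar), (bl, br) = exact_cut(A, alpha), exact_cut(B, alpha)
    exact = (al * bl, ar * br)                 # interval product (positive case)
    approx = exact_cut(P, alpha)
    print(alpha, "exact:", exact, "approx:", approx)
# The two agree at alpha = 0 and 1 but differ in between: the exact
# product has curved sides, one source of the errors analysed above.
```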
36. Feature selection and approximate reasoning of large-scale set-valued decision tables based on α-dominance-based quantitative rough sets.
- Author
- Zhang, Hong-Ying and Yang, Shu-Yun
- Subjects
- *APPROXIMATE reasoning, *APPROXIMATION theory, *SET theory, *ROUGH sets, *INFORMATION theory
- Abstract
Set-valued data are a common type of data for characterizing uncertain and missing information. Traditional dominance-based rough sets cannot efficiently deal with large-scale set-valued decision tables and usually neglect the disjunctive semantics of sets. In this paper, we propose a general framework of feature selection and approximate reasoning for large-scale set-valued information tables by integrating quantitative rough sets and dominance-based rough sets. Firstly, we define two new partial orders for set-valued data via the conjunctive and disjunctive semantics of a set. Secondly, based on the α-disjunctive dominance relation and the α-conjunctive dominance relation defined by the inclusion measure, we present α-dominance-based quantitative rough set models for these two types of set-valued decision tables. Furthermore, we study the issue of feature selection in set-valued decision tables by employing α-dominance-based quantitative rough set models and discuss the relationships between the relative reductions and discernibility matrices. We also present approximate reasoning models based on α-dominance-based quantitative rough sets. Finally, the application of the approach is illustrated on some real-world data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
37. Computing lower and upper expected first-passage and return times in imprecise birth–death chains.
- Author
- Lopatatzidis, Stavros, De Bock, Jasper, and de Cooman, Gert
- Subjects
- *MARKOV processes, *NONLINEAR equations, *MATHEMATICAL bounds, *OPERATOR theory, *NUMERICAL analysis, *APPROXIMATE reasoning
- Abstract
We provide simple methods for computing exact bounds on expected first-passage and return times in finite-state birth–death chains, when the transition probabilities are imprecise, in the sense that they are only known to belong to convex closed sets of probability mass functions. In order to do that, we model these so-called imprecise birth–death chains as a special type of time-homogeneous imprecise Markov chain, and use the theory of sub- and supermartingales to define global lower and upper expectation operators for them. By exploiting the properties of these operators, we construct a simple system of non-linear equations that can be used to efficiently compute exact lower and upper bounds for any expected first-passage or return time. We also discuss two special cases: a precise birth–death chain, and an imprecise birth–death chain for which the transition probabilities belong to linear-vacuous mixtures. In both cases, our methods simplify even more. We end the paper with some numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
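As background, expected first-passage times in a precise birth-death chain solve a small linear system; the sketch below computes that precise baseline only (it does not compute the paper's imprecise lower and upper bounds). The transition matrix is an assumption.

```python
# Sketch: expected first-passage times to a target state in a precise
# birth-death chain, via the hitting-time system h = 1 + Q h on the
# non-target states.
import numpy as np

def expected_first_passage(P, target):
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]                  # chain restricted to non-target states
    h = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(n)
    out[keep] = h                              # h[target] = 0 by definition
    return out

# Birth-death chain on {0,1,2,3}: moves only to neighbours (or stays).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.2, 0.5, 0.0],
              [0.0, 0.3, 0.2, 0.5],
              [0.0, 0.0, 0.4, 0.6]])
print(expected_first_passage(P, target=3))    # expected steps i -> 3
```

In the imprecise setting studied in the paper, the transition rows range over convex sets, and the analogous bounds come from a non-linear system built with lower and upper expectation operators.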
38. Distributional logic programming for Bayesian knowledge representation.
- Author
- Angelopoulos, Nicos and Cussens, James
- Subjects
- *KNOWLEDGE representation (Information theory), *LOGIC programming, *BAYESIAN analysis, *PROBABILISTIC inference, *APPROXIMATE reasoning, *STATISTICAL models
- Abstract
We present a formalism for combining logic programming and its flavour of nondeterminism with probabilistic reasoning. In particular, we focus on representing prior knowledge for Bayesian inference. Distributional logic programming (Dlp) is considered in the context of a class of generative probabilistic languages. A characterisation based on probabilistic paths, which can play a central role in clausal probabilistic reasoning, is presented. We illustrate how the characterisation can be utilised to clarify derived distributions with regard to mixing the logical and probabilistic constituents of generative languages. We use this operational characterisation to define a class of programs that exhibit probabilistic determinism. We show how Dlp can be used to define generative priors over statistical model spaces. For example, a single program can generate all possible Bayesian networks having N nodes while at the same time defining a prior that penalises networks with large families. Two classes of statistical models are considered: Bayesian networks, and classification and regression trees. Finally we discuss: (1) a Metropolis–Hastings algorithm that can take advantage of the defined priors and the probabilistic choice points in the prior programs, and (2) its application to real-world machine learning tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
39. An extended depth-first search algorithm for optimal triangulation of Bayesian networks.
- Author
- Li, Chao and Ueno, Maomi
- Subjects
- *SEARCH algorithms, *BAYESIAN analysis, *TRIANGULATION, *TREE graphs, *PROBABILISTIC inference, *COMPUTATIONAL complexity, *APPROXIMATE reasoning
- Abstract
The junction tree algorithm is currently the most popular algorithm for exact inference on Bayesian networks. To improve the time complexity of the junction tree algorithm, we need to find a triangulation with the optimal total table size. For this purpose, Ottosen and Vomlel proposed a depth-first search (DFS) algorithm, together with several techniques to improve it, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of that algorithm leave much room for improvement. First, the dynamic clique maintenance may recompute some cliques. Second, in the worst case, the DFS algorithm explores the search space of all elimination orders, which has size n!, where n is the number of variables in the Bayesian network. To mitigate these problems, we propose an extended depth-first search (EDFS) algorithm. The new EDFS algorithm introduces the following two techniques as improvements to the DFS algorithm: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach. This improvement decreases the overhead cost of the DFS algorithm, and the pivot clique pruning reduces the size of the search space by a factor of O(n²). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
40. Relations arising from coverings and their topological structures.
- Author
- Liu, Guilong, Hua, Zheng, and Zou, Jiyang
- Subjects
- *TOPOLOGICAL spaces, *ROUGH sets, *APPROXIMATION theory, *INVERSE problems, *APPROXIMATE reasoning
- Abstract
Covering rough sets are an important extension of Pawlak rough sets. This paper studies the relations arising from coverings and their topological structures. Every covering induces a reflexive and transitive relation. We represent the approximate pairs proposed by Ma (2015) [20] with different combinations of a relation and its inverse. Based on this representation, we give the relationship among approximate pairs. We also consider the topological structures induced by these lower approximations and establish the relationship among these topologies. The results show that the approximate pairs can be precisely characterized by a relation and its inverse. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
41. Comparison of reduction in formal decision contexts.
- Author
- Li, Jinhai, Aswani Kumar, Cherukuri, Mei, Changlin, and Wang, Xizhao
- Subjects
- *MATHEMATICAL simplification, *COMPARATIVE studies, *DECISION making, *PHILOSOPHICAL analysis, *APPROXIMATE reasoning
- Abstract
In formal concept analysis, many reduction methods have recently been proposed for formal decision contexts, each designed to reduce formal decision contexts for a particular purpose. However, little attention has been paid to comparing their differences from various perspectives. In fact, this problem is important because such a comparison provides evidence for selecting an appropriate reduction method in a given case. To address this problem, our study focuses on clarifying the relationships among the existing reduction methods for formal decision contexts. Firstly, we give a rule-based review of the existing reduction methods, revealing the type of rules that each of them preserves. Secondly, we analyze the relationships among the consistencies introduced by the existing reduction methods. More specifically, Wei's first consistency (see [39]) is stronger than the others, while her second one is weaker than the rest except Wu's consistency (see [43]). Finally, we compare the existing reductions, concluding that Li's reduction (see [14]), which maintains the non-redundant decision rules of a formal decision context, is coarser than the others. The results obtained in this paper help users select an appropriate reduction method that meets their requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
42. A two-phase method for extracting explanatory arguments from Bayesian networks.
- Author
-
Timmer, Sjoerd T., Meyer, John-Jules Ch., Prakken, Henry, Renooij, Silja, and Verheij, Bart
- Subjects
- *
APPROXIMATE reasoning , *BAYESIAN analysis , *PROBABILITY theory , *GRAPH theory , *DEBATE , *MATHEMATICAL models - Abstract
Errors in reasoning about probabilistic evidence can have severe consequences. In the legal domain, a number of recent miscarriages of justice emphasise how severe these consequences can be. These cases, in which forensic evidence was misinterpreted, have ignited a scientific debate on how and when probabilistic reasoning can be incorporated in (legal) argumentation. One promising approach is to use Bayesian networks (BNs), which are well-known scientific models for probabilistic reasoning. For non-statistical experts, however, Bayesian networks may be hard to interpret; because their inner workings are complicated, they may appear to be black-box models. Argumentation models, by contrast, can be used to show how certain results are derived in a way that naturally corresponds to everyday reasoning. In this paper we propose to explain the inner workings of a BN in terms of arguments. We formalise a two-phase method for extracting probabilistically supported arguments from a Bayesian network: first, from a Bayesian network we construct a support graph, and, second, given a set of observations we build arguments from that support graph. Such arguments can facilitate the correct interpretation and explanation of the relation between hypotheses and evidence that is modelled in the Bayesian network. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2017
- Full Text
- View/download PDF
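To make the two-phase method in the record above concrete, here is a much-simplified Python sketch: phase one stands in for support-graph construction with plain moralization of the BN's DAG, and phase two reads off one shortest chain per observation as an argument skeleton. The real support graph is built more carefully around the variable of interest, so treat this only as an illustration of the pipeline's shape.

```python
from collections import deque

def moralize(parents):
    """Undirected (moral) graph of a BN: connect each node to its parents
    and 'marry' co-parents. A simplified stand-in for the paper's support
    graph, which is constructed around a specific variable of interest."""
    g = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            g[v].add(p); g[p].add(v)
        for a in ps:
            for b in ps:
                if a != b:
                    g[a].add(b)
    return g

def evidence_chains(parents, evidence, hypothesis):
    """Phase 2 (sketched): one shortest chain per observation, read as the
    skeleton of an argument from that observation to the hypothesis."""
    g = moralize(parents)
    chains = {}
    for e in evidence:
        prev, seen = {e: None}, {e}
        q = deque([e])
        while q:
            u = q.popleft()
            if u == hypothesis:
                break
            for w in g[u] - seen:
                seen.add(w); prev[w] = u; q.append(w)
        path, u = [], hypothesis
        while u is not None:
            path.append(u); u = prev.get(u)
        # path ends at e exactly when the hypothesis was reached from e
        chains[e] = path[::-1] if path[-1] == e else None
    return chains

bn = {'H': [], 'E1': ['H'], 'M': ['H'], 'E2': ['M']}
print(evidence_chains(bn, ['E1', 'E2'], 'H'))
```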
43. Interval type-2 fuzzy decision making.
- Author
-
Runkler, Thomas, Coupland, Simon, and John, Robert
- Subjects
- *
INTERVAL functions , *FUZZY sets , *DECISION making , *MATHEMATICAL functions , *APPROXIMATE reasoning - Abstract
This paper concerns itself with decision making under uncertainty and the consideration of risk. Type-1 fuzzy logic, by its (essentially) crisp nature, is limited in modelling decision making, as there is no uncertainty in the membership function. We are interested in the role that interval type-2 fuzzy sets might play in enhancing decision making. Previous work by Bellman and Zadeh considered decision making to be based on goals and constraints; they deployed type-1 fuzzy sets. This paper extends this notion to interval type-2 fuzzy sets and presents a new approach to using interval type-2 fuzzy sets in a decision making situation, taking into account the risk associated with the decision. The explicit consideration of risk levels increases the solution space of the decision process and thus enables better decisions. We explain the new approach and provide two examples to show how it works. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2017
- Full Text
- View/download PDF
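A minimal sketch of the decision scheme in the record above: each alternative carries an interval membership under the goal and the constraint (the interval type-2 ingredient), the Bellman–Zadeh minimum combines them, and a risk parameter blends the interval's endpoints before ranking. The linear blend and the parameter name `risk` are assumptions, not the paper's exact formulation.

```python
def it2_decision(goal, constraint, risk=0.5):
    """Bellman-Zadeh decision making lifted to interval type-2 sets: each
    alternative maps to a (lower, upper) membership interval; the decision
    set is the pointwise min of goal and constraint, and risk in [0, 1]
    (0 = pessimistic, 1 = optimistic; an assumption here) blends the
    interval endpoints before ranking."""
    best, best_score = None, -1.0
    for x in goal:
        lo = min(goal[x][0], constraint[x][0])
        hi = min(goal[x][1], constraint[x][1])
        score = (1 - risk) * lo + risk * hi
        if score > best_score:
            best, best_score = x, score
    return best, best_score

goal       = {'a': (0.4, 0.9), 'b': (0.6, 0.7)}
constraint = {'a': (0.5, 0.8), 'b': (0.6, 0.8)}
print(it2_decision(goal, constraint, risk=0.2))  # cautious: favours 'b'
print(it2_decision(goal, constraint, risk=0.9))  # risk-seeking: favours 'a' here
```

Varying the risk level changes which alternative wins, which is the sense in which explicit risk enlarges the solution space of the decision process.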
44. An evidence clustering DSmT approximate reasoning method for more than two sources.
- Author
-
Guo, Qiang, He, You, Jian, Tao, Wang, Haipeng, and Xia, Shutao
- Subjects
- *
COMPUTATIONAL complexity , *APPROXIMATE reasoning , *COMPUTER simulation , *MATHEMATICAL analysis , *DIGITAL signal processing - Abstract
Due to the huge computational complexity of Dezert–Smarandache Theory (DSmT), its applications, especially to multi-source (more than two sources) complex fusion problems, have been limited. To obtain approximate reasoning results highly similar to those of the Proportional Conflict Redistribution 6 (PCR6) rule in the DSmT framework (DSmT + PCR6) while keeping the computational complexity low, an evidence clustering DSmT approximate reasoning method for more than two sources is proposed. Firstly, the focal elements of each body of evidence are clustered into two sets according to their mass assignments. Secondly, convex approximate fusion results are obtained by the new DSmT approximation formula for more than two sources. Thirdly, the final approximate fusion results are obtained by a normalization step. Analysis of computational complexity shows that the proposed method costs much less computation than DSmT + PCR6. The simulation experiments show that the method obtains very similar approximate fusion results while needing much less computing time than DSmT + PCR6; especially when the numbers of sources and focal elements are large, its advantages are remarkable. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
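The three steps summarised in the record above can be sketched for a single mass function as follows; the mean-mass threshold used for clustering and the bare renormalisation are assumptions standing in for the paper's actual clustering rule and convex approximation formula.

```python
def cluster_focal_elements(m):
    """Step 1 (sketched): split the focal elements of one mass function
    into a 'major' and a 'minor' set by comparing each mass to the mean
    mass. The paper's actual clustering rule may differ; the mean
    threshold is an assumption."""
    mean = sum(m.values()) / len(m)
    major = {a: v for a, v in m.items() if v >= mean}
    minor = {a: v for a, v in m.items() if v < mean}
    return major, minor

def normalize(m):
    """Step 3: renormalise approximate fusion results to sum to one."""
    s = sum(m.values())
    return {a: v / s for a, v in m.items()}

m1 = {'A': 0.5, 'B': 0.3, 'AuB': 0.15, 'C': 0.05}
major, minor = cluster_focal_elements(m1)
print(major)             # {'A': 0.5, 'B': 0.3}
print(normalize(major))  # coarse approximation kept for the convex fusion step
```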
45. Causal compositional models in valuation-based systems with examples in specific theories.
- Author
-
Jiroušek, Radim and Shenoy, Prakash P.
- Subjects
- *
PROBABILITY theory , *BERNSTEIN polynomials , *EPISTEMIC logic , *MODAL logic , *APPROXIMATE reasoning - Abstract
We show that Pearl's causal networks can be described using causal compositional models (CCMs) in the valuation-based systems (VBS) framework. One major advantage of using the VBS framework is that, as VBS is a generalization of several uncertainty theories (e.g., probability theory, a version of possibility theory where combination is the product t-norm, Spohn's epistemic belief theory, and Dempster–Shafer belief function theory), CCMs, initially described in probability theory, are now described in all uncertainty calculi that fit in the VBS framework. We describe conditioning and interventions in CCMs. Another advantage of using CCMs in the VBS framework is that both conditioning and intervention can be described in an elegant and unifying algebraic way for the same CCM without having to do any graphical manipulations of the causal network. We describe how conditioning and intervention can be computed for a simple example with a hidden (unobservable) variable. Also, we illustrate the algebraic results using numerical examples in some of the specific uncertainty calculi mentioned above. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
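The algebraic treatment in the record above rests on a composition operator; in the probabilistic calculus, composing f with g keeps f intact and borrows from g the conditional of g's extra variables given the overlap. Below is a minimal Python sketch for discrete tables; the tuple-keyed dict encoding is a choice made here for brevity, and zero marginals in g are simply skipped.

```python
def marginal(dist, vars_, keep):
    """Marginalise a tuple-keyed distribution onto the variables in keep."""
    out, idx = {}, [vars_.index(v) for v in keep]
    for assign, p in dist.items():
        key = tuple(assign[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def compose(f, fvars, g, gvars):
    """Right composition f |> g: a distribution over fvars + new gvars
    equal to f * g / (g marginalised onto the overlap), so f's marginal
    on the overlapping variables is preserved."""
    overlap = [v for v in fvars if v in gvars]
    g_marg = marginal(g, gvars, overlap)
    new = [v for v in gvars if v not in fvars]
    out = {}
    for fa, pf in f.items():
        fa_map = dict(zip(fvars, fa))
        key = tuple(fa_map[v] for v in overlap)
        if g_marg.get(key, 0.0) == 0.0:
            continue
        for ga, pg in g.items():
            ga_map = dict(zip(gvars, ga))
            if all(ga_map[v] == fa_map[v] for v in overlap):
                assign = fa + tuple(ga_map[v] for v in new)
                out[assign] = out.get(assign, 0.0) + pf * pg / g_marg[key]
    return out, fvars + new

# P(X) composed with P(X, Y): a joint whose X-marginal equals f's.
f = {(0,): 0.3, (1,): 0.7}
g = {(0, 0): 0.2, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.3}
h, hvars = compose(f, ['X'], g, ['X', 'Y'])
print(hvars, h)  # {(0,0): 0.15, (0,1): 0.15, (1,0): 0.35, (1,1): 0.35}
```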
46. Active classification using belief functions and information gain maximization.
- Author
-
Reineking, Thomas
- Subjects
- *
PROBABILISTIC databases , *DEMPSTER-Shafer theory , *APPROXIMATE reasoning , *APPROXIMATION theory , *REASONING - Abstract
Obtaining reliable estimates of the parameters of a probabilistic classification model is often challenging because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem in which features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task, comparing different evidential and Bayesian methods for obtaining likelihoods from training data and investigating the influence of different uncertainty measures on the feature selection process. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
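The selection criterion in the record above reduces, in the Bayesian special case, to the expected reduction of class entropy; the sketch below computes that quantity for one candidate feature. The object-recognition toy numbers are invented, and the paper's evidential uncertainty measures are not shown.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def expected_info_gain(prior, likelihood, feature):
    """Expected reduction in class entropy from observing `feature`, with
    likelihood[feature][value][cls] = P(value | cls). Shannon entropy is
    the Bayesian special case of the uncertainty measures the paper
    compares for belief functions."""
    h_after = 0.0
    for value, p_v_given_c in likelihood[feature].items():
        p_value = sum(p_v_given_c[c] * prior[c] for c in prior)
        if p_value == 0:
            continue
        posterior = {c: p_v_given_c[c] * prior[c] / p_value for c in prior}
        h_after += p_value * entropy(posterior)
    return entropy(prior) - h_after

prior = {'cup': 0.5, 'bowl': 0.5}
likelihood = {'has_handle': {'yes': {'cup': 0.9, 'bowl': 0.1},
                             'no':  {'cup': 0.1, 'bowl': 0.9}}}
print(expected_info_gain(prior, likelihood, 'has_handle'))  # ~0.531 bits
```

Sequential selection then just picks, at each step, the feature with the largest expected gain under the current belief and updates the belief with the observed value.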
47. Proposition and learning of some belief function contextual correction mechanisms.
- Author
-
Pichon, Frédéric, Mercier, David, Lefèvre, Éric, and Delmotte, François
- Subjects
- *
APPROXIMATE reasoning , *REASONING , *COMPUTATIONAL complexity , *ELECTRONIC data processing , *MACHINE theory - Abstract
Knowledge about the quality of a source can take several forms: it may, for instance, relate to its truthfulness or to its relevance, and may even be uncertain. Of particular interest in this paper is that such knowledge may also be contextual; for instance, the reliability of a sensor may be known to depend on the actual object observed. Various tools, called correction mechanisms, have been developed within the theory of belief functions to take into account knowledge about the quality of a source. Yet only a single tool is available to account for contextual knowledge about the quality of a source, and specifically about its relevance. There is thus some lack of flexibility, since contextual knowledge about the quality of a source need not be restricted to its relevance. The first aim of this paper is therefore to enlarge the set of tools available in belief function theory for dealing with contextual knowledge about source quality. This aim is achieved by (1) providing an interpretation for each of two contextual correction mechanisms initially introduced from purely formal considerations, and (2) deriving extensions (essentially by uncovering contextual forms) of two interesting non-contextual correction mechanisms. The second aim of this paper relates to the origin of contextual knowledge about the quality of a source: due to the lack of dedicated approaches, it is not clear how to obtain such specific knowledge in practice. A sound, easy-to-interpret and computationally simple method is therefore provided to learn from data the contextual knowledge associated with the contextual correction mechanisms studied in this paper. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
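As background for the record above, the sketch below implements the classical, non-contextual correction mechanism (Shafer's discounting) that contextual mechanisms refine: a reliability rate beta keeps beta of each mass and sends the rest to the whole frame. The contextual variants, where the rate depends on the context, are what the paper adds and are not reproduced here.

```python
def discount(m, beta, frame):
    """Classical (non-contextual) Shafer discounting of a mass function:
    each focal mass is scaled by the reliability rate beta, and the
    remaining 1 - beta is assigned to the whole frame (total ignorance)."""
    out = {}
    for focal, v in m.items():
        out[focal] = out.get(focal, 0.0) + beta * v
    out[frame] = out.get(frame, 0.0) + (1 - beta)
    return out

m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
frame = frozenset({'a', 'b', 'c'})
print(discount(m, 0.8, frame))
# {'a'}: 0.48, {'a','b'}: 0.32, frame: 0.20
```

A contextual mechanism would replace the single beta by one rate per context (e.g., per observed object class), which is exactly the kind of knowledge the paper proposes to learn from data.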
48. Logics for Approximate Entailment in ordered universes of discourse.
- Author
-
Vetterlein, Thomas, Esteva, Francesc, and Godo, Lluís
- Subjects
- *
APPROXIMATION theory , *ENTAILMENT (Logic) , *SET theory , *LATTICE theory , *MODAL logic - Abstract
The Logic of Approximate Entailment (LAE) is a graded counterpart of classical propositional calculus in which conclusions that are only approximately correct can be drawn. This is achieved by equipping the underlying set of possible worlds with a similarity relation. When using this logic in applications, however, a disadvantage must be accepted: in LAE it is not possible to combine conclusions in a conjunctive way. In order to overcome this drawback, we propose in this paper a modification of LAE in which, at the semantic level, the underlying set of worlds is moreover endowed with an order structure. The chosen framework is designed with possible applications in view. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
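The graded entailment in the record above can be sketched semantically: a premise approximately entails a conclusion to the degree that every model of the premise is similar to some model of the conclusion. The integer worlds and the linearly decaying similarity below are assumptions for illustration; the paper's ordered universes add further structure not shown here.

```python
def entailment_degree(models_a, models_b, sim):
    """Degree to which a approximately entails b: every model of a must be
    similar, at least to this degree, to some model of b, so we take the
    min over models of a of the best similarity to a model of b."""
    return min(max(sim(u, v) for v in models_b) for u in models_a)

# Worlds as integers on a line; similarity decays linearly with distance.
sim = lambda u, v: max(0.0, 1.0 - 0.25 * abs(u - v))
print(entailment_degree({1, 2}, {2, 3}, sim))   # 0.75: world 1 is one step away
```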
49. Neighborhood based decision-theoretic rough set models.
- Author
-
Li, Weiwei, Huang, Zhiqiu, Jia, Xiuyi, and Cai, Xinye
- Subjects
- *
SET theory , *ROUGH sets , *DATA mining , *PROBABILITY theory , *APPROXIMATE reasoning , *DECISION theory - Abstract
As an extension of the Pawlak rough set model, the decision-theoretic rough set model (DTRS) adopts Bayesian decision theory to compute the required thresholds in probabilistic rough set models. It gives a new semantic interpretation of the positive, boundary and negative regions by using three-way decisions. DTRS has been widely discussed and applied in data mining and decision making. However, one limitation of DTRS is its inability to deal with numerical data directly. In order to overcome this disadvantage and extend the theory of DTRS, this paper proposes a neighborhood-based decision-theoretic rough set model (NDTRS) under the framework of DTRS. Basic concepts of NDTRS are introduced. A positive-region-related attribute reduct and a minimum-cost attribute reduct in the proposed model are defined and analyzed. Experimental results show that our methods can obtain short reducts. Furthermore, a new neighborhood classifier based on three-way decisions is constructed and compared with other classifiers. Comparison experiments show that the proposed classifier achieves high accuracy and a low misclassification cost. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
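A compact sketch of the construction in the record above: neighborhoods from a distance threshold, a conditional probability per neighborhood, and three-way regions from the DTRS thresholds alpha and beta. The Euclidean metric and all numeric values are placeholders, not the paper's experimental setup.

```python
def three_way_regions(data, labels, target, delta, alpha, beta):
    """Neighborhood-based three-way decisions (sketched): the neighborhood
    of x is every sample within distance delta; x goes to the positive
    region if P(target | neighborhood) >= alpha, to the negative region
    if <= beta, and to the boundary region otherwise."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    pos, bnd, neg = set(), set(), set()
    for i, x in enumerate(data):
        nbr = [j for j, y in enumerate(data) if dist(x, y) <= delta]
        p = sum(labels[j] == target for j in nbr) / len(nbr)
        (pos if p >= alpha else neg if p <= beta else bnd).add(i)
    return pos, bnd, neg

data = [(0.0,), (0.1,), (0.2,), (1.0,), (1.1,)]
labels = ['+', '+', '-', '-', '-']
print(three_way_regions(data, labels, '+', delta=0.15, alpha=0.8, beta=0.2))
# ({0}, {1, 2}, {3, 4}): accept, defer, reject
```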
50. Searching secrets rationally.
- Author
-
Boreale, Michele and Corradi, Fabio
- Subjects
- *
APPROXIMATE reasoning , *INFORMATION science , *DECISION theory , *BAYESIAN analysis , *CONFIDENTIAL communications , *QUANTITATIVE research - Abstract
We study quantitative information flow from the perspective of an analyst who is interested in maximizing the expected gain in the process of learning a secret, or settling a hypothesis, represented by an unobservable X, after observing some Y related to X. In our framework, learning the secret has an associated reward, while the investigation of the set of possibilities prompted by the observation has a cost proportional to the set's size. Approaches based on probability coverage, or on trying a fixed number of guesses, are sub-optimal in this framework. Inspired by Bayesian decision theory, we characterize the optimal behavior for the analyst and the corresponding expected gain (payoff) in a variety of situations. We argue for the importance of advantage, defined as the increment in expected gain after the observation if the analyst acts optimally, which represents the value of the information conveyed by Y. We characterize advantage precisely in a number of special but important instances of the framework. Applications to cryptographic systems and to familial DNA searching are examined. [ABSTRACT FROM AUTHOR] (see the sketch following this record)
- Published
- 2016
- Full Text
- View/download PDF
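The core trade-off in the record above (a reward for finding the secret versus a cost proportional to the size of the investigated set) admits a compact optimal strategy when the cost is linear: examine candidates in decreasing posterior order and stop at the k that maximises expected gain. The sketch below assumes that linear-cost setting; the paper treats more general situations.

```python
def optimal_investigation(posterior, reward, cost):
    """Expected-gain-optimal behaviour for the analyst (sketched):
    investigating the k most probable candidates costs cost * k and pays
    `reward` if the secret is among them, so the expected gain is
    reward * P(top k) - cost * k. Returns the best k (0 = do not
    investigate) and its expected payoff."""
    probs = sorted(posterior.values(), reverse=True)
    best_k, best_gain, cum = 0, 0.0, 0.0
    for k, p in enumerate(probs, start=1):
        cum += p
        gain = reward * cum - cost * k
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

posterior = {'s1': 0.5, 's2': 0.3, 's3': 0.15, 's4': 0.05}
print(optimal_investigation(posterior, reward=10, cost=2))  # (2, 4.0)
```

This also illustrates why fixing the number of guesses up front is sub-optimal: the best k depends on the posterior induced by the observation.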