484 results for "68T27"
Search Results
2. μXL: explainable lead generation with microservices and hypothetical answers.
- Author
- Cruz-Filipe, Luís, Kostopoulou, Sofia, Montesi, Fabrizio, and Vistrup, Jonas
- Subjects
- *LEAD, *ARCHITECTURAL design, *PROGRAMMING languages, *ARTIFICIAL intelligence, *JOURNALISTS
- Abstract
Lead generation refers to the identification of potential topics (the 'leads') of importance for journalists to report on. In this article we present μXL, a new lead generation tool based on a microservice architecture that includes a component of explainable AI. μXL collects and stores historical and real-time data from web sources, like Google Trends, and generates current and future leads. Leads are produced by a novel engine for hypothetical reasoning based on temporal logical rules, which can identify propositions that may hold depending on the outcomes of future events. This engine also supports additional features that are relevant for lead generation, such as user-defined predicates (allowing useful custom atomic propositions to be defined as Java functions) and negation (needed to specify and reason about leads characterized by the absence of specific properties). Our microservice architecture is designed using state-of-the-art methods and tools for API design and implementation, namely API patterns and the Jolie programming language. Thus, our development provides an additional validation of their usefulness in a new application domain (journalism). We also carry out an empirical evaluation of our tool. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
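The abstract does not show what a hypothetical answer looks like concretely. As a rough illustration of the general idea only (a derived conclusion paired with the future facts it still depends on), here is a minimal Python sketch; the rule, atoms, and representation are invented for illustration and are not μXL's actual engine.

```python
# Sketch of "hypothetical answers": a derived conclusion is returned together
# with the set of future facts it still depends on. Rule and atom names are
# invented for illustration; this is not the actual engine of the paper.

def derive(rules, known, now):
    """rules: list of (conclusion, premises); each premise is a tuple whose
    last component is a timestamp. Returns {conclusion: pending future facts}."""
    answers = {}
    for conclusion, premises in rules:
        pending, ok = set(), True
        for atom in premises:
            if atom in known:
                continue                 # already observed
            if atom[-1] > now:
                pending.add(atom)        # future event: keep as a hypothesis
            else:
                ok = False               # past premise missing: rule fails
                break
        if ok:
            answers[conclusion] = frozenset(pending)
    return answers

rules = [
    # lead(elections) <- trending(elections, day 1) AND trending(elections, day 2)
    (("lead", "elections"),
     [("trending", "elections", 1), ("trending", "elections", 2)]),
]
known = {("trending", "elections", 1)}
answers = derive(rules, known, now=1)
# the lead is hypothetical: it holds if "elections" is still trending on day 2
```

A real engine would additionally revise these pending sets as future facts arrive, and support negation; this sketch only shows the conditional flavour of the answers.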
3. Disagreement amongst counterfactual explanations: how transparency can be misleading.
- Author
- Brughmans, Dieter, Melis, Lissa, and Martens, David
- Abstract
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence (XAI) technique to provide stakeholders of complex machine learning algorithms with explanations for data-driven decisions. The popularity of counterfactual explanations resulted in a boom in the algorithms generating them. However, not every algorithm creates uniform explanations for the same instance. Even though in some contexts multiple possible explanations are beneficial, there are circumstances where diversity amongst counterfactual explanations results in a potential disagreement problem among stakeholders. Ethical issues arise when, for example, malicious agents use this diversity to fairwash an unfair machine learning model by hiding sensitive features. As legislators worldwide tend to start including the right to explanations for data-driven, high-stakes decisions in their policies, these ethical issues should be understood and addressed. Our literature review on the disagreement problem in XAI reveals that this problem has never been empirically assessed for counterfactual explanations. Therefore, in this work, we conduct a large-scale empirical analysis, on 40 data sets, using 12 explanation-generating methods, for two black-box models, yielding over 192,000 explanations. Our study finds alarmingly high disagreement levels between the methods tested. A malicious user is able to both exclude and include desired features when multiple counterfactual explanations are available. This disagreement seems to be driven mainly by the data set characteristics and the type of counterfactual algorithm. XAI centers on the transparency of algorithmic decision-making, but our analysis advocates for transparency about this self-proclaimed transparency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
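The disagreement the authors measure can be made concrete with a toy metric: two counterfactual methods agree to the extent that they change the same features of the instance. The sketch below is illustrative only; the metric and data are invented, not the paper's exact measures.

```python
# Toy disagreement measure between two counterfactual explanations:
# 1 minus the Jaccard overlap of the feature sets each method changed.
# Data and metric are illustrative, not the paper's actual experiments.

def changed_features(instance, counterfactual):
    """Features whose value differs between the instance and its counterfactual."""
    return {f for f in instance if instance[f] != counterfactual[f]}

def feature_disagreement(instance, cf_a, cf_b):
    a = changed_features(instance, cf_a)
    b = changed_features(instance, cf_b)
    if not a and not b:
        return 0.0
    return 1 - len(a & b) / len(a | b)

x   = {"income": 30, "age": 40, "debt": 5}
cf1 = {"income": 45, "age": 40, "debt": 5}   # method A: raise income
cf2 = {"income": 30, "age": 40, "debt": 1}   # method B: reduce debt
score = feature_disagreement(x, cf1, cf2)    # disjoint changes -> 1.0
```

A score of 1.0 here is exactly the fairwashing risk the abstract describes: a user can cite whichever explanation omits the sensitive feature.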
4. Lifted Algorithms for Symmetric Weighted First-Order Model Sampling
- Author
- Wang, Yuanhong, Pu, Juhua, Wang, Yuyi, and Kuželka, Ondřej
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Logic in Computer Science, 68T27, I.2.4
- Abstract
Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the $\#\mathsf{P}$-hard complexity class. However, it is known that the counting problem may sometimes be tractable, if the propositional formula can be compactly represented and expressed in first-order logic. In such cases, model counting problems can be solved in time polynomial in the domain size, and are known as domain-liftable. The following question then arises: Is it also the case for weighted model sampling? This paper addresses this question and answers it affirmatively. Specifically, we prove the domain-liftability under sampling for the two-variable fragment of first-order logic with counting quantifiers, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We then further show that this result continues to hold even in the presence of cardinality constraints. To empirically verify our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results. Comment: 47 pages, 6 figures. An expanded version of "On exact sampling in the two-variable fragment of first-order logic" in LICS23. arXiv admin note: substantial text overlap with arXiv:2302.02730
- Published
- 2023
- Full Text
- View/download PDF
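To see what the lifted algorithm is avoiding, here is the naive baseline for WMC and WMS on a propositional formula: enumerate every assignment, which is exponential in the number of variables. The instance and weight function below are invented for illustration; the paper's contribution is precisely a sampler that sidesteps this enumeration.

```python
import itertools
import random

# Brute-force WMC and WMS: enumerate all assignments, sum the weights of the
# models, then draw one model with probability proportional to its weight.
# Exponential in the number of variables -- the cost lifted algorithms avoid.

def wmc_wms(variables, formula, weight, rng):
    models = []
    for bits in itertools.product([False, True], repeat=len(variables)):
        m = dict(zip(variables, bits))
        if formula(m):
            models.append(m)
    weights = [weight(m) for m in models]
    total = sum(weights)                                   # the WMC value
    sample = rng.choices(models, weights=weights, k=1)[0]  # one WMS draw
    return total, sample

# Toy instance: formula (a or b); each true variable has weight 2, false 1.
def w(m):
    p = 1
    for v in m.values():
        p *= 2 if v else 1
    return p

total, sample = wmc_wms(["a", "b"], lambda m: m["a"] or m["b"], w, random.Random(0))
# models: {b}, {a}, {a,b} with weights 2, 2, 4 -> WMC = 8
```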
5. Tractable and Intractable Entailment Problems in Separation Logic with Inductively Defined Predicates
- Author
- Echenim, Mnacho and Peltier, Nicolas
- Subjects
- Computer Science - Logic in Computer Science, 68T27, I.2.3, F.4.1
- Abstract
We establish various complexity results for the entailment problem between formulas in Separation Logic with user-defined predicates denoting recursive data structures. The considered fragments are characterized by syntactic conditions on the inductive rules that define the semantics of the predicates. We focus on so-called P-rules, which are similar to (but simpler than) the PCE rules introduced by Iosif et al. in 2013. In particular, for a specific fragment where predicates are defined by so-called loc-deterministic inductive rules, we devise a sound and complete cyclic proof procedure running in polynomial time. Several complexity lower bounds are provided, showing that any relaxation of the provided conditions makes the problem intractable.
- Published
- 2023
6. Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples)
- Author
- Gamba, Emilio, Bogaerts, Bart, and Guns, Tias
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Logic in Computer Science, 68T27, F.4.1
- Abstract
We build on a recently proposed method for stepwise explaining solutions of Constraint Satisfaction Problems (CSP) in a human-understandable way. An explanation here is a sequence of simple inference steps where simplicity is quantified using a cost function. The algorithms for explanation generation rely on extracting Minimal Unsatisfiable Subsets (MUS) of a derived unsatisfiable formula, exploiting a one-to-one correspondence between so-called non-redundant explanations and MUSs. However, MUS extraction algorithms do not provide any guarantee of subset minimality or optimality with respect to a given cost function. Therefore, we build on these formal foundations and tackle the main points of improvement, namely how to efficiently generate explanations that are provably optimal (with respect to the given cost metric). For that, we developed (1) a hitting set-based algorithm for finding the optimal constrained unsatisfiable subsets; (2) a method for re-using relevant information over multiple algorithm calls; and (3) methods exploiting domain-specific information to speed up the explanation sequence generation. We experimentally validated our algorithms on a large number of CSP problems. We found that our algorithms outperform the MUS approach in terms of explanation quality and computational time (on average up to 56% faster than a standard MUS approach). Comment: arXiv admin note: text overlap with arXiv:2105.11763
- Published
- 2023
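The specification the hitting-set algorithm optimizes (a cheapest unsatisfiable subset under a cost function) can be stated by exhaustive search on a toy CNF. This sketch shows only that specification; the paper's algorithm avoids the double enumeration below, and the instance and costs are invented.

```python
import itertools

def satisfiable(clauses, n_vars):
    """Naive SAT check: try all assignments (fine for toy instances only).
    A literal v>0 means x_v, v<0 means not x_v (variables are 1-indexed)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in c) for c in clauses):
            return True
    return False

def optimal_unsat_subset(clauses, n_vars, cost):
    """Cheapest unsatisfiable subset of the clauses, by brute force.
    The paper instead uses a hitting-set algorithm to avoid this enumeration."""
    best, best_cost = None, float("inf")
    for r in range(1, len(clauses) + 1):
        for subset in itertools.combinations(range(len(clauses)), r):
            c = sum(cost[i] for i in subset)
            if c < best_cost and not satisfiable([clauses[i] for i in subset], n_vars):
                best, best_cost = subset, c
    return best, best_cost

# Clauses: x1, (not x1), (x1 or x2), (not x2), with x1 expensive to use.
clauses = [[1], [-1], [1, 2], [-2]]
core, c = optimal_unsat_subset(clauses, n_vars=2, cost=[3, 1, 1, 1])
# {not x1, x1 or x2, not x2} is unsatisfiable at cost 3, cheaper than {x1, not x1}
```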
7. Complexity and scalability of defeasible reasoning in many-valued weighted knowledge bases with typicality
- Author
- Alviano, Mario, Giordano, Laura, and Dupré, Daniele Theseider
- Subjects
- Computer Science - Artificial Intelligence, 68T27, I.2.4
- Abstract
Weighted knowledge bases for description logics with typicality under a "concept-wise" multi-preferential semantics provide a logical interpretation of MultiLayer Perceptrons. In this context, Answer Set Programming (ASP) has been shown to be suitable for addressing defeasible reasoning in the finitely many-valued case, providing a $\Pi^p_2$ upper bound on the complexity of the problem, nonetheless leaving the exact complexity unknown and only providing a proof-of-concept implementation. This paper fills this gap by providing a $P^{NP[log]}$-completeness result and new ASP encodings that deal with weighted knowledge bases with large search spaces. Comment: 14 pages, 4 figures
- Published
- 2023
8. Modeling and shadowing paraconsistent BDI agents.
- Author
- Dunin-Kęplicz, Barbara and Szałas, Andrzej
- Abstract
The BDI model of rational agency has been studied for over three decades. Many robust multiagent systems have been developed, and a number of BDI logics have been studied. Following this intensive development phase, the importance of integrating BDI models with inconsistency handling and revision theory has been emphasized. There is also a demand for a tighter connection between BDI-based implementations and BDI logics. In this paper, we address these postulates by introducing a novel, paraconsistent logical BDI model close to implementation, with building blocks that can be represented as SQL/rule-based databases. Importantly, tractability is achieved by reasoning as querying. This stands in sharp contrast to the high complexity of known BDI logics. We also extend belief shadowing, a shallow and lightweight alternative to deep and computationally demanding belief revision, to encompass agents' motivational attitudes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
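Paraconsistent "reasoning as querying" can be illustrated with a four-valued lookup (true, false, inconsistent, unknown) over a belief base of signed literals: a contradiction stays local instead of trivializing every query. The representation below is invented for illustration and is not the paper's formalism.

```python
# Four-valued lookup over a belief base of signed literals: the base may
# contain both p and not-p without every other query being affected.
# The representation is illustrative, not the paper's actual formalism.

def truth(base, atom):
    """base: set of (atom, polarity) pairs. Returns one of 't'/'f'/'i'/'u'."""
    pos = (atom, True) in base
    neg = (atom, False) in base
    if pos and neg:
        return "i"   # inconsistent: both the atom and its negation are believed
    if pos:
        return "t"
    if neg:
        return "f"
    return "u"       # unknown: no information either way

base = {("door_open", True), ("door_open", False), ("light_on", True)}
assert truth(base, "door_open") == "i"   # the conflict stays local
assert truth(base, "light_on") == "t"    # unrelated queries are unaffected
assert truth(base, "alarm") == "u"
```

Each query is a constant-time set lookup, which is the tractability-by-querying point the abstract makes, in miniature.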
9. Semantics of Belief Change Operators for Intelligent Agents: Iteration, Postulates, and Realizability.
- Author
- Sauerwald, Kai
- Abstract
This paper summarises several contributions to the theory of belief change made in the author's dissertation thesis. First, a relational characterization of belief revision for Tarskian logics is considered, encompassing first-order predicate logic, description logic, modal logics and many monotonic logics with model-theoretic semantics. Those logics where total preorders are the standard semantics for revision are characterized. The second contribution considered is a theory of belief revision that builds upon the idea that agents are limited in what the outcome of a revision can be. Furthermore, advancements in principles for iterated belief contraction given in the thesis are outlined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
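The total-preorder semantics mentioned in the abstract has a simple operational reading: revising by new information selects the most plausible worlds that satisfy it. A sketch over explicitly listed worlds, with an invented ranking for illustration:

```python
# Semantic sketch of revision with a total preorder over worlds: the revised
# beliefs are the minimal-rank (most plausible) worlds satisfying the new
# information. Worlds and ranks are invented for illustration.

def revise(rank, worlds, phi):
    """rank: world -> plausibility (lower = more plausible).
    phi: predicate on worlds. Returns the minimal phi-worlds."""
    candidates = [w for w in worlds if phi(w)]
    best = min(rank[w] for w in candidates)
    return {w for w in candidates if rank[w] == best}

# Worlds as (rain, wet) truth pairs; this agent considers rain unlikely.
worlds = [(False, False), (False, True), (True, True), (True, False)]
rank = {(False, False): 0, (False, True): 1, (True, True): 2, (True, False): 3}
# Revising by "rain" keeps the most plausible rain-world: (rain, wet).
result = revise(rank, worlds, lambda w: w[0])
```

The thesis's characterization question is which logics admit exactly this picture; the sketch only shows the picture itself.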
10. On Exact Sampling in the Two-Variable Fragment of First-Order Logic
- Author
- Wang, Yuanhong, Pu, Juhua, Wang, Yuyi, and Kuželka, Ondřej
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Logic in Computer Science, 68T27, I.2.4
- Abstract
In this paper, we study the sampling problem for first-order logic proposed recently by Wang et al. -- how to efficiently sample a model of a given first-order sentence on a finite domain? We extend their result for the universally-quantified subfragment of two-variable logic $\mathbf{FO}^2$ ($\mathbf{UFO}^2$) to the entire fragment of $\mathbf{FO}^2$. Specifically, we prove the domain-liftability under sampling of $\mathbf{FO}^2$, meaning that there exists a sampling algorithm for $\mathbf{FO}^2$ that runs in time polynomial in the domain size. We then further show that this result continues to hold even in the presence of counting constraints, such as $\forall x\exists_{=k} y: \varphi(x,y)$ and $\exists_{=k} x\forall y: \varphi(x,y)$, for some quantifier-free formula $\varphi(x,y)$. Our proposed method is constructive, and the resulting sampling algorithms have potential applications in various areas, including the uniform generation of combinatorial structures and sampling in statistical-relational models such as Markov logic networks and probabilistic logic programs., Comment: 37 pages, 4 figures, LICS 2023
- Published
- 2023
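A counting constraint such as $\forall x\exists_{=k} y: \varphi(x,y)$ can be made tangible with a brute-force uniform sampler over relations on a tiny domain; the paper's sampler is polynomial in the domain size, while this illustrative baseline enumerates all $2^{n^2}$ relations.

```python
import itertools
import random

def sample_counting_model(n, k, rng):
    """Uniformly sample a binary relation R on {0,...,n-1} satisfying the
    counting constraint 'forall x exists_{=k} y: R(x,y)', by enumerating
    all 2^(n*n) relations. Illustrative baseline only."""
    pairs = [(x, y) for x in range(n) for y in range(n)]
    models = []
    for bits in itertools.product([0, 1], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if all(sum((x, y) in R for y in range(n)) == k for x in range(n)):
            models.append(R)
    return rng.choice(models), len(models)

# n=3, k=1: the models are exactly the 27 functions from a 3-element
# domain to itself (each x is related to exactly one y).
R, count = sample_counting_model(3, 1, random.Random(7))
```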
11. Two Results on Separation Logic With Theory Reasoning
- Author
- Echenim, Mnacho and Peltier, Nicolas
- Subjects
- Computer Science - Logic in Computer Science, 68T27, F.4.1, I.2.3
- Abstract
Two results are presented concerning the entailment problem in Separation Logic with inductively defined predicate symbols and theory reasoning. First, we show that the entailment problem is undecidable for rules with bounded tree-width, if theory reasoning is considered. The result holds for a wide class of theories, even with a very low expressive power. For instance it applies to the natural numbers with the successor function, or with the usual order. Second, we show that every entailment problem can be reduced to an entailment problem containing no equality (neither in the formulas nor in the recursive rules defining the semantics of the predicate symbols)., Comment: ASL 2022 - Workshop on Advancing Separation Logic. arXiv admin note: substantial text overlap with arXiv:2201.13227
- Published
- 2022
12. Sequential composition of propositional logic programs.
- Author
- Antić, Christian
- Abstract
This paper introduces and studies the sequential composition and decomposition of propositional logic programs. We show that acyclic programs can be decomposed into single-rule programs and provide a general decomposition result for arbitrary programs. We show that the immediate consequence operator of a program can be represented via composition which allows us to compute its least model without any explicit reference to operators. This bridges the conceptual gap between the syntax and semantics of a propositional logic program in a mathematically satisfactory way. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
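The immediate consequence operator referred to in the abstract is easy to state directly; the sketch below computes the least model of a toy definite program by fixed-point iteration. The composition-based representation that is the paper's actual contribution is not shown, only the operator it represents.

```python
def tp(program, interpretation):
    """Immediate consequence operator T_P: the heads of all rules whose
    bodies are already true in the given interpretation."""
    return {head for head, body in program if set(body) <= interpretation}

def least_model(program):
    """Iterate T_P from the empty interpretation to its least fixed point."""
    model = set()
    while True:
        nxt = tp(program, model)
        if nxt == model:
            return model
        model = nxt

# a.  b :- a.  c :- a, b.  d :- e.   (e is never derivable, so d is not either)
program = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", ["e"])]
model = least_model(program)
```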
13. Automatic Tabulation in Constraint Models
- Author
- Akgün, Özgür, Gent, Ian P., Jefferson, Christopher, Kiziltan, Zeynep, Miguel, Ian, Nightingale, Peter, Salamon, András Z., and Ulrich-Oltean, Felix
- Subjects
- Computer Science - Artificial Intelligence, 68T27, I.2.3
- Abstract
The performance of a constraint model can often be improved by converting a subproblem into a single table constraint. In this paper we study heuristics for identifying promising candidate subproblems, where converting the candidate into a table constraint is likely to improve solver performance. We propose a small set of heuristics to identify common cases, such as expressions that will propagate weakly. The process of discovering promising subproblems and tabulating them is entirely automated in the constraint modelling tool Savile Row. Caches are implemented to avoid tabulating equivalent subproblems many times. We give a simple algorithm to generate table constraints directly from a constraint expression in Savile Row. We demonstrate good performance on the benchmark problems used in earlier work on tabulation, and also for several new problem classes. In some cases, the entirely automated process leads to orders of magnitude improvements in solver performance.
- Published
- 2022
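The core tabulation step (turn a subproblem into the table of its satisfying tuples) can be sketched in a few lines; the example expression and domains are invented for illustration, not taken from the paper's benchmarks.

```python
import itertools

def tabulate(variables, domains, constraint):
    """Convert a subproblem into a table constraint: enumerate the tuples
    over the variables' domains that satisfy `constraint`."""
    return [t for t in itertools.product(*(domains[v] for v in variables))
            if constraint(dict(zip(variables, t)))]

# A weakly-propagating expression (hypothetical example): x*y == 12, x,y in 1..6.
# A table constraint over its 4 satisfying tuples propagates much more strongly.
doms = {"x": range(1, 7), "y": range(1, 7)}
table = tabulate(["x", "y"], doms, lambda a: a["x"] * a["y"] == 12)
```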
14. An ASP approach for reasoning on neural networks under a finitely many-valued semantics for weighted conditional knowledge bases
- Author
- Giordano, Laura and Dupré, Daniele Theseider
- Subjects
- Computer Science - Artificial Intelligence, 68T27, I.2.4
- Abstract
Weighted knowledge bases for description logics with typicality have been recently considered under a "concept-wise" multipreference semantics (in both the two-valued and fuzzy case), as the basis of a logical semantics of MultiLayer Perceptrons (MLPs). In this paper we consider weighted conditional ALC knowledge bases with typicality in the finitely many-valued case, through three different semantic constructions. For the boolean fragment LC of ALC we exploit ASP and "asprin" for reasoning with the concept-wise multipreference entailment under a phi-coherent semantics, suitable to characterize the stationary states of MLPs. As a proof of concept, we experiment with the proposed approach by checking properties of trained MLPs. The paper is under consideration for acceptance in TPLP. Comment: Paper presented at the 38th International Conference on Logic Programming (ICLP 2022), 16 pages
- Published
- 2022
15. SAT Encodings for Pseudo-Boolean Constraints Together With At-Most-One Constraints
- Author
- Bofill, Miquel, Coll, Jordi, Nightingale, Peter, Suy, Josep, Ulrich-Oltean, Felix, and Villaret, Mateu
- Subjects
- Computer Science - Artificial Intelligence, 68T27, I.2.3
- Abstract
When solving a combinatorial problem using propositional satisfiability (SAT), the encoding of the problem is of vital importance. We study encodings of Pseudo-Boolean (PB) constraints, a common type of arithmetic constraint that appears in a wide variety of combinatorial problems such as timetabling, scheduling, and resource allocation. In some cases PB constraints occur together with at-most-one (AMO) constraints over subsets of their variables (forming PB(AMO) constraints). Recent work has shown that taking account of AMOs when encoding PB constraints using decision diagrams can produce a dramatic improvement in solver efficiency. In this paper we extend the approach to other state-of-the-art encodings of PB constraints, developing several new encodings for PB(AMO) constraints. Also, we present a more compact and efficient version of the popular Generalized Totalizer encoding, named Reduced Generalized Totalizer. This new encoding is also adapted for PB(AMO) constraints for a further gain. Our experiments show that the encodings of PB(AMO) constraints can be substantially smaller than those of PB constraints. PB(AMO) encodings allow many more instances to be solved within a time limit, and solving time is improved by more than one order of magnitude in some cases. We also observed that there is no single overall winner among the considered encodings, but efficiency of each encoding may depend on PB(AMO) characteristics such as the magnitude of coefficient values.
- Published
- 2021
- Full Text
- View/download PDF
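Why AMO information shrinks decision-diagram-style PB encodings can be illustrated by counting reachable partial sums: processing an AMO group as one unit rules out assignments where several of its variables are true, so fewer distinct states arise. This is a toy state count under that idea, not any of the paper's actual encodings; the coefficients are invented.

```python
def reachable_sums(groups):
    """Layers of reachable partial sums of a pseudo-Boolean sum, processed
    one group at a time. Within an AMO group at most one variable is true,
    so a group contributes at most one of its coefficients (or nothing).
    Singleton groups recover the plain PB case."""
    sums = {0}
    layers = [sums]
    for coeffs in groups:
        sums = {s + c for s in sums for c in [0] + list(coeffs)}
        layers.append(sums)
    return layers

coeffs = [3, 3, 5, 5]
plain = reachable_sums([[c] for c in coeffs])   # each variable on its own
amo = reachable_sums([[3, 3], [5, 5]])          # two AMO groups of two vars
n_plain = sum(len(layer) for layer in plain)    # 21 DP states in total
n_amo = sum(len(layer) for layer in amo)        # only 7 DP states
```

Fewer reachable states means a smaller diagram and hence a smaller SAT encoding, which is the intuition behind the PB(AMO) savings reported in the abstract.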
16. Seven challenges for harmonizing explainability requirements
- Author
- Chen, Jiahao and Storchan, Victor
- Subjects
- Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computers and Society, 68T27, I.2.6, J.4, K.5.2
- Abstract
Regulators have signalled an interest in adopting explainable AI (XAI) techniques to handle the diverse needs for model governance, operational servicing, and compliance in the financial services industry. In this short overview, we review the recent technical literature in XAI and argue that, based on our current understanding of the field, the use of XAI techniques in practice necessitates a highly contextualized approach considering the specific needs of stakeholders for particular business applications. Comment: 5 pages; Spotlight paper at the ACM SIGKDD Workshop on Machine Learning in Finance 2021
- Published
- 2021
17. Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
- Author
- Gamba, Emilio, Bogaerts, Bart, and Guns, Tias
- Subjects
- Computer Science - Artificial Intelligence, Computer Science - Logic in Computer Science, 68T27, F.4.1
- Abstract
We build on a recently proposed method for explaining solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. We build on these formal foundations and tackle two emerging questions, namely how to generate explanations that are provably optimal (with respect to the given cost metric) and how to generate them efficiently. To answer these questions, we develop 1) an implicit hitting set algorithm for finding optimal unsatisfiable subsets; 2) a method to reduce multiple calls for (optimal) unsatisfiable subsets to a single call that takes constraints on the subset into account, and 3) a method for re-using relevant information over multiple calls to these algorithms. The method is also applicable to other problems that require finding cost-optimal unsatisfiable subsets. We specifically show that this approach can be used to effectively find sequences of optimal explanation steps for constraint satisfaction problems like logic grid puzzles.
- Published
- 2021
18. Logical perspectives on the foundations of probability
- Author
- Hosni, Hykel and Landes, Jürgen
- Subjects
- uncertainty, logic, probability, artificial intelligence, events, coherence, induction, 03-01, 03-03, 03B42, 03B48, 60-01, 60A99, 68T27, 68T30, 68T37, Mathematics, QA1-939
- Abstract
We illustrate how a variety of logical methods and techniques provide useful, though currently underappreciated, tools in the foundations and applications of reasoning under uncertainty. The field is vast, spanning logic, artificial intelligence, statistics, and decision theory. Rather than (hopelessly) attempting a comprehensive survey, we focus on a handful of telling examples. While most of our attention will be devoted to frameworks in which uncertainty is quantified probabilistically, we will also touch upon generalisations of probability measures of uncertainty, which have attracted significant interest in the past few decades.
- Published
- 2023
- Full Text
- View/download PDF
19. Research on the Influence of Integrative Art Therapy on Adolescents’ Mental Health Based on Big Data Analysis
- Author
- Wu Chuyi
- Subjects
- big data, integrative art therapy, adolescents, mental health, 68T27, Mathematics, QA1-939
- Abstract
Integrative art therapy has achieved great results in improving neurosis, palliative care, the mental development of special children, and the mental health education of adolescents. Integrative art therapy is an applied technique that integrates artistic creation and psychotherapy. This study examines the influence of integrative art therapy on adolescent mental health based on big data analysis, and establishes a big data analysis model of adolescent mental health. Through DM (data mining), the most valuable information is extracted, expressed in an easy-to-understand way, and analyzed and evaluated accordingly. In this process, the information is filtered so as to retain only what serves the purpose of the mining. Descriptive mining is used to describe the general properties of the mined data; predictive mining is used to make predictions by inference from the current data. The results show that the scores on each factor of the SCL-90 self-rating scale improved markedly in the experimental group, chiefly in somatization, obsessive-compulsive symptoms, interpersonal sensitivity, depression, hostility, phobic anxiety, and paranoia; the scores on these seven factors all dropped below 1.5. The median probability of the C4.5 model predicting a mental state as "abnormal" is 0.8135, and the lowest probability of the BPNN (BP neural network) model making the same prediction is 0.6082. This suggests a design for a psychological crisis prevention system combining the C4.5 and BPNN techniques.
- Published
- 2024
- Full Text
- View/download PDF
20. Temperature and Fault Prediction of Transformer in Distribution Station Based on Digital Twin Model
- Author
- Zhang Xi, Li Wei, Chen Zhuang, Huang Fazheng, Hu Yaqiong, Xu Bo, Hui Shuang, and Li Heyuan
- Subjects
- digital twin model, distribution station area, transformer, temperature, fault detection, 68T27, Mathematics, QA1-939
- Abstract
This article presents an innovative method for predicting transformer temperature and faults in distribution substations using a digital twin model combined with deep learning techniques. By constructing a digital model of the transformer, real-time monitoring and precise simulation of its operating status are achieved. In the prediction process, convolutional neural networks (CNN) and long short-term memory networks (LSTM) are fused to mine data features deeply and predict the future state of the transformer. The results show that this method demonstrates significant advantages in transformer temperature and fault prediction, with an accuracy rate as high as 96.55%. Moreover, the error rate of this method has been significantly reduced through comparative experimental verification. In addition to ensuring high accuracy, this method achieves a false alarm rate of less than 0.12% and an average detection time of only 1.35 seconds, further highlighting its effectiveness in practical applications. Therefore, the transformer temperature and fault prediction system developed in this article for distribution substations can effectively improve the stability and safety of the electrical power system (EPS) and provide new and powerful support for the intelligent management and maintenance of transformers.
- Published
- 2024
- Full Text
- View/download PDF
21. Research on Computer-aided Innovation and Entrepreneurship Education Model and Algorithm
- Author
- Wang Jingru
- Subjects
- computer-aided, innovation and entrepreneurship education, recommendation algorithm, 68T27, Mathematics, QA1-939
- Abstract
With the rapid development of science and technology, computer-aided technology has been widely used in the field of education, but in the field of innovation and entrepreneurship education (Hereinafter referred to as IEE), there are still problems such as uneven distribution of resources and the disconnection between teaching content and actual needs. This paper puts forward a new model framework of computer-aided IEE, which is student-centered and practice-oriented. Through clear teaching objectives, rich course content, diverse teaching methods, and a scientific evaluation system, students’ innovative consciousness and entrepreneurial ability are comprehensively improved. The course covers innovative thinking, business models, market research, and other theoretical knowledge, and introduces virtual reality (VR) and augmented reality (AR) technologies to simulate the real entrepreneurial process and provide a practical platform. In the aspect of algorithm application, the research combines the content recommendation algorithm and collaborative filtering recommendation algorithm, constructs accurate user portraits by analyzing students’ learning behavior data, and matches them according to the characteristics of learning resources to recommend personalized learning resources for students. At the same time, the learning path optimization algorithm is developed, and the personalized and efficient learning path is planned by using the shortest path algorithm in graph theory to ensure that students can master the required knowledge in the shortest time. The experimental part designed a one-semester teaching experiment, and compared the differences between the experimental group and the control group in academic performance, learning participation, and satisfaction. The results show that the computer-aided IEE model significantly improves students’ academic performance, learning participation, and satisfaction, which verifies the effectiveness of the model. 
The research on the model and algorithm of computer-aided IEE proposed in this paper provides new ideas and methods for IEE, and helps to cultivate more high-quality talents with innovative thinking and entrepreneurial ability.
- Published
- 2024
- Full Text
- View/download PDF
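The learning-path planner the abstract mentions (shortest path over a module graph) can be sketched with Dijkstra's algorithm; the module names and hour weights below are invented for illustration, not taken from the paper's system.

```python
import heapq

def shortest_learning_path(graph, start, goal):
    """Dijkstra over a course-module graph; each edge weight is the
    estimated study time (hours) to reach the next module."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical module graph for an entrepreneurship course.
modules = {
    "basics":         [("ideation", 4), ("market", 6)],
    "ideation":       [("business_model", 5)],
    "market":         [("business_model", 2)],
    "business_model": [("pitch", 3)],
}
path, hours = shortest_learning_path(modules, "basics", "pitch")
# the market route (6+2+3 = 11 hours) beats the ideation route (4+5+3 = 12)
```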
22. Research on the Application of Computer Vision Technology and Optimization of Space Design in a Smart Home Environment
- Author
- Peng Mengzhou, Kang Xiaoran, Zhang Xun, and Zhao Chenxu
- Subjects
- smart home, computer vision, space design, 68T27, Mathematics, QA1-939
- Abstract
This study aims to explore the application of computer vision technology in smart home environments and its influence on space design optimization. Through the comprehensive application of image recognition and processing technology, the behavior patterns of family members, the placement status of household items, and the real-time changes in the indoor environment in smart home systems are deeply analyzed. The experimental results show that computer vision technology can effectively improve smart home systems’ perception and analysis ability, providing data support for automatically adjusting indoor environments and providing personalized service. At the same time, this study also focuses on the optimization strategy of space design based on computer vision technology and puts forward a series of humanized design schemes through reasonable arrangement of cameras and sensors, real-time analysis of user behavior data and environmental parameters, aiming at improving living comfort and achieving efficient use of energy. This study reveals the internal relationship between computer vision technology and space design, which complement each other and jointly promote the intelligent and humanized development of smart homes.
- Published
- 2024
- Full Text
- View/download PDF
23. Construction of Online English Teaching Management System in Universities Based on VR
- Author
- Suwan Chen and Wei Jiang
- Subjects
- VR, online English teaching, teaching management system, 68T27, Mathematics, QA1-939
- Abstract
VR (Virtual Reality) is a new technology in the computer field. It has been widely adopted across most fields, which demonstrates its reliability and strong computing power, and it has played a very important role in courses taught in colleges and universities. Thanks to this technology, both language and science disciplines have achieved very good teaching results, which is why many sectors value VR. However, at present VR still faces many problems and difficulties that need to be solved, which reminds us to look at each issue dialectically. This paper studies the construction of an online English teaching management system in colleges and universities based on VR, aiming at scientific, systematic teaching methods and good teaching results. The test results show that the quality of online college English teaching using VR technology reached 78.93, which is 24.36 points higher than that of traditional teaching methods. Therefore, the use of VR technology in English teaching has unique advantages, and the approach supports effective experiments with different groups of subjects.
- Published
- 2024
- Full Text
- View/download PDF
24. Cloud Computing Based Data Processing and Automated Management of Power Dispatch
- Author
- Yucheng Shu, Yan Liu, and Kailin Ma
- Subjects
- cloud computing, power dispatch, data processing, automated management, 68T27, Mathematics, QA1-939
- Abstract
As smart grids expand, the scope of power dispatch systems has broadened, encompassing not just traditional dispatch services but also multifaceted information services such as data fusion, heterogeneous system integration, and big data analytics. However, managing intricate information businesses like big data analysis poses numerous challenges for the current power dispatch system, hindering its ability to keep up with escalating business demands. To address this, we introduce a design scheme, based on cloud computing and big data analytics, for a power dispatch data processing and automated management system. Leveraging the distributed computing prowess of cloud computing platforms, the system ensures swift processing and analysis of vast power dispatch data. Additionally, by harnessing big data analysis techniques, the system delves into historical data, uncovers potential operational patterns and risk areas, and offers insights for optimizing the Electric Power System's (EPS) operations. Experimental outcomes demonstrate the system's remarkable proficiency in enhancing data processing, real-time control, and automation management capabilities, thereby bolstering the growth of intelligent power dispatch systems with robust technical backing.
- Published
- 2024
- Full Text
- View/download PDF
25. The application value of behavior analysis based on deep learning in the evaluation of depression in art students
- Author
-
Junyi Zhu
- Subjects
behavior analysis ,deep learning ,art students ,depression ,cnn-lstm ,68t27 ,Mathematics ,QA1-939 - Abstract
This study examines the application value of deep-learning-based behavior analysis in evaluating depression among art students. Because of their professional characteristics and creative pressure, art college students are at high risk for mental health problems, among which the incidence of depression is increasing year by year, seriously affecting their studies and quality of life. With the rapid development of AI technology, deep learning algorithms show significant advantages in processing complex data and in pattern recognition. In this study, an efficient depression evaluation model was constructed by collecting daily behavior data of art college students and combining it with a deep learning algorithm. The model aims to realize early identification and evaluation of depressive symptoms in art college students and to provide new methods and means for mental health management. Data were collected through questionnaire surveys, mobile application tracking, and social media crawling, and passed through detailed preprocessing steps, including missing-value handling, outlier detection, data standardization, and feature selection, to ensure data quality and effective model training. The study then designed a deep learning model (CNN-LSTM) combining a Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) network, which can capture both the temporal dependencies and the spatial relationships between features in the data, thereby improving the accuracy of depression assessment. The empirical findings demonstrate that the CNN-LSTM model attains remarkable accuracy in assessing the depressive tendencies of art students, underscoring the efficacy of deep learning techniques in behavioral analysis.
The research further scrutinizes the impact of individual attributes on the predictive outcomes, highlighting the significance of social interaction frequency, academic stress, and artistic engagement levels in depression assessment.
- Published
- 2024
- Full Text
- View/download PDF
26. Research on the application of artificial intelligence technology in teaching the cultural inheritance and innovation of urban public space
- Author
-
Feng LiWen and Heng HaoYi
- Subjects
urban public space ,artificial intelligence ,teaching ,cultural inheritance and innovation ,68t27 ,Mathematics ,QA1-939 - Abstract
In recent years, the rapid development of artificial intelligence (AI) technology has brought new opportunities to the field of education. With its powerful data processing and analysis capabilities, AI technology has shown great potential in the integration of educational resources and personalized teaching. Specifically in the realm of design education, AI technology holds immense potential to enhance our comprehension of the cultural essence of urban public spaces, elevate the standards of design education, and thereby foster a generation of design talents equipped with both innovative thinking and practical proficiency. The objective of this study is to delve into the application and impact of AI technology in the educational setting, against the backdrop of cultural preservation and urban public space innovation. By conducting a comparative analysis between an experimental group and a control group, this research aims to thoroughly examine the role AI technology plays in augmenting students’ spatial design proficiency, cultural comprehension, and innovative thinking capabilities. The research results show that AI technology significantly improves students’ abilities in the above three aspects by providing personalized learning paths, rich learning resources, and real-time learning feedback. Specifically, students’ spatial design abilities such as spatial composition and color matching have been improved, their ability to understand and appreciate different cultures has also been enhanced, and their innovative thinking and imagination have also been effectively stimulated. This study provides the theoretical basis and practical guidance for educators to better integrate AI technology into teaching, which helps promote the inheritance and innovation of urban public space culture.
- Published
- 2024
- Full Text
- View/download PDF
27. Application and Performance Analysis of Deep Learning Models in Power Dispatching Automation
- Author
-
Yan Liu, Yucheng Shu, and Kailin Ma
- Subjects
power dispatching ,automation ,deep learning ,stability ,68t27 ,Mathematics ,QA1-939 - Abstract
Amidst the swift advancement of smart grid technology, traditional power dispatching methods have become inadequate in addressing escalating power needs and intricate system management prerequisites. By incorporating a deep learning model, we have refined these methods, facilitating data-driven dispatching decisions and optimizing power resource allocation and dispatching efficiency. Our experimental outcomes reveal that the Long Short-Term Memory network (LSTM) excels in handling intricate time series data, boasting superior accuracy and convergence rates compared to the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN). Detailed performance evaluations confirm LSTM’s proficiency in capturing long-term dependencies and processing time series traits inherent in power dispatching data. Furthermore, a 10-fold cross-validation underscores the LSTM model’s stability and generalizability. In essence, this study concludes that in the realm of power dispatching automation, the LSTM deep learning model demonstrates remarkable effectiveness and holds vast potential, anticipating crucial support for the reliable operation and optimal dispatching of the electrical power system (EPS).
- Published
- 2024
- Full Text
- View/download PDF
28. Research on Space Optimization Design of High-rise Residential Building Based on Genetic Algorithm
- Author
-
Huang Youwei and Zhang Xin
- Subjects
genetic algorithm ,space optimization ,high-rise residential building ,68t27 ,Mathematics ,QA1-939 - Abstract
With the rapid development of urbanization and continuous population growth, the design and planning of high-rise residential buildings have become increasingly important. The purpose of this study is to explore a space optimization design method for high-rise residential buildings based on the genetic algorithm (GA), focusing on a comparative analysis between the traditional GA and an adaptive genetic algorithm (AGA). In this paper, the AGA is used to establish a spatial optimization model of high-rise residential buildings. By dynamically adjusting the parameters of the algorithm, the AGA better adapts to the characteristics of the problem and improves search efficiency. The results show that the AGA is superior to the traditional GA in global convergence probability, especially when the population size is large. The AGA improves the adaptability and robustness of the algorithm by dynamically adjusting the crossover and mutation probabilities, offers better flexibility and adaptability in the design of high-rise residential buildings, and is expected to provide more optimized solutions for complex design problems. The findings of this study provide a useful reference for innovation and sustainable development in high-rise building design, as well as practical methods and tools for the application of GAs.
- Published
- 2024
- Full Text
- View/download PDF
29. Research on the application technology of Artificial Intelligence in college physical education and training
- Author
-
Mao Min and Chen Jianxing
- Subjects
artificial intelligence ,physical education ,physical training ,68t27 ,Mathematics ,QA1-939 - Abstract
Nowadays, with the swift advancement of Artificial Intelligence (AI) technology, its applications have become widespread across diverse fields. AI is no longer a mere abstract idea but has seamlessly integrated into our daily lives, bringing numerous conveniences through its myriad benefits. One domain where AI has made significant inroads is college physical education and training. The integration of Intelligent Computer-Aided Instruction (ICAI) with computer-assisted teaching systems, AI-powered wearable devices, motion capture systems in sports training, and virtual demonstration technology for simulating athletic movements has greatly enhanced both the precision of physical education and the efficacy of physical training. AI continues to evolve rapidly, indicating vast potential for further development in its integration with physical education and training. This paper highlights the widespread adoption of AI in sports and delves into its specific applications within this domain. The findings reveal that the algorithm employed in this context excels in identifying sports movement features, outperforming the comparison algorithm by 27.65%, and precisely pinpoints the edge contours of human movement. In comparison to traditional Support Vector Machines (SVM), Convolutional Neural Networks (CNN) exhibit clear advantages during the later stages of operation, reducing errors by 36.69%. The experimental results underscore the importance of comprehensive human body detection in ensuring stable and accurate sports action tracking.
- Published
- 2024
- Full Text
- View/download PDF
30. Research on 3D Animation Scene Plane Design Based on Deep Learning
- Author
-
Liu Yanqi
- Subjects
deep learning ,3d animation scene ,convolutional neural network ,68t27 ,Mathematics ,QA1-939 - Abstract
At present, most graphic design methods for 3D animation scenes capture only a small part of the scene, and all of them work in a 3D coordinate system centered on the observer, so it is difficult to express the depth information of the scene. This project studies a plane design method for 3D animation scenes based on deep learning. A CNN (Convolutional Neural Network) is used to build a multi-view 3D animation scene generation network, and the 3D geometry and structure of objects are reconstructed from multiple images or a group of images. On this basis, a feature extraction method for 3D animation scenes is studied, and a collaborative learning model across multiple networks is established to improve the modeling accuracy of 3D animation scenes. The experimental results show that the network model outperforms methods based on multi-view and 0-1 voxel representations in retrieval performance, with an accuracy rate reaching 91.725%. The multi-view 3D animation scene generation method in this paper achieves better results than current advanced methods, which demonstrates that the proposed multi-view feature fusion network is a more reasonable way to fuse multi-view image features.
- Published
- 2024
- Full Text
- View/download PDF
31. Research on the Influence of College Ideological and Political Education on Students’ Mental Health Based on Deep Learning
- Author
-
Pan Gao and Yan Guo
- Subjects
deep learning ,ideological and political education ,mental health ,college students ,68t27 ,Mathematics ,QA1-939 - Abstract
Psychological health education is the main battlefield for promoting the mental health of college students. This article studies the impact of ideological and political education in universities on the mental health of college students, using deep learning (DL). A student emotion recognition model based on DL was established: a separable CNN (Convolutional Neural Network) detects the face in the image, and the face is located and tracked by a tracker. After normalization, the facial image is input into the separable CNN for classification. The results indicate that the regression equation explains 74.106% of the total variation. The political dimension of demographic characteristics has a significant impact on college students’ political entity identity, while gender, grade, and major have no significant impact on their Party identification. Social practice and campus culture have a significant positive impact on the political entity identity of college students. Compared with traditional machine learning-based emotion recognition methods, the classification accuracy of the proposed method is improved by 8.036%, showing that the DL-based approach has significant advantages in accuracy.
- Published
- 2024
- Full Text
- View/download PDF
32. AI-Driven Decision Support System for Green and Sustainable Urban Planning in Smart Cities
- Author
-
Xu Can
- Subjects
sustainable urban planning ,solar panel efficiency ,rainwater harvesting ,community gardens ,bike sharing stations ,environmental sustainability ,68t27 ,Mathematics ,QA1-939 - Abstract
This study focuses on innovative practices in sustainable urban planning, demonstrating significant advancements in key areas such as solar panel efficiency, rainwater harvesting capacity, community garden space, and bike-sharing station accessibility through in-depth experimentation and analysis. The research results show that the energy conversion rate of solar panels reached 25%, surpassing the market standard of 24%, which is crucial for enhancing self-sufficiency in energy in urban areas. The rainwater harvesting system performed well, achieving a capacity of 600 liters per square meter, slightly below the market rate of 650 liters, but still demonstrating significant potential in dense urban environments. Additionally, our project provided 3 square meters of community garden space per resident, exceeding the market average, effectively promoting urban greening and improving residents’ quality of life. In terms of transportation, our experimental model featured 1.5 bike-sharing stations per 1000 residents, better than the market data of 1.2 stations, contributing to the development of sustainable urban transportation. These outcomes not only showcase the potential of sustainable urban planning but also provide practical references and guidance for future urban development.
- Published
- 2024
- Full Text
- View/download PDF
33. Research on Innovation of Translation Teaching and Translation Strategies for College Students in Multimedia Background
- Author
-
Li Dan
- Subjects
multimedia ,teaching innovation ,glr analysis algorithm ,adversarial neural network ,bleu evaluation method ,68t27 ,Mathematics ,QA1-939 - Abstract
In the multimedia context, it is important to enrich teaching forms, challenge traditional teaching concepts, and innovate the education mode. This paper presents a detailed review of translation strategies for college students in the multimedia context and analyzes the traditional GLR translation-teaching analysis algorithm. To compensate for the low teaching efficiency caused by over-fitting in the traditional GLR algorithm, a Bayesian model is constructed and an adversarial neural network is built on its basis, generating a translation teaching innovation model applicable to university students. The translation teaching method is evaluated using the BLEU evaluation method. Experimental results: the correct translation rates of utterances based on both the statistical computing method and the dynamic memory algorithm reached 90%-95%. The traditional GLR algorithm achieved 95% correctness in recognizing declarative sentences, while its correctness for interrogative and exclamatory sentences was below 95%. The correct translation rate of the innovative model exceeded 97% for all statement types. The innovative translation-teaching model for college students in the multimedia context is thus simpler and faster in calculation and more practical than other translation-teaching algorithms, making it suitable for college students’ English translation work and meeting their proofreading needs.
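The BLEU metric used for the evaluation scores a candidate translation by the geometric mean of its modified n-gram precisions against a reference, times a brevity penalty. A minimal sentence-level sketch in Python (the smoothing of zero n-gram counts to a tiny value is an assumption added so short sentences do not score exactly zero):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    (clipped by reference counts) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, max_n + 1):
        c_ng, r_ng = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, r_ng[g]) for g, c in c_ng.items())  # clipped
        total = max(sum(c_ng.values()), 1)
        log_p += math.log(max(overlap, 1e-9) / total)  # smooth zero counts
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_p / max_n)
```

An exact match scores 1.0, while a shortened candidate is penalized both by the brevity penalty and by its missing higher-order n-grams, which is how BLEU separates the "above 97%" model outputs from weaker ones.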
- Published
- 2024
- Full Text
- View/download PDF
34. Application of finite element analysis in structural analysis and computer simulation
- Author
-
Zhang ZhiQiang
- Subjects
finite element analysis ,structural analysis ,computer simulation ,fitting effect ,operational efficiency ,68t27 ,Mathematics ,QA1-939 - Abstract
In today’s highly developed technological landscape, computer and Internet technology has seen a wave of innovation, and its application areas are becoming ever more extensive. Computer simulation is a direction of computer development proposed in recent years that can substantially change our way of life. To explore the role of finite element analysis in structural analysis and computer simulation, this paper uses ANSYS finite element analysis combined with structural analysis methods, verified through computer simulation of welding thermal cycles. The results show that the trend of the simulated temperature curve basically agrees with the experimentally measured curve. The absolute error curve first increases and then decreases, peaking at about 11 s, declining rapidly thereafter, then more slowly, and eventually converging to around 180-200 °C. Within a certain range, such a computer simulation can accurately reproduce the welding temperature field, which is a valuable reference for studying welding problems. Regarding simulation speed, combining finite element analysis with structural analysis reduced the running time by an average of 3.58 min and improved overall efficiency by 21.81%, showing that the FEA method can effectively reduce running time and significantly improve efficiency. In summary, finite element analysis can address common problems in structural analysis, strengthen the analysis effect, and expand the application of computer simulation technology.
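The kind of transient thermal curve described, a sharp rise at the weld point followed by decaying cooling, can be illustrated with a toy 1D simulation. This sketch uses an explicit finite-difference scheme rather than the paper's ANSYS finite-element model, and the geometry, temperatures, and step counts are invented for the example:

```python
# 1D rod with fixed 20 °C ends and a 1500 °C weld spot at the center;
# explicit update T[i] += r * (T[i-1] - 2*T[i] + T[i+1]), stable for r <= 0.5.
n, r, steps, ambient = 21, 0.25, 400, 20.0
T = [ambient] * n
T[n // 2] = 1500.0          # instantaneous weld heat input
history = []                # temperature at the weld point over time
for _ in range(steps):
    new = T[:]
    for i in range(1, n - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    new[0] = new[-1] = ambient  # Dirichlet boundary: clamped ends
    T = new
    history.append(T[n // 2])
```

The recorded `history` decays monotonically toward ambient, the same qualitative shape as the measured cooling portion of a welding thermal cycle; a finite-element treatment generalizes this to irregular meshes and 2D/3D parts.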
- Published
- 2024
- Full Text
- View/download PDF
35. Research on the optimization of the path of the agricultural logistics industry under the innovative mode of 'blockchain + supply chain finance'
- Author
-
Huang Man and Lian Jie
- Subjects
blockchain model ,supply chain financial structure ,asymmetric cryptographic algorithm ,adaboost-dpso-svm algorithm model ,dual-chain fusion development mechanism ,68t27 ,Mathematics ,QA1-939 - Abstract
To promote the development and application of “blockchain + supply chain finance” in the agricultural logistics industry, this paper takes the ECC algorithm in the blockchain model as its basis and implements an asymmetric encryption algorithm grounded in elliptic-curve mathematical theory to optimize the ECC algorithm. Combining a dynamic variational particle swarm optimization algorithm with the AdaBoost ensemble algorithm, DPSO-SVM classifiers are weighted and combined into a strong classifier, an evaluation model based on AdaBoost-DPSO-SVM is established, and the model is compared with other models. The experimental results show that a data platform based on “blockchain + supply chain finance” can effectively solve the loss of confidentiality in identifying agricultural products in the agricultural logistics industry. This also shows that “blockchain + supply chain finance” can provide a better development path for the agricultural logistics industry and can truly achieve low-cost commercial communication, i.e., multi-functional integrated development.
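The elliptic-curve primitive underlying the ECC scheme can be illustrated on a textbook toy curve. The parameters below are standard classroom values, not the paper's; a real blockchain deployment would use a standardized curve such as secp256k1 with 256-bit fields.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1).
P_MOD, A, B = 17, 2, 2
G = (5, 1)
INF = None  # point at infinity (group identity)

def inv_mod(k, p):
    return pow(k, p - 2, p)  # Fermat inverse, p prime

def point_add(P, Q):
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF  # P + (-P)
    if P == Q:
        m = (3 * x1 * x1 + A) * inv_mod(2 * y1, P_MOD) % P_MOD  # tangent slope
    else:
        m = (y2 - y1) * inv_mod(x2 - x1, P_MOD) % P_MOD         # chord slope
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, P):
    """Double-and-add: easy to compute, hard to invert (the ECC trapdoor)."""
    R = INF
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# Diffie-Hellman-style exchange: both parties derive the same shared point
# from their private scalar and the other side's public point.
alice_priv, bob_priv = 7, 11
alice_pub, bob_pub = scalar_mul(alice_priv, G), scalar_mul(bob_priv, G)
shared_a = scalar_mul(alice_priv, bob_pub)
shared_b = scalar_mul(bob_priv, alice_pub)
```

The shared point can seed a symmetric key, which is how an asymmetric ECC primitive protects confidential product-identification data on the platform without the parties ever exchanging a secret directly.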
- Published
- 2024
- Full Text
- View/download PDF
36. Analysis of risk factors for multi-drug resistant bacterial infections and prevention and control based on logistic regression analysis
- Author
-
Guo Changcheng, Jia Liping, Li Yan, Chen Xiuqin, and Yang Kun
- Subjects
logistic regression ,multilayer neural network ,convolutional network model ,multi-drug resistant bacteria ,preventive control ,68t27 ,Mathematics ,QA1-939 - Abstract
Logistic regression and neural networks have developed rapidly in recent years, and the poor diet of people in modern society has led to various diseases in whose treatment drug-resistant bacterial infections occur, so this paper applies logistic regression to analyze risk factors and support the prevention and control of multi-drug-resistant bacterial infections. A logistic regression model was established to determine the magnitude of each factor’s effect on the dependent variable from the standardized values; in the prediction stage, prevalence was recoded, with the screened indicators serving as factors and covariates. The numbers of neurons in the input and output layers are determined, and the weights are adjusted iteratively to calculate the average error rate between the actual number of cases and the predicted values. Gradient explosion and dispersion problems in the deep analysis are mitigated by selecting the maximum probability for classification. Error values are calculated with a cost function, model parameters are adjusted, errors between predicted and observed values are compared, and the weights are updated using the hidden-layer error values, thus improving the model’s accuracy in analyzing risk factors of multi-drug-resistant bacterial infections and in preventing and controlling disease deterioration. The results show that the logistic regression analysis, using the area under the ROC curve as the discriminant, yielded an AUC of 0.831 in this study; combined with the neural network model, it predicted multi-drug-resistant bacterial infections with a higher accuracy of 85.6%, identifying the potential risk of multi-drug-resistant bacteria and helping prevent aggravation of the infection.
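The AUC figure reported (0.831) has a concrete reading via the Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative one. A minimal sketch with a one-feature toy dataset invented for illustration (not the study's clinical data):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """One-feature logistic regression trained by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of log-loss w.r.t. w
            b -= lr * (p - y)
    return w, b

def auc(scores, labels):
    """AUC = P(random positive outranks random negative); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy data: x = number of risk factors present, y = infection observed.
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
scores = [sigmoid(w * x + b) for x in xs]
```

On this separable toy set the AUC is 1.0; on real clinical data overlapping risk distributions pull it down, and a value such as 0.831 indicates good but imperfect discrimination.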
- Published
- 2024
- Full Text
- View/download PDF
37. On the Application of Artificial Intelligence in Local Legislation
- Author
-
Wang Ke
- Subjects
artificial intelligence ,local legislation ,generative adversarial networks ,intelligent screening ,intelligent review ,68t27 ,Mathematics ,QA1-939 - Abstract
The expansion of local legislative authority has prompted the introduction of various local regulations, which have promoted local governance in various places. However, the formulation of local legislation suffers from problems such as singularity and fragmentation, and its informatization has not kept pace with the development of artificial intelligence. In order to study the application of artificial intelligence in local legislation, this paper applies artificial intelligence to the intelligent screening of legislative solicitations and the intelligent review of draft regulations through the study of generative adversarial networks and their optimization models. Facing legislative opinions with large amounts of data and complex text, the text recognition rate of AI reaches 98.24%, the success rate of similar opinion de-duplication is 84.69%, and the success rate of classifying opinions applying to different fields and different legal articles is 79.09%. Artificial intelligence can also filter out 71.13% of invalid opinions. In reviewing draft regulations, the success rate of artificial intelligence in judging whether it conflicts with the higher law is 83.01%, and the success rate of judging whether it conflicts with the same law is 80.64%. Artificial intelligence has a natural advantage in assisting local legislators to deal with a large amount of repetitive paperwork, which can effectively improve the efficiency of local legislation. Using artificial intelligence to assist local legislation can help local legislation make great progress and development to maintain local stability better and promote local development.
- Published
- 2024
- Full Text
- View/download PDF
38. On a plausible concept-wise multipreference semantics and its relations with self-organising maps
- Author
-
Giordano, Laura, Gliozzi, Valentina, and Dupré, Daniele Theseider
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Logic in Computer Science ,68T27 ,I.2.4 - Abstract
In this paper we describe a concept-wise multi-preference semantics for description logics which has its roots in the preferential approach for modeling defeasible reasoning in knowledge representation. We argue that this proposal, besides satisfying some desired properties, such as the KLM postulates, and avoiding the drowning problem, also defines a plausible notion of semantics. We motivate the plausibility of the concept-wise multi-preference semantics by developing a logical semantics of self-organising maps, which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation, in terms of multi-preference interpretations., Comment: 13 pages
- Published
- 2020
39. The Tactician (extended version): A Seamless, Interactive Tactic Learner and Prover for Coq
- Author
-
Blaauwbroek, Lasse, Urban, Josef, and Geuvers, Herman
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Logic in Computer Science ,68T27 ,F.4.1 - Abstract
We present Tactician, a tactic learner and prover for the Coq Proof Assistant. Tactician helps users make tactical proof decisions while they retain control over the general proof strategy. To this end, Tactician learns from previously written tactic scripts and gives users either suggestions about the next tactic to be executed or altogether takes over the burden of proof synthesis. Tactician's goal is to provide users with a seamless, interactive, and intuitive experience together with robust and adaptive proof automation. In this paper, we give an overview of Tactician from the user's point of view, regarding both day-to-day usage and issues of package dependency management while learning in the large. Finally, we give a peek into Tactician's implementation as a Coq plugin and machine learning platform., Comment: 19 pages, 2 figures. This is an extended version of a paper published in CICM-2020. For the project website, see https://coq-tactician.github.io
- Published
- 2020
- Full Text
- View/download PDF
40. A framework for step-wise explaining how to solve constraint satisfaction problems
- Author
-
Bogaerts, Bart, Gamba, Emilio, and Guns, Tias
- Subjects
Computer Science - Logic in Computer Science ,Computer Science - Artificial Intelligence ,68T27 ,F.4.1 - Abstract
We explore the problem of step-wise explaining how to solve constraint satisfaction problems, with a use case on logic grid puzzles. More specifically, we study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person. Thereby, we aim to give the constraint solver explainable agency, which can help in building trust in the solver by being able to understand and even learn from the explanations. The main challenge is that of finding a sequence of simple explanations, where each explanation should aim to be as cognitively easy as possible for a human to verify and understand. This contrasts with the arbitrary combination of facts and constraints that the solver may use when propagating. We propose the use of a cost function to quantify how simple an individual explanation of an inference step is, and identify the explanation-production problem of finding the best sequence of explanations of a CSP. Our approach is agnostic of the underlying constraint propagation mechanisms, and can provide explanations even for inference steps resulting from combinations of constraints. In case multiple constraints are involved, we also develop a mechanism that allows to break the most difficult steps up and thus gives the user the ability to zoom in on specific parts of the explanation. Our proposed algorithm iteratively constructs the explanation sequence by using an optimistic estimate of the cost function to guide the search for the best explanation at each step. Our experiments on logic grid puzzles show the feasibility of the approach in terms of the quality of the individual explanations and the resulting explanation sequences obtained.
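The greedy, cost-guided construction of an explanation sequence can be sketched as follows. The rule set and cost values are invented for illustration, and the real system works over CSP propagators with an optimistic cost estimate rather than fixed per-rule costs:

```python
# Rules: (premises, conclusion, cost); cost is a proxy for how hard the
# step is for a person to verify.
rules = [
    ({"a"}, "b", 1),
    ({"b", "c"}, "d", 3),
    ({"a", "c"}, "e", 2),
    ({"d", "e"}, "f", 5),
]

def explain_sequence(facts, rules, goal):
    """Greedily emit the cheapest applicable inference step until the goal
    is derived; returns the ordered list of (premises, conclusion, cost),
    or None if the goal is not derivable."""
    known = set(facts)
    steps = []
    while goal not in known:
        applicable = [r for r in rules
                      if r[0] <= known and r[1] not in known]
        if not applicable:
            return None
        step = min(applicable, key=lambda r: r[2])  # simplest step first
        steps.append(step)
        known.add(step[1])
    return steps

seq = explain_sequence({"a", "c"}, rules, "f")
```

Starting from facts `a` and `c`, the sketch derives `b`, then `e`, then `d`, then `f`, always preferring the cognitively cheapest available step; the paper's mechanism for "zooming in" corresponds to recursively re-explaining an expensive step with a finer rule set.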
- Published
- 2020
- Full Text
- View/download PDF
41. An ASP approach for reasoning in a concept-aware multipreferential lightweight DL
- Author
-
Giordano, Laura and Dupré, Daniele Theseider
- Subjects
Computer Science - Artificial Intelligence ,68T27 ,I.2.4 - Abstract
In this paper we develop a concept aware multi-preferential semantics for dealing with typicality in description logics, where preferences are associated with concepts, starting from a collection of ranked TBoxes containing defeasible concept inclusions. Preferences are combined to define a preferential interpretation in which defeasible inclusions can be evaluated. The construction of the concept-aware multipreference semantics is related to Brewka's framework for qualitative preferences. We exploit Answer Set Programming (in particular, asprin) to achieve defeasible reasoning under the multipreference approach for the lightweight description logic EL+bot. The paper is under consideration for acceptance in TPLP., Comment: Paper presented at the 36th International Conference on Logic Programming (ICLP 2020), University Of Calabria, Rende (CS), Italy, September 2020
- Published
- 2020
42. Structural Decompositions of Epistemic Logic Programs
- Author
-
Hecher, Markus, Morak, Michael, and Woltran, Stefan
- Subjects
Computer Science - Computational Complexity ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Data Structures and Algorithms ,68T27 ,I.2.8 ,G.2.2 ,G.2.3 ,F.4.1 - Abstract
Epistemic logic programs (ELPs) are a popular generalization of standard Answer Set Programming (ASP) providing means for reasoning over answer sets within the language. This richer formalism comes at the price of higher computational complexity reaching up to the fourth level of the polynomial hierarchy. However, in contrast to standard ASP, dedicated investigations towards tractability have not yet been undertaken. In this paper, we give first results in this direction and show that central ELP problems can be solved in linear time for ELPs exhibiting structural properties in terms of bounded treewidth. We also provide a full dynamic programming algorithm that adheres to these bounds. Finally, we show that applying treewidth to a novel dependency structure---given in terms of epistemic literals---allows bounding the number of ASP solver calls in typical ELP solving procedures.
- Published
- 2020
43. Modeling a GDPR Compliant Data Wallet Application in Prova and AspectOWL.
- Author
-
Mitsikas, Theodoros, Schäfermeier, Ralph, and Paschke, Adrian
- Abstract
We present a GDPR-compliant data privacy and access use case of a distributed data wallet and we explore its modeling using two options, AspectOWL and Prova. This use case requires a representation capable of expressing the dynamicity and interaction between parties. While both approaches provide the expressiveness of non-monotonic states and fluent state transitions, their scope and semantics are vastly different. AspectOWL is a monotonic contextualized ontology language, able to represent dynamic state transitions and knowledge retention by wrapping parts of the ontology in isolated contexts, called aspects, while Prova can handle state transitions at runtime using non-monotonic state transition semantics. We present the two implementations and we discuss the similarities, advantages, and differences of the two approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Integrated Information Theory and Isomorphic Feed-Forward Philosophical Zombies
- Author
-
Hanson, Jake R. and Walker, Sara I.
- Subjects
Computer Science - Information Theory ,68T27 - Abstract
Any theory amenable to scientific inquiry must have testable consequences. This minimal criterion is uniquely challenging for the study of consciousness, as we do not know if it is possible to confirm via observation from the outside whether or not a physical system knows what it feels like to have an inside - a challenge referred to as the "hard problem" of consciousness. To arrive at a theory of consciousness, the hard problem has motivated the development of phenomenological approaches that adopt assumptions about what properties consciousness has based on first-hand experience and, from these, derive the physical processes that give rise to these properties. A leading theory adopting this approach is Integrated Information Theory (IIT), which assumes our subjective experience is a "unified whole", subsequently yielding a requirement for physical feedback as a necessary condition for consciousness. Here, we develop a mathematical framework to assess the validity of this assumption by testing it in the context of isomorphic physical systems with and without feedback. The isomorphism allows us to isolate changes in $\Phi$ without affecting the size or functionality of the original system. Indeed, we show that the only mathematical difference between a "conscious" system with $\Phi>0$ and an isomorphic "philosophical zombie" with $\Phi=0$ is a permutation of the binary labels used to internally represent functional states. This implies $\Phi$ is sensitive to functionally arbitrary aspects of a particular labeling scheme, with no clear justification in terms of phenomenological differences. In light of this, we argue any quantitative theory of consciousness, including IIT, should be invariant under isomorphisms if it is to avoid the existence of isomorphic philosophical zombies and the epistemological problems they pose., Comment: 13 pages
- Published
- 2019
- Full Text
- View/download PDF
45. Decision-making and Fuzzy Temporal Logic
- Author
-
Nascimento, José Cláudio do
- Subjects
Computer Science - Artificial Intelligence ,Economics - Theoretical Economics ,Mathematics - Logic ,68T27 ,I.2.7 ,B.6.0 - Abstract
This paper shows that fuzzy temporal logic can model figures of thought to describe decision-making behaviors. To exemplify this, some experimentally observed economic behaviors were modeled from problems of choice involving time, uncertainty and fuzziness. Regarding time preference, it is noted that subadditive discounting is mandatory in positive-reward situations and consequently results in the magnitude effect and the time effect, where the latter exhibits stronger discounting for earlier delay periods (e.g., one hour, one day) but weaker discounting for longer delay periods (e.g., six months, one year, ten years). In addition, it is possible to explain preference reversal (a change of preference when two rewards proposed on different dates are both shifted in time). Regarding Prospect Theory, it is shown that risk seeking and risk aversion are magnitude dependent, and that risk seeking may disappear when the values to be lost are very high., Comment: 11 pages, 7 figures. This new version has a new subsection and new references
- Published
- 2019
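The preference reversal described in the abstract above can be illustrated with a small sketch. Note that the hyperbolic discount function V = A / (1 + kD) and all reward values below are illustrative assumptions for demonstration, not the paper's fuzzy-temporal-logic model:

```python
# Sketch of preference reversal under hyperbolic discounting.
# The discount function V = A / (1 + k*D) and all numbers are
# illustrative assumptions, not the paper's model.

def discounted_value(amount, delay, k=1.0):
    """Present value of `amount` received after `delay` time units."""
    return amount / (1.0 + k * delay)

# Smaller-sooner reward vs. larger-later reward.
sooner = (60, 1)   # 60 units after 1 day
later = (100, 3)   # 100 units after 3 days

# Near the present, the smaller-sooner reward is preferred...
assert discounted_value(*sooner) > discounted_value(*later)

# ...but shifting both rewards 10 days into the future reverses the choice,
# because hyperbolic discounting is steeper for short delays.
assert discounted_value(60, 11) < discounted_value(100, 13)
```

Because the discount curve flattens at long delays, adding the same front-end delay to both options changes which one has the higher present value, which is the reversal the abstract refers to.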
46. Deductive belief change.
- Author
-
Aravanis, Theofanis
- Abstract
In a 2003 article, Sven Ove Hansson discusses the justificatory structure of a belief base, highlighting that some beliefs of the belief base are held only because they are (deductively) justified by some other beliefs. He concludes that the relation between the justificatory structure of a belief base and the vulnerability of its beliefs (which in turn reflects their resistance to change) remains an open issue, both on a conceptual and on a technical level. Motivated by Hansson's remarks, we introduce in this article an interesting new type of change operation, called deductive belief change (contraction and revision), abbreviated as DBC. DBC associates in a natural manner the deductive justification that the logical sentences of the language have, in the context of a belief base B, with their vulnerability relative to B. According to DBC, the more explicit B-beliefs imply a sentence φ, the more resistant to change φ is, with respect to B. We characterize DBC both axiomatically, in terms of natural postulates, and constructively, in terms of kernel belief change, illustrating its simple and intuitive structure. Interestingly enough, as we prove, kernel belief change (and its central specialization, partial-meet belief change) already encodes a strong coupling between justificatory structure and vulnerability, as it implements DBC. Furthermore, we show that deductive belief revision, properly adapted to the belief-sets realm, is indistinguishable from Parikh's relevance-sensitive revision, a fundamental type of revision which, due to its favourable properties, constitutes a promising candidate for a variety of real-world applications. As a last contribution, we study relevance in the context of belief bases, and prove that kernel belief change respects Parikh's notion of relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
47. Customized decision tree-based approach for classification of soil on cloud environment.
- Author
-
Shastry, K. Aditya and Sanjay, H. A.
- Subjects
- *
SOIL classification , *ENVIRONMENTAL soil science , *K-nearest neighbor classification , *CROP growth , *REGRESSION trees , *SOIL texture , *SOILS - Abstract
Agriculture is the economic backbone and the main means of livelihood in numerous developing countries, and it faces numerous challenges. Cultivators face crop loss due to inappropriate selection of crops, inappropriate use of fertilizers, alterations in soil, ambiguous climatic conditions, and so on. The type of soil forms a crucial element in agriculture: the class of soil plays an important role in identifying which crop should be planted and which type of manure should be applied, and classification of soil is essential to make effective use of soil resources. Soil texture has a major impact on crop growth and plays a significant role in determining which crop to grow; it is also employed in soil labs for determining soil categories, and it helps in determining the suitability of crops and handling famines. Soil chemical properties include Electrical Conductivity (EC), Organic Carbon (OC), Phosphorous (P), Potassium (K), Power of Hydrogen (pH), Zinc (Zn), Boron (B), and Sulphur (S); crop growth is heavily influenced by the soil's chemical composition. Keeping these considerations in mind, this work develops a customised decision tree (CDT) that serves as a soil classifier (SC). A predictive framework is then devised that utilises the CDT to perform soil classification based on the texture of the soil and its chemical properties. Extensive experiments were conducted on several real-world soil datasets from Karnataka, India, and on benchmark agricultural datasets such as seeds, Urban Land Cover (ULC), Satellite Image of Land Data (LS), and Forest Cover Type (FCT).
The results demonstrated that the designed CDT classifier outperformed existing classifiers such as k-Nearest Neighbor (KNN), Logistic Regression (LR), Artificial Neural Network (ANN), Classification and Regression Trees (CART), C4.5, traditional Support Vector Machine (SVM), and Random Forest (RF) in terms of Accuracy (Acc), Sensitivity (Sens), Specificity (Spec), Precision (Prec), and F-Score (FS) on these datasets. The devised SC was deployed on the Heroku (Hk) cloud to provide end-user availability at all times. An expert system for soil classification was built to provide soil-classification information round the clock, via any internet-enabled device, to the stakeholders of agriculture, such as cultivators and agricultural organizations. The agricultural raw data was stored in the form of blob objects on Amazon S3 (AS3). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
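The abstract above describes a decision tree that classifies soil from chemical properties such as pH, EC, and OC. A minimal sketch of the kind of threshold rules such a tree encodes is shown below; the cutoff values and class labels are illustrative assumptions, not the CDT trained in the paper:

```python
# Hand-written decision rules over soil chemical properties, mimicking the
# structure of a trained decision tree. Thresholds and class labels are
# assumptions for illustration only.

def classify_soil(sample):
    """Classify a soil sample given as a dict with keys 'pH', 'EC', 'OC'."""
    if sample["pH"] < 5.5:     # strongly acidic soils
        return "acidic"
    if sample["EC"] > 4.0:     # high electrical conductivity (dS/m)
        return "saline"
    if sample["OC"] < 0.5:     # low organic carbon (%)
        return "low-fertility"
    return "normal"

print(classify_soil({"pH": 6.8, "EC": 0.9, "OC": 0.75}))  # normal
```

In practice such thresholds are learned from labeled data (the paper trains its CDT on real soil datasets); the sketch only shows the shape of the resulting decision procedure.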
48. Analogical proportions in monounary algebras
- Author
-
Antić, Christian
- Published
- 2024
- Full Text
- View/download PDF
49. Logic program proportions
- Author
-
Antić, Christian
- Published
- 2023
- Full Text
- View/download PDF
50. Linear Temporal Public Announcement Logic: A New Perspective for Reasoning About the Knowledge of Multi-classifiers.
- Author
-
Hoseinpour Dehkordi, Amirhoshang, Alizadeh, Majid, and Movaghar, Ali
- Abstract
In this paper, a formal transition-system model called Linear Temporal Public Announcement Logic (LTPAL) is presented to extract knowledge in a classification process. The model combines Public Announcement Logic (PAL) and Linear Temporal Logic (LTL). For this purpose, first, an epistemic logic model is created to capture the information gathered by classifiers from single-frame data inputs. Next, using LTL, classifiers are considered for data-stream inputs. Then, a verification method is proposed for such data streams. Finally, we formalize natural-language properties in LTPAL with a video-stream object-detection example. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
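The abstract above applies LTL to streams of classifier outputs. As a rough illustration, the standard LTL operators can be evaluated over a finite trace of labels; the labels below are hypothetical and this finite-trace sketch is not the paper's LTPAL semantics:

```python
# Finite-trace evaluation of basic LTL operators over classifier outputs.
# The trace labels are hypothetical; this sketches the idea, not LTPAL itself.

def eventually(pred, trace):
    """F pred: pred holds at some position of the trace."""
    return any(pred(state) for state in trace)

def globally(pred, trace):
    """G pred: pred holds at every position of the trace."""
    return all(pred(state) for state in trace)

def until(pred1, pred2, trace):
    """pred1 U pred2: pred1 holds at every position before some
    position where pred2 holds."""
    for state in trace:
        if pred2(state):
            return True
        if not pred1(state):
            return False
    return False

trace = ["car", "car", "pedestrian", "car"]
print(eventually(lambda s: s == "pedestrian", trace))  # True
print(globally(lambda s: s == "car", trace))           # False
print(until(lambda s: s == "car",
            lambda s: s == "pedestrian", trace))       # True
```

A verification method like the one the abstract mentions would check such temporal properties against the stream of detections produced by a classifier, frame by frame.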