15,974 results
Search Results
2. Exam paper generation based on performance prediction of student group
- Author
-
Wu, Zhengyang, primary, He, Tao, additional, Mao, Chenjie, additional, and Huang, Changqin, additional
- Published
- 2020
- Full Text
- View/download PDF
3. SHARE: Designing multiple criteria-based personalized research paper recommendation system
- Author
-
Arpita Chaudhuri, Monalisa Sarma, and Debasis Samanta
- Subjects
Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Software, Computer Science Applications, Theoretical Computer Science - Published
- 2022
4. SimCC: A novel method to consider both content and citations for computing similarity of scientific papers
- Author
-
Reyhani Hamedani, Masoud, primary, Kim, Sang-Wook, additional, and Kim, Dong-Jin, additional
- Published
- 2016
- Full Text
- View/download PDF
5. Exam paper generation based on performance prediction of student group
- Author
-
Chenjie Mao, Changqin Huang, Tao He, and Zhengyang Wu
- Subjects
Information Systems and Management, Computer science, Machine learning, Task (project management), Performance prediction, Quality (business), Student group, Artificial Intelligence, Control and Systems Engineering, Computer Science Applications, Theoretical Computer Science, Software - Abstract
Exam paper generation is an indispensable part of teaching. Existing methods focus on question extraction algorithms that require a label for each question; manual labeling is inefficient and cannot avoid label bias, and the quality of the exam papers these methods generate is not guaranteed. To address these problems, we propose a novel approach to generating exam papers based on the prediction of exam performance. We refine the quality of the initially generated questions one by one using dynamic programming, and in batches using genetic algorithms, performing the prediction task with Deep Knowledge Tracing. Our approach considers skill weight, difficulty, and the distribution of exam scores. Experimental results indicate that our approach performs better than the two baselines: it can generate exam papers whose difficulty is close to the expected level while keeping the distribution of student exam scores reasonable. In addition, our approach was evaluated in a real learning scenario and shows clear advantages.
- Published
- 2020
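The abstract above combines per-question updates (dynamic programming) with batch updates via a genetic algorithm. A minimal sketch of a batch step of that flavour, assuming a hypothetical per-question difficulty score and a target mean difficulty (the fitness function, parameters, and question ids are illustrative, not taken from the paper):

```python
import random

def fitness(paper, difficulties, target):
    # Hypothetical objective: negative distance of mean difficulty from the target.
    mean_d = sum(difficulties[q] for q in paper) / len(paper)
    return -abs(mean_d - target)

def generate_paper(difficulties, n_questions, target, pop_size=30, generations=50, seed=0):
    # Genetic search over question subsets (illustrative stand-in for the batch step).
    rng = random.Random(seed)
    pool = list(difficulties)
    pop = [rng.sample(pool, n_questions) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, difficulties, target), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Crossover: merge halves, dropping duplicates while keeping order.
            child = list(dict.fromkeys(a[: n_questions // 2] + b))[:n_questions]
            if rng.random() < 0.2:  # Mutation: swap in an unused question.
                unused = [q for q in pool if q not in child]
                if unused:
                    child[rng.randrange(n_questions)] = rng.choice(unused)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, difficulties, target))
```

On a toy pool of ten questions with difficulties 0.0–0.9 and a target of 0.5, the search settles on a four-question paper whose mean difficulty lands near the target.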
6. Bayesian sparse joint dynamic topic model with flexible lead-lag order.
- Author
-
Wang, Feifei, Zhou, Rui, Feng, Yichao, and Lu, Xiaoling
- Subjects
DYNAMIC models, CONFERENCE papers, LEAD time (Supply chain management), CORPORA - Abstract
Currently, text documents from multiple sources have become available in many fields. It is of great interest to study the relationship between documents from different sources and uncover the underlying causality. Zhu et al. (2021) proposed a joint dynamic topic model (JDTM). They classified all topics into three groups and used the "shared topics" with a fixed time lag order to characterize the shared information between two corpora. Although JDTM is a powerful tool for discovering the lead-lag relationship, there are two potential shortcomings. First, different shared topics should have distinct meanings, which should lead to different time lag orders between the two corpora. Second, for dynamic documents, not all topics are represented in each time slice, and thus topic sparsity should be considered. To address these two problems, we propose a sparse joint dynamic topic model (SJDTM) with a flexible lead-lag order. We assume a birth-and-death mechanism for all topics and a flexible lead-lag order for different shared topics. The performance of SJDTM is evaluated using both synthetic data and two real text corpora consisting of conference papers and journal papers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. A note on the paper “A multi-population harmony search algorithm with external archive for dynamic optimization problems” by Turky and Abdullah
- Author
-
Ranginkaman, Amir Ehsan, primary, Kazemi Kordestani, Javidan, additional, Rezvanian, Alireza, additional, and Meybodi, Mohammad Reza, additional
- Published
- 2014
- Full Text
- View/download PDF
8. Why are papers about filters on residuated structures (usually) trivial?
- Author
-
Víta, Martin, primary
- Published
- 2014
- Full Text
- View/download PDF
9. Using semi-structured data for assessing research paper similarity
- Author
-
Hurtado Martín, Germán, primary, Schockaert, Steven, additional, Cornelis, Chris, additional, and Naessens, Helga, additional
- Published
- 2013
- Full Text
- View/download PDF
10. Graph model for conflict resolution based on the combination of probabilistic uncertain linguistic and EDAS method.
- Author
-
Liu, Peide, Wang, Xue, Fu, Yingxin, and Wang, Peng
- Subjects
CONFLICT management, ELECTRONIC paper, GROUP decision making - Abstract
The ranking of decision makers' (DMs') preferences for feasible states in the graph model for conflict resolution (GMCR) is crucial for accurately determining stability results. Subjective ranking methods lack a theoretical foundation and become ambiguous when the number of feasible states is large; this paper addresses these issues by introducing a multi-attribute decision-making (MADM) method into the GMCR. The evaluation based on distance from average solution (EDAS) method, which measures alternatives against the average level, is used to determine each DM's preference ranking and effectively reduces the impact of anomalous evaluations. Further, the PUL-EDAS method based on probabilistic uncertain linguistics (PUL) is developed, overcoming the shortcoming of the traditional EDAS method, which applies only to simple evaluation information. PUL aligns with DMs' everyday evaluation practice by providing an interval for the quality of qualitative linguistic evaluations, and an objective aggregation method is used to calculate comprehensive evaluation information from all DMs. In addition, the four fundamental stability definitions, previously applicable only under crisp preferences, are extended to the PUL context. Finally, to ensure the scientific validity and practicality of the proposed theory, this paper selects digital rural governance as the research context for conflict analysis, comparing the approach with other MADM methods in the preference-ranking step. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
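The EDAS method referenced in the abstract above is standard in the MADM literature. A minimal sketch of its classical crisp form for benefit criteria (the paper's PUL extension is not reproduced here; the matrix and weights are illustrative):

```python
def edas_rank(matrix, weights):
    """Rank alternatives with the classical EDAS method (benefit criteria only).
    matrix[i][j] is the score of alternative i on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Average solution per criterion.
    av = [sum(row[j] for row in matrix) / m for j in range(n)]
    sp, sn = [], []
    for row in matrix:
        # Weighted positive/negative distances from the average solution.
        sp.append(sum(w * max(0.0, (x - a) / a) for x, a, w in zip(row, av, weights)))
        sn.append(sum(w * max(0.0, (a - x) / a) for x, a, w in zip(row, av, weights)))
    nsp = [s / max(sp) if max(sp) else 1.0 for s in sp]
    nsn = [1 - s / max(sn) if max(sn) else 1.0 for s in sn]
    scores = [(p + q) / 2 for p, q in zip(nsp, nsn)]
    # Indices of alternatives, best first.
    return sorted(range(m), key=lambda i: scores[i], reverse=True)
```

For a dominating alternative the method returns it first, as expected of any sane MADM ranking.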
11. Call for Papers for a Special Issue of the Information Sciences Journal on Collective Intelligence
- Published
- 2008
- Full Text
- View/download PDF
12. A note on the paper: Optimizing web servers using page rank prefetching for clustered accesses
- Published
- 2005
- Full Text
- View/download PDF
13. Call for papers special issue “Fuzzy Decision-Making Applications in Industrial Engineering”
- Published
- 2004
- Full Text
- View/download PDF
14. Call for papers special issue “Hybrid Intelligent Systems using Fuzzy Logic, Neural Networks and Genetic Algorithms”
- Published
- 2004
- Full Text
- View/download PDF
15. Call For papers: Special Issue of Information Sciences on Chance Discovery
- Published
- 2004
- Full Text
- View/download PDF
16. Call for papers
- Published
- 2004
- Full Text
- View/download PDF
17. Call for papers: Special Issue on Graph Theory and Applications
- Published
- 2004
- Full Text
- View/download PDF
18. Call for Papers
- Published
- 2003
- Full Text
- View/download PDF
19. Research issues in real-time database systems: Survey paper
- Author
-
Ulusoy, O, primary
- Published
- 1995
- Full Text
- View/download PDF
20. SimCC: A novel method to consider both content and citations for computing similarity of scientific papers
- Author
-
Masoud Reyhani Hamedani, Sang-Wook Kim, and Dong-Jin Kim
- Subjects
Scheme (programming language), Information Systems and Management, Information retrieval, Relation (database), Computer science, Computer Science Applications, Theoretical Computer Science, Weighting, Similarity (network science), Artificial Intelligence, Control and Systems Engineering, Content (measure theory), Relevance (information retrieval), Citation, Software - Abstract
To compute the similarity of scientific papers, text-based similarity measures, link-based similarity measures, and hybrid methods can be applied. The text-based and link-based similarity measures take into account only a single aspect of scientific papers, content or citations, respectively. The hybrid methods consider both content and citations; however, they do not carefully consider the relation between the content of a pair of papers involved in a citation relationship. In this paper, we propose a novel method, SimCC (similarity based on content and citations), that considers both aspects, content and citations, to compute the similarity of scientific papers. Unlike previous methods, SimCC effectively reflects both content and authority of scientific papers simultaneously in similarity computation by applying a new RA (relevance and authority) weighting scheme. Also, we propose an RA+R weighting scheme to consider the recency of papers and an RA+E weighting scheme to take into account the author expertise of papers in similarity computation. The effectiveness of our proposed method is demonstrated by extensive experiments on a real-world dataset of scientific papers. The results show that our method achieves more than 100% improvement in accuracy in comparison with previous methods.
- Published
- 2016
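SimCC's RA weighting scheme is specific to the paper above. As a generic illustration of the hybrid idea the abstract contrasts it with, a convex combination of text cosine similarity and citation Jaccard overlap might look like this (all names and the mixing weight are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity of two term-frequency dicts.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_similarity(text_a, text_b, cites_a, cites_b, alpha=0.5):
    # Generic content+citation hybrid: convex mix of text cosine and citation Jaccard.
    union = cites_a | cites_b
    jaccard = len(cites_a & cites_b) / len(union) if union else 0.0
    return alpha * cosine(text_a, text_b) + (1 - alpha) * jaccard
```

Identical papers score 1.0; papers with disjoint vocabulary and disjoint reference lists score 0.0.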
21. A short technical paper: Determining whether a vote assignment is dominated
- Author
-
Jajodia, Sushil, primary and Mutchler, David, additional
- Published
- 1991
- Full Text
- View/download PDF
22. Why are papers about filters on residuated structures (usually) trivial?
- Author
-
Martin Víta
- Subjects
Pure mathematics, Information Systems and Management, Property (philosophy), Generalization, Extension (predicate logic), Computer Science Applications, Theoretical Computer Science, Algebra, Artificial Intelligence, Control and Systems Engineering, Simple (abstract algebra), Filter (mathematics), Residuated lattice, Software, Quotient, Mathematics - Abstract
In this paper we introduce a notion of a t-filter on residuated lattices which is a generalization of several special types of filters. We provide some basic properties of t-filters and show how particular results about special types of filters (e.g. Extension property, Triple of equivalent characteristics, and Quotient characteristics) are uniformly covered by this simple general framework.
- Published
- 2014
23. Using semi-structured data for assessing research paper similarity
- Author
-
Helga Naessens, Germán Hurtado Martín, Steven Schockaert, and Chris Cornelis
- Subjects
Information Systems and Management, Information retrieval, Computer science, Latent Dirichlet allocation, Computer Science Applications, Theoretical Computer Science, Task (project management), Artificial Intelligence, Control and Systems Engineering, Explicit semantic analysis, Similarity (psychology), Vector space model, Semi-structured data, Language model, Adaptation (computer science), Software - Abstract
The task of assessing the similarity of research papers is of interest in a variety of application contexts. It is a challenging task, however, as the full text of the papers is often not available, and similarity needs to be determined based on the papers' abstract, and some additional features such as their authors, keywords, and the journals in which they were published. Our work explores several methods to exploit this information, first by using methods based on the vector space model and then by adapting language modeling techniques to this end. In the first case, in addition to a number of standard approaches we experiment with the use of a form of explicit semantic analysis. In the second case, the basic strategy we pursue is to augment the information contained in the abstract by interpolating the corresponding language model with language models for the authors, keywords and journal of the paper. This strategy is then extended by revealing the latent topic structure of the collection using an adaptation of Latent Dirichlet Allocation, in which the keywords that were provided by the authors are used to guide the process. Experimental analysis shows that a well-considered use of these techniques significantly improves the results of the standard vector space model approach.
- Published
- 2013
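The interpolation strategy described in the abstract above (mixing the abstract's language model with models for authors, keywords, and journal) can be sketched with unigram models and linear Jelinek–Mercer-style mixing. This is a simplified illustration; the smoothing details in the paper may differ:

```python
from collections import Counter

def unigram_lm(tokens):
    # Maximum-likelihood unigram model: word -> probability.
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(models, lambdas):
    # Mix several unigram models; lambdas must sum to 1.
    vocab = set().union(*models)
    return {w: sum(l * m.get(w, 0.0) for m, l in zip(models, lambdas)) for w in vocab}
```

Mixing an abstract model with a keyword model shifts probability mass toward the keywords while the result remains a valid distribution.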
24. A note on the paper 'A multi-population harmony search algorithm with external archive for dynamic optimization problems' by Turky and Abdullah
- Author
-
Mohammad Reza Meybodi, Amir Ehsan Ranginkaman, Javidan Kazemi Kordestani, and Alireza Rezvanian
- Subjects
Scheme (programming language), Information Systems and Management, Optimization problem, Point (typography), Computer science, Computer Science Applications, Theoretical Computer Science, Dynamic problem, Artificial Intelligence, Control and Systems Engineering, Multi population, Benchmark (computing), Harmony search, Software - Abstract
In a recently presented paper, Turky and Abdullah [5] proposed a novel multi-population harmony search algorithm with external archive (MHSA-ExtArchive) for dynamic optimization problems. In their experimental results, the authors claimed that their approach outperforms several state-of-the-art algorithms, and demonstrated its superiority through numerical experiments on the Moving Peaks Benchmark (MPB). Despite the interesting idea of applying a multi-population scheme to harmony search and using a new type of external archive for dealing with dynamic problems, we believe there are two important shortcomings in the result analysis, which we point out in this short note. The main motivation of the present note is to help prevent other researchers from repeating the same mistakes.
- Published
- 2014
25. Corrections to the paper “the identification of the parameters of time-invariant stochastic systems by a method derived from the continuous-time kalman filter”
- Author
-
Smith, M.W.A., primary and Roberts, A.P., additional
- Published
- 1980
- Full Text
- View/download PDF
26. Call for papers
- Published
- 1985
- Full Text
- View/download PDF
27. Errata: Corrections to two papers
- Author
-
Inoue, Katsushi, primary and Takanami, Itsuo, additional
- Published
- 1980
- Full Text
- View/download PDF
28. Some remarks on a paper by R. R. Yager
- Author
-
Klement, Erich Peter, primary
- Published
- 1982
- Full Text
- View/download PDF
29. Papers to appear in forthcoming numbers
- Published
- 1971
- Full Text
- View/download PDF
30. A note on the paper: Optimizing web servers using page rank prefetching for clustered accesses
- Author
-
Wai-Ki Ching
- Subjects
World Wide Web, Web server, Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Computer science, Page rank, Software, Computer Science Applications, Theoretical Computer Science - Abstract
In this short note, we briefly present and discuss an example of the page rank algorithm given in [Information Sciences 150 (2003) 165-176].
- Published
- 2005
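The note above concerns a PageRank example. For reference, the standard power-iteration form of PageRank (the textbook algorithm, not the prefetching scheme of the commented paper) can be sketched as:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Teleportation share for every page.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:  # Dangling page: spread its rank uniformly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On the three-page graph a→{b,c}, b→c, c→a, page c accumulates the most rank, and the ranks always sum to 1.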
31. A short technical paper: Determining whether a vote assignment is dominated
- Author
-
David Mutchler and Sushil Jajodia
- Subjects
Information Systems and Management, Operations research, Computer science, Computer Science Applications, Theoretical Computer Science, Artificial Intelligence, Control and Systems Engineering, Voting, Mutual exclusion, Meaning (existential), Mathematical economics, Software - Abstract
One way to achieve mutual exclusion in a distributed system is to assign votes to each site in the system. If the total number of votes is odd, the assignment is known to be nondominated, meaning that no other assignment can provide strictly greater access and still achieve mutual exclusion. We characterize in this note dominated even-totaled vote assignments. As a consequence, we obtain that the problem of determining whether an even-totaled vote assignment is dominated is trivial if each site is assigned exactly one vote; however, the problem is NP-complete in general.
- Published
- 1991
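A vote assignment achieves mutual exclusion because any two groups holding a majority of the votes must intersect. The following small sketch enumerates the minimal quorums of an assignment; it is illustrative only and does not reproduce the paper's domination test (enumeration is exponential, so it suits only small systems):

```python
from itertools import combinations

def quorums(votes):
    """All minimal site groups whose combined votes exceed half the total.
    votes: dict mapping site name to its (non-negative) vote count."""
    total = sum(votes.values())
    sites = list(votes)
    found = []
    for r in range(1, len(sites) + 1):
        for group in combinations(sites, r):
            if sum(votes[s] for s in group) * 2 > total:
                gs = set(group)
                # Keep only minimal quorums: skip supersets of ones already found.
                if not any(q <= gs for q in found):
                    found.append(gs)
    return found
```

With one vote per site and an odd total, the quorums are exactly the majority coalitions, and any two of them share a site, which is the mutual-exclusion property the abstract relies on.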
32. Call for papers: Special Issue on Graph Theory and Applications
- Author
-
Chung-Kung Yen and Paul P. Wang
- Subjects
Information Systems and Management ,Artificial Intelligence ,Control and Systems Engineering ,Computer science ,Management science ,Library science ,Graph theory ,Software ,Information science ,Computer Science Applications ,Theoretical Computer Science - Published
- 2004
33. On the Euler sequence spaces which include the spaces ℓp and ℓ∞ I
- Author
-
Altay, B., Başar, F., and Mursaleen, M.
- Subjects
ABSTRACTING, PAPER, SPACE, EULER characteristic - Abstract
Abstract: In the present paper, we introduce the Euler sequence space e_p^r consisting of all sequences whose Euler transforms of order r are in the space ℓp; e_p^r is the BK-space which includes the space ℓp. We prove that the spaces e_p^r and ℓp are linearly isomorphic for 1 ⩽ p ⩽ ∞. Furthermore, we give some inclusion relations concerning the space e_p^r. Finally, we determine the α-, β- and γ-duals of the space e_p^r for 1 ⩽ p ⩽ ∞ and construct a basis for the space e_p^r, where 1 ⩽ p < ∞. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
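For reference, in the Altay–Başar notation the Euler mean of order r and the associated sequence space are defined as follows (reconstructed here from standard usage in this line of work, not copied from the indexed record):

```latex
% Euler mean of order r applied to x = (x_k), and the space e_p^r built on it:
\[
  (E^r x)_n \;=\; \sum_{k=0}^{n} \binom{n}{k}\,(1-r)^{\,n-k}\, r^{k}\, x_k ,
  \qquad
  e_p^r \;=\; \Bigl\{\, x = (x_k) \;:\; \sum_{n} \bigl| (E^r x)_n \bigr|^{p} < \infty \,\Bigr\},
  \quad 1 \le p < \infty .
\]
```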
34. Some remarks on a paper by R. R. Yager
- Author
-
Erich Peter Klement
- Subjects
Pure mathematics, Information Systems and Management, Artificial Intelligence, Control and Systems Engineering, Additive function, Point (geometry), Monotonic function, Fuzzy logic, Software, Computer Science Applications, Theoretical Computer Science, Mathematics - Abstract
We show that slight technical changes in the definition transform the probability of fuzzy events introduced by R. R. Yager [16] into a new concept of such probabilities having nice properties, both from an intuitive and from a mathematical point of view: monotonicity, additivity, and continuity.
- Published
- 1982
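Klement's remarks concern Yager's definition of the probability of fuzzy events. For contrast, the baseline Zadeh definition, which already has the additivity property the note discusses, is a one-liner (an illustration, not the construction analyzed in the note):

```python
def fuzzy_event_probability(membership, prob):
    """Zadeh-style probability of a fuzzy event: P(A) = sum_x mu_A(x) * p(x).
    membership: dict x -> membership degree in [0, 1]; prob: dict x -> p(x)."""
    return sum(membership[x] * prob[x] for x in prob)
```

Additivity shows up directly: the probabilities of a fuzzy event and of its standard complement (1 − μ) sum to 1, and crisp events recover ordinary probability.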
35. Corrections to the paper 'the identification of the parameters of time-invariant stochastic systems by a method derived from the continuous-time kalman filter'
- Author
-
M.W.A. Smith and A.P. Roberts
- Subjects
Information Systems and Management, Computer science, Invariant extended Kalman filter, Computer Science Applications, Theoretical Computer Science, Extended Kalman filter, Artificial Intelligence, Control and Systems Engineering, Nonlinear filter, Control theory, Filtering problem, Fast Kalman filter, Ensemble Kalman filter, Unscented transform, Alpha beta filter, Software - Published
- 1980
36. Deep reinforce learning for joint optimization of condition-based maintenance and spare ordering.
- Author
-
Hao, Shengang, Zheng, Jun, Yang, Jie, Sun, Haipeng, Zhang, Quanxin, Zhang, Li, Jiang, Nan, and Li, Yuanzhang
- Subjects
CONDITION-based maintenance, REINFORCEMENT learning, DEEP learning, MACHINE learning, SYSTEM failures, MARKOV processes - Abstract
Condition-based maintenance (CBM) policies can avoid premature or late maintenance and reduce system failures and maintenance costs. Most existing CBM studies cannot solve the curse-of-dimensionality problem in multi-component complex systems, and only a few consider the constraint of maintenance resources when searching for the optimal maintenance policy, which makes them hard to apply in practice. This paper studies the joint optimization of the CBM policy and the spare-component inventory for multi-component systems with large state and action spaces. We model the problem as a Markov Decision Process and propose an improved deep reinforcement learning algorithm based on a stochastic policy and the actor-critic framework, in which factorization decomposes the system action into a linear combination of each component's action. The experimental results show that the proposed algorithm has better time performance and lower system cost than other benchmark algorithms: its training time is only 28.5% and 9.12% of that of the PPO and DQN algorithms, and the corresponding system cost is decreased by 17.39% and 27.95%, respectively. At the same time, our algorithm has good scalability and is suitable for solving Markov decision problems in large-scale state and action spaces. • Considering minor and major repair, we model the joint optimization of CBM and spare ordering for large multi-component systems based on MDP. • An improved DRL algorithm is presented to deal with the MDP model in large-scale discrete state and action spaces. • We validate that our DRL algorithm has good time performance and an optimal decision-making solution via comparisons with the DQN and PPO algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Simplification logic for the management of unknown information.
- Author
-
Pérez-Gámez, Francisco, Cordero, Pablo, Enciso, Manuel, and Mora, Ángel
- Subjects
HEYTING algebras, INFORMATION resources management, IMPLICATION (Logic) - Abstract
This paper aims to contribute to the extension of classical Formal Concept Analysis (FCA), allowing the management of unknown information. In a preliminary paper, we define a new kind of attribute implications to represent the knowledge from the information currently available. The whole FCA framework has to be appropriately extended to manage unknown information. This paper introduces a new logic for reasoning with this kind of implications, which belongs to the family of logics with an underlying Simplification paradigm. Specifically, we introduce a new algebra, named weak dual Heyting Algebra, that allows us to extend the Simplification logic for these new implications. To provide a solid framework, we also prove its soundness and completeness and show the advantages of the Simplification paradigm. Finally, to allow further use of this extension of FCA in applications, an algorithm for automated reasoning, which is directly built from logic, is defined. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Specification transformation method for functional program generation based on partition-recursion refinement rule.
- Author
-
Zuo, Zhengkang, Zeng, Zhicheng, Su, Wei, Huang, Qing, Ke, Yuhan, Liu, Zengxin, Wang, Changjing, and Liang, Wei
- Subjects
MULTIPLICATION, POLYNOMIALS, PROTOTYPES, ALGORITHMS, COMPUTER software - Abstract
Implementations that follow the functional programming paradigm are being used in more and more domains. Because the functional paradigm has mathematical referential transparency, refinement to functional programs improves the reliability of the transformation process and simplifies the refinement steps. However, generating functional programs from specifications is a challenge: most existing transformation methods refine specifications into abstract algorithm-level programs based on loop invariants rather than into functional programs. This paper proposes a novel functional program generation method based on the partition-recursion refinement rule, establishing for the first time a program refinement framework based on functional theory; it is the first study to regard the whole program refinement process as a composition of abstract functions. The paper designs a recurrence-based algorithm design language (Radl+) and implements a software prototype to map Radl+ algorithms into executable Haskell programs. To demonstrate the feasibility and efficiency of the method, the paper transforms the polynomial multiplication problem from a specification into an executable Haskell program; compared with existing approaches, the proposed method simplifies the transformation steps and reduces the number of lines of generated code from 38 to 10. • A novel refinement framework provides a new approach to generating functional programs. • The composition of abstract functions explains the program refinement process. • The Substitution rule and Recursion rule have no side effects. • A software prototype transforms the polynomial multiplication problem into a Haskell program. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Heterogeneous cognitive learning particle swarm optimization for large-scale optimization problems.
- Author
-
Zhang, En, Nie, Zihao, Yang, Qiang, Wang, Yiqiao, Liu, Dong, Jeon, Sang-Woon, and Zhang, Jun
- Subjects
COGNITIVE learning, PARTICLE swarm optimization, ZONE of proximal development, HOTEL suites - Abstract
Large-scale optimization problems (LSOPs) are increasingly ubiquitous yet complicated in real-world scenarios, and most existing optimizers dramatically lose their effectiveness on them. To tackle this type of problem effectively, we propose a heterogeneous cognitive learning particle swarm optimizer (HCLPSO). Unlike most existing particle swarm optimizers (PSOs), HCLPSO partitions the particles in the current swarm into two categories based on their fitness, namely superior particles (SP) and inferior particles (IP), and then treats the two categories differently. For inferior particles, this paper devises a random elite cognitive learning (RECL) strategy to update each one with a random superior particle chosen from SP. For superior particles, this paper designs a stochastic dominant cognitive learning (SDCL) strategy to evolve each one by randomly selecting a guiding exemplar from SP and updating it only when the selected exemplar is better. With the collaboration between these two learning mechanisms, HCLPSO evolves particles to explore the search space and exploit the found optimal zones appropriately. Furthermore, to help HCLPSO traverse the vast search space with a promising compromise between intensification and diversification, this paper devises a dynamic swarm partition scheme that dynamically separates particles into the two categories, so that HCLPSO gradually switches from exploring the search space to intensively exploiting the found optimal zones. Experiments on the publicly acknowledged CEC2010 and CEC2013 LSOP benchmark suites compare HCLPSO with several state-of-the-art approaches. The results reveal that HCLPSO is effective at tackling LSOPs and attains competitive or even far better optimization performance than the compared state-of-the-art large-scale methods. Furthermore, the effectiveness of each component in HCLPSO and its good scalability are also experimentally verified. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
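A loose sketch of the superior/inferior partitioning idea from the abstract above, with the less-fit half learning from randomly chosen fitter particles. This is only the partitioning flavour, not the published HCLPSO update rules; the objective, learning rate, and swarm sizes are illustrative:

```python
import random

def sphere(x):
    # Illustrative objective: minimize the sum of squares.
    return sum(v * v for v in x)

def hclpso_step(swarm, f, rng, lr=0.5):
    """One simplified update: the fitter half guides the less fit half.
    Superior particles are left untouched, so the best solution never worsens."""
    swarm.sort(key=f)
    half = len(swarm) // 2
    superior, inferior = swarm[:half], swarm[half:]
    for p in inferior:
        guide = rng.choice(superior)  # random elite, RECL-flavoured
        for i in range(len(p)):
            p[i] += lr * rng.random() * (guide[i] - p[i])
    return swarm
```

Because the superior half is never modified, the best fitness in the swarm is monotonically non-increasing across steps; that invariant is a cheap sanity check for any implementation of the partitioning idea.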
40. Evidential Markov decision-making model based on belief entropy to predict interference effects.
- Author
-
Pan, Lipeng and Gao, Xiaozhuan
- Subjects
MARKOV processes, DEMPSTER-Shafer theory, DECISION theory, ENTROPY, QUANTUM interference - Abstract
Some cognitive and decision-making experiments have demonstrated that classical decision theory may be violated. Recently, the interference effects of quantum theory have attracted strong interest in fields outside physics, and they can also be used to explain paradoxes in decision models. Existing experiments and studies attribute the existence of interference effects mainly to uncertain information in the decision process. Dempster-Shafer evidence theory extends the frame of discernment to power sets, so it can describe unknown and imprecise information. This paper proposes an evidential Markov decision-making model based on belief entropy to quantitatively predict and determine the value of interference effects. In the new model, the frame of discernment is extended by introducing hesitant or unknown states which could be hidden by participants. Moreover, the new model assumes there is no input of information at the initial states, so the initial states are maximally chaotic and are determined according to the maximum belief entropy. Finally, this paper discusses the effectiveness of the new model by comparing it with other methods on the interference effects of the decision process. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
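Belief entropy in this line of work is commonly Deng entropy (an assumption here; the paper may use a variant). Its definition over a mass function, which reduces to Shannon entropy when all focal elements are singletons, is a few lines:

```python
import math

def deng_entropy(masses):
    """Deng (belief) entropy of a mass function.
    masses: dict mapping frozenset focal elements to mass values summing to 1.
    Each term divides m(A) by (2^|A| - 1), the number of non-empty subsets of A."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in masses.items() if m > 0)
```

With all mass on singletons this is exactly Shannon entropy; with all mass on a whole two-element frame (the maximally unknown, "most chaotic" case the abstract alludes to) it evaluates to log2(3).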
41. Writer-independent signature verification; Evaluation of robotic and generative adversarial attacks.
- Author
-
Bird, Jordan J., Naser, Abdallah, and Lotfi, Ahmad
- Subjects
GENERATIVE adversarial networks, DATA augmentation, DENIAL of service attacks, ROBOTICS, CONVOLUTIONAL neural networks, FORGERY, MACHINE learning - Abstract
Forgery of a signature with the aim of deception is a serious crime, and machine learning is often employed to distinguish real from forged signatures. In this study, we present results which argue that robotic arms and generative models can overcome such systems and mount false-acceptance attacks. Convolutional neural networks and data augmentation strategies are tuned, producing a model with 87.12% accuracy for the verification of 2,640 human signatures. Two approaches are used to successfully attack the model with false acceptance of forgeries: robotic arms (Line-us and iDraw) physically copy real signatures on paper, and a conditional Generative Adversarial Network (GAN) is trained to generate signatures based on the binary classes "genuine" and "forged". All approaches fool the model; the prevalence of successful attacks is 32% for iDraw 2.0, 24% for Line-us, and 40% for the GAN. Fine-tuning with such examples shows that false acceptance is preventable: attack success is reduced by 24% for iDraw, 12% for Line-us, and 36% for the GAN. The results show distinct behaviours between human and robotic forgers, suggesting that models trained wholly on human forgeries can be attacked by robots; we therefore argue in favour of fine-tuning systems with robotic forgeries to reduce their prevalence. • Development of a computer-vision-based system for signature spoofing attack detection. • A conditional GAN can generate "real" and "fake" signatures. • Two robots can physically replicate human signatures with pen and paper. • The GAN and both robots can fool the model and mount false-acceptance attacks. • The verification model can be defended by fine-tuning on generative and robotic forgeries. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. A supervised data augmentation strategy based on random combinations of key features.
- Author
-
Ding, Yongchang, Liu, Chang, Zhu, Haifeng, and Chen, Qianjun
- Subjects
DATA augmentation, CONVOLUTIONAL neural networks, IMAGE recognition (Computer vision), ARTIFICIAL intelligence, FEATURE extraction, CLASSIFICATION - Abstract
Data augmentation strategies have always been important in machine learning and play a unique role in model performance optimization; in recent years, these techniques have become popular in the artificial intelligence field. In this paper, a new data augmentation strategy is proposed based on an interpretation algorithm for deep convolutional neural networks: new training samples are constructed by deeply exploiting key features extracted from interpretable networks. This novel supervised approach is known as Supervised Data Augmentation–Key Feature Extraction (SDA-KFE). By introducing the Neural Network Interpreter–Segmentation Recognition and Interpretation (NNI-SRI) algorithm, the strategy can balance the high accuracy and high robustness of the final model while producing a large amount of augmented data. The advantages of SDA-KFE are mainly the following. First, it is easy to implement: it is built on the lightweight NNI-SRI algorithm, so it can be easily implemented on convolutional neural networks. Second, it is widely applicable and can be applied to almost any deep convolutional network; our experiments cover both binary and multi-class image classification models. Third, SDA-KFE can rapidly construct data samples with diverse variations: under the premise of fixed classification labels for the generated samples, the distribution of the samples' feature-unit composition can be controlled. Compared with traditional data augmentation methods, SDA-KFE can steer the direction of model performance, i.e., the balance between high accuracy and robustness. The proposed approach is therefore relevant for optimizing deep convolutional neural networks, mitigating model overfitting, augmenting data types, etc., and can be regarded as a useful supplement to traditional data augmentation methods such as horizontal or vertical image flipping, cropping, color transformation, extension and rotation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
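The key-feature transplantation idea described in the abstract above can be sketched as follows. This is a hypothetical illustration only: the NNI-SRI interpreter is not reproduced here, so the binary key-feature masks it would produce are assumed to be given.

```python
import numpy as np

def sda_kfe_augment(images, masks, labels, rng=None):
    """Sketch of key-feature-based augmentation: paste the key-feature
    region of each image (flagged by an interpretability mask) onto the
    background of another randomly chosen image, keeping the donor's label."""
    rng = np.random.default_rng(rng)
    n = len(images)
    augmented, aug_labels = [], []
    for i in range(n):
        j = rng.integers(n)                    # random background donor
        new = images[j].copy()
        new[masks[i]] = images[i][masks[i]]    # transplant the key features
        augmented.append(new)
        aug_labels.append(labels[i])           # label follows the key features
    return np.stack(augmented), np.array(aug_labels)
```

Because the transplanted region carries the discriminative features, each generated sample inherits the donor's label while its background varies, which loosely mirrors the abstract's claim that the labels of generated samples are determined while the feature-unit composition can still be controlled.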
43. A user-knowledge vector space reconstruction model for the expert knowledge recommendation system.
- Author
-
Gao, Li, Liu, Yi, Chen, Qing-kui, Yang, He-yu, He, Yi-qi, and Wang, Yan
- Subjects
- *
VECTOR spaces , *RECOMMENDER systems , *INSTITUTIONAL repositories , *PROBLEM solving , *TEXTUAL criticism - Abstract
• EKRS is an intelligent research-assistance system that recommends knowledge to scholars. • EKRS is formed by mapping the two conceptual sets IR and CRD. • IR and CRD are reconstructed based on the VSM. • LRA improves the solution process and decreases the complexity of the UKVSM. The Expert Knowledge Recommendation System (EKRS) is an intelligent research-assistance system. The system was formed in 2018 by mapping two sets of conceptual spaces through the Institutional Repository (IR) and the Core Resource Dataset (CRD). The user knowledge pattern matching (UKPM) of EKRS suffers from uncertain matching of user knowledge texts, slow updating of expert knowledge, and an inability to track user knowledge accurately. To solve these problems, this paper establishes a user knowledge vector space reconstruction model (UKVSM) through the following steps. First, the text feature items of IR and CRD are reconstructed, and the depth and density correction coefficient matrix of the original semantic node of the text is calculated from the similarity of feature items in the semantic layer. Second, to improve the efficiency of exact matching in UKPM, a Lagrangian relaxation algorithm (LRA) is used to optimize the matching strategy between the two knowledge sets. Finally, a real data set is extracted from the EKRS platform, on which the proposed model and algorithm are tested, verified, and compared with other methods. Experiments show that the reconstruction model improves the accuracy of user knowledge task assignment in EKRS, while the LRA improves the efficiency of model solving. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
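The abstract's vector-space matching of user knowledge to expert knowledge can be approximated with a plain term-frequency vector space model and cosine similarity. The documents, vocabulary handling, and assignment rule below are illustrative assumptions, not the paper's reconstructed IR/CRD feature spaces or its Lagrangian relaxation step.

```python
import numpy as np
from collections import Counter

def vsm_vectors(docs):
    """Build term-frequency vectors over a shared vocabulary — a
    minimal stand-in for the reconstructed IR/CRD feature space."""
    vocab = sorted({t for d in docs for t in d.split()})
    index = {t: i for i, t in enumerate(vocab)}
    mat = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for t, c in Counter(d.split()).items():
            mat[r, index[t]] = c
    return mat, vocab

def match_users_to_experts(user_docs, expert_docs):
    """Assign each user-knowledge text to its most similar expert text."""
    mat, _ = vsm_vectors(user_docs + expert_docs)
    u, e = mat[:len(user_docs)], mat[len(user_docs):]
    norm = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
    sims = norm(u) @ norm(e).T        # cosine-similarity matrix
    return sims.argmax(axis=1)        # best expert per user
```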
44. K-DGHC: A hierarchical clustering method based on K-dominance granularity.
- Author
-
Yu, Bin, Zheng, Zijian, and Dai, Jianhua
- Subjects
- *
HIERARCHICAL clustering (Cluster analysis) , *SOCIAL dominance , *EUCLIDEAN metric , *GRANULAR computing , *RANDOM noise theory , *EUCLIDEAN distance - Abstract
Existing hierarchical clustering (HC) algorithms generally depend on metrics on Euclidean space (Euclidean, Manhattan, Chebyshev distance, etc.) to describe the similarity between objects, which orients the clustering process toward data sets with a uniform, regular distribution in Euclidean space. Although such methods can visually distinguish the cluster structure of such data, they are not effective for data sets that are densely distributed, interlaced, and complex in Euclidean space. Granular computing, a scalable, efficient, and robust methodology, generally analyzes data from the perspective of similarity and proximity. Given its advantages in extracting data information from a multi-level perspective, and in order to reduce the limitations of Euclidean-feature-based HC methods on non-Euclidean data, this paper proposes a novel HC method based on a non-Euclidean feature structure. First, we construct the similarity between objects based on K-dominance granularity and neighborhood search, taking the environmental information of data points into account from both global and local perspectives. Second, a new HC method based on the non-Euclidean feature structure is designed on top of the similarity measure constructed in this paper. Finally, comparative experiments show that our method identifies densely distributed and interlaced data sets in Euclidean space more accurately; it significantly outperforms comparison algorithms that measure similarity with various Euclidean features; and it remains robust when additional Gaussian noise is added. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
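As a rough sketch of building similarity from neighborhood information rather than raw Euclidean distance, one can use a shared-nearest-neighbour similarity and feed it to average-linkage agglomeration. This is a stand-in illustration; the paper's K-dominance granularity construction is not reproduced here.

```python
import numpy as np

def snn_similarity(X, k=3):
    """Shared-nearest-neighbour similarity: two points are similar when
    their k-nearest-neighbour sets overlap, regardless of raw distance."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbours (excl. self)
    sets = [set(row) for row in nbrs]
    n = len(X)
    S = np.array([[len(sets[i] & sets[j]) for j in range(n)] for i in range(n)])
    return S / k                               # similarity in [0, 1]

def agglomerate(S, n_clusters):
    """Average-linkage agglomerative merging driven by a similarity matrix."""
    clusters = [[i] for i in range(len(S))]
    while len(clusters) > n_clusters:
        best, pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sim = np.mean([S[i, j] for i in clusters[a] for j in clusters[b]])
                if sim > best:
                    best, pair = sim, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)         # merge the most similar pair
    return clusters
```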
45. A consensus measure-based three-way clustering method for fuzzy large group decision making.
- Author
-
Guo, Lun, Zhan, Jianming, Xu, Zeshui, and Alcantud, José Carlos R.
- Subjects
- *
GROUP decision making , *DECISION making , *TRUST - Abstract
In fuzzy large group decision making, an effective clustering method can greatly reduce the complexity of the decision process and is an important ingredient for reaching group consensus. In this paper, a novel fuzzy large group decision making method is established using three-way clustering and an adaptive exit-delegation mechanism. Traditional clustering approaches force individuals that deviate from the whole (isolated points) into some class, while other individuals (edge points) wander between two or more classes; both circumstances can lead to unstable and unreasonable clustering results. To overcome these setbacks, we propose a three-way clustering method based on the k-means algorithm. The method first applies k-means clustering to obtain an initial division of the universe of decision-makers. Then, in the spirit of three-way clustering, edge points and outliers are separated from the clustering results by examining the three-way relationships between individuals and classes. The final clustering stems from an adaptive exit-delegation mechanism, and a consensus measure-based model determines the intra-group individual weights and the inter-individual trust weights. Finally, the feasibility and effectiveness of the resulting methodology are verified by comparative analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
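The separation of edge points and outliers after an initial k-means division might be sketched as below. The ratio and distance thresholds are hypothetical parameters, and the cluster centres are assumed to come from any standard k-means run (not shown).

```python
import numpy as np

def three_way_assign(X, centers, edge_ratio=0.8, outlier_dist=3.0):
    """Three-way assignment after k-means: a point is an 'edge' point if
    its two nearest centres are nearly equidistant, an 'outlier' if even
    the nearest centre is far away, and a 'core' point otherwise."""
    X = np.asarray(X, float)
    C = np.asarray(centers, float)
    d = np.linalg.norm(X[:, None] - C[None], axis=2)  # point-to-centre distances
    order = np.sort(d, axis=1)
    status = np.full(len(X), "core", dtype=object)
    status[order[:, 0] / order[:, 1] > edge_ratio] = "edge"
    status[order[:, 0] > outlier_dist] = "outlier"
    return d.argmin(axis=1), status
```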
46. The L2 convergence of stream data mining algorithms based on probabilistic neural networks.
- Author
-
Rutkowska, Danuta, Duda, Piotr, Cao, Jinde, Rutkowski, Leszek, Byrski, Aleksander, Jaworski, Maciej, and Tao, Dacheng
- Subjects
- *
ARTIFICIAL neural networks , *DATA mining , *MATHEMATICAL proofs , *ONLINE algorithms , *ALGORITHMS , *TRACKING algorithms - Abstract
This paper concerns a new incremental approach to mining data streams. It is known that patterns in a data stream may evolve over time, and in many cases we need to track and analyze the nature of these changes. In the paper, probabilistic neural networks are considered as the basic models for tracking changes in data streams. We present globally convergent stream data mining algorithms applied to regression, classification, and density estimation in a time-varying (drifting) environment. The algorithms are derived from Parzen kernel-based probabilistic neural networks working in the online mode. For each problem, a theorem is presented ensuring the L2 convergence of the algorithm designed for tracking a drifting regression, density, or discriminant function. Illustrative examples explain in detail how to choose the bandwidth of the Parzen kernel and the learning rate of the online algorithm. The performance of all algorithms is shown in exemplary simulations. It should be noted that this paper is one of very few in the existing literature to present mathematically justified stream data mining algorithms. • The incremental version of the Generalized Regression Neural Network (IGRNN), able to track drifting regression functions. • The incremental version of the Probabilistic Neural Network (IPNN), working in non-stationary environments. • Application of IPNN to tracking drifting discriminant functions. • Mathematical proofs of the L2 convergence of all the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
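A minimal sketch of the kind of online Parzen-kernel regression estimator the abstract describes (an incremental GRNN with a decaying learning rate and a shrinking bandwidth) is given below. The specific rates n^-0.7 and n^-0.2 are illustrative choices for a drift-free stream, not the paper's tuned sequences.

```python
import numpy as np

class IncrementalGRNN:
    """Online Parzen-kernel regression on a fixed evaluation grid.
    Separate numerator (y-weighted kernel) and denominator (density)
    estimates are updated recursively; their ratio estimates f(x)."""
    def __init__(self, grid):
        self.grid = np.asarray(grid, float)
        self.num = np.zeros_like(self.grid)
        self.den = np.zeros_like(self.grid)
        self.n = 0

    def update(self, x, y):
        self.n += 1
        a = 1.0 / self.n ** 0.7   # learning rate a_n (illustrative)
        h = 1.0 / self.n ** 0.2   # shrinking bandwidth h_n (illustrative)
        k = np.exp(-0.5 * ((self.grid - x) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        self.num += a * (y * k - self.num)
        self.den += a * (k - self.den)

    def predict(self):
        return self.num / np.maximum(self.den, 1e-12)
```

Feeding the estimator a stream of (x, y) pairs and reading off `predict()` tracks the regression function on the grid; the same numerator/denominator recursion underlies density and discriminant tracking.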
47. Multiplicative consistency analysis of interval-valued fuzzy preference relations.
- Author
-
Wan, Shuping, Cheng, Xianjuan, and Dong, Jiu-Ying
- Subjects
- *
INPAINTING , *DECISION making , *COMPARATIVE studies - Abstract
Interval-valued fuzzy preference relations (IVFPRs) have been applied to many real-life decision-making problems. However, most existing definitions of consistency for IVFPRs are not invariant to the labels of the compared objects. To overcome this drawback, this paper focuses on the multiplicative consistency analysis of IVFPRs. First, a new multiplicative consistency for complete IVFPRs is proposed and proved to be robust and invariant to the labels of the compared objects. Then, acceptable incomplete IVFPRs (In-IVFPRs) are defined. To make full use of all direct and indirect evaluations of decision-makers, an algorithm is devised to estimate the missing elements of an acceptable In-IVFPR. To describe the closeness between any two complete IVFPRs comprehensively, their total deviation is defined based on the p-norm of a vector. By minimizing this total deviation, a programming model is built to determine an interval weight vector from a complete IVFPR. Subsequently, a novel decision-making method with In-IVFPRs is proposed. Lastly, three practical numerical examples and simulation-based comparative analyses further validate the practicability and advantages of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
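For the classical (Tanino) notion of multiplicative consistency, a missing element can be estimated from indirect evaluations by composing preferences through intermediate alternatives. The sketch below applies that composition to the endpoints of interval elements; it is only a stand-in for the paper's own consistency definition and completion algorithm, which differ.

```python
def tanino_estimate(r_ij, r_jk):
    """Classical multiplicative-consistency composition (Tanino):
    the value of r_ik implied by r_ij and r_jk."""
    num = r_ij * r_jk
    return num / (num + (1 - r_ij) * (1 - r_jk))

def estimate_missing_interval(R, i, k):
    """Estimate a missing interval element (lo, hi) of an IVFPR by
    averaging the compositions through every available intermediate j.
    R is an n x n list of (lo, hi) tuples with None for missing entries."""
    los, his = [], []
    for j in range(len(R)):
        if j in (i, k) or R[i][j] is None or R[j][k] is None:
            continue
        los.append(tanino_estimate(R[i][j][0], R[j][k][0]))
        his.append(tanino_estimate(R[i][j][1], R[j][k][1]))
    return (sum(los) / len(los), sum(his) / len(his))
```

For a perfectly consistent relation generated by a weight vector w, with r_ij = w_i / (w_i + w_j), the composition recovers the missing entry exactly.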
48. Finite/fixed-time practical sliding mode: An event-triggered approach.
- Author
-
Song, Feida, Wang, Leimin, Wang, Qingyi, and Wen, Shiping
- Subjects
- *
MEASUREMENT errors , *SMART structures - Abstract
This paper proposes a unified event-triggered sliding-mode control framework to attain finite/fixed-time reachability of a practical sliding-mode band. In event-triggered sliding-mode control, the practical sliding mode makes the size of the sliding-mode band depend on the event function rather than on the disturbance bound and the sampling interval, which yields better control performance. Under the unified framework of this paper, the predesigned practical sliding-mode band can be reached within a finite or fixed time, respectively, by choosing different parameters. Then, in contrast to the asymptotic convergence obtained in other investigations, ultimate finite-time stability of the controlled system can be guaranteed. In the sliding phase, by shifting the initial value of the settling-time integration from the initial state of the controlled system to the point where the sliding phase starts, a more precise estimate of the settling time is obtained that generalizes to different kinds of systems. In addition, in comparison with other results on finite-time event-triggered sliding-mode control, the signum function is subtracted from the measurement error, which eliminates the Zeno phenomenon and ensures reliable operation of the digital controller in practice. Finally, a numerical example verifies the effectiveness of the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
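A toy simulation of event-triggered sliding-mode control on a scalar system illustrates the basic mechanism of reaching and then staying inside a practical sliding-mode band. The system, gain, disturbance, and triggering threshold below are invented for illustration and do not reproduce the paper's controller or its Zeno-exclusion construction.

```python
import numpy as np

def simulate(T=2000, dt=0.001, k=2.0, eps=0.05, d=0.5):
    """Event-triggered SMC on dx/dt = u + d(t) with sliding variable
    s = x: the control is recomputed only when the state drifts eps
    away from its last sampled value (the event-triggering condition)."""
    x, x_k = 2.0, 2.0            # state and last sampled state
    events = 0
    for t in range(T):
        if abs(x - x_k) >= eps:  # event: resample the state
            x_k = x
            events += 1
        u = -k * np.sign(x_k)    # control held between events
        x += dt * (u + d * np.sin(0.01 * t))   # Euler step with disturbance
    return x, events
```

With the gain dominating the disturbance bound (k > d), the state reaches a band around s = 0 whose width is governed by the event threshold eps, while control updates occur only at a finite number of events rather than at every sampling instant.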
49. Three-way decision for probabilistic linguistic conflict analysis via compounded risk preference.
- Author
-
Wang, Tianxing, Huang, Bing, Li, Huaxiong, Liu, Dun, and Yu, Hong
- Subjects
- *
LINGUISTIC analysis , *PSYCHOLOGICAL factors , *DECISION theory , *GRANULAR computing , *PROSPECT theory , *COMPUTER software development , *REGRET - Abstract
Three-way decision, an essential tool in granular computing research, provides an efficient solution to complex and uncertain problems, while behavioral decision theory can effectively analyze the risk preferences of decision-makers. Scholars have made preliminary explorations of fusing these two theories, but it remains challenging to describe the different types of risk preference a decision-maker may hold. This paper therefore combines prospect theory with regret theory and studies compounded risk preference modeling for three-way decision. Because the three attitudes toward a conflict coincide with the three decision regions, many scholars have conducted multi-dimensional research on three-way conflict analysis with remarkable results; however, few studies consider the psychological factors and risk attitudes of decision-makers, and agents' attitudes on issues are more naturally described with linguistic terms. This paper applies the proposed three-way decision model, based on compounded risk preference and probabilistic linguistic term sets, to the conflict analysis problem. Examples illustrate the decision-making process of the proposed model and of the three-way conflict analysis method under the influence of the compounded risk preference, as governed by the reference point and the regret-avoidance coefficient. An illustrative example shows that the proposed model can effectively solve a software-development conflict analysis problem for different decision-makers, and a comparative analysis shows its advantages over two existing methods. Finally, parameter experiments on UCI data sets, varying the reference point from 10 to 0 and the regret-avoidance coefficient over 0, 0.15, and 0.3, demonstrate the trend behaviour of the model's thresholds and delay-decision rate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
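The two behavioural ingredients the abstract combines can be written down in their classical forms: the Tversky–Kahneman prospect-theory value function and an exponential regret–rejoice term. The compounding rule below is a simple illustrative sum, not the paper's model, and the default parameters are the standard Tversky–Kahneman estimates.

```python
import math

def prospect_value(x, ref=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x relative to reference point
    ref: concave for gains, convex and loss-averse (lam > 1) for losses."""
    d = x - ref
    return d ** alpha if d >= 0 else -lam * (-d) ** beta

def regret_rejoice(x, y, delta=0.3):
    """Exponential regret-rejoice term comparing obtained utility x with
    foregone utility y: negative (regret) when x < y."""
    return 1.0 - math.exp(-delta * (x - y))

def compounded_utility(x, foregone, ref=0.0):
    """Illustrative compounded value: prospect value of x plus the
    regret-rejoice term against the best foregone alternative."""
    vx = prospect_value(x, ref)
    vy = prospect_value(foregone, ref)
    return vx + regret_rejoice(vx, vy)
```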
50. Fed-ESD: Federated learning for efficient epileptic seizure detection in the fog-assisted internet of medical things.
- Author
-
Ding, Weiping, Abdel-Basset, Mohamed, Hawash, Hossam, Abdel-Razek, Sara, and Liu, Chuansheng
- Subjects
- *
EPILEPSY , *INTERNET of things , *PRIVACY , *ENERGY industries , *INTERNET privacy , *SUDDEN death , *ELECTROENCEPHALOGRAPHY , *LEARNING - Abstract
• This paper presents a lightweight and efficient spatial–temporal transformer network that learns collaboratively and efficiently to detect epileptic seizures. • A hierarchical FL framework is introduced to enable resource-efficient training of the detection network. • The proposed Fed-ESD mitigates the risk of a single point of failure by alleviating reliance on a centralized authority. Epilepsy is a predominant paroxysmal neurological disturbance, usually recognized by the incidence of spontaneous seizures. Automatic detection of epileptic seizures from electroencephalogram (EEG) signals is viewed as an effective means of diagnosing patients on the Internet of Medical Things (IoMT). To build a robust detection service in an IoMT environment, EEG signals are conventionally collected from geographically distributed patients to a centralized server. However, this exposes patients' privacy and adds energy and communication costs, and the central server is itself subject to malevolent attacks, leading to inefficient solutions. In this regard, for the first time, this paper presents a privacy-preserving federated learning framework for epileptic seizure detection (Fed-ESD) from EEG signals in the fog-computing-based IoMT. A lightweight and efficient spatiotemporal transformer network is introduced to learn spatial and temporal representations collaboratively from each participant's local data. Fed-ESD employs geographically situated fog nodes as local aggregators, enabling the sharing of location-based EEG signals for comparable IoMT applications. Moreover, a greedy method selects the fog node best suited to act as the coordinator responsible for global aggregation during training, decreasing the reliance on a central server in the IoMT. Experimental evaluations demonstrate the efficiency of the proposed Fed-ESD in terms of detection performance, resource efficiency, stability, and scalability for deployment in the IoMT. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
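The aggregation side of such a framework can be sketched with a FedAvg-style sample-size-weighted parameter average plus a greedy coordinator choice among fog nodes. The latency-per-capacity score is a hypothetical criterion, since the abstract does not specify the paper's greedy rule.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-size-weighted average of client model parameter vectors,
    as performed by the coordinating aggregator in each round."""
    sizes = np.asarray(client_sizes, float)
    w = sizes / sizes.sum()
    return sum(wi * np.asarray(cw) for wi, cw in zip(w, client_weights))

def pick_coordinator(latency, capacity):
    """Greedy choice of the fog node used as the global aggregator:
    here, the node minimising latency per unit capacity — a
    hypothetical score standing in for the paper's criterion."""
    scores = np.asarray(latency, float) / np.asarray(capacity, float)
    return int(scores.argmin())
```

In a hierarchical deployment, each fog node would first aggregate its local clients with `fedavg`, and the node selected by `pick_coordinator` would then aggregate the fog-level models, removing the need for a fixed central server.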