18 results for "97r40"
Search Results
2. Mathematical Methods of Randomized Machine Learning.
- Author: Popkov, Yu. S.
- Subjects: MACHINE learning
- Abstract: In this paper, a review of mathematical methods of randomized machine learning is presented. [ABSTRACT FROM AUTHOR]
- Published: 2021
3. Prediction of COVID-19 corona virus pandemic based on time series data using support vector machine.
- Authors: Singh, Vijander, Poonia, Ramesh Chandra, Kumar, Sandeep, Dass, Pranav, Agarwal, Pankaj, Bhatnagar, Vaibhav, and Raja, Linesh
- Subjects: COVID-19 pandemic; PANDEMICS; COVID-19; SUPPORT vector machines; SARS disease; TIME series analysis; VIRUS diseases
- Abstract: Predicting the probability of a corona virus outbreak has been studied recently, but the published literature seldom contains multiple model comparisons or predictive analysis of uncertainty. Time series parameters are the core factors influencing infectious diseases such as severe acute respiratory syndrome (SARS) and influenza. As a global pandemic is imminent, real-time prediction of COVID-19 transmission is crucial. The objective of this paper is to produce real-time forecasts using the SVM model. The purpose of this study is to investigate the prediction of confirmed, deceased and recovered cases of Corona Virus Disease 2019 (COVID-19). This prediction will help to plan resources, determine government policy, provide survivors with immunity passports, and use survivors' plasma for care. In this analysis, data including attributes such as location-wise confirmed, deceased and recovered COVID-19 cases, longitude and latitude were collected worldwide from January 22, 2020 to April 25, 2020. A Support Vector Machine was used to explore the impact on confirmed, deceased, and recovered counts. [ABSTRACT FROM AUTHOR]
- Published: 2020
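As a rough illustration of the forecasting setup this abstract describes (not the authors' code; the data are synthetic and a plain least-squares fit stands in for the paper's SVM regressor):

```python
# Framing a cumulative COVID-19 case series as supervised regression on
# lagged windows, then producing a one-step-ahead forecast.
import numpy as np

def make_lagged(series, n_lags=3):
    """Turn a 1-D series into (X, y) pairs of lagged windows and next values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

# Synthetic cumulative case counts (monotone, roughly logistic growth).
t = np.arange(60)
cases = 10000 / (1 + np.exp(-(t - 30) / 5))

X, y = make_lagged(cases, n_lags=3)
# Fit y ~ X w + b by least squares (the SVM would be swapped in here).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# One-step-ahead forecast from the last observed window.
window = cases[-3:]
forecast = float(window @ coef[:-1] + coef[-1])
print(round(forecast, 1))
```

The same lagged-window framing is what lets a kernel machine such as an SVM be applied to a time series at all; only the regressor changes.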
4. Blending under deconstruction: The roles of logic, ontology, and cognition in computational concept invention.
- Authors: Confalonieri, Roberto and Kutz, Oliver
- Abstract: The cognitive-linguistic theory of conceptual blending was introduced by Fauconnier and Turner in the late 90s to provide a descriptive model and foundational approach for the (almost uniquely) human ability to invent new concepts. Whilst blending is often described as 'fluid' and 'effortless' when ascribed to humans, it becomes a highly complex, multi-paradigm problem in Artificial Intelligence. This paper aims at presenting a coherent computational narrative, focusing on how one may derive a formal reconstruction of conceptual blending from a deconstruction of the human ability of concept invention into some of its core components. It thus focuses on presenting the key facets that a computational framework for concept invention should possess. A central theme in our narrative is the notion of refinement, understood as ways of specialising or generalising concepts, an idea that can be seen as providing conceptual uniformity to a number of theoretical constructs as well as implementation efforts underlying computational versions of conceptual blending. Particular elements underlying our reconstruction effort include ontologies and ontology-based reasoning, image schema theory, spatio-temporal reasoning, abstract specification, social choice theory, and axiom pinpointing. We overview and analyse adopted solutions and then focus on open perspectives that address two core problems in computational approaches to conceptual blending: searching for the shared semantic structure between concepts (the so-called generic space in conceptual blending) and concept evaluation, i.e., determining the value of newly found blends. [ABSTRACT FROM AUTHOR]
- Published: 2020
5. A paraconsistent approach to actions in informationally complex environments.
- Authors: Białek, Łukasz, Dunin-Kęplicz, Barbara, and Szałas, Andrzej
- Abstract: Contemporary systems situated in real-world open environments frequently have to cope with incomplete and inconsistent information, which typically increases the complexity of reasoning and decision processes. Realistic modeling of such informationally complex environments calls for nuanced tools. In particular, incomplete and inconsistent information should neither trivialize nor halt reasoning and planning. The paper introduces ACTLOG, a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner. ACTLOG is an extension of 4QLBel, a language for reasoning with paraconsistent belief bases. Each belief base stores multiple world representations; in this context, an ACTLOG action may be seen as a belief-base transformer. In contrast to other approaches, ACTLOG actions can be executed even when the underlying belief base content is inconsistent and/or partial. ACTLOG provides nuanced action specification tools, allowing for a subtle interplay among various forms of nonmonotonic, paraconsistent, paracomplete and doxastic reasoning methods applicable in informationally complex environments. Despite its rich modeling possibilities, it remains tractable. ACTLOG supports composite actions through sequential and parallel composition as well as conditional specifications. The framework is illustrated on a decontamination case study known from the literature. [ABSTRACT FROM AUTHOR]
- Published: 2019
6. Digital forensics and investigations meet artificial intelligence.
- Authors: Costantini, Stefania, De Gasperis, Giovanni, and Olivieri, Raffaele
- Abstract: In the frame of Digital Forensics (DF) and Digital Investigations (DI), the "Evidence Analysis" phase aims to provide objective data and to perform suitable elaboration of these data so as to help in the formation of possible hypotheses, which could later be presented as elements of proof in court. The aim of our research is to explore the applicability of Artificial Intelligence (AI), along with computational logic tools and in particular the Answer Set Programming (ASP) approach, to the automation of evidence analysis. We show how significant complex investigations, hardly solvable for human experts, can be expressed as optimization problems belonging in many cases to the ℙ or ℕℙ complexity classes. All these problems can be expressed in ASP. As a proof of concept, we present the formalization of realistic investigative cases via simple ASP programs and show how such a methodology can lead to the formulation of tangible investigative hypotheses. We also sketch a design for a feasible Decision Support System (DSS) meant especially for investigators, based on artificial intelligence tools. [ABSTRACT FROM AUTHOR]
- Published: 2019
7. Pseudonym generation using genetic algorithm in vehicular ad hoc networks.
- Authors: Chaudhary, Bhawna and Singh, Karan
- Subjects: ANONYMS & pseudonyms; GENETIC algorithms; AD hoc computer networks; GENETIC techniques; INFORMATION networks; INFORMATION sharing; VEHICULAR ad hoc networks
- Abstract: To achieve anonymity and preserve vehicle information in vehicular ad hoc networks (VANETs), a novel pseudonym assignment approach is proposed. A VANET is a dynamic and complex network whose key characteristic is the demand for accurate information exchange between communicating nodes (vehicles) moving in the network. An effective pseudonym generation algorithm should therefore adapt efficiently to dynamic changes and contribute to preserving privacy. The crossover technique can help to minimize the overhead of pseudonyms and reduce the chance of hazards caused by information leaks in the network. In this paper, we formulate the pseudonym assignment (PAP) problem in vehicular ad hoc networks and propose an algorithm that generates pseudonyms using the crossover technique of a Genetic Algorithm (GA): an initial pair of pseudonyms is crossed to generate a new set of pseudonyms, which are assigned to vehicles at different time intervals. Experimental analysis computing the variation between the pseudonyms shows that the proposed solution works well for the PAP, and evaluations demonstrate the effectiveness of the algorithm through the relatively changing values of the anonymous pseudonym set. [ABSTRACT FROM AUTHOR]
- Published: 2019
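The core GA operator this abstract relies on can be sketched as follows (a hypothetical illustration, not the paper's algorithm; bit-string length and crossover points are invented):

```python
# Generating new pseudonyms from an initial pair of bit-string pseudonyms
# via single-point crossover, the basic Genetic Algorithm operator.
import random

def crossover(p1: str, p2: str, point: int) -> tuple:
    """Single-point crossover of two equal-length bit-string pseudonyms."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

random.seed(7)
parent_a = format(random.getrandbits(32), "032b")
parent_b = format(random.getrandbits(32), "032b")

# Cross the initial pair at several points to build a pseudonym pool.
pool = set()
for point in range(4, 32, 4):
    pool.update(crossover(parent_a, parent_b, point))

print(len(pool), "pseudonyms generated")
```

Varying the crossover point is what yields a whole set of distinct pseudonyms from a single parent pair, which can then be handed out to vehicles over time.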
8. Deep learning with perspective modeling for early detection of malignancy in mammograms.
- Authors: Kumar, Ashok, Mukherjee, Saurabh, and Luhach, Ashish Kr.
- Subjects: LUNG cancer; DEEP learning; MAMMOGRAMS; SOFT computing; BREAST imaging; BREAST cancer
- Abstract: Malignancies in the human body are named after the body part they invade, such as lung cancer when a malignancy invades the lungs. The growth of a cell is controlled by its centroid, and growth becomes uncontrolled when the cell behaves abnormally. The objective of this work is to deliver a classification system that can classify breast images as benign or malignant and, if malignant, further classify the type of malignancy as non-invasive or invasive cancer. The model can also prescribe treatment for the predicted malignant class, with details such as the time required, degree of seriousness, and probability of cure by the chosen treatment, because the treatment of breast cancer depends on the type and stage of malignancy. The goal is to achieve accuracy suitable for clinical usage by deploying advances in soft computing and image analysis, such as deep learning and deep neural networks, to detect breast cancer in mammograms at an early stage and thereby decrease breast cancer deaths. [ABSTRACT FROM AUTHOR]
- Published: 2019
9. Relative deviation learning bounds and generalization with unbounded loss functions.
- Authors: Cortes, Corinna, Greenberg, Spencer, and Mohri, Mehryar
- Abstract: We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. We then illustrate how to apply these results in a sample application: the analysis of importance weighting. [ABSTRACT FROM AUTHOR]
- Published: 2019
10. Association rule mining for the usability of the CAPTCHA interfaces: a new study of multimedia systems.
- Authors: Brodić, Darko and Amelio, Alessia
- Subjects: CAPTCHA (Challenge-response test); MACHINE learning; HUMAN-computer interaction; ARTIFICIAL intelligence; COMPUTER algorithms
- Abstract: This paper presents an analysis of CAPTCHA interfaces in terms of their usability for Internet users. Usability is represented by the time users need to find a solution to the CAPTCHA, called the response time. Specifically, the analysis focuses on four examples of text- and image-based CAPTCHAs. The aim is to study the cognitive factors influencing Internet users in finding a solution to these four CAPTCHA types. Accordingly, an experiment is conducted on 100 Internet users, characterized by demographic factors such as age, gender, Internet experience, and education level. Each user is asked to solve the four CAPTCHA types, and the response time for each is recorded. The collected data, including demographic factors and response times, are subjected to association rule mining, using the FP-Growth algorithm to extract the association rules. The rules show the dependence of the response time on the co-occurrence of the demographic factors. Additionally, a statistical analysis is performed using the nonparametric one-way Kruskal-Wallis test. Experiments comparing the proposed method with earlier studies of CAPTCHA usability show the novelty of the method for understanding the usability of CAPTCHA interfaces, based on the cognitive factors that influence the response time. [ABSTRACT FROM AUTHOR]
- Published: 2018
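The rule-mining step this abstract describes can be illustrated with plain support/confidence counting over one-hot records (a minimal sketch with invented data; the paper itself uses FP-Growth, which computes the same frequent itemsets far more efficiently):

```python
# Extracting association rules {demographic factors} -> response-time class
# by brute-force support and confidence counting.
from itertools import combinations

# Each record: demographic factors plus a discretized response-time label.
records = [
    {"age<30", "expert", "fast"},
    {"age<30", "expert", "fast"},
    {"age<30", "novice", "slow"},
    {"age>=30", "novice", "slow"},
    {"age>=30", "novice", "slow"},
    {"age>=30", "expert", "fast"},
]

def support(itemset):
    return sum(itemset <= r for r in records) / len(records)

min_support, min_conf = 0.3, 0.8
items = sorted({i for r in records for i in r} - {"fast", "slow"})
rules = []
for k in (1, 2):
    for lhs in combinations(items, k):
        for rhs in ("fast", "slow"):
            s = support(set(lhs) | {rhs})
            body = support(set(lhs))
            if body and s >= min_support and s / body >= min_conf:
                rules.append((lhs, rhs, s, s / body))

for lhs, rhs, s, c in rules:
    print(f"{set(lhs)} -> {rhs} (support={s:.2f}, confidence={c:.2f})")
```

Each surviving rule links a co-occurrence of demographic factors to a response-time class, which is exactly the dependence structure the study reports.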
11. A machine learning approach for the identification of the Lattice Discrete Particle Model parameters.
- Authors: Alnaggar, Mohammed and Bhanot, Naina
- Subjects: CONCRETE; COMPOSITE materials; HETEROGENEITY; COALESCENCE (Chemistry); MACHINE learning
- Abstract: Concrete is a composite material that is governed by complex constitutive behavior under various loading and environmental conditions. Only comprehensive computational models can represent such behavior and capture the effects of heterogeneity, crack coalescence and damage localization. Such models are usually governed by a large set of parameters that require, correspondingly, multiple experimental tests for their proper calibration. In many experimental campaigns, not all of the needed tests are performed. In this case, the uniqueness of the calibration results cannot be guaranteed. In this research, a Machine Learning (ML) approach is proposed to solve this problem by predicting the unknown characteristics of the concrete based on a statistical interpolation of large concrete testing databases and by using these interpolated data to identify the model parameters. The ML framework is demonstrated using the Lattice Discrete Particle Model (LDPM), which is a comprehensive concrete model that successfully replicates concrete behavior under multi-axial stresses in both static and dynamic loading conditions. The ML approach consists of an initial training of an Artificial Neural Network (ANN) to reverse engineer LDPM using pilot concrete data that represent common concrete properties. Next, an adaptive updating technique is implemented to improve the parameter identification capabilities and to allow continuous learning. The paper discusses multiple validations performed using both original and updated ANNs. The results show the excellent parameter identification capabilities of the framework and its ability to adaptively update and improve its predictions. Additionally, the proposed ML approach improves the convergence, accuracy and speed of other parameter identification methods, such as the nonlinear least square method, when used to provide the initial guess values of the parameters to be identified. [ABSTRACT FROM AUTHOR]
- Published: 2018
12. Conformal decision-tree approach to instance transfer.
- Authors: Zhou, S., Smirnov, E., Schoenmakers, G., and Peeters, R.
- Abstract: Instance transfer for classification aims at boosting the generalization performance of classification models for a target domain by exploiting data from a relevant source domain. Most instance-transfer approaches assume that the source data is relevant to the target data for the complete set of features used to represent the data. This assumption fails if the target data and source data are relevant only for strict subsets of the input features, which we call 'partially input-feature relevant'. In this case these approaches may result in sub-optimal classification models or even in negative transfer. This paper proposes a new decision-tree approach to instance transfer when the source data are partially input-feature relevant to the target data. The approach selects input features for tree nodes using univariate transfer of source instances. The instance transfer is guided by a conformal test for source relevance estimation. Experimental results on real-world data sets demonstrate that the new decision-tree approach is capable of outperforming existing instance-transfer approaches, especially when the source data are partially input-feature relevant to the target data. [ABSTRACT FROM AUTHOR]
- Published: 2017
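The "conformal test for source relevance" the abstract mentions can be sketched as a conformal p-value (a hypothetical illustration; the nonconformity score and data are invented, not taken from the paper):

```python
# Conformal-style relevance test: a source instance gets a p-value measuring
# how well it conforms to a target sample. The nonconformity score here is
# the absolute distance to the target mean; a low p-value flags an instance
# that should not be transferred.
import statistics

def conformal_p_value(target_values, candidate):
    """p = (#{target scores >= candidate score} + 1) / (n + 1)."""
    mean = statistics.fmean(target_values)
    scores = [abs(v - mean) for v in target_values]
    cand_score = abs(candidate - mean)
    return (sum(s >= cand_score for s in scores) + 1) / (len(scores) + 1)

target = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95]
p_near = conformal_p_value(target, 1.0)   # candidate near the target sample
p_far = conformal_p_value(target, 5.0)    # clear outlier
print(p_near, p_far)
```

Thresholding such p-values gives a principled accept/reject decision for each source instance before it is used to grow the tree.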
13. Semi-supervised learning with regularized Laplacian.
- Authors: Avrachenkov, K., Chebotarev, P., and Mishenin, A.
- Subjects: LAPLACIAN matrices; MATHEMATICAL regularization; GRAPH theory; LINEAR algebra; MATHEMATICAL optimization
- Abstract: We study a semi-supervised learning method based on the similarity graph and the regularized Laplacian. We give a convenient optimization formulation of the regularized Laplacian method and establish its various properties. In particular, we show that the kernel of the method can be interpreted in terms of discrete and continuous-time random walks and possesses several important properties of proximity measures. Both optimization and linear algebra methods can be used for efficient computation of the classification functions. We demonstrate on numerical examples that the regularized Laplacian method is robust with respect to the choice of the regularization parameter and outperforms the Laplacian-based heat kernel methods. [ABSTRACT FROM AUTHOR]
- Published: 2017
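A minimal sketch of regularized-Laplacian semi-supervised classification (the graph, labels, and regularization parameter below are invented for illustration, not taken from the paper's experiments):

```python
# Semi-supervised classification on a similarity graph: scores f solve
# (I + beta * L) f = y, where L is the graph Laplacian and y holds the
# known labels (+1/-1) with 0 for unlabeled nodes.
import numpy as np

# Similarity graph: two triangle clusters {0,1,2} and {3,4,5}, weakly bridged.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1          # weak bridge between the clusters

L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
beta = 1.0                       # regularization parameter

y = np.zeros(6)
y[0], y[5] = +1.0, -1.0          # one labeled node per cluster

f = np.linalg.solve(np.eye(6) + beta * L, y)
labels = np.sign(f)
print(labels)
```

The two labeled nodes are enough: the kernel (I + βL)⁻¹ diffuses each label across its densely connected cluster while the weak bridge keeps the clusters apart.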
14. On the understanding of profiles by means of post-processing techniques: an application to financial assets.
- Authors: Gibert, Karina and Conti, Dante
- Subjects: STOCK exchanges; DATA mining; FINANCIAL databases; ASSETS (Accounting); DECISION making
- Abstract: In recent years, mining financial data has taken on remarkable importance as a complement to classical techniques. Knowledge Discovery in Databases provides a framework to support analysis and decision-making regarding complex phenomena. Here, clustering is used to mine financial patterns from Venezuelan Stock Exchange assets (Bolsa de Valores de Caracas) and two major indexes related to that market: the Dow Jones (USA) and BOVESPA (Brazil). From a practical point of view, understanding the clusters is crucial to support further decision-making. Only a few works have addressed bridging the existing gap between raw data mining (DM) results and effective decision-making. The traffic lights panel (TLP) is proposed as a post-processing tool for this purpose. A comparison with other DM techniques popular in financial data, such as association rule mining, is discussed. The information learned with the TLP improves the quality of predictive modelling when the knowledge discovered in the TLP is used in a multiplicative model including interactions. [ABSTRACT FROM AUTHOR]
- Published: 2016
15. Mixed intelligent-multivariate missing imputation.
- Author: Gibert, K.
- Subjects: MISSING data (Statistics); STATISTICAL matching; PROBABILITY theory; COMPUTATIONAL mathematics; MULTIVARIATE analysis; DISTRIBUTION (Probability theory)
- Abstract: In real applications, high rates of missing data are often found and have to be preprocessed before the analysis. The literature on missing imputation is abundant. However, the most precise imputation methods require a long time, and sometimes specific software; this implies a significant delay in obtaining final results. The Mixed Intelligent-Multivariate Missing Imputation (MIMMI) method is proposed as a hybrid missing-imputation methodology based on clustering. MIMMI is a non-parametric method that combines prior expert knowledge with multivariate analysis without requiring assumptions on the probabilistic models of the variables (normality, exponentiality, etc.). The proposed imputation values implicitly take into account the joint distribution of all variables and can be determined in a relatively short time. MIMMI uses the conditional mean according to the self-underlying structure of the data set. It provides a good trade-off between accuracy and both simplicity and the time required for data preparation. The mechanics of the method are illustrated with several case studies, both synthetic and real applications related to human behaviour. In both cases, results of acceptable quality were obtained in a short time. [ABSTRACT FROM AUTHOR]
- Published: 2014
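The key idea, imputing with a conditional mean tied to the data's group structure rather than a global mean, can be sketched as follows (a much-simplified hypothetical illustration, not MIMMI itself; here the "clusters" are just given group labels):

```python
# Cluster-conditional mean imputation: a missing value is filled with the
# mean of its own group, so the fill-in respects the underlying structure.
import statistics

# Rows: (group, value); None marks a missing value to impute.
rows = [
    ("low", 1.0), ("low", 1.2), ("low", None),
    ("high", 10.0), ("high", 10.4), ("high", None),
]

def impute(rows):
    means = {
        g: statistics.fmean(v for grp, v in rows if grp == g and v is not None)
        for g in {g for g, _ in rows}
    }
    return [(g, means[g] if v is None else v) for g, v in rows]

completed = impute(rows)
print(completed)
```

A global-mean fill would have produced roughly 5.65 for both gaps; conditioning on the group keeps each imputed value plausible within its cluster.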
16. Mining closed patterns in relational, graph and network data.
- Authors: Garriga, Gemma, Khardon, Roni, and De Raedt, Luc
- Subjects: ALGORITHMS; LOWEST common multiple; MATHEMATICAL proofs; GRAPH algorithms; SEMANTIC differential scale; MATHEMATICAL programming
- Abstract: Recent theoretical insights have led to the introduction of efficient algorithms for mining closed item-sets. This paper investigates potential generalizations of this paradigm to mine closed patterns in relational, graph and network databases. Several semantics and associated definitions for closed patterns in relational data have been introduced in previous work, but the differences among these and the implications of the choice of semantics were not clear. The paper investigates these implications in the context of generalizing the LCM algorithm, an algorithm for enumerating closed item-sets. LCM is attractive since its run time is linear in the number of closed patterns and since it does not need to store the patterns output in order to avoid duplicates, further reducing its memory footprint and run time. Our investigation shows that the choice of semantics has a dramatic effect on the properties of closed patterns and, as a result, in some settings a generalization of the LCM algorithm is not possible. On the other hand, we provide a full generalization of LCM for the semantic setting that has been previously used by the Claudien system. [ABSTRACT FROM AUTHOR]
- Published: 2013
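What "closed item-sets" means can be shown in a few lines (a brute-force sketch over an invented toy database, not LCM itself, which enumerates the same sets without materializing the powerset):

```python
# Closed frequent itemsets: frequent itemsets for which no proper superset
# has the same support.
from itertools import combinations

transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"a", "b", "c"},
]
min_support = 2
items = sorted({i for t in transactions for i in t})

def support(itemset):
    return sum(itemset <= t for t in transactions)

# All frequent itemsets (brute force over the powerset).
frequent = [
    set(c)
    for k in range(1, len(items) + 1)
    for c in combinations(items, k)
    if support(set(c)) >= min_support
]

# Closed = no frequent proper superset with equal support.
closed = [
    f for f in frequent
    if not any(f < g and support(g) == support(f) for g in frequent)
]
print(sorted("".join(sorted(c)) for c in closed))
```

Here {b} is frequent but not closed, because {a, b} occurs in exactly the same transactions; reporting only closed sets loses no support information while shrinking the output.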
17. Robust ν-support vector machine based on worst-case conditional value-at-risk minimization.
- Author: Wang, Yongqiao
- Subjects: SUPPORT vector machines; PERFORMANCE evaluation; HYPOTHESIS; VALUE at risk; PROBLEM solving; DATA analysis; QUADRATIC programming; ROBUST programming
- Abstract: By minimizing the conditional expectation of random loss in the 1−β worst case, the performance of the ν-support vector machine (SVM) depends severely on its assumption about the underlying distribution. This paper proposes a robust ν-SVM based on worst-case conditional value-at-risk (WCCVaR) minimization, which assumes that the underlying distribution comes from a certain set of potential distributions and minimizes the maximum CVaR associated with these distributions. The problem can be transformed into a quadratic programme and can handle nonlinear classification problems. Experiments on six data sets clearly show that the robust approach achieves superior results to the ν-SVM. [ABSTRACT FROM AUTHOR]
- Published: 2012
18. Remarks on the future of AI: machines and communication.
- Authors: Richter, Michael and von Mammen, Sebastian
- Subjects: ARTIFICIAL intelligence; HUMAN-computer interaction; COMMUNICATION & technology; DIGITAL communications; SOCIAL intelligence
- Abstract: Artificial Intelligence will continue to flourish in many ways. In this article, we present a view that centers on communication. Genuine human-computer communication has been an ambitious goal of artificial intelligence for decades and bears great potential for its future, yet nature still remains the sole successful crafter of systems capable of this art. Observations of natural communication processes suggest a holistic view that unites different means and levels of communication. Patterns from various senses as well as vast amounts of contextual information are integrated to yield an approximate understanding of reality. Rather than focusing on individual aspects of high-level communication techniques, we advocate tracing this holistic approach to communication, and systematically increasing algorithmic, integrative communication capabilities. [ABSTRACT FROM AUTHOR]
- Published: 2011
Discovery Service for Jio Institute Digital Library