1,175 results
Search Results
52. Finitely distinguishable erasing pattern languages.
- Author
- Bayeh, Fahimeh, Gao, Ziyuan, and Zilles, Sandra
- Subjects
- LINGUISTICS, COMPUTATIONAL learning theory, STATISTICAL decision making, COMPUTATIONAL complexity, COMPUTER science
- Abstract
Pattern languages have been an object of study in various subfields of computer science for decades. This paper introduces and studies a decision problem on patterns called the finite distinguishability problem: given a pattern π, are there finite sets T⁺ and T⁻ of strings such that the only pattern language containing all strings in T⁺ and none of the strings in T⁻ is the language generated by π? This problem is related to the complexity of teacher-directed learning, as studied in computational learning theory, as well as to the long-standing open question of whether the equivalence of two patterns is decidable. We show that finite distinguishability is decidable if the underlying alphabet is of size other than 2 or 3, and provide a number of related results, such as (i) partial solutions for alphabet sizes 2 and 3, and (ii) decidability proofs for variants of the problem for special subclasses of patterns, namely, regular, 1-variable, and non-cross patterns. For the same subclasses, we further determine the values of two complexity parameters in teacher-directed learning, namely the teaching dimension and the recursive teaching dimension. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
53. Combinatorial properties of Farey graphs.
- Author
- Wang, Yucheng, Bao, Qi, and Zhang, Zhongzhi
- Subjects
- DOMINATING set, INDEPENDENT sets, NP-hard problems, COMPUTER science, SOCIAL network theory, MATCHING theory, SCIENTIFIC community
- Abstract
Combinatorial problems are a fundamental research subject of theoretical computer science, and for a general graph many combinatorial problems are NP-hard and even #P-complete. Thus, it is interesting to seek or design special graphs for which these difficult combinatorial problems can be exactly solved. In this paper, we study some combinatorial problems for the Farey graphs, which are derived from Farey sequences and have received considerable attention from the scientific community. We determine exactly the domination number, the independence number, and the matching number. Moreover, we derive exact or recursive solutions for the number of minimum dominating sets, the number of dominating sets, the number of maximum independent sets, the number of independent sets, the number of maximum matchings, as well as the number of matchings. Finally, we obtain explicit expressions for the number of acyclic orientations and the number of root-connected acyclic orientations. Since the considered combinatorial problems have found wide applications in diverse fields, such as network science and graph data mining, this work is helpful for deepening our understanding of these combinatorial problems and their applications. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
54. Lazy evaluations in Łukasiewicz type fuzzy logic.
- Author
- Nagy, Benedek, Basbous, Raed, and Tajti, Tibor
- Subjects
- FUZZY logic, FUZZY systems, COMPUTER science, TREE branches, DECISION making, LOGIC
- Abstract
Lazy evaluations play important roles in various fields of computer science, including hardware design, programming, and decision making. In this paper one of the best-known and most widely used fuzzy logic systems, Łukasiewicz logic, is considered. Pruning algorithms are presented to speed up the evaluation of logical formulae in Łukasiewicz logic by cutting out those branches of the formula tree that cannot influence the final result. Correctness of the algorithms is also proven. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
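The pruning idea can be illustrated with a minimal sketch (an illustration of the short-circuiting principle, not the paper's algorithms): the strong Łukasiewicz conjunction x ⊗ y = max(0, x + y − 1) has 0 as an absorbing value, and the strong disjunction x ⊕ y = min(1, x + y) has 1, so evaluating a long conjunction or disjunction can stop as soon as the running value becomes absorbing, skipping the remaining branches of the formula tree.

```python
def luk_and(values):
    """Strong Łukasiewicz conjunction over an iterable, evaluated lazily.

    x (*) y = max(0, x + y - 1). Once the running value reaches 0 it can
    never recover (0 is absorbing and the operation is monotone), so the
    remaining operands -- possibly expensive subformulas -- are skipped.
    """
    acc = 1.0
    for v in values:          # `values` may be a generator of subformula results
        acc = max(0.0, acc + v - 1.0)
        if acc == 0.0:        # prune: the rest of the branch cannot matter
            break
    return acc

def luk_or(values):
    """Strong Łukasiewicz disjunction, x (+) y = min(1, x + y), lazily."""
    acc = 0.0
    for v in values:
        acc = min(1.0, acc + v)
        if acc == 1.0:        # 1 is absorbing for (+)
            break
    return acc
```

For example, luk_and over the truth values 0.2 and 0.3 already yields 0, so any further operands are never evaluated.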
55. Investigation of machine learning techniques on proteomics: A comprehensive survey.
- Author
- Sonsare, Pravinkumar M. and Gunavathi, C.
- Subjects
- MACHINE learning, DIHEDRAL angles, TERTIARY structure, SCIENTISTS, PROTEOMICS, COMPUTER science, PROTEIN structure, MASS spectrometry
- Abstract
Proteomics is the large-scale study of proteins, which has enabled the identification of ever-increasing numbers of proteins. Proteins are an essential part of living organisms, with numerous functions. The proteome is the complete set of proteins that are produced or modified by an organism or system. The proteome varies with time and with the specific requirements, or stresses, that a cell or organism experiences. Proteomics is an interdisciplinary area that has grown out of the genetic information generated by various genome projects. Much proteomics data is gathered with the help of high-throughput techniques such as mass spectrometry and microarrays. Analyzing this data and performing comparisons by hand would routinely take weeks or months. Therefore, biologists and chemists are teaming up with computer scientists and mathematicians to create programs and pipelines for computational analysis of protein data. Using bioinformatics techniques, scientists can carry out faster analysis and protein data storage. The goal of this paper is to review machine learning techniques and their applications in the field of proteomics. • Study of machine learning algorithms, proteomics, and artificial neural networks. • Study of machine learning algorithms for protein secondary structure prediction. • Study of machine learning algorithms for protein tertiary structure prediction. • Study of machine learning algorithms for protein torsion angle prediction. • Study of machine learning algorithms for protein loop modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
56. Burning number of theta graphs.
- Author
- Liu, Huiqing, Zhang, Ruiting, and Hu, Xiaolan
- Subjects
- NP-complete problems, THETA functions, COMPUTER science
- Abstract
The burning number b(G) of a graph G was introduced by Bonato, Janssen, and Roshanbin [Lecture Notes in Computer Science 8882 (2014)] to measure the speed of the spread of contagion in a graph. The graph burning problem is NP-complete even for trees. In this paper, we show that the burning number of any theta graph of order n = q² + r with 1 ≤ r ≤ 2q + 1 is either q or q + 1. Furthermore, we characterize all theta graphs that have burning number q or q + 1. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
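The decomposition in the statement above is easy to compute: every n ≥ 2 can be written uniquely as n = q² + r with 1 ≤ r ≤ 2q + 1, where q = ⌊√(n − 1)⌋. A small sketch illustrating the statement of the theorem (not its proof):

```python
import math

def theta_burning_range(n):
    """Decompose n = q^2 + r with 1 <= r <= 2q + 1 (so q = isqrt(n - 1)).

    By the result stated in the abstract, the burning number of any theta
    graph of order n is either q or q + 1; return that pair of candidates.
    """
    q = math.isqrt(n - 1)
    r = n - q * q
    assert 1 <= r <= 2 * q + 1   # the decomposition is always of this form
    return q, q + 1
```

For example, a theta graph on 10 vertices (10 = 3² + 1) has burning number 3 or 4.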
57. Social mimic optimization algorithm and engineering applications.
- Author
- Balochian, Saeed and Baloochian, Hossein
- Subjects
- MATHEMATICAL optimization, SUSTAINABLE engineering, PARTICLE swarm optimization, SWARM intelligence, COMPUTER science, EVOLUTIONARY computation
- Abstract
• A novel SMO inspired by mimicking behavior to solve optimization problems is presented. • SMO includes a mimic operator to simulate search in the response space. • Unlike other meta-heuristic methods, SMO requires no control parameters. • SMO solves optimization problems with a minimum population size. • SMO is compared with 14 well-known, state-of-the-art optimization algorithms. The increasing complexity of real-world problems has opened an area for exploring efficient methods to solve computer science problems. Meta-heuristic methods based on evolutionary computation and swarm intelligence are instances of techniques inspired by nature. This paper presents a novel social mimic optimization (SMO) algorithm inspired by mimicking behavior to solve optimization problems. The proposed algorithm is evaluated using 23 test functions. The results obtained are compared with 14 known optimization algorithms, including the Whale Optimization Algorithm (WOA), Grasshopper Optimization Algorithm (GOA), Particle Swarm Optimization (PSO), Stochastic Fractal Search (SFS), Grey Wolf Optimizer (GWO), Optics Inspired Optimization (OIO), League Championship Algorithm (LCA), Wind Driven Optimization (WDO), Harmony Search (HS), Firefly Algorithm (FA), Artificial Bee Colony (ABC), Biogeography Based Optimization (BBO), Bat Algorithm (BA), and Teaching Learning Based Optimization (TLBO). The results indicate the higher capability of the SMO algorithm in solving problems with high-dimensional decision variables. Furthermore, SMO is used to solve two classic engineering design problems. Three important features of SMO are simple implementation, solving optimization problems with a minimum population size, and not requiring control parameters. Results of various evaluations show the superiority of the proposed method in finding the optimal solution with minimum function evaluations, a superiority achieved by reducing the size of the initial population.
The proposed method can be applied to applications such as automatic evolution of robots, automatic control of machines, and innovation of machines, finding better solutions at lower cost. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
58. New approximation algorithms for the minimum cycle cover problem.
- Author
- Yu, Wei, Liu, Zhaohui, and Bao, Xiaoguang
- Subjects
- WEIGHTED graphs, APPROXIMATION algorithms, UNDIRECTED graphs, TRAVELING salesman problem, COMPUTER science
- Abstract
• The minimum cycle cover problem. • A simplified approach to the analysis. • An O(n²)-time 24/5-approximation algorithm. • A new O(n³)-time 14/3-approximation algorithm. • An improved O(n⁵)-time 32/7-approximation algorithm. Given an undirected weighted graph G = (V, E) with a nonnegative weight function obeying the triangle inequality, a set {C₁, C₂, ..., C_k} of cycles is called a cycle cover if V ⊆ ⋃_{i=1}^{k} V(C_i), and its cost is given by the maximum weight of the cycles. The Minimum Cycle Cover Problem aims to find a cycle cover of cost at most λ with the minimum number of cycles. An O(n²) 24/5-approximation algorithm and an O(n⁵) 14/3-approximation algorithm are given by Yu and Liu (Improved approximation algorithms for some min-max cycle cover problems. Theoretical Computer Science 654 (2016) 45–58). However, the original proofs of the approximation ratios are incomplete. In this paper we first present a corrected, simplified analysis of the 24/5-approximation algorithm. Based on this simplified analysis and some new observations, we present a new 14/3-approximation algorithm that runs in O(n³) time and an improved 32/7-approximation algorithm that runs in O(n⁵) time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
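To make the definition concrete, here is a hypothetical helper (an illustration of the problem definition, not the paper's approximation algorithms) that checks that a set of cycles covers every vertex and returns the cost of the cover, i.e. the maximum total weight of any single cycle:

```python
def cycle_cover_cost(n, w, cycles):
    """Check that `cycles` (each a list of vertices, traversed cyclically)
    covers all vertices 0..n-1, and return the cost of the cover: the
    maximum total weight of any one cycle. `w(u, v)` is assumed to be a
    nonnegative symmetric weight obeying the triangle inequality.
    """
    covered = set()
    cost = 0.0
    for c in cycles:
        covered.update(c)
        # sum the weights of consecutive edges, wrapping around the cycle
        length = sum(w(c[i], c[(i + 1) % len(c)]) for i in range(len(c)))
        cost = max(cost, length)
    if covered != set(range(n)):
        raise ValueError("not a cycle cover: some vertices are uncovered")
    return cost
```

With four points on a line at coordinates 0, 1, 5, 6 and w as their distance, covering them with two short cycles costs 2, whereas one big cycle costs 12; the problem asks for the fewest cycles whose cost stays below a budget λ.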
59. Online sequential class-specific extreme learning machine for binary imbalanced learning.
- Author
- Shukla, Sanyam and Raghuwanshi, Bhagat Singh
- Subjects
- MACHINE learning, SEQUENTIAL learning, COMPUTATIONAL complexity, DIGITAL learning, ONLINE education, COMPUTER science
- Abstract
Many real-world applications suffer from the class imbalance problem, in which some classes have significantly fewer examples than others. In this paper, we focus on online sequential learning methods, which are considerably preferable for tackling large-scale imbalanced classification problems effectively; examples include the weighted online sequential extreme learning machine (WOS-ELM), the voting-based weighted online sequential extreme learning machine (VWOS-ELM), and the weighted online sequential extreme learning machine with kernels (WOS-ELMK). One of our recent works, the class-specific extreme learning machine (CS-ELM), uses class-specific regularization and has been shown to perform better for imbalanced learning. This work proposes a novel online sequential class-specific extreme learning machine (OSCSELM), a variant of CS-ELM. OSCSELM supports online learning in both chunk-by-chunk and one-by-one learning modes. It targets the class imbalance problem for both small and large datasets. The proposed work has lower computational complexity than WOS-ELM for imbalanced learning. The proposed method is assessed using benchmark real-world imbalanced datasets. Experimental results illustrate the effectiveness of the proposed approach, as it outperforms the other methods for imbalanced learning. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
60. Maximal sensitivity of Boolean nested canalizing functions.
- Author
- Li, Yuan and Adeyeye, John O.
- Subjects
- BOOLEAN functions, NORMAL forms (Mathematics), COMPUTER engineering, COMPUTER science
- Abstract
Boolean nested canalizing functions (NCFs) have important applications in molecular regulatory networks, engineering and computer science. In the literature, there are two sensitivities used to measure the complexity of a Boolean function: one is the average sensitivity, the other is the maximal sensitivity. We follow the tradition and omit the word "maximal"; in other words, in this paper, sensitivity always means maximal sensitivity. Using the past work of the authors and their coauthors on a characterization of NCFs, we obtain a formula for the sensitivity of any NCF. We find that the sensitivity of any NCF is between ⌈(n+2)/2⌉ and n, and both the lower and upper bounds are tight. We prove that the block sensitivity, and hence the ℓ-block sensitivity, is the same as the sensitivity for NCFs. It is well known that monotone Boolean functions (MBFs) also have this property. We characterize all functions which are both monotone and nested canalizing (MNCFs). A closed formula for the cardinality of the set of MNCFs is also provided. • A closed formula for the maximal sensitivity of Boolean nested canalizing functions is obtained. • Tight lower and upper bounds on the maximal sensitivity of Boolean nested canalizing functions are computed. • The maximal sensitivity and the block sensitivity of nested canalizing functions are equal. • The algebraic normal forms of monotone nested canalizing functions are provided. • A closed formula for the number of monotone nested canalizing functions is obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
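For small n, the maximal sensitivity discussed above can be computed by brute force. The sketch below is a generic checker (not the paper's closed formula): it scans every input and counts how many single-bit flips change the function's value. The n-variable AND, which is nested canalizing, attains the upper bound n at the all-ones input.

```python
from itertools import product

def sensitivity(f, n):
    """Maximal sensitivity of a Boolean function f: {0,1}^n -> {0,1},
    computed by brute force: the largest number of single-bit flips that
    change f's value at any one input."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = 0
        for i in range(n):
            y = list(x)
            y[i] ^= 1                     # flip the i-th bit
            if f(tuple(y)) != f(x):
                flips += 1
        best = max(best, flips)
    return best
```

For example, 3-variable AND has sensitivity 3 (every flip at the all-ones input changes the output), matching the upper bound n for NCFs.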
61. Cardiovascular models for personalised medicine: Where now and where next?
- Author
- Hose, D. Rodney, Lawford, Patricia V., Huberts, Wouter, Hellevik, Leif Rune, Omholt, Stig W., and van de Vosse, Frans N.
- Subjects
- CONTROL theory (Engineering), COMPUTER science, COMPUTER engineering, ARTIFICIAL intelligence, MODEL validation, MACHINE learning, SIMULATION methods & models
- Abstract
• Model personalisation requires more than anatomical personalisation. • Model uncertainty and sensitivity are important considerations for clinical interpretation. • Model verification and validation are critical for the trust underpinning clinical decision support. • The cardiovascular digital twin will support diagnosis and prognosis by responding continuously to the increasing volumes of information collected as the individual goes about their daily life. The aim of this position paper is to provide a brief overview of the current status of cardiovascular modelling, of the processes required, and of some of the challenges to be addressed before wider exploitation in both personal health management and clinical practice. In most branches of engineering the concept of the digital twin, informed by extensive and continuous monitoring and coupled with robust data assimilation and simulation techniques, is gaining traction: the Gartner Group listed it as one of the top ten digital trends in 2018. The cardiovascular modelling community is starting to develop a much more systematic approach to the combination of physics, mathematics, control theory, artificial intelligence, machine learning, computer science and advanced engineering methodology, as well as working more closely with the clinical community to better understand and exploit physiological measurements, and indeed to jointly develop better measurement protocols informed by model-based understanding. Developments in physiological modelling, model personalisation, model outcome uncertainty, and the role of models in clinical decision support are addressed, and 'where-next' steps and challenges are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
62. Smart City and information technology: A review.
- Author
- Camero, Andrés and Alba, Enrique
- Subjects
- SMART cities, INFORMATION technology, TECHNICAL literature, INFORMATION science, COMPUTER science, PUBLIC opinion
- Abstract
Smart City is a recent concept that is gaining momentum in public opinion, and thus, it is making its way into the agendas of researchers and city authorities all over the world. However, there is no consensus on what exactly a smart city is, and academic research is, at best, building applications in numerous silos. This paper explores the computer science and information technology literature on the Smart City. Using data analysis techniques, we present the domain from an objective, data-based point of view, aiming to highlight its major trends and to provide a single entry point for newcomers. • The smart city concept is rapidly evolving and gaining attention worldwide. • Most smart city publications remain uncited by related literature. • Smart people and economy applications present opportunities for future research. • We identify the most influential works and discuss existing and open issues. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
63. Time-optimal symbolic control of a changeover process based on an approximately bisimilar symbolic model.
- Author
- Fakhroleslam, Mohammad, Pola, Giordano, De Santis, Elena, and Di Benedetto, Maria Domenica
- Subjects
- CHEMICAL process control, STATE feedback (Feedback control systems), CHEMICAL processes, SCIENTIFIC literature, COMPUTER systems, COMPUTER science
- Abstract
• An approximately bisimilar symbolic model is constructed for a safe changeover process. • An automatic controller is designed for a safe changeover process for the first time. • The synthesis of the proposed controller in a finite-state space is very fast and flexible. • The error bounds of the proposed controller are adjustable as design parameters. • The effectiveness of the symbolic controller is investigated via numerical simulation. Many process control problems with complex qualitative specifications cannot be addressed via conventional control design methods. Examples of such specifications include logic specifications expressed in the design of start-up, shut-down, changeover, and emergency shutdown operating procedures. In recent years, it has been shown in the control systems and computer science communities that symbolic models provide convenient and powerful mechanisms to synthesize controllers enforcing such qualitative specifications. The use of symbolic models reduces the synthesis of the controllers to a fixed-point computation problem over a finite-state abstract system. In this paper, after explaining the notion of approximate bisimulation for incrementally globally asymptotically stable (δ-GAS) nonlinear control systems, the construction of approximately bisimilar symbolic models for such systems is presented. The synthesis of a time-optimal symbolic controller for this class of systems is then performed based on results from the computer science literature. As a benchmark chemical process control problem, an approximately bisimilar symbolic model is constructed for a safe changeover process. A symbolic controller is then designed and refined into a controller applied to the original process. Simulation results show the effectiveness of the symbolic controller.
Although the construction of the symbolic model may be complex, the synthesis of the controller in a finite-state space is fast and, most importantly, the error bounds are adjustable as design parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
64. An extended taxonomy of advanced information visualization and interaction in conceptual modeling.
- Author
- Bork, Dominik and De Carlo, Giuliano
- Subjects
- DATA visualization, CONCEPTUAL models, TAXONOMY, COMPUTER science
- Abstract
Conceptual modeling is integral to computer science research and is widely adopted in industrial practices, e.g., business process and enterprise architecture management. Providing adequate and usable modeling tools is necessary to adopt modeling languages efficiently. Meta-modeling platforms provide a rich and mature set of functionalities for realizing state-of-the-art modeling tools. These tools, despite their stability and rich feature sets, often lack a modern look and feel, considering (i) how they visualize models and (ii) how modelers interact with models. Current web technologies enable much richer, more advanced opportunities for visualizing and interacting with conceptual models. However, a structured and comprehensive overview of possible information visualization and interaction techniques linked to conceptual models and modeling tools is still lacking. This paper aims to fill this gap by presenting an extended taxonomy of advanced information visualization and interaction in conceptual modeling. We present a generic taxonomy that is afterward contextualized within the specific domain of conceptual modeling. The taxonomy provides orientation in the vast, developing field of information visualization and interaction and will hopefully spark innovation in future modeling tool development. • We propose a taxonomy for advanced information visualization and interaction in conceptual modeling. • We show the current design space of visualization and interaction in conceptual modeling. • We exemplify the taxonomy dimensions with many illustrative examples. • We discuss opportunities and future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
65. Asymptotic normality for the size of graph tries built from M-ary tree labelings.
- Author
- Fuchs, Michael and Yu, Tsan-Cheng
- Subjects
- ASYMPTOTIC normality, DATA structures, MOMENTS method (Statistics), TREE graphs, COMPUTER science, APPLICATION software, ASYMPTOTIC expansions
- Abstract
Graph tries are a new and interesting data structure proposed by Jacquet in 2014. They generalize the classical trie, which has found many applications in computer science and is one of the most popular data structures on words. For his generalization, Jacquet considered the size (or space requirement) and derived an asymptotic expansion for the mean and the variance when graph tries are built from n independently chosen random labelings of a rooted M-ary tree. Moreover, he conjectured a central limit theorem for the (suitably normalized) size as the number of labelings tends to infinity. In this paper, we verify this conjecture with the method of moments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
66. Emergent patterns in agent-environment interactions and their roles in supporting agile spatial skills.
- Author
- Mettler, Bérénice, Verma, Abhishek, and Feit, Andrew
- Subjects
- AUTOMATIC control systems, SENSORY perception, COMPUTER science, MOTOR ability, COGNITION, SPATIAL analysis (Statistics)
- Abstract
This paper provides a review of the analysis and modeling of human spatial planning, perception, and learning based on the dynamics of agent-environment interactions. The approach stems from an analysis and modeling framework that was previously conceived using interaction patterns emerging from system-wide interactions as the basic unit of analysis. The paper first discusses the rationale for using patterns in agent-environment interactions as units of organization of behavior and as functional units of the modeling framework. These concepts are then illustrated through two applications using experimental data from a first-person flight simulator that implements agile obstacle navigation tasks. The first application focuses on the analysis of the formation and evolution of interaction patterns over successive trials, and the use of these patterns as basic elements of the task environment representation, enabling the evaluation of the learning process and assessment of operator performance. The second application focuses on the analysis of interaction patterns as functional units supporting the modeling of the underlying perceptual guidance and control mechanisms. These examples demonstrate the relevance of the dynamics of agent-environment interactions for studying a wide range of functions across the human control hierarchy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
67. Cloud security: Emerging threats and current solutions.
- Author
- Coppolino, Luigi, D'Antonio, Salvatore, Mazzeo, Giovanni, and Romano, Luigi
- Subjects
- CLOUD computing, OPEN source software, INFORMATION & communication technologies, COMPUTER science, INFORMATION theory
- Abstract
Many organizations are stuck in the cloudify-or-not-to-cloudify limbo, mainly due to concerns about the security of enterprise-sensitive data. Removing this barrier is a key precondition to fully unleashing the tremendous potential of cloud computing. In this paper, we provide a comprehensive analysis of the main threats that hamper cloud computing adoption on a wide scale, and a to-the-point review of the solutions currently provided by the major vendors. The paper also presents the (near) future directions of cloud security research, by taking a snapshot of the main research trends and most accredited approaches. The study is done on a best-of-breed selection of proprietary and open-source cloud offerings. The paper is thus a useful navigation tool that can be used by IT personnel to gain more insight into the security risks related to the use of cloud computing, as well as to quickly weigh the pros and cons of state-of-the-art solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
68. Home automation networks: A survey.
- Author
- Toschi, Guilherme Mussi, Campos, Leonardo Barreto, and Cugnasca, Carlos Eduardo
- Subjects
- MACHINE-to-machine communications, HOME automation, UBIQUITOUS computing, SYSTEM integration, COMPUTER science
- Abstract
Home Automation Networks provide a promising opportunity for designing smart home systems and applications. In this context, Machine-to-Machine (M2M) networks are emerging as an efficient means to provide automated communication among distributed ubiquitous devices in a standardized manner, but none has been adopted universally. In an effort to present the technologies used in the M2M and home integration environment, this paper presents the home area network elements and definitions, and reviews the standards, architectures and initiatives created to enable M2M communication and integration in several different environments, especially in the smart home domain. This paper points out differences between them and identifies trends for the future. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
69. AI-based computer vision using deep learning in 6G wireless networks.
- Author
- Kamruzzaman, MM and Alruwaili, Omar
- Subjects
- ARTIFICIAL intelligence, DEEP learning, OBJECT recognition (Computer vision), COMPUTER vision, HUMAN facial recognition software, SMART devices, COMPUTER science
- Abstract
Modern businesses benefit significantly from advances in computer vision technology, one of the important sectors of artificial intelligence and computer science research. Advanced computer vision tasks such as image processing, object recognition, and biometric authentication can benefit from deep learning methods. As smart devices and facilities advance rapidly, current networks such as 4G and the forthcoming 5G networks may not keep up with the rapidly increasing demand. Image classification, object classification, and facial recognition are some of the most difficult computer vision problems that can be solved using deep learning methods. Artificial intelligence (AI) has recently been adopted as a new paradigm for 6G network design and analysis. Therefore, in this paper, the 6G wireless network is used along with deep learning to address the above challenges by introducing a new methodology named Optimizing Computer Vision with AI-enabled technology (OCV-AI). This research uses deep learning efficiency algorithms (DL-EA) for computer vision to address the issues mentioned and improve the system's outcome. Deep learning 6G frameworks (DL-6G) are therefore proposed in this paper to support pattern recognition and intelligent management systems and to provide a methodology that can be provisioned automatically. Finally, the paper summarizes the significant areas for future research and potential solutions for 6G networks, including image enhancement, machine vision, and access control. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
70. Near-optimal selection of representative measuring points for robust temperature field reconstruction with the CRO-SL and analogue methods.
- Author
- Salcedo-Sanz, S., García-Herrera, R., Camacho-Gómez, C., Alexandre, E., Carro-Calvo, L., and Jaume-Santero, F.
- Subjects
- CORAL reefs & islands, TEMPERATURE, PROCESS optimization, COMPUTER science, DATA science
- Abstract
In this paper we tackle the problem of selecting representative measuring points for temperature field reconstruction. This problem is a version of the more general Representative Selection (RS) problem, well known in computer and data science. In this particular case, the objective is to select the best set of N measuring points (i.e. N representative points) such that the reconstruction error is minimized when reconstructing the monthly average temperature field. We use a novel meta-heuristic algorithm, the Coral Reefs Optimization with Substrate Layer (CRO-SL), an evolutionary-type method able to combine several different search procedures within a single population. The CRO-SL is combined with the Analogue Method (AM) to identify the most representative points. This approach exhibits strong performance in experiments with gridded and un-gridded temperature field datasets (the European Climate Assessment & Dataset (ECA) and the ERA-Interim reanalysis (ERA)). Different aspects, such as error assessment and comparison with alternative approaches, are discussed in the experimental analysis of this article. We show that the algorithm performs better than a greedy approach, i.e. the best solution for N points differs from the N best individual predictors. The solutions obtained with the proposed methodology are climatologically consistent and include points in Scandinavia, Central and Southern Europe, the Black Sea, and Central and South Western Asia as the most representative in the case of the ECA dataset; similar areas are selected for ERA. We have found that once the number of stations/points goes over a threshold, the improvement in the model is obtained by increasing the density of data in the given zones, instead of adding data from different zones to the algorithm.
The proposed method may have direct application in palaeoclimatology, where there is a large number of distributed proxies with scarce information, so the proposed approach could be useful for selecting the most important ones to reconstruct a desired field. • We solve a problem of representative measuring point selection for temperature field reconstruction. • The Analogue Method is used as the reconstruction algorithm. • An evolutionary-type meta-heuristic is proposed to select the best set of representative points. • A reconstruction error for the temperature field is minimized by the algorithm. • Experiments on two monthly temperature datasets in Europe show the good performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
71. Time series feature learning with labeled and unlabeled data.
- Author
-
Wang, Haishuai, Zhang, Qin, Wu, Jia, Pan, Shirui, and Chen, Yixin
- Subjects
- *
TIME series analysis , *SPECTRAL analysis (Phonetics) , *FEATURE selection , *LEAST squares , *COMPUTER science - Abstract
Highlights • A novel time series feature selection task with labeled and unlabeled data. • A new semi-supervised time series feature learning model is proposed. • The model integrates least-squares minimization, spectral analysis, scaled pseudo-labels, and time series feature similarity regularization terms. • Experiments on real-world data demonstrate significant performance gains of the proposed model. Abstract Time series classification has attracted much attention in the last two decades. However, in many real-world applications, the acquisition of sufficient amounts of labeled training data is costly, while unlabeled data is usually easy to obtain. In this paper, we study the problem of learning discriminative features (segments) from both labeled and unlabeled time series data. The discriminative segments are often referred to as shapelets. We present a new Semi-Supervised Shapelets Learning (SSSL for short) model to efficiently learn shapelets by using both labeled and unlabeled time series data. Briefly, SSSL engages both labeled and unlabeled time series data in an integrated model that considers the least-squares regression, the power of the pseudo-labels, shapelets regularization, and spectral analysis. The experimental results on real-world data demonstrate the superiority of our approach over existing methods. [ABSTRACT FROM AUTHOR]
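The shapelet idea underlying models such as SSSL can be sketched in a few lines (hypothetical data; the paper's full objective with pseudo-labels and spectral terms is not reproduced): a shapelet's feature value for a series is the minimum distance between the shapelet and any equal-length subsequence.

```python
# Minimal shapelet-transform feature: slide the shapelet over the series
# and return the smallest mean squared distance to any window.
def shapelet_feature(series, shapelet):
    m = len(shapelet)
    best = float("inf")
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        d = sum((a - b) ** 2 for a, b in zip(window, shapelet)) / m
        best = min(best, d)
    return best

# A series containing the bump [0, 1, 0] scores 0 for that shapelet.
s_with    = [0.0, 0.0, 1.0, 0.0, 0.0]
s_without = [0.0, 0.0, 0.0, 0.0, 0.0]
bump = [0.0, 1.0, 0.0]
print(shapelet_feature(s_with, bump))     # 0.0
print(shapelet_feature(s_without, bump))  # 1/3
```

A classifier is then trained on such features; SSSL's contribution is learning which shapelets to use from both labeled and unlabeled series.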
- Published
- 2019
- Full Text
- View/download PDF
72. The Turán number of star forests.
- Author
-
Lan, Yongxin, Li, Tao, Shi, Yongtang, and Tu, Jianhua
- Subjects
- *
GRAPH theory , *NUMBER theory , *GEOMETRIC vertices , *COMPUTER science , *SUBGRAPHS - Abstract
Abstract The Turán number of a graph H, denoted by ex(n, H), is the maximum number of edges in any graph on n vertices containing no H as a subgraph. Let Sℓ denote the star on ℓ + 1 vertices and let k · Sℓ denote the disjoint union of k copies of Sℓ. In this paper, for appropriately large n, we determine the Turán numbers for k · Sℓ and F, where F is a forest with components each of order 4, which improves the results of Lidický et al. (2013). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
73. A didactic object-oriented, prototype-based visual programming environment.
- Author
-
García Perez-Schofield, Baltasar and Ortin, Francisco
- Subjects
- *
VISUAL environment , *OBJECT-oriented programming , *JAVASCRIPT programming language , *COMPUTER engineering , *COMPUTER science students , *COMPUTER science , *SOFTWARE architecture - Abstract
Abstract Object-oriented programming is widely used in both industry and education. The most common model of object-oriented programming is the class-based one. However, popular languages not implementing this model are gaining traction as time goes by. This alternative model is the prototype-based one, with one key characteristic: there are no classes. In this paper, a visual tool is proposed as a vehicle for learning prototype-based object-oriented programming, present, for instance, in Self, Lua, or JavaScript. This software has been in use for three years in a course of the Computer Science Engineering degree at the University of Vigo. Highlights • Pooi is an interactive environment which updates with each instruction. • The system sports a diagram viewer, an object inspector, and a REPL. • The software was designed for undergraduate students of computer science engineering. • This tool has been used successfully in lecturing object-oriented programming. • Pooi is free, offering also the sources, and a set of tutorials and examples: http://jbgarcia.webs.uvigo.es/prys/pooi/. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
74. Non-atomic one-round walks in congestion games.
- Author
-
Vinci, Cosimo
- Subjects
- *
GAME theory , *SOCIAL choice , *APPROXIMATION theory , *COMPUTER game programming , *COMPUTER science , *MATHEMATICAL models - Abstract
Abstract In this paper we study the approximation ratio of the solutions achieved after an ϵ-approximate one-round walk in non-atomic congestion games. Prior to this work, the solution concept of one-round walks had been studied for atomic congestion games with linear latency functions only (Christodoulou et al. [1], Bilò et al. [2]). We give an explicit formula to determine the approximation ratio for non-atomic congestion games having general latency functions. In particular, we focus on polynomial latency functions, and we prove that the approximation ratio is exactly ((1 + ϵ)(p + 1))^(p+1) for every polynomial of degree p. Then, we show that, by resorting to static (resp. dynamic) resource taxation, the approximation ratio can be lowered to (1 + ϵ)^(p+1) (p + 1)^p (resp. (1 + ϵ)^(p+1) (p + 1)!). [ABSTRACT FROM AUTHOR]
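Read as plain formulas, the three bounds stated in the abstract are ((1 + ϵ)(p + 1))^(p+1) without taxes, (1 + ϵ)^(p+1)·(p + 1)^p with static taxes and (1 + ϵ)^(p+1)·(p + 1)! with dynamic taxes; a few lines suffice to tabulate them:

```python
from math import factorial

# Tabulate the three approximation-ratio bounds from the abstract,
# read as ((1+eps)(p+1))**(p+1), (1+eps)**(p+1)*(p+1)**p and
# (1+eps)**(p+1)*(p+1)! respectively (eps = 0 is an exact one-round walk).
def no_tax(p, eps=0.0):
    return ((1 + eps) * (p + 1)) ** (p + 1)

def static_tax(p, eps=0.0):
    return (1 + eps) ** (p + 1) * (p + 1) ** p

def dynamic_tax(p, eps=0.0):
    return (1 + eps) ** (p + 1) * factorial(p + 1)

for p in (1, 2, 3):
    print(p, no_tax(p), static_tax(p), dynamic_tax(p))
# For linear latencies (p = 1, eps = 0) this gives 4, 2 and 2;
# taxation never increases the bound.
```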
- Published
- 2019
- Full Text
- View/download PDF
75. Gathering of robots in a ring with mobile faults.
- Author
-
Das, Shantanu, Focardi, Riccardo, Luccio, Flaminia L., Markou, Euripides, and Squarcina, Marco
- Subjects
- *
RING networks , *MOBILE agent systems , *MALWARE , *COMPUTER algorithms , *COMPUTER science - Abstract
Abstract This paper studies the well-known problem of gathering multiple mobile agents moving in a graph, but unlike previous results, we consider the problem in the presence of an adversarial mobile entity which we call the malicious agent. The malicious entity can occupy any empty node and prevent honest mobile agents from entering this node. This new adversarial model is interesting as it models transient mobile faults that can appear anywhere in a network. Moreover, our model lies between the less powerful delay-fault model, where the adversary can block an agent for only a finite time, and the more powerful but static fault model of black holes that can even destroy the agents. We study the problem for ring networks and we provide a complete characterization of the solvability of gathering, depending on the size n of the ring and the number of agents k. We consider both oriented and unoriented rings with either synchronous or asynchronous agents. We prove that in an unoriented ring network with asynchronous agents the problem is not solvable when k is even, while for synchronous agents the problem is unsolvable when both n is odd and k is even. We then present algorithms that solve gathering for all the remaining cases, thus completely solving the problem. Finally, we provide a proof-of-concept implementation of the synchronous algorithms using programmable Lego Mindstorms EV3 robots. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
76. Which fragments of the interval temporal logic HS are tractable in model checking?
- Author
-
Bozzelli, Laura, Molinari, Alberto, Montanari, Angelo, Peron, Adriano, and Sala, Pietro
- Subjects
- *
SOFTWARE verification , *INTEGRATED circuit verification , *COMPUTER logic , *COMPUTER science , *LOGIC circuits - Abstract
Abstract Since the 80s, model checking (MC) has been applied to the automatic verification of hardware/software systems. Point-based temporal logics, such as LTL, CTL, CTL⁎, and the like, are commonly used in MC as the specification language; however, there are some inherently interval-based properties of computations, e.g., temporal aggregations and durations, that cannot be properly dealt with by these logics, as they model a state-by-state evolution of systems. Recently, an MC framework for the verification of interval-based properties of computations, based on Halpern and Shoham's interval temporal logic (HS, for short) and its fragments, has been proposed and systematically investigated. In this paper, we focus on the boundaries that separate tractable and intractable HS fragments in MC. We first prove that MC for the logic BE of Allen's relations started-by and finished-by is provably intractable, being Expspace-hard. Such a lower bound immediately propagates to full HS. Then, in contrast, we show that other noteworthy HS fragments, i.e., the logic AĀBB̄ (resp., AĀEĒ) of Allen's relations meets, met-by, starts (resp., finishes), and started-by (resp., finished-by), are well-behaved, and turn out to have the same complexity as LTL (Pspace-complete). Halfway are the fragments AĀBB̄Ē and AĀEB̄Ē, whose Expspace membership and Pspace hardness are already known. Here, we give an original proof of Expspace membership that substantially simplifies the constructions previously used for such a result. Contraction techniques—suitably tailored to each HS fragment—are at the heart of our results, enabling us to prove a pair of remarkable small-model properties. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
77. A simple linear time algorithm for the locally connected spanning tree problem on maximal planar chordal graphs.
- Author
-
Calamoneri, Tiziana, Dell'Orefice, Matteo, and Monti, Angelo
- Subjects
- *
SPANNING trees , *SUBGRAPHS , *PLANAR graphs , *ALGORITHMS , *COMPUTER science - Abstract
Abstract A locally connected spanning tree (LCST) T of a graph G is a spanning tree of G such that, for each node, its neighborhood in T induces a connected subgraph in G. The problem of determining whether a graph contains an LCST has been proved to be NP-complete, even if the graph is planar or chordal. The main result of this paper is a simple algorithm that, given a maximal planar chordal graph, determines in linear time whether it contains an LCST, and produces one if it exists. We give an analogous result for the case when the input graph is a maximal outerplanar graph. [ABSTRACT FROM AUTHOR]
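The defining property is easy to check directly (a sketch on a small hypothetical graph, not the paper's linear-time construction): T is locally connected if, for every node v, the T-neighbours of v induce a connected subgraph of G.

```python
# Check whether spanning tree T (adjacency dict) is locally connected in G.
def is_lcst(g_adj, t_adj):
    for v in t_adj:
        nbrs = set(t_adj[v])
        if not nbrs:
            continue
        # DFS inside G restricted to v's tree-neighbourhood.
        start = next(iter(nbrs))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in g_adj[u]:
                if w in nbrs and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen != nbrs:   # neighbourhood not connected in G
            return False
    return True

# 4-cycle a-b-c-d with chord a-c; the star T rooted at a is an LCST.
G = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"a", "c"}}
T = {"a": {"b", "c", "d"}, "b": {"a"}, "c": {"a"}, "d": {"a"}}
print(is_lcst(G, T))   # True

# In the chordless 4-cycle, the path a-b-c-d fails: b's neighbours a and c
# are not adjacent in G.
C4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
P = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_lcst(C4, P))  # False
```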
- Published
- 2019
- Full Text
- View/download PDF
78. From ontology to executable program code.
- Author
-
Elve, Arne Tobias and Preisig, Heinz A.
- Subjects
- *
COMPUTER simulation , *INFORMATION modeling , *COMPUTER science , *ONTOLOGIES (Information retrieval) , *MATHEMATICAL models - Abstract
• Presenting a method for automatic generation of program code for process models. • Ontology captures model information and equations for the modelled domain. • Model generated in a graph-based model environment reflecting the ontology information. • Equations translated to output format using language templates for the mathematical operators. The implementation of coded mathematical process models is regarded as a cumbersome and challenging task, the reason being that the modeller needs to have expertise both in modelling and in computer science. Our ProcessModellerSuite implements a staged approach to modelling, starting with the formulation of a context-dependent ontology defining a structure against which the mathematical representation of the principal model components is defined. Process models are then generated by interactively constructing a graph of communicating principal components, which enables the generation of arbitrarily complex process models and intermediate storage of customised unit models. This storage of unit models forms the equivalent of the traditional unit-operations libraries by allowing for insertion of the unit models into other graphs. A task builder combines the information from the graph with the used model components to automatically generate executable program code of the process model, which is the topic of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
79. Joint optimal checkpointing and rejuvenation policy for real-time computing tasks.
- Author
-
Levitin, Gregory, Xing, Liudong, and Luo, Liang
- Subjects
- *
REAL-time computing , *REJUVENATION , *MULTIPROCESSORS , *SYSTEMS engineering , *COMPUTER science - Abstract
Highlights • Real-time computing system with checkpointing and rejuvenation is considered; • An algorithm for evaluating the probability of timely task completion is presented; • A problem of joint optimal checkpointing and rejuvenation scheduling is formulated; • An illustrative example is presented. Abstract Performance of a software system can deteriorate from higher to lower levels due to software aging. To counteract the aging effect, software rejuvenation is widely implemented to restore the performance of a degraded system before the system crash actually takes place. To facilitate effective system function restoration after each rejuvenation action, it is desirable to apply checkpointing to occasionally save the system state on reliable storage so that the mission task can be resumed from the last saved checkpoint (instead of being restarted from the very beginning). As both rejuvenation and checkpointing procedures incur system overhead while bringing these benefits, it is important to determine the optimal rejuvenation and checkpointing scheduling policy optimizing the system performance measures of interest. This paper makes new contributions by modeling and optimizing the joint maintenance policy involving state-based rejuvenation and a periodic checkpointing schedule for software systems performing real-time computing tasks. The system can undergo multiple performance degradation levels or states, and the transition time between different states can assume arbitrary types of distributions. The proposed solution methodology encompasses an efficient numerical algorithm for evaluating the probability of task completion (PTC) by a pre-specified deadline. The joint optimal rejuvenation and checkpointing policy is further determined to maximize the PTC of the considered real-time task. Examples are provided to illustrate applications of the proposed methodology as well as effects of system parameters on the optimization solution. [ABSTRACT FROM AUTHOR]
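As background on the checkpointing trade-off the paper optimises, the classical first-order rule of thumb for a periodic checkpoint interval is Young's approximation, τ ≈ sqrt(2·C·MTBF), where C is the checkpoint overhead. This is generic background, not the paper's joint rejuvenation/checkpointing model:

```python
from math import sqrt

# Young's first-order approximation for the optimal periodic checkpoint
# interval: tau ~ sqrt(2 * C * MTBF), trading checkpoint overhead against
# expected rework after a failure. (Classical background only; the paper
# jointly optimises checkpointing and rejuvenation under a deadline.)
def young_interval(checkpoint_cost, mtbf):
    return sqrt(2.0 * checkpoint_cost * mtbf)

# e.g. 30 s checkpoints, one failure per day on average:
print(young_interval(30.0, 24 * 3600.0))  # roughly 2277 s between checkpoints
```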
- Published
- 2019
- Full Text
- View/download PDF
80. A new fractal reliability model for networks with node fractal growth and no-loop.
- Author
-
Sun, Lina, Bai, Yanan, Huang, Ning, and Li, Ruiying
- Subjects
- *
FRACTALS , *BLOCK diagrams , *ELECTRIC power distribution grids , *INTERNET of things , *COMPUTER science , *INFORMATION technology - Abstract
Abstract Evaluating the reliability of networked systems with existing exact or approximate methods often needs to characterize the detailed topology at node scale, which brings complexity and high computation effort. In this paper, a new reliability model based on the fractal unit, with a bigger scale than nodes and a much smaller scale than the whole network, is proposed for networks with fractal growth and no-loop (NF-NL). The introduced model simplifies the K-terminal reliability (KTR) of a NF-NL network to a multiplication of the different KTRs of the fractal units in the network. The corresponding algorithm is also given, which has linear-time complexity O(V) when the fractal unit scale is very small. Compared with the existing models, the proposed model provides a novel way to construct the reliability model dependent only on two factors: (1) the fractal unit characteristics and (2) its iterative process. Finally, the widely investigated Koch network case is studied with the proposed model. Highlights • A network classification based on fractal growth is put forward. • A reliability model based on the fractal unit and its iterative process is proposed. • An algorithm with approximate complexity O(V) is provided to calculate the model. • The reliability of the widely investigated Koch network is studied and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
81. Knowledge-enhanced document embeddings for text classification.
- Author
-
Sinoara, Roberta A., Camacho-Collados, Jose, Rossi, Rafael G., Navigli, Roberto, and Rezende, Solange O.
- Subjects
- *
TEXT mining , *SEMANTIC computing , *COMPUTER science , *MACHINE learning , *DATA mining - Abstract
Abstract Accurate semantic representation models are essential in text mining applications. For a successful application of the text mining process, the text representation adopted must keep the interesting patterns to be discovered. Although competitive results for automatic text classification may be achieved with a traditional bag of words, such a representation model cannot provide satisfactory classification performance in hard settings where richer text representations are required. In this paper, we present an approach to represent document collections based on embedded representations of words and word senses. We bring together the power of word sense disambiguation and the semantic richness of word- and word-sense embedded vectors to construct embedded representations of document collections. Our approach results in semantically enhanced and low-dimensional representations. We overcome the lack of interpretability of embedded vectors, which is a drawback of this kind of representation, with the use of word sense embedded vectors. Moreover, the experimental evaluation indicates that the use of the proposed representations provides stable classifiers with strong quantitative results, especially in semantically complex classification scenarios. [ABSTRACT FROM AUTHOR]
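The general idea of embedded document representations can be sketched on hypothetical 3-dimensional vectors (the paper combines word- and word-sense embeddings with disambiguation; here we simply average word vectors into a document vector):

```python
# Hypothetical toy embeddings; a document vector is the mean of its
# word vectors, and documents are compared with cosine similarity.
emb = {
    "bank":  [0.9, 0.1, 0.0],
    "money": [0.8, 0.2, 0.0],
    "river": [0.0, 0.1, 0.9],
}

def doc_embedding(tokens):
    vecs = [emb[t] for t in tokens if t in emb]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

d_finance = doc_embedding(["bank", "money"])
d_nature  = doc_embedding(["river"])
print(cosine(d_finance, doc_embedding(["money"])))  # high: same topic
print(cosine(d_finance, d_nature))                  # low: different topic
```

Replacing the word vectors with word-sense vectors (so that the financial and the river sense of "bank" get different rows) is what restores interpretability in the paper's approach.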
- Published
- 2019
- Full Text
- View/download PDF
82. An approach to XBRL interoperability based on Ant Colony Optimization algorithm.
- Author
-
Yaghoobirafi, Kamaleddin and Nazemi, Eslam
- Subjects
- *
XBRL (Document markup language) , *ANT algorithms , *SEMANTIC computing , *COMPUTER science , *TAXONOMY - Abstract
Abstract Extensible Business Reporting Language (XBRL) is an XML-based language developed for enhancing interoperability among the entities involved in the process of business reporting. Although this language has been adopted by various regulators all around the world and has contributed greatly to semantic interoperability in this field, the variations between taxonomies, and also between elements of instance documents, still cause many inconsistencies between elements. Although some existing approaches propose converting XBRL to ontologies and then resolving the inconsistencies by applying mapping techniques, this does not seem practical because of the low precision and incompleteness of these conversions. In this paper, a novel approach is proposed which utilizes Ant Colony Optimization (ACO) in order to detect the best semantic mappings between inconsistent concepts of two XBRL documents. This approach analyzes the possible mappings with respect to various factors such as concept names, all label texts, presentation and calculation hierarchies, and so on. This makes the approach capable of finding mappings which would not be easily discoverable otherwise. The proposed approach is implemented and applied to actual XBRL reports. The results are measured with the aid of well-known criteria (precision, recall and F-measure), compared with the well-known Hungarian algorithm, and show better performance according to these three criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
83. Evaluation of a process for architectural assumption management in software development.
- Author
-
Yang, Chen, Liang, Peng, and Avgeriou, Paris
- Subjects
- *
COMPUTER architecture , *COMPUTER software development -- Management , *SOFTWARE engineering , *SOFTWARE maintenance , *COMPUTER science - Abstract
Highlights • An architectural assumption management process with guidelines is developed. • Two case studies were conducted to validate the process. • The process can help to make architectural assumptions explicit. • The process can help to identify and reduce invalid architectural assumptions. Abstract Context Architectural assumption management is critical to the success of software development projects. In this paper, we propose an Architectural Assumption Management (AAM) process, comprised of four AAM activities: Architectural Assumption Making, Description, Evaluation, and Maintenance. Objective Evaluating the AAM process in architectural assumption management, regarding the ease of understanding and the effort of conducting the AAM process, as well as the effectiveness of using the AAM process to make architectural assumptions explicit and to identify and reduce invalid architectural assumptions. Method An explanatory study with 88 first-year master students in software engineering, and an exploratory study with five practitioners from five companies. Results (1) the ease of understanding the AAM process is moderate for first-year master students but easy for practitioners; (2) the effort of conducting the AAM process is moderate for first-year master students; (3) Making and Evaluation took the students more time than Description and Maintenance; (4) the practitioners considered Evaluation as the most time-consuming activity; (5) the AAM process can help to make architectural assumptions explicit and to identify and reduce invalid architectural assumptions in projects. The majority of the students and practitioners agreed that Architectural Assumption Evaluation is the most helpful activity for all these three aspects. For other activities, there are different opinions about their helpfulness; and (6) there are various factors identified that can impact the aforementioned results. 
Being aware of and properly adjusting these factors can facilitate the application of the AAM process in projects. Conclusions The AAM process aims at systematically managing architectural assumptions in software development. The results of the case studies provide preliminary empirical evidence for the evaluation of the AAM process. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
84. Assessing affective experience of in-situ environmental walk via wearable biosensors for evidence-based design.
- Author
-
Chen, Zheng, Schulz, Sebastian, Qiu, Ming, Yang, Wen, He, Xiaofan, Wang, Zhuo, and Yang, Ling
- Subjects
- *
BIOSENSORS , *DETECTORS , *AFFECTIVE computing , *COMPUTER science , *HUMAN-computer interaction - Abstract
Abstract In environmental psychology research, the most commonly used methods are phenomenological interviews and psychometric scales. Recently, with the development of wearable bio-sensing devices, a new approach based on bio-sensing data is becoming possible. In this study, we examined the feasibility of using wearable biosensors to document affective experience during an in-situ walk. An eight-channelled Procomp multi-bio-sensing device (EKG, EEG, skin conductance, temperature, facial EMG, respiration) was used, together with a GPS tracker, to measure the in-situ physiological affective responses to environmental stimuli. This pilot experiment revealed consistent results between bio-sensing measures and two traditional methods, i.e. phenomenological interviews and psychological Likert scale rating, which indicates that mobile bio-sensing could be a promising method for measuring in-situ affective responses to environmental stimuli as well as for diagnosing potential environmental stressors. This new bio-sensory method, as exemplified in this paper, could help identify negative stressful stimuli and provide evidence-based diagnosis to support design strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
85. LTE: An enhanced hybrid domain downlink scheduling.
- Author
-
Priya, L.R. and Ruba Soundar, K.
- Subjects
- *
LONG-Term Evolution (Telecommunications) , *CELL phone system standards , *MOBILE communication systems , *COMPUTER scheduling , *COMPUTER science - Abstract
Abstract The third Generation Partnership Project (3GPP) has successfully launched 4G data on mobile communication by utilizing Long Term Evolution (LTE). Multimedia processing such as video calling/conferencing and video transmission/reception in mobile communication is in need of multiple-user support, speed, a high data transmission rate, a low data loss rate and high throughput. In this paper an enhanced scheduling algorithm, EM-LWDF (Modified Largest Weighted Delay First), is proposed in combination with time/frequency scheduling for LTE. The proposed scheduling algorithm is evaluated under various simulation scenarios and compared with conventional algorithms such as PF, M-LWDF, EXP/PF, and proposed +LWDF. The evaluation is made in terms of system throughput, fairness index and Packet Loss Rate (PLR), and the evaluation results show that the proposed scheduling algorithm achieves a remarkable performance improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
86. A content-based recommender system for computer science publications.
- Author
-
Wang, Donghui, Liang, Yanchun, Xu, Dong, Feng, Xiaoyue, and Guan, Renchu
- Subjects
- *
COMPUTER science , *STOCHASTIC learning models , *SEQUENTIAL analysis , *WEB services , *CHI-squared test - Abstract
As computer science and information technology are making broad and deep impacts on our daily lives, more and more papers are being submitted to computer science journals and conferences. To help authors decide where they should submit their manuscripts, we present the Content-based Journals & Conferences Recommender System on computer science, as well as its web service at http://www.keaml.cn/prs/ . This system recommends suitable journals or conferences with a priority order based on the abstract of a manuscript. To follow the fast development of computer science and technology, a web crawler is employed to continuously update the training set and the learning model. To achieve interactive online response, we propose an efficient hybrid model based on chi-square feature selection and softmax regression. Our test results show that the system can achieve an accuracy of 61.37% and suggest the best journals or conferences in about 5 s on average. [ABSTRACT FROM AUTHOR]
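The chi-square feature-selection stage of such a pipeline can be sketched on a hypothetical toy corpus (the softmax-regression stage and the real training set are not reproduced): for each term we build a contingency table of (term present?) × (class) and score it with the chi-square statistic.

```python
# Chi-square score of a term for class discrimination: sum over the
# (present/absent) x (class) cells of (observed - expected)^2 / expected.
def chi2_score(docs, labels, term):
    classes = sorted(set(labels))
    n = len(docs)
    score = 0.0
    for c in classes:
        for present in (True, False):
            observed = sum(1 for d, l in zip(docs, labels)
                           if (term in d) == present and l == c)
            p_term = sum(1 for d in docs if (term in d) == present) / n
            p_class = labels.count(c) / n
            expected = n * p_term * p_class
            if expected > 0:
                score += (observed - expected) ** 2 / expected
    return score

docs = [{"svm", "kernel"}, {"svm", "margin"}, {"tcp", "router"}, {"tcp", "packet"}]
labels = ["ml", "ml", "net", "net"]
# "svm" and "tcp" separate the classes perfectly; "kernel" is rarer.
scores = {t: chi2_score(docs, labels, t) for t in ("svm", "tcp", "kernel")}
print(scores)
```

Keeping only the top-scoring terms shrinks the feature space before the classifier is trained, which is what makes interactive response times feasible.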
- Published
- 2018
- Full Text
- View/download PDF
87. Towards a coherence-oriented complex search experience management method.
- Author
-
Zhang, Yin, Zhang, Bin, Gao, Kening, Li, Pengfei, Zhao, Yuli, and Zhang, Changsheng
- Subjects
- *
SOCIAL interaction , *SEARCH algorithms , *PROBLEM solving , *COHERENCE (Philosophy) , *COMPUTER science - Abstract
Experiences of complex search tasks are important in social interaction and in problem solving. Considering the high importance of complex search experiences, many search experience management systems (SEMSs) have been introduced. Like any other life experience, complex search experiences should maintain 3 types of global coherence: temporal, causal and thematic coherence. However, to the best of our knowledge, none of the available SEMSs were designed to support all 3 types of global coherence. In this paper, we introduce a coherence-oriented complex search experience management method named TimeTree. By organizing the queries and clicks of a complex search task as a relative chronological source-tracking tree (RCST), TimeTree manages to support all 3 types of global coherence. We describe a user study to evaluate TimeTree on 2 typical types of complex search tasks. The subjective evaluation results, the expert evaluation results, and the objective evaluation results all suggest that TimeTree can help maintain temporal, causal and thematic coherence for complex search experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
88. Graph operations based on using distance-based graph entropies.
- Author
-
Ghorbani, Modjtaba, Dehmer, Matthias, and Zangi, Samaneh
- Subjects
- *
GRAPH connectivity , *ENTROPY (Information theory) , *GEOMETRIC vertices , *FULLERENES , *COMPUTER science - Abstract
Let G be a connected graph. The eccentricity of a vertex v is the maximum distance between v and the other vertices of G. In this paper, we study a new version of graph entropy based on the eccentricity of the vertices of G. We then study this graph entropy for some classes of graph operations. Finally, we compute the graph entropy of two classes of fullerene graphs. [ABSTRACT FROM AUTHOR]
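A distance-based graph entropy of this Dehmer type can be computed in a few lines on a hypothetical small graph, taking I(G) = -Σ_v p(v) log₂ p(v) with p(v) = ecc(v) / Σ_u ecc(u); the paper studies a specific variant, and this only illustrates the general form:

```python
from math import log2

# Eccentricity of every vertex via BFS from each vertex (connected graph).
def eccentricities(adj):
    ecc = {}
    for s in adj:
        dist = {s: 0}
        frontier = [s]
        while frontier:                       # level-by-level BFS
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        ecc[s] = max(dist.values())
    return ecc

# Shannon entropy of the eccentricity distribution p(v) = ecc(v) / sum(ecc).
def ecc_entropy(adj):
    ecc = eccentricities(adj)
    total = sum(ecc.values())
    return -sum((e / total) * log2(e / total) for e in ecc.values())

# Path on 4 vertices: eccentricities are (3, 2, 2, 3).
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(eccentricities(path4))  # {0: 3, 1: 2, 2: 2, 3: 3}
print(ecc_entropy(path4))
```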
- Published
- 2018
- Full Text
- View/download PDF
89. A compositional treatment of iterated open games.
- Author
-
Ghani, Neil, Kupke, Clemens, Lambert, Alasdair, and Nordvall Forsberg, Fredrik
- Subjects
- *
REPEATED games (Game theory) , *SEMANTICS , *COMPUTER science , *INFINITE games (Game theory) , *GAME theory - Abstract
Compositional Game Theory is a new, recently introduced model of economic games based upon the computer science idea of compositionality. In it, complex and irregular games can be built up from smaller and simpler games, and the equilibria of these complex games can be defined recursively from the equilibria of their simpler subgames. This paper extends the model by providing a final coalgebra semantics for infinite games. In the course of this, we introduce a new operator on games to model the economic concept of subgame perfection. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
90. Implementing flipped classroom that used an intelligent tutoring system into learning process.
- Author
-
Mohamed, Hafidi and Lamia, Mahnane
- Subjects
- *
FLIPPED classrooms , *LEARNING , *COMPUTER science , *INTELLIGENT tutoring systems , *COMPUTER assisted instruction - Abstract
Students nowadays are hard to be motivated to solve logical problems with traditional teaching methods. Computers, Smartphone's, tablets and other smart devices disturb their attention. But those smart devices can be used as auxiliary tools of modern teaching methods. The flipped classroom is one such innovative method that moves the solving problems outside the classroom via technology and reinforces solving problems inside the classroom via learning activities. In this paper, the authors implement flipped classroom as an element of Internet of Things (IOT) into learning process of mathematical logic course. In the flipped classroom, an Intelligent Tutoring System (ITS) was used to help students work with the problems in the course outside the classroom. This study showed that perceived usefulness, self-efficacy, compatibility, and perceived support for enhancing social ties are important antecedents to continuance intention to use flipped classroom. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
91. Addressing expensive multi-objective games with postponed preference articulation via memetic co-evolution.
- Author
-
Żychowski, Adam, Gupta, Abhishek, Mańdziuk, Jacek, and Ong, Yew Soon
- Subjects
- *
MULTIPLE criteria decision making , *MEMETICS , *THEORY of knowledge , *MATHEMATICAL optimization , *COMPUTER science - Abstract
This paper presents algorithmic and empirical contributions demonstrating that the convergence characteristics of a co-evolutionary approach to tackle Multi-Objective Games (MOGs) with postponed preference articulation can often be hampered due to the possible emergence of the so-called Red Queen effect. Accordingly, it is hypothesized that the convergence characteristics can be significantly improved through the incorporation of memetics (local solution refinements as a form of lifelong learning), as a promising means of mitigating (or at least suppressing) the Red Queen phenomenon by providing a guiding hand to the purely genetic mechanisms of co-evolution. Our practical motivation is to address MOGs characterized by computationally expensive evaluations, wherein there is a natural need to reduce the total number of true evaluations consumed in achieving good quality solutions. To this end, we propose novel enhancements to co-evolutionary approaches for tackling MOGs, such that memetic local refinements can be efficiently applied on evolved candidate strategies by searching on computationally cheap surrogate payoff landscapes (that preserve postponed preference conditions). The efficacy of the proposal is demonstrated on a suite of test MOGs that have been designed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
92. A semantic-rich similarity measure in heterogeneous information networks.
- Author
-
Zhou, Yu, Huang, Jianbin, Li, He, Sun, Heli, Peng, Yan, and Xu, Yueshen
- Subjects
- *
INFORMATION theory , *SEMANTICS , *MATRICES (Mathematics) , *COMPUTER software , *COMPUTER science - Abstract
Most existing similarity metrics in heterogeneous information networks depend on a pre-specified meta-path or meta-structure, which may make them sensitive to the particular meta-path or meta-structure chosen. In this paper, we propose a stratified meta-structure-based similarity measure, named SMSS, for heterogeneous information networks. The stratified meta-structure can be constructed automatically and captures rich semantics. We then define the commuting matrix of the stratified meta-structure in terms of the commuting matrices of its meta-paths and meta-structures, and define SMSS on the basis of this commuting matrix. Experimental evaluations show that existing metrics are indeed sensitive to the choice of meta-path or meta-structure, and that the proposed SMSS outperforms state-of-the-art metrics in terms of ranking and clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
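The commuting matrix mentioned in the SMSS abstract above has a standard concrete form: for a meta-path, it is the product of the adjacency (incidence) matrices along the path. A minimal sketch on a hypothetical toy network, showing the classic Author–Paper–Author meta-path (the stratified combination SMSS builds on top of such matrices is not shown here):

```python
import numpy as np

# Hypothetical author-paper incidence matrix: 3 authors x 4 papers.
# AP[i, k] = 1 iff author i wrote paper k.
AP = np.array([[1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])

# Commuting matrix of the meta-path Author-Paper-Author: the product of the
# adjacency matrices along the path. Entry (i, j) counts the papers that
# authors i and j co-authored (the diagonal counts each author's papers).
APA = AP @ AP.T
print(APA)
```

Similarity measures such as PathSim then normalize this matrix; SMSS instead aggregates the commuting matrices induced by a whole stratified meta-structure.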
93. Lightweight scheme of secure outsourcing SVD of a large matrix on cloud.
- Author
-
Pramkaew, Chakan and Ngamsuriyaroj, Sudsanguan
- Subjects
- *
SINGULAR value decomposition , *CLOUD computing , *BIOINFORMATICS , *MATRICES software , *COMPUTER science - Abstract
For efficiency and economic reasons, a cloud system is an attractive choice for heavy computation tasks. However, computation on the cloud is mostly done on clear text, so the risk of data leakage is very high. The singular value decomposition (SVD) is widely used in several scientific computation areas, including computer science, engineering, bioinformatics and physics, and computing the SVD demands high computing power, especially for large matrices; hence, it is attractive to outsource such computation to a cloud. In addition, many matrices are sparse, containing many zeroes; in some applications, such as those involving sensitive bitmap images, the positions of the zeroes are themselves significant, and knowing them would expose the whole image. This paper proposes a novel secure SVD computation on the cloud. The main idea is to encrypt a source matrix locally before sending it to the cloud; the cloud then computes the SVD of the encrypted matrix without requiring any special algorithm, and the outputs are decrypted locally to obtain the final results. For the encryption, our approach adds a random matrix to the source matrix to ensure that no element, including the zeroes, is exposed in clear form on the cloud, while preserving an equivalent SVD computation on the cloud. The security analysis demonstrates that the proposed scheme gives secure and correct computation while keeping all zeroes hidden. In addition, our experimental results show that the entropy of the encrypted matrix is high; consequently, it gives high resistance to attacks. Furthermore, the performance analysis shows that the complexity of the local workload is O(n^2), while the complexity of the cloud workload is O(n^3). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
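The outsourcing workflow described above (mask locally, run a standard SVD on the cloud, unmask locally) can be sketched in a few lines. Note that this sketch deliberately uses orthogonal masking, a classic alternative that provably preserves singular values, not the paper's additive random-matrix scheme; the orthogonal multiplications themselves cost O(n^3) locally, which is exactly what the paper's O(n^2) additive encryption avoids:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
A = rng.standard_normal((n, n))   # client's private matrix

# Local "encryption": random orthogonal masks P, Q (QR of Gaussian matrices).
# Orthogonal masking preserves singular values; it stands in here for the
# paper's additive scheme, which is cheaper but not reproduced in this sketch.
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
E = P @ A @ Q                     # encrypted matrix sent to the cloud

# Cloud side: a completely standard SVD, no special algorithm required.
U_e, s, Vt_e = np.linalg.svd(E)

# Local decryption: undo the masks to recover the factors of the original A.
U = P.T @ U_e
Vt = Vt_e @ Q.T
A_rec = U @ np.diag(s) @ Vt
print(np.allclose(A_rec, A))      # s are the singular values of A itself
```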
94. Construction method of concept lattice based on improved variable precision rough set.
- Author
-
Zhang, Ruiling, Xiong, Shengwu, and Chen, Zhong
- Subjects
- *
COMPUTER science , *ARTIFICIAL neural networks , *DISCRIMINANT analysis , *ALGORITHMS , *PARAMETER estimation - Abstract
This paper focuses on how to construct concept lattices effectively and efficiently based on an improved variable precision rough set. After preprocessing the formal concepts, an algorithm is proposed that determines the value range of the variable precision parameter β according to the approximate classification quality. An improved β-upper and β-lower distribution attribute reduction algorithm is also proposed, based on the improved variable precision rough set; it can be used for attribute reduction on the original data of the concept lattice and for eliminating redundant knowledge or noise from the formal context. For the reduced formal context, the paper seamlessly combines the concept construction algorithm with an improved rule acquisition algorithm, yielding a novel approach to concept lattice construction based on the improved variable precision rough set. Finally, a prototype concept lattice generation system is developed, comprehensive experiments are performed, and the results confirm the effectiveness of the improved algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
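For readers unfamiliar with the β parameter in the abstract above: variable precision rough sets relax the classic lower and upper approximations by tolerating a bounded misclassification rate. A minimal sketch using one common Ziarko-style formulation, in which an equivalence class E joins the β-lower approximation of a concept X when P(X | E) ≥ β (conventions vary across the literature, and this is not the paper's improved variant):

```python
def vprs_approximations(classes, target, beta):
    """Variable precision rough approximations of `target`.

    classes: partition of the universe into equivalence classes (sets of items)
    target:  the concept X, a set of items
    beta:    precision threshold in (0.5, 1]; beta = 1 recovers classic rough sets
    """
    lower, upper = set(), set()
    for E in classes:
        p = len(E & target) / len(E)  # conditional probability P(X | E)
        if p >= beta:
            lower |= E                # confidently inside X
        if p > 1 - beta:
            upper |= E                # cannot be confidently excluded from X
    return lower, upper

# Hypothetical toy universe partitioned into three equivalence classes.
classes = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 5, 7}
low, up = vprs_approximations(classes, X, beta=0.7)
print(sorted(low), sorted(up))
```

With β = 0.7, the class {1, 2, 3, 4} enters the lower approximation even though element 4 lies outside X, because 3/4 of it belongs to X; the classic (β = 1) lower approximation of this X would be empty.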
95. Computer graphics “Made in Germany”: Darmstadt, the leading “Computer Graphics and Visual Computing Hub” in Europe: The way from 1975 to 2014.
- Author
-
Encarnação, José L. and Fellner, Dieter W.
- Subjects
- *
COMPUTER graphics , *COMPUTER science , *INFORMATION technology , *COMPUTATIONAL complexity , *INFORMATION society - Abstract
The paper reports on the 40 years of development of Computer Graphics and, more recently, Visual Computing (VC) at the Technische Universität Darmstadt in Germany, from its beginning in 1975 to the leading “Computer Graphics and Visual Computing Hub” in Europe as of 2014. This development is described along three axes. First, the institutional development is described, together with its rationale: to establish Computer Graphics as a discipline of Computer Science and as an enabling technology for developing our Knowledge Society. Second, the scientific and technological impact is addressed, based on the teaching activities and the large number of theses submitted in Darmstadt in this area during these 40 years. Finally, the research road maps of the Computer Graphics and Visual Computing Hub in Darmstadt are presented with respect to the different stages of CG and VC research, to a scientific view of the large number of projects implemented over these 40 years, and to the project results as seen in the media. In order to manage the quantity as well as the complexity of the information available, the description of these road maps is divided into four time periods: 1975–1984, 1985–1994, 1995–2004 and 2004–2015. The paper also gives the authors' view of the future of Computer Graphics and Visual Computing, and closes with an extensive list of references for the reported content. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
96. From action icon to knowledge icon: Objective-oriented icon taxonomy in computer science.
- Author
-
Ma, Xiaoyue, Matta, Nada, Cahier, Jean-Pierre, Qin, Chunxiu, and Cheng, Yanjie
- Subjects
- *
COMPUTER science , *COGNITIVE psychology , *ICONS (Computer graphics) , *COMPUTER interfaces , *APPLICATION software - Abstract
Icons play a critical role in computer interface design. Studies of icon taxonomy explain how various types of icon represent objects and give designers creation rules that align icons with users' cognitive psychology. However, as icons have come into wider and wider use, earlier classification criteria have blurred the boundaries between categories; moreover, a single classification standard cannot adequately describe the icons used in today's computer applications. The purpose of this paper is to present an objective-oriented icon taxonomy that categorizes icons into action icons and knowledge icons. To assess this proposition, we analyzed a sample of icons used in computer interfaces and suggest precise application domains for both the action icon and knowledge icon categories. The results show that action icons and knowledge icons are strongly related to their application environment, and they trace the development of computer icons. This work is among the first to introduce the notion of the knowledge icon and to highlight the importance of the objective of icon application. The findings could enrich icon use in computer interface design and, in particular, suggest ways to improve online knowledge sharing through visual tools such as icons. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
97. Two-sided context specifications in formal grammars.
- Author
-
Barash, Mikhail and Okhotin, Alexander
- Subjects
- *
CONTEXT sensitive languages (Computer science) , *FORMAL languages , *COMPUTER operators , *COMPUTER science , *COMPUTER algorithms , *COMPARATIVE grammar - Abstract
In a recent paper (M. Barash, A. Okhotin, “An extension of context-free grammars with one-sided context specifications”, Inform. and Comput., 2014), the authors introduced an extension of context-free grammars equipped with an operator for referring to the left context of the substring being defined. This paper proposes a more general model, in which context specifications may be two-sided, that is, both the left and the right contexts can be specified by the corresponding operators. The paper gives the definitions, presents several examples of grammars and establishes a basic normal form theorem. This normal form, in particular, leads to a simple parsing algorithm working in time O(n^4), where n is the length of the input string. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
98. Special issue SOCO 2020: New trends in soft computing and its application in industrial and environmental problems.
- Author
-
Sedano, Javier, Urda, Daniel, Calvo-Rolle, José Luis, Quintián, Héctor, and Corchado, Emilio
- Subjects
- *
SOFT computing , *INDUSTRIAL applications , *COMPUTER science , *MACHINE learning , *GRADUATE students , *INTELLIGENT tutoring systems - Abstract
The seven papers included in this special issue represent a selection of extended contributions presented at the 15th International Conference on Soft Computing Models in Industrial and Environmental Applications, SOCO 2020, held in Burgos, Spain, in September 2020, and organized by the University of Burgos and the BISITE and GICAP research groups. The SOCO international conference covers a family of computational techniques in machine learning, computer science and some engineering disciplines which investigate, simulate, and analyse very complex issues and phenomena. This special issue is aimed at practitioners, researchers and postgraduate students who are engaged in developing and applying advanced intelligent systems principles to solving real-world problems in these fields. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
99. MD-ELM: Originally Mislabeled Samples Detection using OP-ELM Model.
- Author
-
Akusok, Anton, Veganzones, David, Miche, Yoan, Björk, Kaj-Mikael, Jardin, Philippe du, Severin, Eric, and Lendasse, Amaury
- Subjects
- *
MACHINE learning , *MACHINE theory , *INFORMATION science , *COMPUTER science , *ESTIMATION theory - Abstract
This paper proposes a methodology for identifying data samples that are likely to be mislabeled in a c-class classification problem (dataset). The methodology relies on the assumption that the generalization error of a model learned from the data decreases if the label of a mislabeled sample is changed to its correct class. The general classification model used in the paper is OP-ELM, which also provides a fast way to estimate the generalization error via PRESS Leave-One-Out. The method is tested on two toy datasets, as well as on real-life datasets, for one of which expert knowledge about the identified potential mislabels has been sought. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
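The relabeling heuristic in the abstract above can be illustrated without the OP-ELM/PRESS machinery: flag a sample if flipping its label lowers a leave-one-out error estimate. A minimal sketch using a 1-nearest-neighbour stand-in for the classifier and hypothetical synthetic data with one planted mislabel:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated Gaussian clusters of 20 points each (hypothetical data).
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[5] = 1                              # plant one mislabeled sample

def loo_error(X, y):
    # Leave-one-out error of a 1-NN classifier: each point is predicted by
    # the label of its nearest *other* point.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return np.mean(y[D.argmin(axis=1)] != y)

base = loo_error(X, y)
# Flag sample i if flipping its label strictly lowers the LOO error.
suspects = [i for i in range(len(y))
            if loo_error(X, np.where(np.arange(len(y)) == i, 1 - y, y)) < base]
print(suspects)                       # the planted index 5 should be flagged
```

The paper's contribution is making this loop practical: OP-ELM's PRESS statistic gives the leave-one-out error in closed form, avoiding retraining for every candidate relabeling.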
100. Inductive theorem proving based on tree grammars.
- Author
-
Eberhard, Sebastian and Hetzl, Stefan
- Subjects
- *
INDUCTION (Logic) , *INVARIANTS (Mathematics) , *STRUCTURAL analysis (Science) , *GRAMMAR , *REASONING , *COMPUTER science - Abstract
Induction plays a key role in reasoning in many areas of mathematics and computer science. A central problem in the automation of proof by induction is the non-analytic nature of induction invariants. In this paper we present an algorithm for proving universal statements by induction that separates this problem into two phases. The first phase consists of a structural analysis of witness terms of instances of the universal statement. The result of such an analysis is a tree grammar which induces a quantifier-free unification problem which is solved in the second phase. Each solution to this problem is an induction invariant. The arguments and techniques used in this paper heavily exploit a correspondence between tree grammars and proofs already applied successfully to the generation of non-analytic cuts in the setting of pure first-order logic. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF