612 results
Search Results
202. Towards a coherence-oriented complex search experience management method.
- Author
Zhang, Yin, Zhang, Bin, Gao, Kening, Li, Pengfei, Zhao, Yuli, and Zhang, Changsheng
- Subjects
SOCIAL interaction, SEARCH algorithms, PROBLEM solving, COHERENCE (Philosophy), COMPUTER science
- Abstract
Experiences of complex search tasks are important in social interaction and in problem solving. Considering the high importance of complex search experiences, many search experience management systems (SEMSs) have been introduced. Like any other life experience, complex search experiences should maintain three types of global coherence: temporal, causal and thematic coherence. However, to the best of our knowledge, none of the available SEMSs were designed to support all three types of global coherence. In this paper, we introduce a coherence-oriented complex search experience management method named TimeTree. By organizing the queries and clicks of a complex search task as a relative chronological source-tracking tree (RCST), TimeTree manages to support all three types of global coherence. We describe a user study to evaluate TimeTree on two typical types of complex search tasks. The subjective evaluation results, the expert evaluation results, and the objective evaluation results all suggest that TimeTree can help maintain temporal, causal and thematic coherence for complex search experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2018
203. A compositional treatment of iterated open games.
- Author
Ghani, Neil, Kupke, Clemens, Lambert, Alasdair, and Nordvall Forsberg, Fredrik
- Subjects
REPEATED games (Game theory), SEMANTICS, COMPUTER science, INFINITE games (Game theory), GAME theory
- Abstract
Compositional Game Theory is a new, recently introduced model of economic games based upon the computer science idea of compositionality. In it, complex and irregular games can be built up from smaller and simpler games, and the equilibria of these complex games can be defined recursively from the equilibria of their simpler subgames. This paper extends the model by providing a final coalgebra semantics for infinite games. In the course of this, we introduce a new operator on games to model the economic concept of subgame perfection. [ABSTRACT FROM AUTHOR]
- Published
- 2018
204. Implementing flipped classroom that used an intelligent tutoring system into learning process.
- Author
Mohamed, Hafidi and Lamia, Mahnane
- Subjects
FLIPPED classrooms, LEARNING, COMPUTER science, INTELLIGENT tutoring systems, COMPUTER assisted instruction
- Abstract
Students nowadays are hard to motivate to solve logical problems with traditional teaching methods. Computers, smartphones, tablets and other smart devices disturb their attention. But those smart devices can also be used as auxiliary tools of modern teaching methods. The flipped classroom is one such innovative method: it moves problem solving outside the classroom via technology and reinforces problem solving inside the classroom via learning activities. In this paper, the authors implement the flipped classroom as an element of the Internet of Things (IoT) in the learning process of a mathematical logic course. In the flipped classroom, an Intelligent Tutoring System (ITS) was used to help students work on the problems in the course outside the classroom. This study showed that perceived usefulness, self-efficacy, compatibility, and perceived support for enhancing social ties are important antecedents of the continuance intention to use the flipped classroom. [ABSTRACT FROM AUTHOR]
- Published
- 2018
205. Addressing expensive multi-objective games with postponed preference articulation via memetic co-evolution.
- Author
Żychowski, Adam, Gupta, Abhishek, Mańdziuk, Jacek, and Ong, Yew Soon
- Subjects
MULTIPLE criteria decision making, MEMETICS, THEORY of knowledge, MATHEMATICAL optimization, COMPUTER science
- Abstract
This paper presents algorithmic and empirical contributions demonstrating that the convergence characteristics of a co-evolutionary approach to tackle Multi-Objective Games (MOGs) with postponed preference articulation can often be hampered due to the possible emergence of the so-called Red Queen effect. Accordingly, it is hypothesized that the convergence characteristics can be significantly improved through the incorporation of memetics (local solution refinements as a form of lifelong learning), as a promising means of mitigating (or at least suppressing) the Red Queen phenomenon by providing a guiding hand to the purely genetic mechanisms of co-evolution. Our practical motivation is to address MOGs characterized by computationally expensive evaluations, wherein there is a natural need to reduce the total number of true evaluations consumed in achieving good quality solutions. To this end, we propose novel enhancements to co-evolutionary approaches for tackling MOGs, such that memetic local refinements can be efficiently applied on evolved candidate strategies by searching on computationally cheap surrogate payoff landscapes (that preserve postponed preference conditions). The efficacy of the proposal is demonstrated on a suite of purpose-designed test MOGs. [ABSTRACT FROM AUTHOR]
- Published
- 2018
206. A semantic-rich similarity measure in heterogeneous information networks.
- Author
Zhou, Yu, Huang, Jianbin, Li, He, Sun, Heli, Peng, Yan, and Xu, Yueshen
- Subjects
INFORMATION theory, SEMANTICS, MATRICES (Mathematics), COMPUTER software, COMPUTER science
- Abstract
Most of the existing similarity metrics in heterogeneous information networks depend on a pre-specified meta-path or meta-structure. This dependency may cause them to be sensitive to different meta-paths or meta-structures. In this paper, we propose a stratified meta-structure-based similarity measure named SMSS in heterogeneous information networks. The stratified meta-structure can be constructed automatically and captures rich semantics. Then, we define the commuting matrix of the stratified meta-structure by virtue of the commuting matrices of meta-paths and meta-structures. As a result, SMSS is defined by virtue of this commuting matrix. Experimental evaluations show that the existing metrics are sensitive to different meta-paths or meta-structures and that the proposed SMSS outperforms the state-of-the-art metrics in terms of ranking and clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
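In the standard formulation from the literature (not the SMSS measure of this paper), the commuting matrix of a meta-path is the product of the adjacency matrices along the path, and meta-path similarities such as PathSim are then defined from it. A minimal pure-Python sketch of that standard construction, with an invented toy author-paper network:

```python
# Commuting matrix of the meta-path A-P-A (author-paper-author):
# M = W_AP * W_AP^T, where W_AP[i][j] = 1 if author i wrote paper j.
# PathSim-style similarity: s(i, j) = 2*M[i][j] / (M[i][i] + M[j][j]).

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def pathsim(w_ap, i, j):
    """Similarity of authors i and j along the A-P-A meta-path."""
    m = matmul(w_ap, transpose(w_ap))  # commuting matrix of A-P-A
    return 2 * m[i][j] / (m[i][i] + m[j][j])

# Toy network: 3 authors, 4 papers (hypothetical data).
W_AP = [
    [1, 1, 0, 0],   # author 0 wrote papers 0, 1
    [1, 1, 1, 0],   # author 1 wrote papers 0, 1, 2
    [0, 0, 0, 1],   # author 2 wrote paper 3
]

print(pathsim(W_AP, 0, 1))  # authors 0 and 1 share two papers
print(pathsim(W_AP, 0, 2))  # no shared papers
```

The abstract's point is that such measures inherit the choice of meta-path; SMSS instead aggregates commuting matrices over a stratified meta-structure.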
207. Lightweight scheme of secure outsourcing SVD of a large matrix on cloud.
- Author
Pramkaew, Chakan and Ngamsuriyaroj, Sudsanguan
- Subjects
SINGULAR value decomposition, CLOUD computing, BIOINFORMATICS, MATRICES software, COMPUTER science
- Abstract
For efficiency and economic reasons, a cloud system would be the most attractive choice for high-computation tasks. However, computation on the cloud is mostly done on clear text, so the risk of data leakage is very high. The singular value decomposition (SVD) is widely used in several scientific computation areas including computer science, engineering, bioinformatics and physics, and computing the SVD consumes high computing power, especially for large matrices. Hence, it would be efficient to outsource such computation to a cloud. In addition, many matrices are sparse, containing lots of zeroes that may have no meaning, whereas some applications involve sensitive bitmap images in which the positions of zeroes are very significant. In other words, knowing the positions of zeroes would clearly expose the whole image. This paper proposes a novel secure SVD computation on the cloud; the main idea is to locally encrypt a source matrix before sending it to the cloud. The cloud then computes the SVD on the encrypted matrix without requiring any special algorithm, and the outputs are locally decrypted to obtain the final results. For the encryption, our approach adds a random matrix to the source matrix to ensure that no element, including zeroes, is exposed in clear form on the cloud. Moreover, the encryption preserves the equivalent SVD computation on the cloud. The security analysis demonstrates that our proposed scheme gives secure and correct computation while all zeroes are kept hidden. In addition, our experimental results show that the entropy of our encrypted matrix is high; consequently, it would give high resistance to attacks. Furthermore, the performance analysis shows that the complexity of the local workload is O(n^2) while the complexity of the cloud workload is O(n^3). [ABSTRACT FROM AUTHOR]
- Published
- 2018
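The additive-masking idea described in the abstract can be sketched in a few lines. This toy illustrates only the zero-hiding and reversibility aspects (adding a random matrix, keeping it as the local key); it does not reproduce the paper's construction that preserves the SVD on the cloud side. The matrix, key range, and function names are invented for the example:

```python
import random

def mask(matrix, key_rng, low=1, high=9):
    """Client-side encryption sketch: add a random matrix R to the source.
    After masking, no element (including zeroes) appears in the clear."""
    r = [[key_rng.randint(low, high) for _ in row] for row in matrix]
    enc = [[a + b for a, b in zip(mrow, rrow)] for mrow, rrow in zip(matrix, r)]
    return enc, r  # R is kept locally as the decryption key

def unmask(enc, r):
    """Local decryption: subtract the key matrix."""
    return [[a - b for a, b in zip(erow, rrow)] for erow, rrow in zip(enc, r)]

# A sparse "bitmap-like" matrix whose zero positions are sensitive.
A = [[0, 0, 5],
     [0, 7, 0],
     [3, 0, 0]]

enc, key = mask(A, random.Random(42))
# Zero positions are no longer visible in the encrypted matrix...
assert all(v != 0 for row in enc for v in row)
# ...and local decryption recovers the source exactly.
assert unmask(enc, key) == A
```

The randomness of R is also what drives up the entropy of the encrypted matrix that the abstract reports.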
208. Semantic segmentation of RGBD images based on deep depth regression.
- Author
Guo, Yanrong and Chen, Tao
- Subjects
COMPUTER vision, ARTIFICIAL intelligence, ARTIFICIAL neural networks, SEMANTIC computing, COMPUTER science
- Abstract
Depth information has been shown to improve the performance of computer vision tasks such as semantic segmentation and object recognition. However, careful acquisition of depth data needs highly developed depth sensors, which are expensive. As a classic computer vision task, depth estimation from a single image has obtained promising results based on supervised learning methods. In this paper, we investigate augmenting color images with corresponding deep-regressed depth images to boost the performance of semantic segmentation. Furthermore, we compare combining the color channels with the estimated depth channel versus the ground-truth depth channel. Specifically, there are two stages in our work. First, we adopt the framework of convolutional neural networks (CNN) for depth estimation by combining a global depth network and a depth gradient network. After refining based on these two networks, the depth map can be estimated in a deep-regressed manner. Second, after augmenting the color images with the predicted depth images, fully convolutional networks (FCN) are used to perform pixel-level semantic labeling. In the experiments, we employ two popular RGBD datasets, SUNRGBD and NYUDv2, for 37- and 40-class semantic segmentation, respectively. By comparing with the ground-truth depth images, experimental results demonstrate that networks trained on the estimated depth images can achieve comparable accuracy on the semantic segmentation task. [ABSTRACT FROM AUTHOR]
- Published
- 2018
209. Early cherry fruit pathogen disease detection based on data mining prediction.
- Author
Ilic, Milos, Ilic, Sinisa, Jovic, Srdjan, and Panic, Stefan
- Subjects
INFORMATION & communication technologies, COMPUTER science, PLANT protection, ELECTRONIC data processing, MONILINIA laxa, COCCOMYCES
- Abstract
Today’s world depends largely on information and communication technologies. These technologies are in use in different areas of human life and work. Each day, more examples of possible applications of information and communication technology are being discovered. In most cases, computer science is used to solve complex problems which have a mathematical background. The most important and challenging job in agriculture is plant protection, due to its complexity and the lack of specialized tools that could predict when the conditions for specific infections are fulfilled. In this paper, the authors use different mathematics-based techniques for data processing and prediction of possible fruit disease infection. Six significant weather variables and one variable representing the month of the year are selected as predictor variables. The implemented techniques are compared with each other in order to select the best one. Prediction covers the two most important diseases of cherry fruit: Monilinia laxa and Coccomyces hiemalis. The data sets used in this research cover an eight-year period, collected over the region of Toplica in Serbia. The best achieved prediction accuracy is 95.8%. Additionally, the same implemented methods can be applied to other fruit species and other diseases for which data are known. [ABSTRACT FROM AUTHOR]
- Published
- 2018
210. An information dimension of weighted complex networks.
- Author
Wen, Tao and Jiang, Wen
- Subjects
MATHEMATICAL models, COMPUTER science, CONTROL theory (Engineering), MATHEMATICS, BIOLOGY
- Abstract
Fractality and self-similarity are important properties of complex networks, and the information dimension is a useful dimension for revealing them. In this paper, an information dimension is proposed for weighted complex networks. Based on the box-covering algorithm for weighted complex networks (BCANw), the proposed method can deal with the weighted complex networks that appear frequently in the real world, and it captures the influence of the number of nodes in each box on the information dimension. To show the wide scope of the information dimension, some applications are illustrated, indicating that the proposed method is effective and feasible. [ABSTRACT FROM AUTHOR]
- Published
- 2018
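For reference, the information dimension of a fractal object is conventionally defined from the box-covering probabilities. Writing $p_i(\varepsilon)$ for the fraction of (weighted) mass falling in box $i$ at box size $\varepsilon$, the usual definition reads

```latex
I(\varepsilon) = -\sum_{i} p_i(\varepsilon) \ln p_i(\varepsilon),
\qquad
d_I = \lim_{\varepsilon \to 0} \frac{I(\varepsilon)}{\ln(1/\varepsilon)}
```

This is the standard formulation from fractal geometry; the exact weighted variant used on top of BCANw is specified in the paper itself.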
211. Detecting wholesale copying in cultural evolution.
- Author
Morin, Olivier and Miton, Helena
- Subjects
SOCIAL evolution, COMPUTER science, DATA transmission systems, COPYING, MUTATION statistics
- Abstract
A cultural practice can spread because it is transmitted with high fidelity, but also because biased transformation leads to its reinvention. The respective effect of these two mechanisms, however, may only be quantified if we can measure and detect high-fidelity transmission. This paper proposes wholesale copying, the reproduction of a set of elements as a set, as an operational definition. Using two corpora of heraldic designs (total n = 13,453), we apply information-theoretic tools to detect cases of wholesale copying and gauge their incidence. Heraldic designs are composed according to rigorous combinatorial rules. Wholesale copying causes the frequency of a design to increase out of proportion with the frequency of the motif and tinctures that make it up. Comparing the frequency of designs with that of their component motifs and tinctures, we show that the amount of information carried by a design tracks its inheritance along family lines. A model predicting the frequency of heraldic designs based solely on the frequency of their component parts systematically outperforms one that assumes a mix of wholesale copying and random mutation (with realistic mutation rates). These findings are consistent with low but non-null incidences of wholesale copying in the diffusion of heraldic designs. [ABSTRACT FROM AUTHOR]
- Published
- 2018
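The detection idea in the abstract, comparing a design's observed frequency with the frequency predicted from its component parts under independence, can be sketched on a toy corpus. The corpus, threshold, and function name below are invented for illustration; the paper's actual models and statistics are more sophisticated:

```python
from collections import Counter

def copying_candidates(designs, ratio_threshold=3.0):
    """Flag designs whose observed frequency is out of proportion with the
    product of their component frequencies (independence baseline)."""
    n = len(designs)
    design_counts = Counter(designs)
    motif_counts = Counter(m for m, _ in designs)
    tincture_counts = Counter(t for _, t in designs)
    flagged = []
    for (motif, tincture), observed in design_counts.items():
        # Expected count if motif and tincture combine independently.
        expected = n * (motif_counts[motif] / n) * (tincture_counts[tincture] / n)
        if observed / expected >= ratio_threshold:
            flagged.append((motif, tincture))
    return flagged

# Hypothetical corpus of (motif, tincture) designs: one design is repeated
# far more often than its component frequencies predict.
motifs = ["eagle", "stag", "boar", "swan", "fox", "wolf", "hart"]
tinctures = ["azure", "vert"]
rest = [(m, t) for m in motifs for t in tinctures]   # 14 distinct designs
corpus = [("lion", "gules")] * 6 + rest

print(copying_candidates(corpus))
```

A design whose count matches the independence baseline is explained by recombination of parts; a large excess is the signature of set-level (wholesale) copying.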
212. Optimising Kafka for stream processing in latency sensitive systems.
- Author
Wiatr, Roman, Słota, Renata, and Kitowski, Jacek
- Subjects
DATA management, JAVA programming language, COMPUTER systems, COMPUTER science, COMPUTER networks
- Abstract
Many problems, like recommendation services, sensor networks, anti-crime protection, and sophisticated AI services, need online processing of data coming from the environment in the form of data streams consisting of events. The novelty of the approach in the field of stream processing lies in a synergistic effort toward optimization of such systems together with the client components they need, working as a whole. Building a message-passing system for gathering information from mission-critical systems can be beneficial, but close attention must be paid to the impact it has on these systems. In this paper, we present the Apache Kafka optimization process for using Kafka as a messaging system in latency-sensitive systems. We propose a set of performance tests that can be used to measure Kafka's impact on the system, along with performance test results for the KafkaProducer Java API. KafkaProducer has almost no impact on overall system latency, but it has a severe impact on resource consumption in terms of CPU. By optimising Kafka for stream processing in latency-sensitive systems, we reduce KafkaProducer's negative impact by 75%. The tests are performed on an isolated production system. [ABSTRACT FROM AUTHOR]
- Published
- 2018
213. On the kernelization complexity of string problems.
- Author
Basavaraju, Manu, Panolan, Fahad, Rai, Ashutosh, Ramanujan, M.S., and Saurabh, Saket
- Subjects
COMPUTATIONAL complexity, BOOLEAN algebra, GRAPH algorithms, BIOINFORMATICS, COMPUTER science
- Abstract
In the Closest String problem we are given an alphabet Σ, a set of strings S = {s_1, s_2, …, s_k} over Σ such that |s_i| = n, and an integer d. The objective is to check whether there exists a string s over Σ such that d_H(s, s_i) ≤ d for every i ∈ {1, …, k}, where d_H(x, y) denotes the number of positions at which the strings x and y differ. Closest String is a prototype string problem. This problem, together with several of its variants such as Distinguishing String Selection and Closest Substring, has been extensively studied from the parameterized complexity perspective. These problems have been studied with respect to parameters that are combinations of k, d, |Σ| and n. Surprisingly, however, the kernelization question for these problems (for the versions that admit fixed-parameter tractable algorithms) has not been studied at all. In this paper we fill this gap in the literature and give a comprehensive study of these problems from the kernelization complexity perspective. We settle almost all of the problems by either obtaining a polynomial kernel or showing that the problem does not admit a polynomial kernel under a standard complexity-theoretic assumption. [ABSTRACT FROM AUTHOR]
- Published
- 2018
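The Closest String definition above can be made concrete with a brute-force checker. This sketch is exponential in n, so it is usable only on toy instances; the point of the paper is precisely the parameterized and kernelization complexity of doing better. The example strings are invented:

```python
from itertools import product

def hamming(x, y):
    """d_H(x, y): number of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def closest_string(strings, alphabet, d):
    """Brute-force check: is there a string s over the alphabet with
    d_H(s, s_i) <= d for every input string s_i?  Tries all |alphabet|^n
    candidates, so this is for tiny illustrative instances only."""
    n = len(strings[0])
    for cand in product(alphabet, repeat=n):
        s = "".join(cand)
        if all(hamming(s, t) <= d for t in strings):
            return s
    return None

S = ["acgt", "acgg", "atgt"]
print(closest_string(S, "acgt", 1))   # a center within distance 1 exists
print(closest_string(S, "acgt", 0))   # None: the strings have no exact match
```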
214. Online multiple object tracking via exchanging object context.
- Author
Yu, Hongyang, Qin, Lei, Huang, Qingming, and Yao, Hongxun
- Subjects
OBJECT recognition (Computer vision), HISTOGRAMS, COMPUTER science, NEURAL computers, SMOOTHNESS of functions
- Abstract
Multiple object tracking is a key problem for many computer vision applications such as video surveillance, advanced driver assistance, and animation. Most existing tracking-by-detection methods are based mainly on object appearances and motions; however, the contextual information around the target has not been fully exploited. In this paper, we pay closer attention to contextual information and propose an Exchanging Object Context (EOC) model, which takes full advantage of context information. Specifically, we implement an efficient and accurate online multiple object tracking algorithm with a novel affinity measure to associate detections. This measure calculates the similarity between targets and detections via the background smoothness after exchanging the contexts between detections and targets, using a novel color histogram descriptor. We refine the bounding boxes by measuring the context changes. Extensive experimental results on two public benchmarks demonstrate the effectiveness of the proposed tracking method in comparison with several state-of-the-art trackers. [ABSTRACT FROM AUTHOR]
- Published
- 2018
215. Towards an Adaptive Formative Assessment in Context-Aware Mobile Learning.
- Author
Louhab, Fatima Ezzahraa, Bahnasse, Ayoub, and Talea, Mohamed
- Subjects
MOBILE learning, ADAPTIVE control systems, COMPUTER engineering, ARTIFICIAL intelligence, COMPUTER science
- Abstract
Today, with the development of computer technologies, traditional learning, which offers static content to all learners, is no longer desired in learning environments. As a result, the exploitation of this development has given rise to new methods of learning. Mobile learning is one of these methods, and specifically adaptive mobile learning, or context-aware mobile learning. Generally, the learning process goes through several stages; assessment is part of this process and is a key step in the learning activity, and it can take several forms. When we talk about adaptive learning, we should also think about adaptive assessment, where learners can take test content adapted to their context. In this paper, we present the Adaptive Formative Assessment in Context-Aware Mobile Learning (AFA-CAML) approach. The goal of this approach is to provide learners with an adaptive and personalized formative assessment that takes the learner's context into account, based on Computerized Adaptive Tests (CAT) theory. [ABSTRACT FROM AUTHOR]
- Published
- 2018
216. Independence number and the number of maximum independent sets in pseudofractal scale-free web and Sierpiński gasket.
- Author
Shan, Liren, Li, Huan, and Zhang, Zhongzhi
- Subjects
NUMBER theory, INDEPENDENT sets, COMPUTER science, GEOMETRIC vertices, PROBLEM solving
- Abstract
As a fundamental subject of theoretical computer science, the maximum independent set (MIS) problem is not only of purely theoretical interest but has also found wide applications in various fields. However, for a general graph, determining the size of an MIS is NP-hard, and exact computation of the number of all MISs is even more difficult. It is thus of significant interest to seek special graphs for which the MIS problem can be solved exactly. In this paper, we address the MIS problem in the pseudofractal scale-free web and the Sierpiński gasket, which have the same number of vertices and edges. For both graphs, we determine exactly the independence number and the number of all possible MISs. The independence number of the pseudofractal scale-free web is twice that of the Sierpiński gasket. Moreover, the pseudofractal scale-free web has a unique MIS, while the number of MISs in the Sierpiński gasket grows exponentially with the number of vertices. [ABSTRACT FROM AUTHOR]
- Published
- 2018
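The two quantities the paper computes for its special graphs, the independence number and the number of MISs, can be illustrated on a tiny graph by exhaustive search (which is exactly what NP-hardness rules out in general). The 5-cycle example is standard; the function names are our own:

```python
from itertools import combinations

def is_independent(vertices, edges):
    """A set is independent if no edge joins two of its vertices."""
    vs = set(vertices)
    return not any(u in vs and v in vs for u, v in edges)

def maximum_independent_sets(n, edges):
    """Return (independence number, list of all MISs) by exhaustive search
    over vertex subsets, largest first.  Exponential in n: fine for toy
    graphs, infeasible in general, as the abstract notes."""
    for k in range(n, -1, -1):
        sets = [c for c in combinations(range(n), k)
                if is_independent(c, edges)]
        if sets:
            return k, sets
    return 0, [()]

# Toy graph: the 5-cycle C5.  Its independence number is 2, with 5 MISs.
alpha, mis = maximum_independent_sets(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(alpha, len(mis))   # 2 5
```

The contrast the paper proves, a unique MIS versus exponentially many, is exactly the difference between `len(mis) == 1` and `len(mis)` growing with the graph size.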
217. On strongly M-unambiguous prints and Şerbǎnuţǎ's conjecture for Parikh matrices.
- Author
Teh, Wen Chean, Atanasiu, Adrian, and Poovanandran, Ghajendran
- Subjects
INJECTIVE functions, MATHEMATICAL equivalence, PROBABILISTIC automata, COMPUTER science, VECTOR algebra
- Abstract
In the combinatorial study of words, the Parikh matrix mapping was introduced by Mateescu et al. in 2001 as a natural expansion of the classical Parikh mapping. Solving the general injectivity problem of Parikh matrices remains one of the most sought-after goals among researchers in this area of study. In this paper, we tackle this problem by extending Şerbǎnuţǎ's work regarding prints and M-unambiguity to the context of strong M-equivalence. Consequently, we obtain results on the finiteness of strongly M-unambiguous prints for any finite alphabet. Finally, a related conjecture by Şerbǎnuţǎ is conclusively addressed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
218. Remediating critical cause-effect situations with an extended BDI architecture.
- Author
Faccin, J. and Nunes, I.
- Subjects
PROBLEM solving, COMPUTER science, APPLICATION software, INFRASTRUCTURE (Economics), PERFORMANCE evaluation
- Abstract
Remediation actions are performed in scenarios in which the consequences of a problem should be promptly mitigated because its cause takes too long to be addressed or is unknown. Such scenarios are recurrent in the real world, including in the context of computer science. Existing approaches that address these scenarios are application-specific. Nevertheless, the reasoning about remediation actions, as well as cause identification and resolution in order to address problems permanently, can be abstracted in such a way that it can be incorporated into autonomous software components, often referred to as agents. They can thus autonomously deal with these scenarios, which we refer to as critical cause-effect situations. In this paper, we propose a domain-independent extension to the belief-desire-intention (BDI) architecture that provides such agents with this automated reasoning. Our work provides an extensible solution to this recurrent problem-solving strategy and allows agents to deal flexibly with resource-constrained scenarios. This solution removes the need for manually implementing the coordination of actions performed by agents, using causal models to capture the knowledge required to carry out this task. Therefore, it not only allows the development of systems with remediative behaviour, but also reduces development effort by means of a reusable infrastructure that can be used in several different domains. Our approach was evaluated against an existing solution in the network resilience domain, which showed that our extended agent can autonomously address a network challenge, with a reduction in development effort and no impact on agent performance. [ABSTRACT FROM AUTHOR]
- Published
- 2018
219. Enhancing context knowledge repositories with justifiable exceptions.
- Author
Bozzato, Loris, Eiter, Thomas, and Serafini, Luciano
- Subjects
SEMANTIC computing, COMPUTER science, DESCRIPTION logics, INFORMATION theory, COMPUTER programming
- Abstract
Dealing with context-dependent knowledge is a well-known area of study that has its roots in John McCarthy's seminal work. More recently, the Contextualized Knowledge Repository (CKR) framework has been conceived as a logic-based approach in which knowledge bases have a two-layered structure, modeled by a global context and a set of local contexts. The global context not only contains the meta-knowledge defining the properties of local contexts, but also holds the global (context-independent) object knowledge that is shared by all of the local contexts. In many practical cases, however, it is desirable to leave open the possibility of "overriding" the global object knowledge at the local level: in other words, it is interesting to recognize the pieces of knowledge that can admit exceptional instances in the local contexts which do not need to satisfy the general axiom. To address this need, we present in this paper an extension of CKR in which defeasible axioms can be included in the global context. These are verified in the local contexts only for the instances for which no exception to overriding exists, where exceptions require a justification in terms of facts that are provable from the knowledge base. We formally define this semantics and study some semantic and computational properties, characterizing the complexity of the major reasoning tasks, among them satisfiability testing, instance checking, and conjunctive query answering. Furthermore, we present a translation of extended CKRs with knowledge bases in the Description Logic SROIQ-RL under the novel semantics into datalog programs under the stable model (answer set) semantics. We also present an implementation prototype and examine its scalability with respect to the size of the input CKR and the amount (level) of defeasibility in experiments.
Finally, we compare our representation approach with some major formalisms for expressing defeasible knowledge in Description Logics and contextual knowledge representation. Our work adds to the body of results on using deductive database technology such as SQL and datalog in these areas, and provides an expressive formalism (in terms of intrinsic complexity) for exception handling by overriding. [ABSTRACT FROM AUTHOR]
- Published
- 2018
220. Independent component analysis by lp-norm optimization.
- Author
Park, Sungheon and Kwak, Nojun
- Subjects
INDEPENDENT component analysis, MAXIMUM likelihood statistics, GAUSSIAN processes, COMPUTER algorithms, COMPUTER science
- Abstract
In this paper, a couple of new algorithms for independent component analysis (ICA) are proposed. In the proposed methods, the independent sources are assumed to follow a predefined distribution of the form f(s) = α exp(−β|s|^p), and maximum likelihood estimation is used to separate the sources. In the first method, a gradient ascent method is used for the maximum likelihood estimation, while in the second, a non-iterative algorithm is proposed based on a relaxation of the problem. The maximization of the log-likelihood of the estimated source X^T w, given the parameter p and the data X, is shown to be equivalent to the minimization of the l_p-norm of the projected data X^T w. This formulation of ICA has a very close relationship with Lp-PCA, where the maximization of the same objective function is solved. The proposed algorithm solves an approximation of the l_p-norm minimization problem for both the super-Gaussian (p < 2) and sub-Gaussian (p > 2) cases and shows superior performance in separating independent sources compared with state-of-the-art algorithms for ICA computation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
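The abstract's central reduction, maximum likelihood under f(s) = α exp(−β|s|^p) being equivalent to minimizing the l_p-norm of the projection X^T w, can be illustrated directly. The sketch below just evaluates that objective and runs a naive grid search over unit directions in 2-D (not the paper's gradient-ascent or relaxation algorithms); the toy data are invented:

```python
import math

def lp_norm(v, p):
    """l_p-norm of a vector: (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def projection(X, w):
    """Project each 2-D sample onto direction w (the X^T w of the abstract)."""
    return [x[0] * w[0] + x[1] * w[1] for x in X]

def best_direction(X, p, steps=360):
    """Naive grid search over unit vectors w minimizing ||X^T w||_p."""
    best = None
    for k in range(steps):
        theta = math.pi * k / steps
        w = (math.cos(theta), math.sin(theta))
        obj = lp_norm(projection(X, w), p)
        if best is None or obj < best[0]:
            best = (obj, w)
    return best[1]

assert abs(lp_norm([3.0, 4.0], 2) - 5.0) < 1e-12  # Euclidean sanity check

# Toy data concentrated along two directions; minimizing the l1-norm
# (p = 1, the sparse / super-Gaussian regime) of the projection picks the
# direction in which the projected data are sparsest.
X = [(1.0, 1.0), (-1.0, -1.0), (2.0, -2.0), (-0.5, 0.5), (3.0, 3.0)]
w = best_direction(X, p=1)
# w comes out orthogonal to (1, 1): three of the five projections are zero.
```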
221. Trend representation based log-density regularization system for portfolio optimization.
- Author
Yang, Pei-Yi, Lai, Zhao-Rong, Wu, Xiaotian, and Fang, Liangda
- Subjects
ELECTRONIC portfolios, MACHINE learning, EQUILIBRIUM, ARTIFICIAL intelligence, REPRESENTATIONS of algebras, COMPUTER science
- Abstract
Portfolio optimization (PO) has been attracting more and more attention in the artificial intelligence and machine learning communities. In this paper, we propose a novel Trend Representation based Log-density Regularization (TRLR) system for portfolio optimization. Its novelty lies in two aspects. First, it introduces a log-density regularization on the increasing factor of the portfolio, which is seldom addressed by previous PO systems; it reflects a relationship between the portfolio and the price relative at an equilibrium point. Second, TRLR exploits a novel trend representation by taking the time variable as the regressor in a weighted ridge regression; hence TRLR captures price trend patterns effectively. Extensive experiments conducted on 5 benchmark datasets from real-world financial markets demonstrate that TRLR achieves significantly better performance than other state-of-the-art strategies and runs fast, which shows its effectiveness and efficiency for large-scale applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
222. A tissue P system based evolutionary algorithm for multi-objective VRPTW.
- Author
Dong, Wenbo, Zhou, Kang, Qi, Huaqing, He, Cheng, and Zhang, Jun
- Subjects
EVOLUTIONARY computation, ALGORITHMS, VEHICLE routing problem, COMPUTER science, PARETO optimum
- Abstract
The multi-objective vehicle routing problem with time windows (VRPTW) has important applications in engineering and computer science, and it is an NP-hard problem. In the last decade, numerous new methods for the multi-objective VRPTW have sprung up. However, the calculation speed of most algorithms is not fast enough, and these algorithms do not give a complete Pareto-optimal front, although their results are excellent. Hence, in this paper, a multi-objective evolutionary algorithm based on a tissue P system with three cells, termed PDVA, is proposed to solve the multi-objective VRPTW. In PDVA, two mechanisms, the discrete glowworm evolution mechanism (DGEM) and the variable neighborhood evolution mechanism (VNEM), are used as sub-algorithms in two of the cells to balance exploration and exploitation. Simultaneously, some special strategies are used to enhance the performance of the proposed algorithm. The following experiments are presented to test the proposed algorithm. First, the influence of the parameters on the performance of the algorithm is investigated. Second, the validity of the algorithm is highlighted by comparison with the DGEM-VNEM algorithm. Third, the quality and diversity of the solutions are improved compared with other popular algorithms. These results and comparisons on test instances demonstrate the competitiveness of PDVA in solving the multi-objective VRPTW in terms of both quality and speed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
223. Dynamic multiple node failure recovery in distributed storage systems.
- Author
-
Itani, May, Sharafeddine, Sanaa, and ElKabani, Islam
- Subjects
CLOUD computing ,WIRELESS communications ,COMPUTER storage devices ,WIRELESS sensor networks ,COMPUTER science - Abstract
Our daily lives are increasingly dependent on data centers and distributed storage systems in general, whether at the business or the personal level. With the advent of fog computing, personal mobile devices in a given geographical area may also comprise a highly dynamic distributed storage system. These paradigm changes call for efficient and reliable failure recovery mechanisms in dynamic scenarios where failures become more likely and nodes join and leave the network more frequently. Redundancy schemes in distributed storage systems have become essential for providing reliability given frequent node failures. In this work, we address the problem of multiple failure recovery in dynamic scenarios using the fractional repetition (FR) code as the redundancy scheme. The FR code is a class of regenerating codes that concatenates a maximum distance separable (MDS) code with an inner fractional repetition code, where data is split into several blocks, replicated, and multiple replicas of each block are stored on various system nodes. We formulate the problem as an integer linear program and extend it to account for three dynamic scenarios: newly arriving blocks, newly arriving nodes, and variable-priority block allocation. The contribution of this paper is four-fold: (i) we generate an optimized block distribution scheme that minimizes the total system repair cost over all dependent and independent multiple node failure scenarios; (ii) we address the practical scenario of newly arriving blocks and allocate those blocks to existing nodes without any modification to the original on-node block distribution; (iii) we consider newcomer nodes and generate an updated optimized block distribution; (iv) we consider optimized storage and recovery of blocks with varying priority using variable fractional repetition codes. The four problems are modeled using incidence matrices and solved heuristically. We present a range of results for our proposed algorithms in several scenarios to assess the effectiveness of the solution approaches, which are shown to generate results close to optimal. [ABSTRACT FROM AUTHOR]
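To make the fractional repetition setting concrete, here is a toy placement-and-repair routine: each block is replicated rho times on distinct nodes, and a failed node is repaired by copying replicas from survivors. This is a hypothetical illustration of the FR idea, not the paper's ILP-optimized distribution (all names and the round-robin layout are ours):

```python
def fr_placement(num_blocks, rho, node_capacity):
    """Place `rho` replicas of each block round-robin across just enough
    nodes so that no node exceeds `node_capacity`.  Consecutive slots land
    on distinct nodes as long as rho <= number of nodes (assumed here)."""
    num_nodes = -(-num_blocks * rho // node_capacity)  # ceiling division
    nodes = [[] for _ in range(num_nodes)]
    slot = 0
    for b in range(num_blocks):
        for _ in range(rho):
            nodes[slot % num_nodes].append(b)
            slot += 1
    return nodes

def repair(nodes, failed):
    """Recover each block of a failed node from some surviving replica."""
    survivors = [set(blk) for i, blk in enumerate(nodes) if i != failed]
    return {b: next(i for i, s in enumerate(survivors) if b in s)
            for b in nodes[failed]}
```

With replication factor 2, every block lost in a single-node failure is guaranteed to survive elsewhere, which is what makes the repair map total.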
- Published
- 2018
- Full Text
- View/download PDF
224. A mathematical evaluation for measuring correctness of domain ontologies using concept maps.
- Author
-
Iqbal, Rizwan, Azmi Murad, Masrah Azrifah, Sliman, Layth, and da Silva, Clay Palmeira
- Subjects
- *
ONTOLOGIES (Information retrieval) , *CONCEPT mapping , *LEARNING , *COMPUTER science , *METHODOLOGY - Abstract
There is a need for further research in ontology evaluation, specifically for ontology development that exploits concept maps. The existing literature on ontology evaluation primarily emphasizes ontology formalisation and logical inference, which is usually not directly relevant for concept maps, as they are commonly used as communication instruments for learning purposes. Commonly used techniques for evaluating concept maps for knowledge assessment may be adopted for criteria-based evaluation of a domain concept map with respect to a particular aspect; however, this limits their validity to that particular aspect or criterion. This paper presents a mathematical ontology evaluation technique to measure the correctness of domain ontologies engineered using concept maps. It merges two different mathematical measures, namely a closeness index and a similarity index, into a combined index that takes different criteria or aspects into account during ontology evaluation, making the evaluation process more reliable and robust. Two case studies were conducted employing the proposed technique to evaluate two different domain ontologies engineered using concept maps. Calculations and results from the case studies showed that, depending on the correctness of each ontology, different values of the combined index were obtained, expressing the correctness of each ontology in quantifiable form. Moreover, the results indicate that the technique provides in-depth evaluation, is easy to adopt, requires no special skills, and is conveniently replicable. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
225. The η-anti-Hermitian solution to some classic matrix equations.
- Author
-
Liu, Xin
- Subjects
- *
MATRICES (Mathematics) , *DIVISION algebras , *COLOR image processing , *SIGNAL processing , *COMPUTER science - Abstract
In this paper we consider the η-anti-Hermitian solutions to some classic matrix equations, AX = B, AXB = C, AXA^{η*} = B, and EXE^{η*} + FYF^{η*} = H, respectively. We derive necessary and sufficient conditions for these matrix equations to have η-anti-Hermitian solutions and provide the general expressions of the solutions when the equations are solvable. As an application, we give the solvability conditions and the general η-anti-Hermitian solution to the equation system AX = B, CY = D, MXM^{η*} + NYN^{η*} = G. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
226. On identifying k-nearest neighbors in neighborhood models for efficient and effective collaborative filtering.
- Author
-
Chae, Dong-Kyu, Lee, Sang-Chul, Lee, Si-Yong, and Kim, Sang-Wook
- Subjects
- *
SIGNAL filtering , *K-nearest neighbor classification , *RECOMMENDER systems , *COMPUTER science , *PREDICTION theory - Abstract
Neighborhood models (NBMs) are widely used for collaborative filtering in recommender systems. Given a target user and a target item, NBMs find the k most similar users or items (i.e., the k-nearest neighbors) and predict the target user's rating of the item based on the rating patterns of those neighbors. In NBMs, however, it is difficult to achieve efficiency and accuracy together. To pursue accurate recommendation, NBMs may find the k-nearest neighbors at every recommendation request so as to exploit the latest ratings, which requires a huge amount of computation time. Alternatively, NBMs may search for the k-nearest neighbors offline, which results in increasingly inaccurate recommendation as time goes by and may not even be able to deal with new users or new items, because the ratings generated after the k-nearest neighbors were determined cannot be exploited. In this paper, we propose a novel approach that finds the k-nearest neighbors efficiently by identifying only those users and items necessary for computing the similarity. The proposed approach enables NBMs to exploit the latest ratings without any offline similarity computation, thereby successfully resolving the speed-accuracy tradeoff. We demonstrate the effectiveness of the proposed approach through extensive experiments. [ABSTRACT FROM AUTHOR]
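The tradeoff the paper targets can be seen in a naive online k-NN predictor, which recomputes similarities from the latest ratings at every request (accurate but slow); the paper's contribution is precisely to prune this computation. A toy sketch with a hypothetical ratings dictionary:

```python
import math

def predict(ratings, target_user, target_item, k=2):
    """Find the k users most similar to `target_user` (cosine over
    co-rated items, computed on demand from the latest ratings) and
    average their ratings of `target_item`.  Naive sketch, not the
    pruned similarity computation proposed in the paper."""
    def cos(u, v):
        common = set(ratings[u]) & set(ratings[v])
        if not common:
            return 0.0
        num = sum(ratings[u][i] * ratings[v][i] for i in common)
        du = math.sqrt(sum(r * r for r in ratings[u].values()))
        dv = math.sqrt(sum(r * r for r in ratings[v].values()))
        return num / (du * dv)
    # candidates must have rated the target item
    cands = [u for u in ratings
             if u != target_user and target_item in ratings[u]]
    nbrs = sorted(cands, key=lambda u: cos(target_user, u), reverse=True)[:k]
    if not nbrs:
        return None
    return sum(ratings[u][target_item] for u in nbrs) / len(nbrs)
```

Every call walks all candidate users, which is exactly the per-request cost an online NBM pays for freshness.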
- Published
- 2018
- Full Text
- View/download PDF
227. Multi-modal local receptive field extreme learning machine for object recognition.
- Author
-
Liu, Huaping, Li, Fengxue, Xu, Xinying, and Sun, Fuchun
- Subjects
- *
FEEDFORWARD neural networks , *OBJECT recognition (Computer vision) , *SUPERVISED learning , *GENERALIZABILITY theory , *COMPUTER science - Abstract
Learning rich representations efficiently plays an important role in multi-modal recognition tasks and is crucial to achieving high generalization performance. To address this problem, we propose an effective Multi-Modal Local Receptive Field Extreme Learning Machine (MM-LRF-ELM) structure that maintains ELM's training efficiency. In this structure, LRF-ELM is first used to extract features from each modality separately. A shared layer is then developed by combining the features from each modality. Finally, an Extreme Learning Machine (ELM) is used as the supervised classifier for the final decision. Experimental validation on the Washington RGB-D Object Dataset illustrates that the proposed multi-modal fusion method achieves better recognition performance. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
228. Fault tolerant encoders for Single Error Correction and Double Adjacent Error Correction codes.
- Author
-
Liu, Shanshan, Reviriego, Pedro, Maestro, Juan Antonio, and Xiao, Liyi
- Subjects
- *
FAULT tolerance (Engineering) , *ERROR correction (Information theory) , *SOFT errors , *DATA recovery , *COMPUTER science - Abstract
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Typically, Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, multiple errors in the memory cells have become more common in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can themselves be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It builds on the observation that a soft error in the encoder has an effect similar to a soft error in a memory word, and it works by sharing logic blocks between every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because such errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods. [ABSTRACT FROM AUTHOR]
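The logic-sharing idea can be illustrated with a toy encoder: a sub-expression `s` feeds two adjacent parity bits, so a single soft error inside the shared block corrupts at most those two adjacent bits, which is a pattern a SEC-DAEC decoder can handle. The parity equations below are invented for illustration and do not form a real SEC-DAEC code:

```python
from functools import reduce

def xor(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

def encode(data, fault_in_shared=False):
    """Toy encoder in the spirit of the paper's scheme: `s` is shared by
    the two adjacent parity bits p0 and p1, so a single fault in `s`
    flips at most those two adjacent parity bits."""
    d = data
    s = xor([d[0], d[1]])           # shared logic block
    if fault_in_shared:
        s ^= 1                      # model a soft error inside the shared block
    p0 = s ^ d[2]                   # adjacent parity bits both reuse s
    p1 = s ^ d[3]
    p2 = xor([d[1], d[2], d[3]])    # independent parity bit
    return d + [p0, p1, p2]
```

Comparing a fault-free and a faulty encoding shows the damage confined to the two adjacent parity positions.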
- Published
- 2018
- Full Text
- View/download PDF
229. Bibliometric analysis of fuzzy theory research in China: A 30-year perspective.
- Author
-
Yu, Dejian, Xu, Zeshui, and Wang, Wanru
- Subjects
- *
FUZZY logic , *COMPUTER science , *SCIENCE publishing , *MATHEMATICAL optimization , *BIBLIOMETRICS - Abstract
The past half-century has witnessed fast development in the field of fuzzy theory (FT); however, little research has focused on mapping the development of this area in China. Based on a sample of 12,936 publications on FT authored by Chinese scholars during the past 30 years, this paper explores the field's patterns and dynamics by analyzing the geographic distribution of publications, international collaboration, research hot spots, subject categories and journals, and publication contributors. The results indicate that scientific publications are highly unbalanced across regions of China, and that the USA is China's most important partner in cooperative FT research. Collaborations are not indispensable for high-quality research outputs in the FT area. Existing FT research by Chinese scholars focuses primarily on computer science and engineering. The emerging trends in Chinese FT research have shifted away from basic research toward applications, such as decision making, optimization, modeling and design. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
230. Publication trends in gamification: A systematic mapping study.
- Author
-
Kasurinen, Jussi and Knutas, Antti
- Subjects
GAMIFICATION ,USER interfaces ,COMPUTER software development ,MOBILE learning ,COMPUTER science - Abstract
Gamification and gamified systems are a trending area of research. However, gamification can mean several different things, such as applying game-like elements to the design of a software user interface, and not all gamification is associated with software products. Overall, it is unclear what different aspects are studied under the umbrella of 'gamification' and what the current state of the art in gamification research is. In this paper, 1164 gamification studies are analyzed and classified by focus area and research topic to establish the research trends in gamification. Based on the results, the most active areas of gamification research are e-learning, proof-of-concept studies on ecological lifestyles and sustainability, support for computer science education, and improving motivation. The most common types of research are currently proof-of-concept studies and theoretical works on the different concepts and elements of gamification. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
231. Multi-functional secure data aggregation schemes for WSNs.
- Author
-
Zhang, Ping, Wang, Jianxin, Guo, Kehua, Wu, Fan, and Min, Geyong
- Subjects
WIRELESS sensor networks ,DATA security ,BANDWIDTHS ,COMPUTER science ,AGGREGATION (Statistics) - Abstract
Secure data aggregation schemes are widely adopted in wireless sensor networks, not only to minimize energy and bandwidth consumption but also to enhance security. Statistics obtained from data aggregation schemes generally fall into three categories: distributive, algebraic, and holistic. In practice, a wide range of reasonable aggregation queries are combinations of several different statistics, and multi-functional aggregation support is also a primary demand for data preprocessing in data mining. However, most existing secure aggregation schemes focus on a single type of statistic, and some statistics, especially holistic ones (e.g., the median), are difficult to compute efficiently in a distributed setting even without considering security. In this paper, we first propose a new Multi-functiOnal secure Data Aggregation scheme (MODA), which encodes raw data into well-defined vectors to provide value preservation, order preservation and context preservation, thus offering the building blocks for multi-functional aggregation. A homomorphic encryption scheme is adopted to enable in-ciphertext aggregation and end-to-end security. Then, two enhanced and complementary schemes are proposed based on MODA: RandOm selected encryption based Data Aggregation (RODA) and COmpression based Data Aggregation (CODA). RODA significantly reduces the communication cost at the expense of slightly lower but acceptable security at a leaf node, while CODA dramatically reduces the communication cost at the cost of lower aggregation accuracy. Performance results from theoretical analysis and experimental evaluation on three real datasets under different scenarios demonstrate that our schemes outperform the most closely related work. [ABSTRACT FROM AUTHOR]
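One way to see how a single vector encoding can serve distributive, algebraic and holistic statistics at once: encode each reading as a one-hot histogram vector, so that plain elementwise summation (the operation an additively homomorphic scheme can perform on ciphertexts) yields a histogram from which count, sum, mean, max and even the median can all be read. This is a sketch of the idea only; MODA's actual encoding and encryption are more elaborate:

```python
def encode(value, domain_size):
    """One-hot histogram vector over an integer value domain."""
    v = [0] * domain_size
    v[value] = 1
    return v

def aggregate(vectors):
    """Elementwise sum -- the only operation the aggregators need."""
    return [sum(col) for col in zip(*vectors)]

def median_from_hist(hist):
    """Read a holistic statistic (median) straight off the histogram."""
    total = sum(hist)
    running = 0
    for value, count in enumerate(hist):
        running += count
        if 2 * running >= total:
            return value
```

Distributive statistics (count, max) and algebraic ones (mean) fall out of the same histogram, which is what makes one encoding "multi-functional".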
- Published
- 2018
- Full Text
- View/download PDF
232. Time-optimized management of IoT nodes.
- Author
-
Kolomvatsos, Kostas
- Subjects
INTERNET of things ,INTEGRATED circuit interconnections ,COMPUTER science ,COMPUTER software ,TIME management - Abstract
The vision of the Internet of Things (IoT) is a vast infrastructure of numerous interconnected devices, usually called IoT nodes. This infrastructure forms the basis of pervasive computing applications, which can be built with the participation of IoT nodes interacting in very dynamic environments. In this setting, one can identify the need to apply updates to the software/firmware of the autonomous nodes. Updates may include software extensions and patches significant for the efficient functioning of the IoT nodes. Legacy methodologies involve centralized models where complex algorithms and protocols are adopted to distribute the updates to the nodes. This paper proposes a distributed approach where each node is responsible for initiating and concluding the update process. We envision that each node monitors specific performance metrics (related to the node itself and/or the network) and, based on a time-optimized scheme, identifies the appropriate time to perform the update. We propose the adoption of a finite-horizon optimal stopping scheme. Our stopping model originates in Optimal Stopping Theory (OST) and takes multiple performance metrics into account. The aim is for nodes to identify when their own performance and the performance of the network are of high quality; at that time, nodes can efficiently conclude the update process. We provide a set of formulations and an analysis of our problem. Extensive experiments and a comparative assessment reveal the advantages of the proposed solution. [ABSTRACT FROM AUTHOR]
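A finite-horizon optimal stopping rule of the kind the paper adopts can be sketched by backward induction: V[t] is the expected reward of acting optimally from stage t onward, and the node "stops" (runs the update) as soon as the observed quality is at least the value of continuing. The i.i.d. discrete quality model below is a hypothetical simplification of the paper's multi-metric setting:

```python
def stopping_thresholds(values, probs, horizon):
    """Backward induction for finite-horizon optimal stopping with i.i.d.
    discrete rewards: V[t] is the value of being at stage t, and the
    optimal rule stops at stage t iff the observation >= V[t+1]."""
    ev = sum(v * p for v, p in zip(values, probs))
    V = [0.0] * (horizon + 1)
    V[horizon] = ev                      # forced to act at the final stage
    for t in range(horizon - 1, -1, -1):
        V[t] = sum(max(v, V[t + 1]) * p for v, p in zip(values, probs))
    return V

def should_update(quality, stage, V, horizon):
    """Stop now if quality beats the value of waiting (or time is up)."""
    return stage == horizon - 1 or quality >= V[stage + 1]
```

The thresholds rise as more stages remain, which captures the intuition that a node with plenty of time left should hold out for a high-quality moment.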
- Published
- 2018
- Full Text
- View/download PDF
233. Less annotation on active learning using confidence-weighted predictions.
- Author
-
Yang, Xiaodong, Chen, Yiqiang, Yu, Hanchao, and Zhang, Yingwei
- Subjects
- *
ACTIVE learning , *STATISTICAL weighting , *MACHINE learning , *ARTIFICIAL neural networks , *COMPUTER science - Abstract
This paper proposes an efficient and effective active online sequential learning approach named Less Annotated Active Learning Extreme Learning Machine (LAAL-ELM). It leverages the prediction confidence on newly arriving data to actively select both query-annotated samples and confidence-weighted predict-annotated ones to update the classifier, which reduces active query annotation, and applies WOS-ELM, a discriminative model, to significantly reduce the computational complexity of online updating to a single step. The approach first gives a principle for evaluating prediction confidence in WOS-ELM; it then determines what to use, and how, when updating the model with newly arriving data in the online phase: uncertain instances are annotated by querying their classes, almost-certain ones are weighted by their prediction confidence, and certain ones are discarded directly to reduce over-fitting; finally, the weighted and query-annotated samples are used to update the classifier. The proposed approach is evaluated on five real-world benchmark classification problems, and the experimental results demonstrate that LAAL-ELM can effectively reduce the number of queried samples while maintaining a high level of classification performance. [ABSTRACT FROM AUTHOR]
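The online routing step described above, querying, weighting, or discarding each arriving sample by prediction confidence, can be sketched generically; the thresholds and the confidence function here are placeholders, not the WOS-ELM criterion from the paper:

```python
def route_samples(stream, predict_with_confidence, q_low=0.6, q_high=0.9):
    """Split a batch the way LAAL-ELM does conceptually:
    low-confidence samples go to human annotation, mid-range ones become
    confidence-weighted self-labeled training data, and very-certain
    ones are discarded to limit over-fitting."""
    to_query, self_labeled, discarded = [], [], []
    for x in stream:
        label, conf = predict_with_confidence(x)
        if conf < q_low:
            to_query.append(x)                      # ask the oracle
        elif conf < q_high:
            self_labeled.append((x, label, conf))   # weight by confidence
        else:
            discarded.append(x)                     # already certain
    return to_query, self_labeled, discarded
```

Only the first bucket costs human effort, which is how the approach trims the number of queried samples.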
- Published
- 2018
- Full Text
- View/download PDF
234. Coupled local–global adaptation for multi-source transfer learning.
- Author
-
Liu, Jieyan, Li, Jingjing, and Lu, Ke
- Subjects
- *
MACHINE learning , *ROBUST control , *ARTIFICIAL neural networks , *ALGORITHMS , *COMPUTER science - Abstract
This paper presents a novel unsupervised multi-source domain adaptation approach named coupled local–global adaptation (CLGA). At the global level, to maximize adaptation ability, CLGA regards the multiple domains as a unity and jointly mitigates the gaps in both the marginal and conditional distributions between the source and target datasets. At the local level, to maximize discriminative ability, CLGA investigates the relationships among the distinct domains and exploits both the class and domain manifold structures embedded in the data samples. We formulate local and global adaptation together in a concise optimization problem and derive an analytic solution for the objective function. Extensive evaluations verify that CLGA outperforms several existing methods not only on multi-source adaptation tasks but also in single-source scenarios. [ABSTRACT FROM AUTHOR]
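A standard way to quantify the marginal-distribution gap that such adaptation methods minimize is the (squared) Maximum Mean Discrepancy; here is a plain-Python sketch with an RBF kernel (CLGA's full objective also covers conditional distributions and manifold structure, which this does not capture):

```python
import math

def mmd_rbf(X, Y, gamma=1.0):
    """Empirical squared MMD between samples X and Y under an RBF kernel:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].  Zero iff the empirical
    kernel embeddings coincide."""
    def k(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    def avg(A, B):
        return sum(k(a, b) for a in A for b in B) / (len(A) * len(B))
    return avg(X, X) + avg(Y, Y) - 2 * avg(X, Y)
```

Identical samples give an MMD of zero, while a shifted target set gives a strictly positive gap, the quantity a domain-adaptation objective drives down.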
- Published
- 2018
- Full Text
- View/download PDF
235. A new greedy algorithm for sparse recovery.
- Author
-
Wang, Qian and Qu, Gangrong
- Subjects
- *
COMPRESSED sensing , *SIGNAL sampling , *ALGORITHMS , *ARTIFICIAL neural networks , *COMPUTER science - Abstract
Compressed sensing (CS) has been one of the great successes of applied mathematics in the last decade. This paper proposes a new method for recovering sparse signals from underdetermined linear systems that combines the advantages of the Compressive Sampling Matching Pursuit (CoSaMP) algorithm and the Quasi-Newton Iteration Projection (QNIP) algorithm. To obtain the new algorithm, Quasi-Newton Projection Pursuit (QNPP), the least-squares technique from CoSaMP is used to accelerate convergence and QNIP is modified slightly. The convergence rate of QNPP is studied under a certain condition on the restricted isometry constant of the measurement matrix, smaller than that of QNIP. A fast version of QNPP is also proposed, which uses Richardson iteration to reduce computation time. Numerical results show that the proposed algorithms achieve higher recovery rates and faster convergence than existing techniques. [ABSTRACT FROM AUTHOR]
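For reference, the CoSaMP skeleton whose least-squares step QNPP borrows looks like this (a textbook sketch, not the QNPP algorithm itself, whose quasi-Newton projection step is defined in the paper):

```python
import numpy as np

def cosamp(A, y, s, iters=10):
    """Bare-bones CoSaMP for recovering an s-sparse x from y = Ax:
    identify the 2s largest proxy entries, merge with the current
    support, solve least squares on the merged support, prune to s."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # largest 2s proxies
        T = np.union1d(omega, np.nonzero(x)[0]).astype(int)  # merge supports
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]    # LS on support
        keep = np.argsort(np.abs(b))[-s:]                    # prune to s terms
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x
```

On an easy, well-conditioned instance (e.g., an orthonormal measurement matrix) the loop converges in a single iteration.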
- Published
- 2018
- Full Text
- View/download PDF
236. A survey on context in modern humanistic computing.
- Author
-
Mylonas, Phivos
- Subjects
- *
COMPUTER science , *SOCIAL networks , *USER-generated content , *INFORMATION theory , *ARTIFICIAL intelligence - Abstract
This survey studies the existence, importance and impact of the notion of context in modern humanistic computing. Given its inherent diversity, the term is nowadays widely acknowledged across computer science and has become a major topic of interest in several of its sub-fields, ranging from contextual semantics to social networks, social media and recently emerged applications such as travel routing. We start with a brief review of contextual semantics, which is nowadays considered suitable for most common content analysis problems. Subsequent sections focus on the impact of context within the social networks and social media field that has come into sight over the last years. A short closing discussion on the identified challenges and potential future research directions concludes the survey. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
237. Functional observers design for descriptor systems via LMI: Continuous and discrete-time cases.
- Author
-
Darouach, Mohamed, Amato, Francesco, and Alma, Marouane
- Subjects
- *
LINEAR matrix inequalities , *DESCRIPTOR systems , *MATHEMATICAL inequalities , *MATRICES (Mathematics) , *COMPUTER science - Abstract
This paper investigates the design of functional observers for linear time-invariant descriptor systems. A new method for designing these observers is given using a linear matrix inequality (LMI) formulation. The result unifies the design: it covers both the continuous-time and discrete-time cases and concerns observers of various orders. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
238. On the complexity of input/output logic.
- Author
-
Sun, Xin and Robaldo, Livio
- Subjects
REASONING ,COMPUTER input-output equipment ,COMPUTATIONAL complexity ,DECIDABILITY (Mathematical logic) ,COMPUTER science - Abstract
Input/output logic is a formalism for deontic logic and normative reasoning. Unlike deontic logical frameworks based on possible-world semantics, input/output logic adopts norm-based semantics in the sense of [13], specifically operational semantics. It is well known in theoretical computer science that complexity is an indispensable component of every logic. So far, the literature on input/output systems has focused on proof theory and semantics while neglecting complexity. This paper adds the missing component by giving complexity results for the main decision problems of input/output logic. Our results show that input/output logic is coNP-hard and lies within the second level of the polynomial hierarchy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
239. Training set selection for monotonic ordinal classification.
- Author
-
Cano, J.-R. and García, S.
- Subjects
- *
MONOTONIC functions , *MACHINE learning , *SET theory , *CLASSIFICATION algorithms , *MATHEMATICAL models , *COMPUTER science - Abstract
In recent years, monotonic ordinal classification has attracted increasing attention from the machine learning community. Real-life problems frequently have monotonicity constraints, and many monotonic classifiers require that the input data set satisfy the monotonicity relationships between its samples. To address this, a conventional strategy is to relabel the input data to achieve complete monotonicity. As an alternative, we explore the use of preprocessing algorithms that do not modify the class labels of the input data. In this paper we propose training set selection to choose the most effective instances, which leads monotonic classifiers to more accurate and efficient models that fulfill the monotonicity constraints. To show the benefits of our proposed training set selection algorithm, called MonTSS, we carry out experiments on 30 data sets related to ordinal classification problems. [ABSTRACT FROM AUTHOR]
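The monotonicity constraint at the heart of this setting is easy to state in code: whenever one sample dominates another featurewise, the ordinal labels must be ordered the same way. A small violation checker (this is not the MonTSS selection heuristic, which the paper defines):

```python
def dominates(a, b):
    """True if a <= b in every feature."""
    return all(x <= y for x, y in zip(a, b))

def monotonicity_violations(X, y):
    """Count comparable pairs that break the constraint:
    x_i <= x_j componentwise must imply y_i <= y_j."""
    v = 0
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and dominates(X[i], X[j]) and y[i] > y[j]:
                v += 1
    return v
```

A relabeling strategy drives this count to zero by changing y; a selection strategy such as MonTSS instead drops instances until no violating pair remains.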
- Published
- 2017
- Full Text
- View/download PDF
240. “Good” and “acceptable” English in L2 research writing: Ideals and realities in history and computer science.
- Author
-
Hynninen, Niina and Kuteeva, Maria
- Subjects
- *
SECOND language acquisition , *COMPUTER science , *WRITTEN English , *ACADEMIC discourse , *LINGUA francas - Abstract
In light of recent developments on the international publishing scene, which is increasingly dominated by L2 writers of English, the question of what counts as "good" and "acceptable" English calls for further research. This paper examines how researchers describe the English used for research writing in their field. Interview data were collected from historians and computer scientists working in Finland and Sweden. Our analysis points towards differences in the way researchers perceive "good" writing in English in their field and in what they report practicing as (co-)authors, readers/reviewers, and proofreaders. The discrepancy between the ideals and realities of research writing in English was clear in the case of the historians. Our findings suggest that in research writing for publication there is a pull towards some form of standard norm. This standard can be jointly negotiated during the writing, reviewing, and proofreading process. It may also develop in different directions in different disciplines, but it is likely to be based on the principles of understandability and clarity. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
241. Indefinite Core Vector Machine.
- Author
-
Schleif, Frank-Michael and Tino, Peter
- Subjects
- *
SUPPORT vector machines , *DECISION making , *LINEAR statistical models , *MATHEMATICAL complex analysis , *COMPUTER science - Abstract
The recently proposed Kreĭn space Support Vector Machine (KSVM) is an efficient classifier for indefinite learning problems, but it has quadratic to cubic complexity and a non-sparse decision function. In this paper a Kreĭn space Core Vector Machine (iCVM) solver is derived. A sparse model with linear runtime complexity can be obtained under a low-rank assumption. The obtained iCVM models can be applied to indefinite kernels without additional preprocessing. Using iCVM, one can solve CVM with usually troublesome kernels that have large negative eigenvalues or large numbers of negative eigenvalues. Experiments show that our algorithm is as efficient as the Kreĭn space Support Vector Machine but at substantially lower cost, so that large-scale problems can also be processed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
242. Generation of human depth images with body part labels for complex human pose recognition.
- Author
-
Nishi, K. and Miura, J.
- Subjects
- *
PATTERN recognition systems , *COMPUTER science , *GESTURE , *IMAGE analysis , *LEARNING classifier systems - Abstract
This paper describes the efficient generation of a large-scale dataset of human depth images with body part labels. The size of image datasets has recently become increasingly important, as it is strongly related to the performance of learning-based classifiers. In human pose recognition, many datasets for ordinary poses such as standing, walking, and gesturing have already been developed and effectively utilized, but datasets for unusual poses such as lying fainted and crouching do not exist. Pose recognition for such cases has large potential applicability in various assistive scenarios, and locating each body part is also important for accurate care, diagnosis, and anomaly detection. We therefore develop a method of generating body-part-annotated depth images in various body shapes and poses, handled by a flexible human body model and a motion capture system, respectively. We constructed a dataset of 10,076 images with eight body types for various sitting poses. The effectiveness of the generated dataset is verified through part labeling tasks with a fully convolutional network (FCN) on synthetic and real test data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
243. A further result on consensus problems of second-order multi-agent systems with directed graphs, a moving mode and multiple delays.
- Author
-
Li, Xue, Gao, Kai, Lin, Peng, and Mo, Lipo
- Subjects
MULTIAGENT systems ,DIRECTED graphs ,INTELLIGENT agents ,ALGORITHMS ,COMPUTER science - Abstract
This paper considers a consensus problem for a class of second-order multi-agent systems with a moving mode and multiple delays on directed graphs. Using local information, a distributed algorithm is adopted to make all agents reach consensus while moving together at a constant velocity in the presence of delays. To study the effects of the coexistence of the moving mode and the delays on consensus convergence, a frequency-domain approach is employed, analyzing the relationship between the components of the eigenvector associated with the eigenvalue on the imaginary axis. Based on the continuity of the system function, an upper bound on the delays is then given that ensures the consensus convergence of the system. A numerical example illustrates the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
244. Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.
- Author
-
Zhao, Wen, Liu, Gang, Ma, Xi, He, Bing, and Dong, Yunfeng
- Subjects
COMPUTER networks ,COMPUTER simulation ,COMPUTER science ,SYNCHRONIZATION ,UNDIRECTED graphs - Abstract
The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. In comparison with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on arbitrary initial conditions but can be pre-assigned offline according to the task. Finally, the feasibility and validity of the presented protocol are illustrated by a numerical simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
245. Observer-based consensus of networked thrust-propelled vehicles with directed graphs.
- Author
-
Cang, Weiye, Li, Zhongkui, and Wang, Hanlei
- Subjects
COMPUTER networks ,DIRECTED graphs ,TELECOMMUNICATION systems ,COMPUTER simulation ,COMPUTER science - Abstract
In this paper, we investigate the consensus problem for networked underactuated thrust-propelled vehicles (TPVs) interacting on directed graphs. We propose distributed observer-based consensus protocols that avoid relying on measurements of translational velocities and accelerations. Using input-output analysis, we present necessary and sufficient conditions ensuring that the observer-based protocols achieve consensus both without and with constant communication delays, provided that the communication graph contains a directed spanning tree. Simulation examples illustrate the effectiveness of the control schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
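The role of the observers in the abstract above, avoiding velocity and acceleration measurements, can be illustrated with a plain Luenberger observer for a double integrator (an illustrative stand-in; the paper's TPV dynamics, observer structure, and gains are not reproduced here):

```python
# Illustrative sketch: a Luenberger observer reconstructs the unmeasured
# velocity of a double integrator from position measurements alone.

def observe(dt=0.001, steps=5000, l1=20.0, l2=100.0):
    x, v = 0.0, 1.0        # true position and (unmeasured) velocity
    xh, vh = 0.0, 0.0      # observer estimates
    u = 0.5                # known thrust/acceleration input
    for _ in range(steps):
        y = x              # only position is measured
        e = y - xh         # output injection error
        xh += dt * (vh + l1 * e)
        vh += dt * (u + l2 * e)
        x += dt * v        # plant propagation
        v += dt * u
    return v, vh

v, vh = observe()
err = abs(v - vh)          # velocity estimation error after 5 s
```

The observer error dynamics have both poles at -10 for these gains, so the velocity estimate converges quickly even though the velocity is never measured, which is the mechanism that lets the consensus protocol drop velocity sensing.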
246. Observer-based distributed adaptive fault-tolerant containment control of multi-agent systems with general linear dynamics.
- Author
-
Ye, Dan, Chen, Mengmeng, and Li, Kui
- Subjects
MULTIAGENT systems, ACTUATORS, INTELLIGENT agents, AIRPLANES, COMPUTER science - Abstract
In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults, based on an observer approach. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose inputs are unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Unlike the traditional method, an auxiliary controller gain is designed to handle the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol guarantees that all signals of the closed-loop systems are bounded and that all followers converge, with bounded residual errors, to the convex hull formed by the dynamic leaders. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
247. Distributed adaptive asymptotically consensus tracking control of uncertain Euler-Lagrange systems under directed graph condition.
- Author
-
Wang, Wei, Wen, Changyun, Huang, Jiangshuai, and Fan, Huijin
- Subjects
EULER-Lagrange equations, DIRECTED graphs, LAPLACIAN matrices, INTEGRAL functions, COMPUTER science - Abstract
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that the proposed distributed adaptive control scheme achieves global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
248. Gene expression based cancer classification.
- Author
-
Tarek, Sara, Abd Elwahab, Reda, and Shoman, Mahmoud
- Subjects
GENE expression, CANCER, DATA mining, STATISTICAL mechanics, MEAN field theory - Abstract
Cancer classification based on molecular-level investigation has gained the interest of researchers, as it provides a systematic, accurate and objective diagnosis for different cancer types. Several recent studies have addressed the problem of cancer classification using data mining methods, machine learning algorithms and statistical methods to obtain an efficient analysis of gene expression profiles. Studying the characteristics of thousands of genes simultaneously offers deep insight into the cancer classification problem and provides an abundant amount of data ready to be explored. Gene expression analysis has also been applied in a wide range of applications such as drug discovery and cancer prediction and diagnosis, which is a very important issue for cancer treatment. Besides, it helps in understanding the function of genes and the interactions between genes in normal and abnormal conditions, by monitoring the behavior of genes, i.e., gene expression data, under different conditions. In this paper, an effective ensemble approach is proposed. Ensemble classifiers increase not only the performance of the classification but also the confidence in the results. The motivations behind using ensemble classifiers are that the results are less dependent on the peculiarities of a single training set and that the ensemble system outperforms the best base classifier in the ensemble. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
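The closing claim of the abstract above, that a majority-vote ensemble outperforms its best base classifier when the members err somewhat independently, can be checked with a toy Monte-Carlo sketch (purely illustrative, not the paper's gene-expression pipeline; the 0.7 base accuracy and the independence of the classifiers are assumptions):

```python
import random

# Illustrative sketch: three independent base classifiers, each correct
# with probability 0.7, combined by majority vote over 20000 samples.
random.seed(42)
n_samples, p_correct = 20000, 0.7
votes = [[random.random() < p_correct for _ in range(n_samples)]
         for _ in range(3)]                     # per-classifier correctness
ind_acc = [sum(v) / n_samples for v in votes]   # individual accuracies
ens_correct = [sum(col) >= 2 for col in zip(*votes)]
ens_acc = sum(ens_correct) / n_samples          # majority-vote accuracy
```

In theory the vote is correct with probability 3(0.7)^2(0.3) + (0.7)^3 = 0.784 > 0.7, and the simulated accuracies land close to that, illustrating why an ensemble can beat its best member.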
249. RULEM: A novel heuristic rule learning approach for ordinal classification with monotonicity constraints.
- Author
-
Verbeke, Wouter, Martens, David, and Baesens, Bart
- Subjects
MONOTONE operators, HEURISTIC, SOFT computing, DECISION trees, COMPUTER science - Abstract
In many real-world applications, classification models are required to be in line with domain knowledge and to respect monotone relations between predictor variables and the target class in order to be acceptable for implementation. This paper presents a novel heuristic approach, called RULEM, to induce monotone ordinal rule-based classification models. The proposed approach can be applied in combination with any rule- or tree-based classification technique, since monotonicity is guaranteed in a post-processing step. RULEM checks whether a rule set or decision tree violates the imposed monotonicity constraints, and existing violations are resolved by inducing a set of additional rules which enforce monotone classification. The approach is able to handle non-monotonic noise and can be applied to both partially and totally monotone problems with an ordinal target variable. Two novel justifiability measures are introduced which are based on RULEM and allow one to calculate the extent to which a classification model is in line with domain knowledge expressed in the form of monotonicity constraints. An extensive benchmarking experiment and subsequent statistical analysis of the results on 14 public data sets indicate that RULEM preserves the predictive power of a rule induction technique while guaranteeing monotone classification. On the other hand, the post-processed rule sets are found to be significantly larger, which is due to the induction of additional rules. E.g., when combined with Ripper, a median performance difference in terms of PCC equal to zero and an average difference of −0.66% were observed, with on average 5 rules added to the rule sets. The average and minimum justifiability of the original rule sets equal 92.66% and 34.44%, respectively, in terms of the RULEMF justifiability index, and 91.28% and 40.1% in terms of RULEMS, indicating the effective need for monotonizing the rule sets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
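The violation check that RULEM's post-processing step relies on can be sketched directly (an illustrative pairwise check on toy data; the function names and example instances are invented here, not taken from the paper):

```python
# Illustrative sketch of a monotonicity-violation check: a prediction is
# non-monotone if some instance dominates another on every predictor yet
# receives a strictly lower ordinal class label.

def dominates(a, b):
    # a >= b componentwise
    return all(ai >= bi for ai, bi in zip(a, b))

def violations(X, preds):
    bad = []
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and dominates(X[i], X[j]) and preds[i] < preds[j]:
                bad.append((i, j))
    return bad

# toy credit-scoring-style data: higher predictor values should never
# lower the assigned ordinal class
X = [(1, 1), (2, 1), (2, 3), (3, 3)]
preds = [0, 1, 0, 2]       # instance 2 dominates instance 1, yet gets class 0
bad_pairs = violations(X, preds)
```

Instance 2 dominates instance 1 on every predictor yet receives a lower class, so the pair (2, 1) is flagged; RULEM would resolve such violations by inducing additional rules that enforce monotone classification.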
250. Adaptive inverse position control of switched reluctance motor.
- Author
-
Wang, Jia-Jun
- Subjects
SWITCHED reluctance motors, FUZZY neural networks, SOFT computing, PROGRAMMABLE controllers, COMPUTER science - Abstract
In this paper, adaptive inverse position control is applied to a switched reluctance motor (SRM) with simplified interval type-2 fuzzy neural networks (SIT2FNNs). The proposed adaptive inverse position control scheme for the SRM consists of two control loops. The first loop handles position control and is designed based on adaptive inverse control (AIC); the AIC is constructed with two SIT2FNNs, applied to identification and control of the SRM, respectively. The second loop handles current control and is realized with the current-sharing method (CSM). Simulation results confirm the effectiveness of the proposed control scheme in achieving high position-control precision and good dynamic performance for the SRM. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF