88 results
Search Results
2. Security-preserving social data sharing methods in modern social big knowledge systems.
- Author
-
Chen, Xuan
- Subjects
- *
SOCIAL computing , *COMPUTER systems , *COMPUTER science , *DATA privacy , *INFORMATION sharing , *DATA protection , *DATA security failures - Abstract
In recent decades, the development of social computing systems has enabled efficient information exchange among large groups of people. Social computing systems are now complex platforms supported not only by traditional sociological theory but also by computer science and big-data applications. As their complexity has grown, serious issues of digital security and privacy have emerged, and social data leakage incidents have become increasingly frequent in recent years; many different sources threaten the security and privacy of social data in such complex systems. In this paper, we improve on traditional social data protection schemes by combining information fragmentation concepts with distributed system architectures to build a novel social data protection scheme. We use social photo protection as the fundamental scenario and deploy our scheme there to illustrate, through a detailed protection analysis, the improvement in protection level. A security analysis of practically realizing such a scheme is also presented in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
3. Improving formal analysis of state machines with particular emphasis on and-cross transitions.
- Author
-
Adesina, Opeyemi O., Lethbridge, Timothy C., Somé, Stéphane S., Abdelzad, Vahdat, and Belle, Alvine Boaye
- Subjects
- *
VIRTUAL machine systems , *MATHEMATICAL models , *COMPUTER science , *MACHINE theory , *COMPUTER systems - Abstract
Highlights • And-cross transitions are a useful abstraction that can make some constructs easier for the modeler to specify. • They can be transformed to and from alternative modeling solutions that require a greater number of transitions. • With fewer transitions, the time and memory required to certify a model as deterministic using formal methods can be much lower than with other approaches; this is related to the time and memory required to compute the set of pairs of potentially conflicting transitions. Abstract In this paper, we present an approach to formally encode state machines expressed in Umple for symbolic verification. We illustrate it with a real-world modeling example that encodes and analyzes and-cross transitions. The paper gives a formal description of our approach to representing state machine systems under analysis (SSUAs); a systematic approach to certifying that SSUAs are deterministic; and an evaluation of performance (memory usage and execution time) on the case study. Method We describe a formalization of state machines in Umple that enables their translation to model-checking tools and also to code that is consistent with this formalization. We present three alternative modeling solutions for a sample problem as well as a solution based on the use of and-cross transitions. State machine models corresponding to these solutions are represented in Umple, a model-oriented programming language, and automatically transformed to SMV, the input language of the nuXmv (or NuSMV) model checker. By cleanly separating concerns, we systematically integrate the components of hierarchical state systems, as opposed to the traditional flattening approach, while managing the complexity introduced by concurrency and and-crossing. We then compose and verify a set of requirements (e.g., correctness, safety, liveness) on the resulting systems of all the modeling approaches in order to empirically compare the alternatives with the use of and-cross transitions. Results We can encode and formally analyse complex state machines with and-cross transitions. We observed a large reduction in the number of transitions required to encode the SSUA compared with the alternative approaches. We asserted that the solutions derived from the approaches are behaviorally identical even though each approach models the SSUA differently. Each approach yielded the same result for potentially conflicting pairs, which is a false positive (i.e., the SSUAs are deterministic). Each approach also maintains the same global state space irrespective of the variation in its number of transitions. Furthermore, it is not true that a more abstract method applied to an SSUA always outperforms its less abstract counterpart when execution time, memory usage and the number of Binary Decision Diagrams (BDDs) are the factors under consideration. Contributions A systematic approach to encoding state machines with and-cross transitions (including unusual transitions); an enhanced, fully automated approach to discovering nondeterminism in state machines, even in the presence of unbounded domains and multiple and-cross transitions within the same enclosing orthogonal state; an empirical study of the impact of abstraction on some performance parameters; and an extended formalization of Umple state machines. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
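The determinism check described in the abstract above hinges on computing the pairs of potentially conflicting transitions. Below is a minimal Python sketch of that idea on a flat transition list; the tuple format and the example state machine are illustrative assumptions, and guards, hierarchy and and-cross structure, which the paper handles symbolically via nuXmv, are deliberately left out.

```python
def potentially_conflicting(transitions):
    """Pairs of transitions that could fire at the same time and therefore
    threaten determinism: same source state and same triggering event.

    Each transition is a (source, event, target) tuple."""
    pairs = []
    for i in range(len(transitions)):
        for j in range(i + 1, len(transitions)):
            src1, ev1, _ = transitions[i]
            src2, ev2, _ = transitions[j]
            if src1 == src2 and ev1 == ev2:
                pairs.append((transitions[i], transitions[j]))
    return pairs

machine = [
    ("Idle", "start", "Running"),
    ("Idle", "start", "Paused"),      # conflicts with the transition above
    ("Running", "stop", "Idle"),
]
print(potentially_conflicting(machine))
```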
4. Time-optimal symbolic control of a changeover process based on an approximately bisimilar symbolic model.
- Author
-
Fakhroleslam, Mohammad, Pola, Giordano, De Santis, Elena, and Di Benedetto, Maria Domenica
- Subjects
- *
CHEMICAL process control , *STATE feedback (Feedback control systems) , *CHEMICAL processes , *SCIENTIFIC literature , *COMPUTER systems , *COMPUTER science - Abstract
• An approximately bisimilar symbolic model is constructed for a safe changeover process. • An automatic controller is designed for safe changeover process for the first time. • The synthesis of the proposed controller in a finite-state space is very fast and flexible. • The error bounds of the proposed controller are adjustable as design parameters. • The effectiveness of the symbolic controller is investigated via numerical simulation. Many process control problems with complex qualitative specifications cannot be addressed via conventional control design methods. Examples of such specifications include logic specifications expressed in the design of start-up, shut-down, changeover, and emergency shutdown operating procedures. In recent years, it has been shown in the control systems and computer science communities that symbolic models provide convenient and powerful mechanisms to synthesize controllers enforcing such qualitative specifications. The use of symbolic models reduces the synthesis of the controllers to a fixed-point computation problem over a finite-state abstract system. In this paper, after explaining the notion of approximate bisimulation for incrementally globally asymptotically stable (δ -GAS) nonlinear control systems, the construction of approximately bisimilar symbolic models for such systems is presented. Then synthesis of time-optimal symbolic controller for this class of systems is performed based on results from the computer science literature. As a benchmark chemical process control problem, an approximately bisimilar symbolic model is constructed for a safe changeover process. Then a symbolic controller is designed and it is refined to a controller to be applied to the original process. Simulation results show the effectiveness of the symbolic controller. Although the construction of the symbolic model may be complex, the synthesis of the controller in a finite-state space is fast and most importantly the error bounds are adjustable as design parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
5. Robustness of stochastic bandit policies.
- Author
-
Salomon, Antoine and Audibert, Jean-Yves
- Subjects
- *
ROBUST control , *STOCHASTIC analysis , *PROBABILITY theory , *DISTRIBUTION (Probability theory) , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. [2] exhibit a policy such that with probability at least , the regret of the policy is of order log n. They have also shown that such a property is not shared by the popular ucb1 policy of Auer et al. [3]. This work first answers an open question: it extends this negative result to any anytime policy (i.e. any policy that does not take the number of plays n into account). Another contribution of this paper is to design robust anytime policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms. We also show that, for any policy (i.e. even when the number of plays n is known), the regret is of order log n with probability at least , so that the policy of Audibert et al. has the best possible deviation properties. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
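The abstract above contrasts robust anytime policies with the ucb1 policy of Auer et al. As a point of reference, here is a minimal simulation sketch of anytime UCB1; the Bernoulli arms, their means and the `pull` callback are illustrative assumptions, not data from the paper.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Anytime UCB1 policy of Auer et al.: play each arm once, then always
    play the arm maximising empirical mean + sqrt(2 ln t / plays).

    `pull(arm)` returns a reward in [0, 1]; the horizon is only used to
    decide when to stop, never to choose an arm."""
    counts = [0] * n_arms      # times each arm has been played
    totals = [0.0] * n_arms    # cumulative reward per arm
    plays = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # initial round-robin over the arms
        else:
            arm = max(range(n_arms),
                      key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        totals[arm] += reward
        plays.append(arm)
    return plays

means = [0.5, 0.6]             # two illustrative Bernoulli arms
chosen = ucb1(lambda i: 1.0 if random.random() < means[i] else 0.0,
              n_arms=2, horizon=2000)
pseudo_regret = sum(max(means) - means[a] for a in chosen)
print(f"pseudo-regret over {len(chosen)} plays: {pseudo_regret:.1f}")
```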
6. Measuring the significance of inconsistency in the Viewpoints framework.
- Author
-
Mu, Kedian, Jin, Zhi, Liu, Weiru, Zowghi, Didar, and Wei, Bo
- Subjects
- *
COMPUTER software development , *PROTOTYPES , *COMPUTER systems , *COMPUTER science , *PROOF theory , *COMPARATIVE studies - Abstract
Abstract: Measuring inconsistency is crucial to effective inconsistency management in software development. A complete measurement of inconsistency should focus on not only the degree but also the significance of inconsistency. However, most of the approaches available only take the degree of inconsistency into account. The significance of inconsistency has not yet been given much needed consideration. This paper presents an approach for measuring the significance of inconsistency arising from different viewpoints in the Viewpoints framework. We call an individual set of requirements belonging to different viewpoints a combined requirements collection in this paper. We argue that the significance of inconsistency arising in a combined requirements collection is closely associated with global priority levels of requirements involved in the inconsistency. Here we assume that the global priority level of an individual requirement captures the relative importance of every viewpoint including this requirement as well as the local priority level of the requirement within the viewpoint. Then we use the synthesis of global priority levels of all the requirements in a combined collection to measure the significance of the collection. Following this, we present a scoring matrix function to measure the significance of inconsistency in an inconsistent combined requirements collection, which describes the contribution made by each subset of the requirements collection to the significance of the set of requirements involved in the inconsistency. An ordering relationship between inconsistencies of two combined requirements collections, termed more significant than, is also presented by comparing their significance scoring matrix functions. Finally, these techniques were implemented in a prototype tool called IncMeasurer, which we developed as a proof of concept. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
7. The complexity of the stamp folding problem.
- Author
-
Umesato, Takuya, Saitoh, Toshiki, Uehara, Ryuhei, Ito, Hiro, and Okamoto, Yoshio
- Subjects
- *
COMPUTATIONAL complexity , *POLYNOMIAL time algorithms , *COMPUTER algorithms , *COMBINATORIAL optimization , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: There are many folded states consistent with a given mountain–valley pattern of equidistant creases on a long paper strip. We would like to fold a paper strip such that the number of paper layers between each pair of hinged paper segments, called the crease width at the crease point, is minimized. This problem is called the stamp folding problem and there are two variants of this problem: minimization of the maximum crease width, and minimization of the total crease width. This optimization problem was recently introduced and investigated from the viewpoint of the counting problem. However, its computational complexity is not known. In this paper, we first show that the minimization problem of the maximum crease width is strongly NP-complete. Hence we cannot solve the problem in polynomial time unless P=NP. We next propose an algorithm that solves the minimization problem. The algorithm itself is a straightforward one, but its analysis is not trivial. We show that this algorithm runs in time where is the number of creases and is the total crease width. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
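The abstract above defines the crease width at a crease point as the number of paper layers between the two hinged segments. A small sketch of that definition, assuming a folded state is given as the top-to-bottom stack order of segments; the example stacks are made up, and the geometric validity of the folding is not checked.

```python
def crease_widths(stack):
    """Crease widths of a folded state of a paper strip.

    `stack` lists the segment numbers 1..n from the top layer to the
    bottom layer.  Crease i hinges segments i and i+1; its width is the
    number of paper layers lying strictly between those two segments.
    Whether the folding is physically realizable is not checked here."""
    layer = {seg: depth for depth, seg in enumerate(stack)}
    return [abs(layer[i] - layer[i + 1]) - 1 for i in range(1, len(stack))]

print(crease_widths([1, 2, 3, 4]))   # accordion fold: [0, 0, 0]
w = crease_widths([1, 3, 4, 2])      # segments 3 and 4 lie between 1 and 2
print(w, "max:", max(w), "total:", sum(w))
```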
8. ARREST: From work practices to redesign for usability
- Author
-
Iqbal, Rahat, Shah, Nazaraf, James, Anne, and Duursma, Jacob
- Subjects
- *
ENGINEERING design , *USER-centered system design , *SMALL business , *COMPUTER science , *SOCIAL support , *AUTOMATION , *CASE studies , *COMPUTER systems - Abstract
Abstract: In this paper we discuss the redesign of a support management system deployed in a small and medium sized enterprise (SME) in the UK. The original system was not fulfilling its needs as it had not captured work practices in a way that was recognizable to the users. The advantages of the redesign included: improved usefulness; improved efficiency and productivity; reduced learning time; improved usability; and increased acceptance among users. The system is used to support complex and distributed cooperative activities taking place in an SME. We evaluated the current system and analysed work practices using a user-centred design and evaluation philosophy. In this paper we discuss how user needs are incorporated into the enhanced design of the support management system. The user-centred design techniques used in this research include interviews, questionnaires, observations and user tests. We present comparative evaluation results that show significant improvement in performance of user tasks using the redesigned support management system. The contribution of this paper is the presentation of a case study to show how a user-centred design and evaluation philosophy can lead to better requirements capture resulting in systems that more accurately capture the users’ conceptual models. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
9. Testable design of AND–EXOR logic networks with universal test sets
- Author
-
Rahaman, Hafizur, Das, Debesh K., and Bhattacharya, Bhargab B.
- Subjects
- *
LOGIC design , *COMPUTER networks , *ELECTRONIC circuit design , *ROUTING (Computer network management) , *TREE graphs , *AUTOMATIC test equipment , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: This paper presents a testability enhancement technique suited to AND–EXOR based logic networks that facilitates easy detection of stuck-at and bridging faults by a universal test set. Both cascaded and tree implementations of the EXOR-part are considered. An AND–EXOR based circuit implemented with a cascaded EXOR-part requires a universal test set of size (2n + 6) for an n-variable function implementation. For a Generalized Reed–Muller (GRM) implementation, this test set detects all single stuck-at and bridging faults (both OR-type and AND-type) and also a large number of multiple bridging faults. For an Exclusive-OR Sum-of-Products (ESOP) implementation, a few single bridging faults may remain untested under this test set; their occurrence can be minimized by employing an appropriate design and layout technique. Next, it is shown that an AND–EXOR network with a tree-based EXOR-part can be tested for similar faults by a universal test set of size (2n + 8). The paper also solves the open problem of designing a universal test for a tree-based AND–EXOR circuit. Since the EXOR-tree has depth logarithmic in s, the number of product terms in the given AND–EXOR expression, this tree-based design reduces the circuit delay significantly compared to the cascaded EXOR implementation. In both cases, the test set can be stored in an on-chip ROM for built-in self-test (BIST) purposes. For several benchmark circuits, the universal test set is found to be much smaller than the ATPG-generated test sets. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
10. A note on models for graph representations
- Author
-
Korman, Amos and Kutten, Shay
- Subjects
- *
REPRESENTATIONS of graphs , *COMPUTER simulation , *GRAPH labelings , *COMPUTER systems , *DISTRIBUTED algorithms , *ROUTING (Computer network management) , *COMPUTER science - Abstract
Abstract: This paper is intended more to ask questions than to give answers. Specifically, we consider models for labeling schemes, and discuss issues regarding the number of labels consulted vs. the sizes of the labels. Recently, quite a few papers have studied methods for representing network properties by assigning informative labels to the vertices of a network. Consider a graph function defined on pairs of vertices (for example, the distance function). In a labeling scheme for such a function, the labels are constructed in such a way that, given the labels of any two vertices, one can compute the value of the function on that pair (e.g. the graph distance between the two vertices) just by looking at these two labels. Some very involved lower bounds for the sizes of the labels have been proven, and some highly sophisticated labeling schemes have been developed to ensure short labels. In this paper, we demonstrate that such lower bounds are very sensitive to the number of vertices consulted. That is, we show several constructions of labeling schemes that beat the lower bounds by large margins. Moreover, as opposed to the strong technical skills that were needed to develop the traditional labeling schemes, most of our schemes are almost trivial. The catch is that in our model, one needs to consult the labels of three vertices instead of two. That is, a query about two vertices can also access the label of some third vertex, which is determined by the labels of the two queried vertices. More generally, we address the model in which a query about two vertices can also access the labels of a number of other vertices. We term our generalized model labeling schemes with queries. The main importance of this model is theoretical. Specifically, this paper may serve as a first step towards investigating different tradeoffs between the number of labels consulted and the amount of information stored at each vertex. As we show, if all vertices can be consulted then the problem almost reduces to the corresponding sequential problem. On the other hand, consulting just the labels of the two queried vertices (or even just one of them) reduces the problem to a purely distributed one. Therefore, in a sense, our model spans a range of intermediate notions between the sequential and the distributed settings. In addition to the theoretical interest, we also show cases in which schemes constructed for our model can be translated to the traditional model or to the sequential model, thus simplifying the construction for those models as well. For implementing query labeling schemes in a distributed environment directly, we point at a potential usage of some new paradigms that have become common recently, such as P2P and overlay networks. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
11. Standard-compliant, but incompatible?!
- Author
-
Egyedi, Tineke M.
- Subjects
- *
INFORMATION technology , *COMPUTER input-output equipment , *COMPUTER networks , *COMPUTER science , *COMPUTER systems , *INFORMATION resources management , *COMPUTER network protocols - Abstract
Abstract: This paper addresses the question why standard-compliant IT products often do not interoperate. The findings are based on an institutional analysis, three case studies, and a debate among experts. The paper concludes that some dilemmas cannot be resolved easily. However, many causes can be addressed, in particular those in the area of standard development. Where interoperability is concerned, standard development and implementation issues cannot be meaningfully separated. [Copyright &y& Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
12. A uniform solution to SAT using membrane creation
- Author
-
Gutiérrez-Naranjo, Miguel A., Pérez-Jiménez, Mario J., and Romero-Campero, Francisco J.
- Subjects
- *
POLYNOMIALS , *COMPUTATIONAL complexity , *COMPUTER science , *AUTOPOIESIS , *COMPUTER systems - Abstract
Abstract: In living cells, new membranes are produced basically through two processes: mitosis and autopoiesis. These two processes have inspired two variants of cell-like membrane systems, namely P systems with active membranes and P systems with membrane creation. In this paper, we provide the first uniform, efficient solution to the SAT problem in the framework of recogniser P systems with membrane creation using dissolution rules. Recently the authors have proved that if the dissolution rules are not allowed to be used, then the polynomial complexity class associated with this variant of P systems is the standard complexity class P. This result, together with the main result of this paper, shows the surprising role of the apparently “innocent” operation of membrane dissolution. The use of this type of rule establishes the difference between efficiency and non-efficiency for P systems with membrane creation, and provides a barrier between P and NP (assuming P ≠ NP). [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
13. On-line assurance of interpretability criteria in evolving fuzzy systems – Achievements, new concepts and open issues.
- Author
-
Lughofer, Edwin
- Subjects
- *
FUZZY systems , *COMPUTATIONAL complexity , *COMPUTER systems , *DATA analysis , *ELECTRONIC data processing , *COMPUTER science - Abstract
Abstract: In this position paper, we are discussing achievements and open issues in the interpretability of evolving fuzzy systems (EFS). In addition to pure on-line complexity reduction approaches, which can be an important direction for increasing the transparency of the evolved fuzzy systems, we examine the state-of-the-art and provide further investigations and concepts regarding the following interpretability aspects: distinguishability, simplicity, consistency, coverage and completeness, feature importance levels, rule importance levels and interpretation of consequents. These are well-known and widely accepted criteria for the interpretability of expert-based and standard data-driven fuzzy systems in batch mode. So far, most have been investigated only rudimentarily in the context of evolving fuzzy systems, trained incrementally from data streams: EFS have focussed mainly on precise modeling, aiming for models of high predictive quality. Only in a few cases, the integration of complexity reduction steps has been handled. This paper thus seeks to close this gap by pointing out new ways of making EFS more transparent and interpretable within the scope of the criteria mentioned above. The role of knowledge expansion, a peculiar concept in EFS, will be also addressed. One key requirement in our investigations is the availability of all concepts for on-line usage, which means they should be incremental or at least allow fast processing. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
14. Small (purely) catalytic P systems simulating register machines.
- Author
-
Sosík, Petr and Langer, Miroslav
- Subjects
- *
SIMULATION methods & models , *BIOLOGICALLY inspired computing , *COMPUTATIONAL complexity , *MATHEMATICAL analysis , *COMPUTER science , *COMPUTER systems - Abstract
The paper contributes to the topic of (purely) catalytic P systems. Catalytic P systems represent the original and likely the simplest class of membrane computing models. It is known that (purely) catalytic P systems with two (respectively three) catalysts and one membrane can simulate any Minsky register machine and, hence, they are computationally complete. However, the problem of minimal size of such a universal catalytic P system remains open for about ten years. We improve known results about small catalytic P systems simulating register machines in three different modes (generating, accepting, computing functions). Together with some specific universal register machine [7] , one could eventually construct a small universal catalytic P system. As a consequence, we also improve the previous construction of a minimal catalytic P system generating a non-semilinear set, diminishing the number of necessary rules from 29 to 24. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. Embedding meshes into locally twisted cubes
- Author
-
Han, Yuejuan, Fan, Jianxi, Zhang, Shukui, Yang, Jiwen, and Qian, Peide
- Subjects
- *
PARALLEL computers , *COMPUTER systems , *COMPUTER science , *COMPUTER engineering , *COMPUTER networks , *NUMERICAL grid generation (Numerical analysis) , *OPERATOR theory , *CUBES - Abstract
Abstract: As a newly introduced interconnection network for parallel computing, the locally twisted cube possesses many desirable properties. In this paper, mesh embeddings in locally twisted cubes are studied. Let LTQ_n(V, E) denote the n-dimensional locally twisted cube. We present three major results in this paper: (1) For any integer n ⩾ 1, a 2 × 2^(n−1) mesh can be embedded in LTQ_n with dilation 1 and expansion 1. (2) For any integer n ⩾ 4, two node-disjoint 4 × 2^(n−3) meshes can be embedded in LTQ_n with dilation 1 and expansion 2. (3) For any integer n ⩾ 3, a 4 × (2^(n−2) − 1) mesh can be embedded in LTQ_n with dilation 2. The first two results are optimal in the sense that the dilations of all the embeddings are 1. The embedding of the 2 × 2^(n−1) mesh is also optimal in terms of expansion. We also present an analysis of 2^p × 2^q mesh embeddings in locally twisted cubes. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
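The expansion values quoted in the abstract above can be checked directly from node counts, assuming the usual definition of expansion as the ratio of host-graph vertices to guest-graph vertices and the fact that LTQ_n has 2^n vertices:

```latex
% expansion = |V(host)| / |V(guest)|, with |V(LTQ_n)| = 2^n
\[
\frac{|V(\mathrm{LTQ}_n)|}{|V(\text{$2\times 2^{\,n-1}$ mesh})|}
  = \frac{2^{n}}{2\cdot 2^{\,n-1}} = 1,
\qquad
\frac{|V(\mathrm{LTQ}_n)|}{|V(\text{two $4\times 2^{\,n-3}$ meshes})|}
  = \frac{2^{n}}{2\cdot 4\cdot 2^{\,n-3}} = \frac{2^{n}}{2^{\,n-1}} = 2.
\]
```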
16. A description based on languages of the final non-deterministic automaton.
- Author
-
Ballester-Bolinches, A., Cosme-Llópez, E., and Esteban-Romero, R.
- Subjects
- *
PROGRAMMING languages , *MACHINE theory , *COMPUTER systems , *ALGEBRA , *INFORMATION theory , *COMPUTER science - Abstract
Abstract: The study of the behaviour of non-deterministic automata has traditionally focused on the languages which can be associated to the different states. Under this interpretation, the different branches that can be taken at every step are ignored. However, we can also take into account the different decisions which can be made at every state, that is, the branches that can be taken, and these decisions might change the possible future behaviour. In this case, the behaviour of the automata can be described with the help of the concept of bisimilarity. This is the kind of description that is usually obtained when the automata are regarded as labelled transition systems or coalgebras. Contrarily to what happens with deterministic automata, it is not possible to describe the behaviour up to bisimilarity of states of a non-deterministic automaton by considering just the languages associated to them. In this paper we present a description of a final object for the category of non-deterministic automata, regarded as labelled transition systems, with the help of some structures defined in terms of languages. As a consequence, we obtain a characterisation of bisimilarity of states of automata in terms of languages and a method to minimise non-deterministic automata with respect to bisimilarity of states. This confirms that languages can be considered as the natural objects to describe the behaviour of automata. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
17. Syntactic N-grams as machine learning features for natural language processing.
- Author
-
Sidorov, Grigori, Velasquez, Francisco, Stamatatos, Efstathios, Gelbukh, Alexander, and Chanona-Hernández, Liliana
- Subjects
- *
MACHINE learning , *NATURAL language processing , *ATTRIBUTION of authorship , *SUPPORT vector machines , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: In this paper we introduce and discuss the concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in how they are constructed, i.e., in which elements are considered neighbors. In the case of sn-grams, neighbors are determined by following syntactic relations in syntactic trees rather than by taking words as they appear in a text, i.e., sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams bring syntactic knowledge into machine learning methods, although prior parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. As baselines we used traditional n-grams of words, part-of-speech (POS) tags and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and the tree classifier J48. Sn-grams give better results with the SVM classifier. [Copyright © Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
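The abstract above describes sn-grams as word sequences obtained by following paths in syntactic trees rather than linear order. A toy sketch of that construction for continuous sn-grams, assuming the dependency tree is given as a list of head indices; the example sentence and its hand-written parse are illustrative, real sn-grams would be built from actual parser output, and the paper's exact definition may differ in detail.

```python
def syntactic_ngrams(tokens, heads, n):
    """Continuous syntactic n-grams: length-n word sequences obtained by
    following head -> dependent arcs in a dependency tree.

    `tokens[i]` is the i-th word and `heads[i]` the index of its head
    (-1 for the root).  A toy stand-in for real parser output."""
    children = {i: [] for i in range(len(tokens))}
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    def paths_from(node, length):
        if length == 1:
            return [[node]]
        return [[node] + rest
                for child in children[node]
                for rest in paths_from(child, length - 1)]

    return [tuple(tokens[i] for i in path)
            for start in range(len(tokens))
            for path in paths_from(start, n)]

# "the cat sat on the mat", hand-parsed: token 2 ("sat") is the root.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
heads  = [1, 2, -1, 2, 5, 3]
print(syntactic_ngrams(tokens, heads, 3))
# [('sat', 'cat', 'the'), ('sat', 'on', 'mat'), ('on', 'mat', 'the')]
```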
18. The complexity of manipulative attacks in nearly single-peaked electorates.
- Author
-
Faliszewski, Piotr, Hemaspaandra, Edith, and Hemaspaandra, Lane A.
- Subjects
- *
COMPUTATIONAL complexity , *POLYNOMIAL time algorithms , *COMPUTER systems , *COMPUTER science , *ELECTRONIC data processing , *MACHINE theory - Abstract
Abstract: Many electoral control and manipulation problems—which we will refer to in general as “manipulative actions” problems—are NP-hard in the general case. It has recently been noted that many of these problems fall into polynomial time if the electorate is single-peaked, i.e., is polarized along some axis/issue. However, real-world electorates are not truly single-peaked. There are usually some mavericks, and so real-world electorates tend merely to be nearly single-peaked. This paper studies the complexity of manipulative-action algorithms for elections over nearly single-peaked electorates. We do this for many notions of nearness and for a broad range of election systems. We provide instances where even one maverick jumps the manipulative-action complexity up to NP-hardness, but we also provide many instances where some number of mavericks can be tolerated without increasing the manipulative-action complexity. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
19. Relating constraint answer set programming languages and algorithms.
- Author
-
Lierler, Yuliya
- Subjects
- *
CONSTRAINT programming , *SET theory , *PROGRAMMING languages , *COMPUTER algorithms , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: Recently a logic programming language AC was proposed by Mellarkod et al. [1] to integrate answer set programming and constraint logic programming. Soon after that, a clingcon language integrating answer set programming and finite domain constraints, as well as an ezcsp language integrating answer set programming and constraint logic programming were introduced. The development of these languages and systems constitutes the appearance of a new AI subarea called constraint answer set programming. All these languages have something in common. In particular, they aim at developing new efficient inference algorithms that combine traditional answer set programming procedures and other methods in constraint programming. Yet, the exact relation between the constraint answer set programming languages and the underlying systems is not well understood. In this paper we address this issue by formally stating the precise relation between several constraint answer set programming languages – AC, clingcon, ezcsp – as well as the underlying systems. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
20. Environmental impact assessment based on D numbers.
- Author
-
Deng, Xinyang, Hu, Yong, Deng, Yong, and Mahadevan, Sankaran
- Subjects
- *
ENVIRONMENTAL impact analysis , *NUMBER theory , *INFORMATION theory , *COMPUTER systems , *INFORMATION science , *COMPUTER science - Abstract
Highlights: [•] A new method for environmental impact assessment is proposed. [•] The proposed method is more effective by considering various types of uncertainty. [•] A new representation of uncertain information, called D numbers, is presented. [•] This paper provides a new framework for the environmental impact assessment. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
21. General time consistent discounting.
- Author
-
Lattimore, Tor and Hutter, Marcus
- Subjects
- *
COMPUTER science , *ECONOMICS , *TIME series analysis , *EQUILIBRIUM , *COMPUTER systems - Abstract
Abstract: Modeling inter-temporal choice is a key problem in both computer science and economic theory. The discounted utility model of Samuelson is currently the most popular model for measuring the global utility of a time-series of local utilities. The model is limited by not allowing the discount function to change with the age of the agent. This is despite the fact that many agents, in particular humans, are best modelled with age-dependent discount functions. It is well known that discounting can lead to time-inconsistent behaviour where agents change their preferences over time. In this paper we generalise the discounted utility model to allow age-dependent discount functions. We then extend previous work in time-inconsistency to our new setting, including a complete characterisation of time-(in)consistent discount functions, the existence of sub-game perfect equilibrium policies where the discount function is time-inconsistent and a continuity result showing that “nearly” time-consistent discount rates lead to “nearly” time-consistent behaviour. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
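The abstract above generalises Samuelson's discounted utility model by letting the discount function depend on the agent's age. A minimal sketch of that generalisation; the `discount(age, t)` signature and the example discount functions are assumptions for illustration, not the paper's notation.

```python
def discounted_utility(local_utils, discount, age=0):
    """Global utility of a stream of local utilities under a discount
    function that may depend on the agent's age.

    `discount(age, t)` is the weight an agent of the given age assigns to
    utility received t steps in the future (illustrative signature)."""
    return sum(discount(age, t) * u for t, u in enumerate(local_utils))

# Classical Samuelson discounting: geometric and age-independent.
geometric = lambda age, t: 0.9 ** t
# An age-dependent variant: older agents discount the future more heavily.
age_dependent = lambda age, t: 1.0 / (1.0 + 0.1 * (age + 1) * t)

stream = [1.0, 1.0, 1.0, 1.0]
print(discounted_utility(stream, geometric))                 # 3.439
print(discounted_utility(stream, age_dependent, age=0),
      discounted_utility(stream, age_dependent, age=10))
```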
22. Supervised learning and Co-training.
- Author
-
Darnstädt, Malte, Simon, Hans Ulrich, and Szörényi, Balázs
- Subjects
- *
MACHINE learning , *DATA analysis , *COMPUTER systems , *PRESUPPOSITION (Logic) , *COMPUTER science - Abstract
Abstract: Co-training under the Conditional Independence Assumption is among the models which demonstrate how radically the need for labeled data can be reduced if a huge amount of unlabeled data is available. In this paper, we explore how much credit for this saving must be assigned solely to the extra assumptions underlying the Co-training Model. To this end, we compute general (almost tight) upper and lower bounds on the sample size needed to achieve the success criterion of PAC-learning in the realizable case within the model of Co-training under the Conditional Independence Assumption in a purely supervised setting. The upper bounds lie significantly below the lower bounds for PAC-learning without Co-training. Thus, Co-training saves labeled data even when not combined with unlabeled data. On the other hand, the saving is much less radical than the known savings in the semi-supervised setting. [Copyright &y& Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
23. A test-suite reduction approach to improving fault-localization effectiveness.
- Author
-
Dandan, Gong, Tiantian, Wang, Xiaohong, Su, and Peijun, Ma
- Subjects
- *
DEBUGGING , *DISTRIBUTION (Probability theory) , *COMPUTER programming , *COMPUTER systems , *INFORMATION storage & retrieval systems , *COMPUTER science - Abstract
Abstract: In order to improve the effectiveness of fault localization, researchers are interested in test-suite reduction to provide suitable test-suite inputs. Different test-suite reduction approaches have been proposed; however, the results are usually not ideal, and reducing the test suite improperly or excessively can even harm fault-localization effectiveness. In this paper, we propose a two-step test-suite reduction approach that removes the test cases which have little or no effect on fault localization and improves the evenness of the distribution of the concrete execution paths of test cases. The approach consists of coverage-matrix-based reduction and path-vector-based reduction, so it analyzes not only test-case coverage but also concrete path information. We design and implement experiments to verify the effect of our approach. The experimental results show that our reduced test suite can improve fault-localization effectiveness. On average, our approach reduces the size of a test suite by 47.87% for the Siemens programs and by 23.03% for the space program. At the same time, on average, it improves fault-localization effectiveness, as measured with the Tarantula approach, by 2.12 on the Siemens programs and 0.13 on the space program. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
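The first step of the approach in the entry above is a coverage-matrix-based reduction. A generic greedy set-cover sketch of such a step follows; the suite and its coverage sets are made up, and the paper's actual reduction rules and its second, path-vector-based step are not reproduced here.

```python
def greedy_reduce(coverage):
    """Greedy coverage-matrix reduction: repeatedly keep the test case
    covering the most not-yet-covered statements.

    `coverage` maps a test-case id to the set of statements it covers.
    A generic greedy set-cover heuristic, not the paper's exact rules."""
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break
        kept.append(best)
        remaining -= coverage[best]
    return kept

suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {4, 5, 6},
    "t4": {1, 6},
}
print(greedy_reduce(suite))   # ['t1', 't3'] already covers statements 1-6
```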
24. On the migrativity of triangular subnorms.
- Author
-
Wu, Limin and Ouyang, Yao
- Subjects
- *
TRIANGULAR norms , *SUBNORMAL operators , *SEMIGROUPS of operators , *PROTOTYPES , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: The -migrative triangular subnorms, where T stands for the three prototype triangular norms (namely the product, the Łukasiewicz t-norm and the minimum), are investigated in detail. The paper gives necessary and sufficient conditions under which a triangular subnorm with a continuous additive generator is -migrative with respect to any of the three prototype triangular norms. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
25. Convex linear T–S functions: A generalization of Frank's equation.
- Author
-
Calvo, T., Martín, J., and Mayor, G.
- Subjects
- *
LINEAR systems , *GENERALIZATION , *PROBLEM solving , *FUNCTIONAL equations , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: The main purpose of this paper is to solve the functional equation , for and a pair of a t-norm T and a t-conorm S. This equation arises when we consider a convex linear combination of a t-norm and a t-conorm, and set out the problem of finding the intersection of the segments determined by the pairs and . [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
26. On the vertical threshold generation method of fuzzy implication and its properties.
- Author
-
Massanet, Sebastia and Torrens, Joan
- Subjects
- *
FUZZY systems , *COMPUTER systems , *COMPUTER science , *SYSTEM analysis , *FUZZY logic - Abstract
Abstract: In recent years, some new construction methods of fuzzy implications from other given ones have been proposed. One of them, the so-called threshold generation method, preserves important properties such as the exchange principle or the law of importation under some minimal conditions. This method is based on an adequate scaling on the second variable of the two initial fuzzy implications. In this paper, we propose a new method to generate fuzzy implications from two given ones in the same spirit of the threshold generation method but now through an adequate scaling on the first variable of the given fuzzy implications. The new implications, called vertical threshold generated implications, are deeply studied focusing on the preservation of the most common properties of fuzzy implications from the initial ones to the generated implication. Moreover, they are fully characterized by means of the -vertical section of the implication. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
27. Incremental construction of systems: An efficient characterization of the lacking sub-system.
- Author
-
Santone, Antonella, Vaglini, Gigliola, and Villani, Maria Luisa
- Subjects
- *
COMPUTER systems , *SOFTWARE engineering , *COMPUTER software , *AUTOMATION , *SCALABILITY , *COMPUTER science - Abstract
Abstract: Software engineering research is driven by the aim of making software development more dynamic, flexible and evolvable. Nowadays the emphasis is on the evolution of pre-existing sub-systems and on component- and service-based development, where often only a part of the system is totally under the control of the designer, most components being remotely operated by external vendors. In this context, we tackle the following problem: given the formal specification of the (incomplete) sub-system already built, how can the collaborators to be selected be characterized, based on a given communication interface, so that a given property is satisfied by the complete system. With properties described by temporal logic formulae and systems by CCS processes, given the formula to be satisfied by the complete system, an efficient and automatic procedure is defined that identifies a formula such that, for each existing process satisfying it, the composition of that process with the existing sub-system satisfies the original formula. Important features of this result are the simplicity of the derived property compared to the original one and the scalability of the verification process. Such characteristics are necessary for applying the method to both incremental design and system evolution scenarios, where a sub-system is already in place and one needs to understand the specification of the functionality of the new component that should correctly interact with it. Indeed, in general, finding a suitable partner for the existing sub-system is easier than finding a complete system satisfying the global property. Moreover, in this paper it is shown how the derived formula can also be used to select a set of possible candidate processes through a property-directed and structural heuristic. From the verification point of view, the description of the lacking component through a logic formula guarantees the correctness of the integration, with the existing sub-system, of any process that exhibits behaviour compliant with the inferred formula. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
28. Closed form fuzzy interpolation.
- Author
-
Yang, Longzhi and Shen, Qiang
- Subjects
- *
FUZZY systems , *COMPUTER science , *COMPUTER systems , *ROBUST control , *COMPUTATIONAL complexity , *FUZZY sets - Abstract
Abstract: Fuzzy interpolation enhances the robustness of fuzzy systems and helps to reduce systems complexity. Although a number of important fuzzy rule interpolation approaches have been proposed in the literature, most of these approaches cannot be expressed in a closed form. This is usually caused by the effort to avoid possible invalid interpolated results. This paper proposes a different fuzzy rule interpolation approach. It not only can be represented in a closed form but also guarantees that the interpolated results are valid fuzzy sets. This approach is based on a direct use of the extension principle which has been widely utilised for the development of a variety of fuzzy systems. The mathematical properties of the proposed approach are analysed by taking the advantage of the closed form representation. This approach has been applied to a practical problem of predicting diarrhoeal disease rates in remote villages. The results demonstrate the potential of the proposed work in enhancing the robustness of fuzzy interpolation. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
29. State complexity of star of union and square of union on k regular languages.
- Author
-
Gao, Yuan and Kari, Lila
- Subjects
- *
COMPUTATIONAL complexity , *PROGRAMMING languages , *COMPUTER science , *COMPUTER systems , *MATHEMATICAL models , *ELECTRONIC data processing - Abstract
Abstract: In this paper, we investigate the state complexities of (L1 ∪ ⋯ ∪ Lk)* and (L1 ∪ ⋯ ∪ Lk)², where L1, …, Lk are regular languages. We establish exact bounds for both of these general combined operations and show that they are much lower than the mathematical compositions of the state complexities of the basic individual component operations, but have forms similar to the state complexities of some participating combined operations. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
30. A complete proof system for propositional projection temporal logic.
- Author
-
Duan, Zhenhua, Zhang, Nan, and Koutny, Maciej
- Subjects
- *
COMPUTER systems , *SYNTAX in programming languages , *SEMANTICS , *AXIOMS , *PROOF theory , *COMPUTER science - Abstract
Abstract: The paper presents a proof system for Propositional Projection Temporal Logic (PPTL) with projection-plus. The syntax, semantics, and logical laws of PPTL are introduced together with an axiom system consisting of axioms and inference rules. To facilitate proofs, some of the frequently used theorems are proved. A normal form of PPTL formulas is presented, and the soundness and completeness of the proof system are demonstrated. To show how the axiom system works, a full omega regular property for the mutual exclusion problem is specified by a PPTL formula and then a deductive proof of the property is performed. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
31. Approximation algorithms for a bi-level knapsack problem.
- Author
-
Chen, Lin and Zhang, Guochuan
- Subjects
- *
APPROXIMATION algorithms , *KNAPSACK problems , *PROBABILITY theory , *SET theory , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: In this paper, we consider a variant of the knapsack problem. There are two knapsacks, with possibly different capacities, owned by two agents respectively. We are given a set of items, each with a fixed size and a profit. The two agents select items and pack them into their own knapsacks under the capacity constraints. The same item can be packed simultaneously into both knapsacks, although in that case its profit may vary. One agent packs items into his knapsack to maximize the profit of his own knapsack, while the other agent also packs items only into his own knapsack but cares about the total profit of the items packed into both knapsacks. The latter agent is the leader and the former is the follower. We aim at designing approximation algorithms for the leader, assuming that the follower is selfish. For different settings we provide approximation results. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
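In the bi-level setting of the entry above, once the leader's choices are fixed the selfish follower simply solves a standard 0/1 knapsack over his own capacity. A textbook dynamic-programming sketch of that follower subproblem; the item sizes and profits are made up, and the paper's approximation algorithms for the leader are not shown.

```python
def knapsack(sizes, profits, capacity):
    """Standard 0/1 knapsack by dynamic programming: the selfish
    follower's best response once the leader's packing is fixed.

    Returns (best profit, chosen item indices)."""
    n = len(sizes)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if sizes[i - 1] <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - sizes[i - 1]] + profits[i - 1])
    chosen, c = [], capacity            # recover the packing backwards
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return best[n][capacity], sorted(chosen)

print(knapsack(sizes=[3, 4, 2], profits=[5, 6, 3], capacity=5))  # (8, [0, 2])
```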
32. A novel image-based implicit password authentication system (IPAS) for mobile and non-mobile devices.
- Author
-
Almuairfi, Sadiq, Veeraraghavan, Prakash, and Chilamkurti, Naveen
- Subjects
- *
COMPUTER passwords , *COMPUTER access control , *COMPUTER systems , *MOBILE apps , *BIOMETRIC identification , *COMPUTER science - Abstract
Abstract: Authentication is the first line of defense against compromising confidentiality and integrity. Though traditional login/password-based schemes are easy to implement, they have been subjected to several attacks. As an alternative, token and biometric-based authentication systems were introduced. However, they have not improved substantially to justify the investment. Thus, a variation to the login/password scheme, viz. graphical scheme was introduced. But it also suffered due to shoulder-surfing and screen-dump attacks. In this paper, we introduce a framework of our proposed (IPAS) Implicit Password Authentication System, which is immune to the common attacks suffered by other authentication schemes. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
33. Semi-online scheduling problems on two uniform machines under a grade of service provision.
- Author
-
Lu, Xinrong and Liu, Zhaohui
- Subjects
- *
COMPUTER scheduling , *MATHEMATICAL optimization , *COMPUTER science , *COMPUTER systems , *COMPUTER algorithms , *SEQUENTIAL scheduling - Abstract
Abstract: This paper studies the semi-online scheduling on two uniform machines under a grade of service (GoS) provision, where one machine is available for all jobs and the other one is only available for partial jobs. The objective is to minimize the makespan. We consider three variants, where the optimal makespan, the total size of jobs, and the largest job size are known in advance respectively, and design optimal algorithms for all of them. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
34. On parallel complexity of analytic functions.
- Author
-
Yu, Fuxiang and Ko, Ker-I
- Subjects
- *
COMPUTATIONAL complexity , *INTEGRALS , *LOGIC circuits , *LOGARITHMS , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: In this paper, we study the parallel complexity of analytic functions. We investigate the complexity of computing the derivatives, integrals, and zeros of NC or logarithmic-space computable analytic functions, where NC denotes the complexity class of sets acceptable by polynomial-size, polylogarithmic-depth, uniform Boolean circuits. It is shown that the derivatives and integrals of NC (or logarithmic-space) computable analytic functions remain NC (or, respectively, logarithmic-space) computable. We also study the problem of finding all zeros of an NC computable analytic function inside an NC computable Jordan curve, and show that, under a uniformity condition on the function values on the Jordan curve, all zeros can be found in NC. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
35. On the gap between schedulability tests and an automotive task model.
- Author
-
Anssi, Saoussen, Kuntz, Stefan, Gérard, Sébastien, and Terrier, François
- Subjects
- *
COMPUTER scheduling , *COMPUTER systems , *APPLICATION software , *CASE studies , *COMPUTER science , *COMPUTER networks - Abstract
Abstract: In this paper, we study the adequacy of available schedulability tests for monoprocessor fixed-priority systems to enable performing scheduling analysis for automotive applications. We show that, in spite of the work carried out during the last decade to enhance these tests in order to support more realistic task model, a gap still exists between the task model considered in these tests and the usual automotive task model. However, we claim that an extension of these tests is possible to support some of the uncovered automotive features. The aim of this study is to raise discussion and make researchers involved in the development of such schedulability tests be aware of the effort needed to bridge the gap between current schedulability tests and automotive task model mostly used. The study is illustrated by showing the concrete challenges faced when applying scheduling analysis to a case study derived from a real engine control application. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
36. Information flow in systems with schedulers, Part II: Refinement.
- Author
-
van der Meyden, Ron and Zhang, Chenyi
- Subjects
- *
INFORMATION storage & retrieval systems , *COMPUTER systems , *COMPUTER scheduling , *MACHINE theory , *COMPUTER security , *COMPUTER science - Abstract
Abstract: Refinement is a relation on system models: a concrete model is a refinement of a more abstract model if it has fewer behaviors. When properties of the abstract model are guaranteed to be preserved in the concrete model, refinement supports a top-down development process. This paper considers preservation of a range of information flow security properties in synchronous systems with schedulers, when these schedulers are refined. Notions of refinement are defined for both an abstract notion of scheduler as well as for their concrete representation as automata. The security properties that are preserved by refinement over schedulers are then characterized. The results are applied to characterize a number of scheduler independent security properties, which state that a system is secure with respect to all schedulers. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
37. Permanence of a stage-structured predator–prey system.
- Author
-
Chen, Fengde, Chen, Wanlin, Wu, Yumin, and Ma, Zhaozhi
- Subjects
- *
COMPUTER systems , *DISCRETE systems , *COMPUTER science , *MATHEMATICS theorems , *COMPARATIVE studies , *MATHEMATICAL models - Abstract
Abstract: A stage-structured predator–prey system (stage structure for both predator and prey) with discrete delay is studied in this paper. By introducing a new lemma and applying the standard comparison theorem, a set of novel criteria which ensure the permanence of the system is obtained. Our result supplements the main results of Chen, Xie and Li [Partial survival and extinction of a delayed predator–prey model with stage structure, Applied Mathematics and Computation, 219 (8) (2012) 4157–4162]. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
38. On-site investigation methodology for incident response in Windows environments.
- Author
-
Lee, Keungi, Lee, Changhoon, and Lee, Sangjin
- Subjects
- *
METHODOLOGY , *COMPUTER science , *COMPUTER systems , *COMPUTER hackers , *PROBABILITY theory - Abstract
Abstract: In recent years, various computers have been compromised through several paths, and attack patterns and paths have become more varied than in the past. Furthermore, systems damaged by hackers are used as zombie systems to attack other web servers or personal computers, so there is a high probability of secondary damage, such as DDoS attacks, spreading. Also, whereas hacking and malicious code were previously driven by self-display or simple curiosity, they are now often tied to monetary extortion. In order to respond to incidents correctly, it is important to measure the damage to a system rapidly and to determine the attack paths. This paper discusses an on-site investigation methodology for incident response and also describes the limitations of this methodology. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
39. Practical parallel key-insulated encryption with multiple helper keys.
- Author
-
Ren, Yanli, Wang, Shuozhong, and Zhang, Xinpeng
- Subjects
- *
DATA encryption , *PARALLEL computers , *DATA security , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: Parallel key-insulated encryption (PKIE) usually allows two independent helper keys to be alternately used in temporary secret key update operations. At least half of temporary secret keys would be exposed and at least half of ciphertexts could be decrypted if one of the helper keys is exposed. In this paper, we propose a new PKIE scheme with helper keys, where . If one of the helper keys is exposed, only temporary secret keys would be exposed and ciphertexts could be decrypted, so the new PKIE scheme can greatly decrease loss due to key-exposure. The scheme is provably secure without random oracles based on a bilinear group of composite order. Most important, the scheme is practical and much more efficient than the extended ones from the previous PKIE schemes. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
40. Online game bot detection based on party-play log analysis.
- Author
-
Kang, Ah Reum, Woo, Jiyoung, Park, Juyong, and Kim, Huy Kang
- Subjects
- *
VIDEO games , *VIRTUAL reality , *STATISTICS , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: As online games become popular and the boundary between virtual and real economies blurs, cheating in games has proliferated in volume and method. In this paper, we propose a framework for user behavior analysis for bot detection in online games. Specifically, we focus on party play which reflects the social activities among gamers: in a Massively Multi-user Online Role Playing Game (MMORPG), party play is a major activity that game bots exploit to keep their characters safe and facilitate the acquisition of cyber assets in a fashion very different from that of normal humans. Through a comprehensive statistical analysis of user behaviors in game activity logs, we establish threshold levels for the activities that allow us to identify game bots. Based on this, we also build a knowledge base of detection rules, which are generic. We apply our rule reasoner to AION, a popular online game serviced by NCsoft, Inc., a leading online game company based in Korea. [Copyright &y& Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
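The entry above turns statistics of party-play behaviour into threshold detection rules. A toy sketch of threshold-based flagging; the feature names, threshold values and log records are hypothetical, and the rules actually learned from the AION logs in the paper are not reproduced.

```python
def flag_bots(party_stats, thresholds):
    """Flag characters whose aggregated party-play statistics cross simple
    thresholds.

    `party_stats` maps a character id to hypothetical per-character
    features; `thresholds` holds the cut-off values."""
    flagged = []
    for char, f in party_stats.items():
        if (f["party_hours_per_day"] >= thresholds["party_hours_per_day"]
                and f["distinct_partners"] <= thresholds["distinct_partners"]
                and f["chat_msgs_per_hour"] <= thresholds["chat_msgs_per_hour"]):
            flagged.append(char)
    return flagged

stats = {
    "casual_player": {"party_hours_per_day": 2.5, "distinct_partners": 14,
                      "chat_msgs_per_hour": 22},
    "suspected_bot": {"party_hours_per_day": 21.0, "distinct_partners": 1,
                      "chat_msgs_per_hour": 0},
}
rules = {"party_hours_per_day": 18.0, "distinct_partners": 2,
         "chat_msgs_per_hour": 1}
print(flag_bots(stats, rules))   # ['suspected_bot']
```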
41. Formal virtualization requirements for the ARM architecture.
- Author
-
Penneman, Niels, Kudinskas, Danielius, Rawsthorne, Alasdair, De Sutter, Bjorn, and De Bosschere, Koen
- Subjects
- *
ARCHITECTURE , *VIRTUAL storage (Computer science) , *INTERRUPTS (Computer systems) , *MACHINERY , *EXTENSIONS , *COMPUTER systems , *ELECTRONICS , *COMPUTER science - Abstract
Abstract: We present an analysis of the virtualizability of the ARMv7-A architecture carried out in the context of the seminal paper published by Popek and Goldberg 38 years ago. Because their definitions are dated, we first extend their machine model to modern architectures with paged virtual memory, IO and interrupts. We then use our new model to show that ARMv7-A is not classically virtualizable. Insights such as binary translation enable efficient virtualization beyond the original criteria. Companies are also making their architectures virtualizable through extensions. We analyse both approaches for ARM and conclude that both have their use in future systems. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
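The analysis above is rooted in the Popek and Goldberg criterion that an architecture is classically virtualizable only if every sensitive instruction (one that reads or affects privileged machine state) also traps when executed unprivileged. The sketch below illustrates that set-containment check on a deliberately simplified instruction classification; it is not a faithful ARMv7-A instruction census.

```python
# Minimal sketch of the Popek-Goldberg virtualizability criterion: every
# sensitive instruction must also be privileged (i.e. trap in user mode).
# The instruction names below are simplified examples only.

sensitive = {"CPS", "MRS", "MSR", "MCR", "MRC"}   # touch privileged state
privileged = {"MCR", "MRC"}                        # trap in user mode

problematic = sensitive - privileged  # sensitive but non-trapping instructions
print("classically virtualizable:", not problematic)
print("problematic instructions:", sorted(problematic))
```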
42. An ontological analysis of the notion of community in the RM-ODP enterprise language
- Author
-
Almeida, João Paulo A. and Guizzardi, Giancarlo
- Subjects
- *
SOCIAL reality , *COMPUTER systems , *PROGRAMMING languages , *INFORMATION technology , *COMPUTER science , *SOCIAL groups , *GROUP identity , *INTERPERSONAL relations - Abstract
Abstract: In our past work, we have shown that a number of theories from conceptual modeling and ontological analysis can be used to clarify the definitions of role-related and goal-related concepts in the RM-ODP [1,2]. This paper builds on our earlier efforts by providing an ontology-based account for the notion of communities in the reference model's Enterprise Language [38]. We address issues regarding the composition of communities, the filling of roles in communities, and the decomposition of a community's objective into sub-objectives (delegated to community members). The use of an ontology that deals with aspects of social reality and intentionality [30] plays an important role in this account, revealing the intentionality of communities and enterprise objects; the social relations between communities and enterprise objects in the community; the social relations between objects in the community; the social relations between communities; the normative character of a community's contract, etc. The analysis allows us to propose well-founded recommendations for clarifications and identify potential amendments to the standard as well as issues for further investigation. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
43. Semantic web services matchmaking: Semantic distance-based approach.
- Author
-
Farrag, Tamer A., Saleh, Ahmed I., and Ali, H.A.
- Subjects
- *
SEMANTICS , *WEB services , *DATING services , *COMPUTER algorithms , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: Semantic web services (SWSs) discovery is the process of finding a service that can possibly satisfy the user requirements, choosing between several services, and composing services to form a single service. Matchmaking between the user request and SWSs is the main task of any SWSs discovery mechanism. In this paper, a semantic distance-based matchmaking algorithm (SDMA) is introduced. The idea behind SDMA is to use the semantic distance between the user request and the tested service as an indicator of the degree of relevance between them. In addition, a new concepts tree is proposed for computing the semantic distance between two concepts. SDMA builds on several concept-to-concept semantic distance measures that are modified and adapted. A thorough evaluation is performed to validate the impact of using semantic distance measures in the field of SWSs matchmaking. [Copyright Elsevier] (A toy edge-counting concept distance is sketched after this record.)
- Published
- 2013
- Full Text
- View/download PDF
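As a rough illustration of concept-to-concept semantic distance over a concept tree, the sketch below counts edges through the lowest common ancestor of two concepts in a toy taxonomy. The tree, the edge-counting measure and the ranking interpretation are assumptions for illustration, not the SDMA algorithm itself.

```python
# Toy edge-counting semantic distance over a small concept tree: the distance
# between two concepts is the length of the path joining them through their
# lowest common ancestor.

parent = {                      # child -> parent in a toy concept tree
    "Vehicle": None,
    "Car": "Vehicle", "Truck": "Vehicle",
    "SportsCar": "Car", "SUV": "Car",
}

def ancestors(c):
    """Map each ancestor of c (including c itself) to its depth from c."""
    chain, depth = {}, 0
    while c is not None:
        chain[c] = depth
        c, depth = parent[c], depth + 1
    return chain

def semantic_distance(a, b):
    """Edge count of the path a -> lowest common ancestor -> b."""
    ancestors_a = ancestors(a)
    best = None
    for concept, depth_b in ancestors(b).items():
        if concept in ancestors_a:
            total = ancestors_a[concept] + depth_b
            best = total if best is None else min(best, total)
    return best

# A request for "SportsCar" is closer to a service offering "Car" (distance 1)
# than to one offering "Truck" (distance 3), so the former would rank higher.
print(semantic_distance("SportsCar", "Car"), semantic_distance("SportsCar", "Truck"))
```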
44. Using the complementary nature of node joining and leaving to handle churn problem in P2P networks.
- Author
-
Meng, Xianfu, Chen, Xiaoling, and Ding, Yalin
- Subjects
- *
PEER-to-peer architecture (Computer networks) , *COMPUTER simulation , *COMPUTER networks , *COMPUTER science , *COMPUTER systems , *ELECTROMECHANICAL analogies , *MATHEMATICAL models - Abstract
Abstract: Churn is a basic and inherent problem in P2P networks. Many relevant studies have been carried out, but they lack generality. In this paper, a general solution is proposed that relieves a peer-to-peer (P2P) network from having to deal with the churn problem directly by introducing a logic layer named Dechurn, in which most churn can be absorbed. To exploit the complementary nature of node joining and leaving, a network scheme named Constellation is designed on top of the Dechurn layer, through which the resources cached in a node on behalf of its spouse node that has left the network are taken over by a node joining during the latent period. The simulation results indicate that the proposed solution is effective and efficient in handling churn and easy to put into practice. [Copyright Elsevier] (A toy queue-based join/leave pairing is sketched after this record.)
- Published
- 2013
- Full Text
- View/download PDF
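A minimal sketch of the complementary join/leave pairing suggested by the abstract, assuming a simple queue in a Dechurn-style layer that parks a departed node's cached resources until the next joiner can take them over. The class and field names are hypothetical, and the real Constellation scheme is certainly more involved.

```python
# Illustrative join/leave pairing: resources held for a departed node wait in
# a latent queue and are handed to the next node that joins, so churn is
# absorbed inside a dedicated layer. Names are simplified placeholders.

from collections import deque

class DechurnLayer:
    def __init__(self):
        self.pending = deque()      # resources awaiting a successor node

    def node_left(self, node_id, cached_resources):
        """Park the departed node's cached resources in the latent queue."""
        self.pending.append((node_id, cached_resources))

    def node_joined(self, node_id):
        """Let a joining node succeed the oldest departed node, if any."""
        if self.pending:
            left_id, resources = self.pending.popleft()
            return {"succeeds": left_id, "inherits": resources}
        return {"succeeds": None, "inherits": []}

layer = DechurnLayer()
layer.node_left("n17", ["chunk-a", "chunk-b"])
print(layer.node_joined("n42"))   # n42 inherits what n17 was caching
```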
45. An iterative approach to synthesize business process templates from compliance rules
- Author
-
Awad, Ahmed, Goré, Rajeev, Hou, Zhe, Thomson, James, and Weidlich, Matthias
- Subjects
- *
ITERATIVE methods (Mathematics) , *PROCESS control systems , *DATA mining , *LINEAR systems , *COMPUTER software execution , *COMPUTER science , *COMPUTER systems , *INFORMATION storage & retrieval systems - Abstract
Abstract: Companies have to adhere to compliance requirements. The compliance analysis of business operations is typically a joint effort of business experts and compliance experts. Those experts need to create a common understanding of business processes to effectively conduct compliance management. In this paper, we present a technique that aims at supporting this process. We argue that process templates generated out of compliance requirements provide a basis for negotiation among business and compliance experts. We introduce a semi-automated and iterative approach to the synthesis of such process templates from compliance requirements expressed in Linear Temporal Logic (LTL). We show how generic constraints related to business process execution are incorporated and present criteria that point at underspecification. Further, we outline how such underspecification may be resolved to iteratively build up a complete specification. For the synthesis, we leverage existing work on process mining and process restructuring. However, our approach is not limited to the control-flow perspective, but also considers direct and indirect data-flow dependencies. Finally, we elaborate on the application of the derived process templates and present an implementation of our approach. [Copyright Elsevier] (A toy finite-trace check of one such LTL-style rule follows this record.)
- Published
- 2012
- Full Text
- View/download PDF
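For context, the sketch below checks one typical LTL-style compliance pattern ("whenever activity A occurs, activity B must eventually follow") over a finite execution trace. It only illustrates the flavour of the input rules; the paper's contribution, synthesizing process templates from such rules, is not reproduced here.

```python
# Finite-trace check of the "response" compliance pattern: every occurrence of
# `trigger` must eventually be followed by `response`.

def response_holds(trace, trigger, response):
    """True iff every occurrence of `trigger` is eventually followed by `response`."""
    awaiting = False
    for activity in trace:
        if activity == trigger:
            awaiting = True        # a response is now owed
        elif activity == response:
            awaiting = False       # the owed response has arrived
    return not awaiting

# Example: every "open_account" must eventually be followed by "verify_identity".
print(response_holds(["open_account", "verify_identity", "transfer"],
                     "open_account", "verify_identity"))   # True
print(response_holds(["open_account", "transfer"],
                     "open_account", "verify_identity"))   # False
```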
46. Improving the Hadamard extractor
- Author
-
Bouda, Jan, Pivoluska, Matej, and Plesch, Martin
- Subjects
- *
DISTRIBUTION (Probability theory) , *UNIFORM distribution (Probability theory) , *INPUT-output analysis , *COMPARATIVE studies , *COMPUTER systems , *COMPUTER science - Abstract
Abstract: In this paper we construct a strong randomness extractor that takes two independent n-bit input distributions with given min-entropies (the probability of any particular output of each source is upper bounded accordingly). Provided the combined min-entropy is large enough, our extractor produces one bit that is closer to the uniform distribution than the output of the Hadamard extractor, and this distance drops to zero if at least one of the min-entropies is maximal. This is in sharp contrast to the Hadamard extractor, which fails to produce even a single unbiased bit even if one of the input distributions is uniform. We also extend our construction to produce multiple output bits with a smaller bias than that of the corresponding Hadamard extractor, while retaining the ability to produce unbiased bits if one of the input distributions is uniform. The strongness property of the extractor is maintained in both cases; however, in the multi-bit scenario the bias grows with the number of extracted bits. [Copyright Elsevier] (The classical inner-product Hadamard extractor is sketched after this record.)
- Published
- 2012
- Full Text
- View/download PDF
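For reference, the classical Hadamard (inner-product) two-source extractor that the paper improves on outputs the GF(2) inner product of its two independent n-bit inputs. A minimal sketch, with the improved construction from the paper deliberately not reproduced:

```python
# Classical Hadamard (inner-product) two-source extractor: the output bit is
# the inner product over GF(2) of the two independent n-bit input strings.

def hadamard_extract(x: int, y: int, n: int) -> int:
    """Inner product over GF(2) of the n-bit strings x and y."""
    return bin((x & y) & ((1 << n) - 1)).count("1") % 2

# Example: extract one bit from two weak 8-bit sources.
print(hadamard_extract(0b10110010, 0b01110001, 8))
```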
47. Anytime coalition structure generation in multi-agent systems with positive or negative externalities
- Author
-
Rahwan, Talal, Michalak, Tomasz, Wooldridge, Michael, and Jennings, Nicholas R.
- Subjects
- *
COALITIONS , *MULTIAGENT systems , *COMPUTATIONAL complexity , *COMPUTER algorithms , *COMPUTER science , *COMPUTER systems , *INFORMATION technology , *INTELLIGENT agents - Abstract
Abstract: Much of the literature on multi-agent coalition formation has focused on Characteristic Function Games, where the effectiveness of a coalition is not affected by how the other agents are arranged in the system. In contrast, very little attention has been given to the more general class of Partition Function Games, where the emphasis is on how the formation of one coalition could influence the performance of other co-existing coalitions in the system. However, these inter-coalitional dependencies, called externalities from coalition formation, play a crucial role in many real-world multi-agent applications where agents have either conflicting or overlapping goals. Against this background, this paper is the first computational study of coalitional games with externalities in the multi-agent system context. We focus on the Coalition Structure Generation (CSG) problem which involves finding an exhaustive and disjoint division of the agents into coalitions such that the performance of the entire system is optimized. While this problem is already very challenging in the absence of externalities, due to the exponential size of the search space, taking externalities into consideration makes it even more challenging, as the size of the input grows much more rapidly with the number of agents n. Our main contribution is the development of the first CSG algorithm for coalitional games with either positive or negative externalities. Specifically, we prove that it is possible to compute upper and lower bounds on the values of any set of disjoint coalitions. Building upon this, we prove that in order to establish a worst-case guarantee on solution quality it is necessary to search a certain set of coalition structures (which we define). We also show how to progressively improve this guarantee with further search. Since there are no previous CSG algorithms for games with externalities, we benchmark our algorithm against other state-of-the-art approaches in games where no externalities are present. Surprisingly, we find that, as far as worst-case guarantees are concerned, our algorithm outperforms the others by orders of magnitude. For instance, to reach a bound of 3 given 24 agents, the number of coalition structures that need to be searched by our algorithm is only 0.0007% of that needed by Sandholm et al. (1999), and 0.5% of that needed by Dang and Jennings (2004). This is despite the fact that the other algorithms take advantage of the special properties of games with no externalities, while ours does not. [Copyright Elsevier] (A brute-force coalition structure enumeration is sketched after this record.)
- Published
- 2012
- Full Text
- View/download PDF
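As a baseline for the coalition structure generation problem discussed above, the sketch below exhaustively enumerates all coalition structures (set partitions) of a small agent set and picks the best under a user-supplied value function. The paper's algorithm avoids exactly this exhaustive search by bounding the values of sets of disjoint coalitions; the toy value function here is purely illustrative.

```python
# Exhaustive baseline for coalition structure generation: enumerate every set
# partition of the agents and keep the one with the highest value.

def partitions(agents):
    """Yield every coalition structure (set partition) of the agent list."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing coalition in turn...
        for i, coalition in enumerate(smaller):
            yield smaller[:i] + [coalition + [first]] + smaller[i + 1:]
        # ...or start a new singleton coalition.
        yield smaller + [[first]]

def best_structure(agents, value):
    """Return the coalition structure maximizing the given value function."""
    return max(partitions(agents), key=value)

# Toy value function favoring small coalitions, purely for illustration.
toy_value = lambda cs: sum(len(c) ** 2 for c in cs if len(c) <= 2)
print(best_structure(["a", "b", "c"], toy_value))
```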
48. Transfer of trust in event-based reputation systems
- Author
-
Nielsen, Mogens and Krukow, Karl
- Subjects
- *
COMPUTER science , *INFORMATION technology , *RECOMMENDER systems , *SECURITY systems , *COMPUTER systems , *ELECTRONIC data processing - Abstract
Abstract: In the Global Computing scenario, trust-based systems have been proposed and studied as an alternative to traditional security mechanisms. A promising line of research concerns the so-called reputation-based computational trust. The approach here is that trust in a computing agent is defined in terms of evidence of future behaviour based on interactions in the past with its environment. We have previously argued how concepts and models from concurrency theory can answer some fundamental challenges in the representation of such interaction behaviour over time, using event structures as our choice of model from concurrency theory. In this paper, we continue this line of research, addressing the problem of how to transfer trust from one behavioural context to another. Our proposed frameworks build on morphisms between event structures, and we prove some generic results guaranteeing formal properties of transfers in the frameworks. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
49. Narrative-based taxonomy distillation for effective indexing of text collections
- Author
-
Cataldi, Mario, Candan, K. Selçuk, and Sapino, Maria Luisa
- Subjects
- *
COMPUTER networks , *COMPUTER algorithms , *COMPUTER engineering , *COMPUTER science , *ELECTRONIC systems , *COMPUTER systems , *COMPUTER programming - Abstract
Abstract: Taxonomies embody formalized knowledge and define aggregations between concepts/categories in a given domain, facilitating the organization of the data and making the contents easily accessible to the users. Since taxonomies have significant roles in data annotation, search and navigation, they are often carefully engineered. However, especially in domains such as news, where content dynamically evolves, they do not necessarily reflect the content knowledge. Thus, in this paper, we ask and answer, in the positive, the following question: "is it possible to efficiently and effectively adapt a given taxonomy to a usage context defined by a corpus of documents?" In particular, we recognize that the primary role of a taxonomy is to describe or narrate the natural relationships between concepts in a given document corpus. Therefore, a corpus-aware adaptation of a taxonomy should essentially distill the structure of the existing taxonomy by appropriately segmenting and, if needed, summarizing this narrative relative to the content of the corpus. Based on this key observation, we propose A Narrative Interpretation of Taxonomies for their Adaptation (ANITA) for re-structuring existing taxonomies to varying application contexts, and we evaluate the proposed scheme using different text collections. Finally, we provide user studies that show that the proposed algorithm is able to adapt the taxonomy into a new, compact and understandable structure. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
50. Contexts, refinement and determinism
- Author
-
Reeves, Steve and Streader, David
- Subjects
- *
COMPUTER engineering , *DESIGN techniques , *INSIGHT , *COMPUTER systems , *COMPUTER programming , *COMPUTER science - Abstract
Abstract: In this paper we have been influenced by those who take an "engineering view" of the problem of designing systems, i.e. a view that is motivated by what someone designing a real system will be concerned with, and what questions will arise as they work on their design. Specifically, we have borrowed from the testing work of Hennessy, de Nicola and van Glabbeek, e.g. Hennessy (1988), de Nicola (1992) and van Glabbeek (1990, 2001). Here we concentrate on one fundamental part of the engineering view and where consideration of it leads. The aspects we are concerned with are computational entities in contexts, observed by users. This leads to formalising design steps that are often left informal, and that in turn gives insights into non-determinism and ultimately leads to being able to use refinement in situations where existing techniques fail. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF