972 results
Search Results
2. Panel: 'Why are object-oriented folks producing systems, while deductive folks are producing papers?'
- Author
-
François Bancilhon, Constantino Thanos, and Dennis Tsichritzis
- Subjects
Object-oriented programming, Computer science, Programming language - Published
- 2005
3. Introduction to the Paper by H. U. Lemke et al., 'Applications of Picture Processing, Image Analysis and Computer Graphics Techniques to Cranial CT Scans'
- Author
-
Heinz U. Lemke
- Subjects
Computer graphics, Radiological and Ultrasound Technology, Computer science, Cranial CT, Computer graphics (images), Picture processing, Radiology, Nuclear Medicine and Imaging, Data mining, Article, Computer Science Applications, Image (mathematics) - Published
- 2003
4. Network immunization and virus propagation in email networks: experimental evaluation and analysis
- Author
-
Jiming Liu, Chao Gao, and Ning Zhong
- Subjects
Operations research, Computer science, Network topology, Immunization strategies, Virus, Electronic mail, Computer virus, Enron, Betweenness centrality, Human dynamics, Artificial Intelligence, Virus propagation, Regular Paper, Email networks, Immunization, Human-Computer Interaction, Key factors, Hardware and Architecture, Software, Information Systems, Computer network - Abstract
Network immunization strategies have emerged as possible solutions to the challenges of virus propagation. In this paper, an existing interactive model is introduced and then improved in order to better characterize the way a virus spreads in email networks with different topologies. The model is used to demonstrate the effects of a number of key factors, notably nodes' degree and betweenness. Experiments are then performed to examine how the structure of a network and human dynamics affect virus propagation. The experimental results reveal that a virus spreads in two distinct phases and show that the most efficient immunization strategy is the node-betweenness strategy. Moreover, from the perspective of human dynamics, the results also explain why old viruses can still survive in today's networks.
- Published
- 2010
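The node-betweenness immunization strategy that the abstract above finds most efficient can be illustrated with a toy sketch: compute betweenness with Brandes' algorithm, immunize the top node, and watch a simple reachability-style infection stay confined. The graph, the spread model, and all names below are invented for illustration; this is not the authors' code.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted node betweenness."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        q = deque([s])
        while q:                      # BFS from s, counting shortest paths
            v = q.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = dict.fromkeys(graph, 0.0)
        while stack:                  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def spread(graph, seed, immunized):
    """Set of nodes a virus reaches from `seed` when `immunized` nodes block it."""
    if seed in immunized:
        return set()
    infected, q = {seed}, deque([seed])
    while q:
        v = q.popleft()
        for w in graph[v]:
            if w not in infected and w not in immunized:
                infected.add(w); q.append(w)
    return infected

# toy "email network": two communities bridged by node 'c'
g = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd', 'e'],
     'd': ['c', 'e'], 'e': ['c', 'd']}
bc = betweenness(g)
top = max(bc, key=bc.get)                  # 'c': the bridge has highest betweenness
print(top, sorted(spread(g, 'a', {top})))  # immunizing it confines the outbreak
```

Immunizing the bridge node keeps the virus inside one community, which is the intuition behind the strategy's efficiency on email topologies.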
5. Caching Trust Rather Than Content
- Author
-
Mahadev Satyanarayanan
- Subjects
CPU cache, Computer science, Wireless network, Compromise, Distributed computing, Wearable computer, Storage management, Computer security, Application domain, Microcomputer, Server, General Earth and Planetary Sciences, Position paper, Data content, Cache, Latency (engineering), Mobile device, General Environmental Science - Abstract
Caching, one of the oldest ideas in computer science, often improves performance and sometimes improves availability [1, 3]. Previous uses of caching have focused on data content. It is the presence of a local copy of data that reduces access latency and masks server or network failures. This position paper puts forth the idea that it can sometimes be useful to merely cache knowledge sufficient to recognize valid data. In other words, we do not have a local copy of a data item, but possess a substitute that allows us to verify the content of that item if it is offered to us by an untrusted source. We refer to this concept as caching trust. Mobile computing is a champion application domain for this concept. Wearable and handheld computers are constantly under pressure to be smaller and lighter. However, the potential volume of data that is accessible to such devices over a wireless network keeps growing. Something has to give. In this case, it is the assumption that all data of potential interest can be hoarded on the mobile client [1, 2, 6]. In other words, such clients have to be prepared to cope with cache misses during normal use. If they are able to cache trust, then any untrusted site in the fixed infrastructure can be used to stage data for servicing cache misses: one does not have to go back to a distant server, nor does one have to compromise security. The following scenario explores this in more detail.
- Published
- 2006
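In its simplest form, the idea in the abstract above, verifying data staged on an untrusted host rather than caching the data itself, reduces to caching a cryptographic digest. A minimal sketch (the file contents and names are invented):

```python
import hashlib

# "Caching trust": instead of hoarding the file, the mobile client caches
# only its SHA-256 digest, obtained earlier over a trusted channel.
trusted_digest = hashlib.sha256(b"report-v1 contents").hexdigest()

def verify(untrusted_bytes, cached_digest):
    """Accept data staged on an untrusted site only if it matches the cached digest."""
    return hashlib.sha256(untrusted_bytes).hexdigest() == cached_digest

print(verify(b"report-v1 contents", trusted_digest))  # True: untrusted copy is valid
print(verify(b"tampered contents", trusted_digest))   # False: reject altered data
```

A cache miss can then be serviced from any nearby untrusted machine, since validity is checked locally against the cached digest.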
6. The KCM system: Speeding-up logic programming through hardware support
- Author
-
Jacques Noyé
- Subjects
Computer science, Programming language, Short paper, Prolog, Logic synthesis, Software, Computer architecture, Logic control, Logic programming, Computer hardware, Logic optimization, Register-transfer level - Abstract
The aim of the KCM (Knowledge Crunching Machine) project was to study how to speed-up Prolog, and more generally Logic Programming, through hardware support at the processor level. An experimental approach was taken, which resulted in the design and implementation of a real system, hardware and software. This short paper outlines the key features of the system as well as the main conclusions which can be drawn from the project.
- Published
- 2005
7. Improving SIEM for critical SCADA water infrastructures using machine learning
- Author
-
David Brosset, Hanan Hindy, Amar Seeam, Ethan Bayne, Xavier Bellekens, Katsikas, Sokratis K., Cuppens, Frédéric, Cuppens, Nora, Lambrinoudakis, Costas, Antón, Annie, Gritzalis, Stefanos, Mylopoulos, John, Kalloniatis, Christos, Institut de Recherche de l'Ecole Navale (IRENAV), Université de Bordeaux (UB)-Institut Polytechnique de Bordeaux-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Arts et Métiers Sciences et Technologies, HESAM Université (HESAM)-HESAM Université (HESAM), University of Mauritius, and Middlesex University
- Subjects
Spoofing attack, Computer science, Process (engineering), Event (computing), Anomaly detection, Machine learning, SCADA, Artificial intelligence, Implementation - Abstract
Networked Control Systems (NCS) have been used in many industrial processes. They aim to reduce the human-factor burden and to efficiently handle the complex processes and communication of those systems. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; therefore, a robust anomaly detection system is a major requirement. However, building an accurate anomaly detection system is not an easy task, due to the difficulty of differentiating between cyber-attacks and internal system failures (e.g. hardware failures). In this paper, we present a model that detects anomalous events in a water system controlled by SCADA. Six machine learning techniques have been used in building and evaluating the model. The model classifies different anomalous events, including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and spoofing). Unlike other detection systems, our proposed work helps accelerate the mitigation process by notifying the operator with additional information when an anomaly occurs. This additional information includes the probability and confidence level of the event(s) occurring. The model is trained and tested using a real-world dataset.
- Published
- 2019
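The abstract above emphasizes reporting a probability and confidence level alongside each detected event. A minimal, hypothetical sketch of that output style, using a nearest-centroid classifier with a softmax-style confidence (the features, labels, and method are invented stand-ins, not the paper's six ML techniques):

```python
import math
from collections import defaultdict

def centroids(samples):
    """Mean feature vector per labelled event class."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for label, x in samples:
        sums[label] = x if sums[label] is None else [a + b for a, b in zip(sums[label], x)]
        counts[label] += 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(cents, x):
    """Return (label, per-class confidence) via softmax over negative distances."""
    d = {lbl: math.dist(c, x) for lbl, c in cents.items()}
    z = sum(math.exp(-v) for v in d.values())
    probs = {lbl: math.exp(-v) / z for lbl, v in d.items()}
    return max(probs, key=probs.get), probs

# toy (flow-rate, pressure) readings labelled by event type
train = [("normal", [1.0, 1.0]), ("normal", [1.1, 0.9]),
         ("sensor-failure", [0.0, 1.0]), ("dos-attack", [5.0, 5.0])]
label, probs = classify(centroids(train), [4.8, 5.1])
print(label, round(probs[label], 2))  # "dos-attack" with high confidence
```

The operator-facing message would then carry both the predicted event class and its confidence, in the spirit the abstract describes.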
8. Comparative study of AR versus video tutorials for minor maintenance operations
- Author
-
Juan M. Orduña, M. Carmen Juan, Pedro Morillo, Marcos Fernández, and Inmaculada García-García
- Subjects
Augmented reality, Multimedia, Computer Networks and Communications, Computer science, Equipment maintenance, Usability, Multimedia-based learning, Hardware and Architecture, Real user study, Media Technology, Comparative study, LENGUAJES Y SISTEMAS INFORMATICOS, Software - Abstract
[EN] Augmented Reality (AR) has become a mainstream technology in the development of solutions for repair and maintenance operations. Although most AR solutions are still limited to specific contexts in industry, some consumer electronics companies have started to offer pre-packaged AR solutions as an alternative to video-based tutorials (VT) for minor maintenance operations. In this paper, we present a comparative study of the acquired knowledge and user perception achieved with AR and VT solutions in some maintenance tasks of IT equipment. The results indicate that both systems help users to acquire knowledge in various aspects of equipment maintenance. Although no statistically significant differences were found between the AR and VT solutions, users scored higher on the AR version in all cases. Moreover, the users explicitly preferred the AR version when evaluating three different usability and satisfaction criteria. For the AR version, a strong and significant correlation was found between satisfaction and the achieved knowledge. Since the AR solution achieved similar learning results with higher usability scores than the video-based tutorials, these results suggest that AR solutions are the most effective approach for replacing the typical paper-based instructions in consumer electronics., This work has been supported by Spanish MINECO and EU ERDF programs under grant RTI2018-098156-B-C55.
- Published
- 2020
9. Utilizing geospatial information to implement SDGs and monitor their progress
- Author
-
Ali Kharrazi, Ram Avtar, Tonni Agustiono Kurniawan, Ridhika Aggarwal, and Pankaj Kumar
- Subjects
Earth observation, Geospatial analysis, Geographic information system, United Nations, Computer science, Geospatial data and techniques, Indicators, Big data, Sustainable Development Goals, Continuous planning, Management, Monitoring, Policy and Law, Citizen science, Adaptation, General Environmental Science, Sustainable development, Member states, General Medicine, Remote sensing, Pollution, Data science, Goals, Environmental Monitoring - Abstract
It has been more than four years since the 2030 Agenda for Sustainable Development was adopted by the United Nations and its member states in September 2015. Several efforts are being made by member countries to contribute towards achieving the 17 Sustainable Development Goals (SDGs). The progress made over time in achieving the SDGs can be monitored by measuring a set of quantifiable indicators for each of the goals. Geospatial information plays a significant role in measuring some of the targets, and hence is relevant to the implementation of the SDGs and the monitoring of their progress. The synoptic view and repetitive coverage of the Earth's features and phenomena provided by different satellites is a powerful and propitious technological advancement. The paper reviews the robustness of Earth observation data for continuous planning, monitoring, and evaluation of the SDGs. The scientific world has made commendable progress by providing geospatial data at various spatial, spectral, radiometric, and temporal resolutions, enabling the use of the data for various applications. This paper also reviews the application of big data from Earth observation and citizen science to implement the SDGs with a multi-disciplinary approach. It covers literature from various academic landscapes utilizing geospatial data for mapping, monitoring, and evaluating the Earth's features and phenomena, as this establishes the basis of its utilization for the achievement of the SDGs.
- Published
- 2020
10. A novel stateless authentication protocol
- Author
-
Chris J. Mitchell, Christianson, Bruce, Malcolm, James A, Matyas, Vashek, and Roe, Michael
- Subjects
Stateless protocol, Computer science, Faculty of Science\Mathematics, Research Groups and Centres\Information Security\Information Security Group, Denial-of-service attack, Context (language use), Mutual authentication, Cryptographic protocol, Computer security, Authentication protocol, State (computer science), Protocol (object-oriented programming) - Abstract
The value of authentication protocols which minimise (or even eliminate) the need for stored state in addressing DoS attacks is well-established — the seminal paper of Aura and Nikander [1] is of particular importance in this context. However, although there is now a substantial literature on this topic, it would seem that many aspects of stateless security protocols remain to be explored. In this paper we consider the design of a novel stateless authentication protocol which has certain implementation advantages. Specifically, neither party needs to maintain significant stored state. The protocol is developed as a series of refinements, at each step eliminating certain undesirable properties arising in previous steps.
- Published
- 2013
- Full Text
- View/download PDF
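A stateless challenge-response round of the kind discussed above can be sketched with an HMAC: the server keeps only a long-term key and recomputes, rather than stores, per-client state. This is a generic illustration in the spirit of Aura-Nikander-style stateless cookies, not the protocol proposed in the paper; all names and field layouts are invented.

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)  # server's long-term MAC key: its only persistent "state"

def make_challenge(client_id: bytes) -> bytes:
    """Server emits a self-authenticating challenge; nothing is stored per client."""
    nonce = os.urandom(16) + int(time.time()).to_bytes(8, "big")
    tag = hmac.new(SERVER_KEY, nonce + client_id, hashlib.sha256).digest()
    return nonce + tag

def check_response(client_id: bytes, challenge: bytes) -> bool:
    """On receiving the challenge back, the server recomputes the MAC
    instead of looking the nonce up in a per-session table."""
    nonce, tag = challenge[:24], challenge[24:]
    expected = hmac.new(SERVER_KEY, nonce + client_id, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

ch = make_challenge(b"alice")
print(check_response(b"alice", ch), check_response(b"bob", ch))  # True False
```

Because validity is recomputed from the key, a flood of bogus challenges consumes no server memory, which is the DoS-resistance argument for statelessness.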
11. A neural network filtering approach for similarity-based remaining useful life estimation
- Author
-
Kai Goebel, Jeffrey Alun Jones, Oguz Bektas, Indranil Roychoudhury, and Shankar Sankararaman
- Subjects
Computer science, Machine learning, Similarity-based RUL calculation, Data-driven prognostics, Industrial and Manufacturing Engineering, Similarity (psychology), C-MAPSS datasets, Estimation, Artificial neural network, Mechanical Engineering, Other Civil Engineering, Computer Science Applications, Control and Systems Engineering, Prognostics, Artificial intelligence, Raw data, Software, Neural networks - Abstract
The role of prognostics and health management is ever more prevalent with advanced estimation methods. However, data processing and remaining useful life prediction algorithms are often very different. Some difficulties in accurate prediction can be tackled by redefining raw data parameters into more meaningful and comprehensive health-level indicators that then provide performance information. Proper data processing is of significant importance for remaining useful life predictions, for example, to deal with data limitations and/or multi-regime operating conditions. The framework proposed in this paper considers a similarity-based prognostic algorithm that is fed by data normalisation and filtering methods applied to the operational trajectories of complex systems. This is combined with a data-driven prognostic technique based on feed-forward neural networks with multi-regime normalisation. In particular, the paper takes a close look at how pre-processing methods affect algorithm performance. The work presented herein shows a conceptual prognostic framework that overcomes challenges presented by short-term test datasets and that increases prediction performance with regard to prognostic metrics.
- Published
- 2019
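Similarity-based RUL estimation as described above can be reduced to a toy sketch: slide a prefix of the test unit's (already normalised and filtered) health index over a library of run-to-failure trajectories and read off the remaining life at the best match. The data and the matching criterion below are invented; the paper's framework adds neural-network filtering and multi-regime normalisation.

```python
def rul_estimate(test_prefix, library):
    """Slide the test prefix over each run-to-failure trajectory and
    take the remaining life at the best-matching (lowest MSE) position."""
    best = None
    n = len(test_prefix)
    for traj in library:
        for start in range(len(traj) - n + 1):
            window = traj[start:start + n]
            mse = sum((a - b) ** 2 for a, b in zip(test_prefix, window)) / n
            rul = len(traj) - (start + n)  # cycles left after the matched window
            if best is None or mse < best[0]:
                best = (mse, rul)
    return best[1]

# toy degradation curves (health index per cycle), already normalised
library = [[1.0, 0.9, 0.8, 0.6, 0.4, 0.2, 0.0],
           [1.0, 0.95, 0.85, 0.7, 0.5, 0.25, 0.05, 0.0]]
print(rul_estimate([0.9, 0.8, 0.6], library))  # matches trajectory 1 exactly: 3 cycles left
```

Real systems would aggregate over several good matches rather than take only the single best one.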
12. On the effects of the fix geometric constraint in 2D profiles on the reusability of parametric 3D CAD models
- Author
-
Carmen González-Lluch, Pedro Company, Manuel Contero, David Pérez-López, and Jorge D. Camba
- Subjects
EXPRESION GRAFICA EN LA INGENIERIA, Computer science, Fix constraint, CAD, Automatic feedback tool, Education, 2D profile, Computer Aided Design, Quality (business), Parametric statistics, Reusability, DIBUJO, General Engineering, Model quality, Constraint (information theory), Feature (computer vision), Metric (mathematics), Data mining - Abstract
[EN] In order to be reusable, history-based feature-based parametric CAD models must reliably allow for modifications while maintaining their original design intent. In this paper, we demonstrate that relations that fix the location of geometric entities relative to the reference system produce inflexible profiles that reduce model reusability. We present the results of an experiment where novice students and expert CAD users performed a series of modifications in different versions of the same 2D profile, each defined with an increasingly higher number of fix geometric constraints. Results show that the amount of fix constraints in a 2D profile correlates with the time required to complete reusability tasks, i.e., the higher the number of fix constraints in a 2D profile, the less flexible and adaptable the profile becomes to changes. In addition, a pilot software tool to automatically track this type of constraints was developed and tested. Results suggest that the detection of fix constraint overuse may result in a new metric to assess poor quality models with low reusability. The tool provides immediate feedback for preventing high semantic level quality errors, and assistance to CAD users. Finally, suggestions are introduced on how to convert fix constraints in 2D profiles into a negative metric of 3D model quality., The authors would like to thank Raquel Plumed for her support in the statistical analysis. This work has been partially funded by Grant UJI-A02017-15 (Universitat Jaume I) and DPI201784526-R (MINECO/AEI/FEDER, UE), project CAL-MBE. The authors also wish to thank the editor and reviewers for their valuable comments and suggestions that helped us improve the quality of the paper.
- Published
- 2019
- Full Text
- View/download PDF
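The pilot tool's idea of tracking fix-constraint overuse as a quality signal can be sketched as a simple ratio over a profile's constraint list. The constraint representation and the 30% warning threshold below are hypothetical, not taken from the paper.

```python
# hypothetical 2D profile: each constraint is (type, constrained entities)
profile = [("fix", "line1"), ("horizontal", "line1"), ("fix", "arc1"),
           ("coincident", ("line1", "arc1")), ("fix", "line2")]

def fix_overuse_ratio(constraints):
    """Fraction of constraints pinning geometry to the reference system;
    a high value flags an inflexible, hard-to-reuse profile."""
    fixes = sum(1 for kind, _ in constraints if kind == "fix")
    return fixes / len(constraints)

ratio = fix_overuse_ratio(profile)
print(f"{ratio:.0%}", "warning: likely overconstrained" if ratio > 0.3 else "ok")
# 60% warning: likely overconstrained
```

Immediate feedback of this sort is what the paper proposes for steering novice CAD users away from fix constraints.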
13. Sparse analytic hierarchy process: an experimental analysis
- Author
-
Roberto Setola, Paolo Dell'Olmo, Gabriele Oliva, and Antonio Scala
- Subjects
Process (engineering), Computer science, Analytic hierarchy process, Computational intelligence, Machine learning, Theoretical Computer Science, Task (project management), Body of knowledge, Sparse information, Decision-making, Set (psychology), Aggregate (data warehouse), Rank (computer programming), Geometry and Topology, Artificial intelligence, Software - Abstract
The aim of the sparse analytic hierarchy process (SAHP) problem is to rank a set of alternatives based on their utility/importance; this task is accomplished by asking human decision-makers to compare selected pairs of alternatives and to specify relative preference information in the form of ratios of utilities. However, such information is often affected by subjective biases or inconsistencies. Moreover, there is no general consensus on the best approach to accomplish this task, and several techniques have been proposed in the literature. Finally, when more than one decision-maker is involved in the process, adequate methodologies are needed to aggregate the available information. In this view, the contribution of this paper to the SAHP body of knowledge is twofold. On one side, it develops a novel methodology to aggregate sparse data given by multiple sources of information. On the other side, the paper undertakes an experimental validation of the most popular techniques for solving the SAHP problem, discussing the strengths and shortcomings of the different methodologies with respect to a real case study.
- Published
- 2019
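One simple way to turn sparse pairwise ratios into a ranking, when the comparison graph is connected, is to fix one alternative's utility and propagate the ratios outward. This is a minimal sketch of the problem setting, not one of the aggregation techniques the paper evaluates; the comparisons are invented.

```python
from collections import deque

def utilities_from_sparse_ratios(ratios, alternatives):
    """Propagate pairwise utility ratios over the (connected) comparison
    graph: fix one utility to 1 and derive the rest by BFS multiplication."""
    graph = {a: {} for a in alternatives}
    for (i, j), r in ratios.items():   # r = u_i / u_j
        graph[i][j] = r
        graph[j][i] = 1.0 / r
    root = alternatives[0]
    u = {root: 1.0}
    q = deque([root])
    while q:
        i = q.popleft()
        for j, r in graph[i].items():
            if j not in u:
                u[j] = u[i] / r        # u_j = u_i / (u_i / u_j)
                q.append(j)
    total = sum(u.values())
    return {a: v / total for a, v in u.items()}  # normalised weights

# sparse elicitation: only 2 of the 3 possible pairs were compared
w = utilities_from_sparse_ratios({("A", "B"): 2.0, ("B", "C"): 3.0}, ["A", "B", "C"])
print(sorted(w, key=w.get, reverse=True))  # ranking: A > B > C
```

With inconsistent or multi-expert ratios the propagated values conflict, which is exactly why the aggregation methodologies compared in the paper are needed.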
14. Proxy-based near real-time TV content transmission in mobility over 4G with MPEG-DASH transcoding on the cloud
- Author
-
Salvador Ferrairó, Román Belda, Ismael de Fez, Juan Carlos Guerri, and Pau Arce
- Subjects
Computer Networks and Communications, Computer science, Real-time computing, ITU-T P.1203, Cloud computing, Buffering, Transcoding, Quality of experience, TV, Dynamic Adaptive Streaming over HTTP (DASH), Digital Video Broadcasting, TEORIA DE LA SEÑAL Y COMUNICACIONES, Media Technology, 4G, Proxy, Video streaming, INGENIERIA TELEMATICA, Transmission (telecommunications), Handover, Hardware and Architecture, Software - Abstract
[EN] This paper presents and evaluates a system that provides TV and radio services in mobility using 4G communications. The system has mainly two blocks, one on the cloud and another on the mobile vehicle. On the cloud, a DVB (Digital Video Broadcasting) receiver obtains the TV/radio signal and prepares the contents to be sent through 4G. Specifically, contents are transcoded and packetized using the DASH (Dynamic Adaptive Streaming over HTTP) standard. Vehicles in mobility use their 4G connectivity to receive the flows transmitted by the cloud. The key element of the system is an on-board proxy that manages the received flows and offers them to the final users in the vehicle. The proxy contains a buffer that helps reduce the number of interruptions caused by handover effects and lack of coverage. The paper presents a comparison between a live transmission using 4G that connects the clients directly with the cloud server and a near real-time transmission based on an on-board proxy. Results prove that the use of the proxy reduces the number of interruptions considerably and thus improves the Quality of Experience of users, at the expense of slightly increasing the delay., This work is supported by the Centro para el Desarrollo Tecnologico Industrial (CDTI) from the Government of Spain under the project "Plataforma avanzada de conectividad en movilidad" (CDTI IDI-20150126) and the project "Desarrollo de nueva plataforma de entretenimiento multimedia para entornos nauticos" (CDTI TIC-20170102).
- Published
- 2019
- Full Text
- View/download PDF
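The effect of the on-board proxy's buffer on playback interruptions can be caricatured with a toy simulation. The segment rates, buffer sizes, and coverage timeline below are invented for illustration; the real system streams DASH segments over 4G.

```python
def interruptions(coverage, buffer_capacity):
    """Count playback stalls over a per-second 4G coverage timeline.
    The link delivers 2 segments/s when covered, playback consumes 1/s;
    buffer_capacity=1 models a client holding only the current segment,
    larger values model the on-board proxy's buffer."""
    buffered, stalls, stalled = 0, 0, False
    for covered in coverage:
        if covered:
            buffered = min(buffer_capacity, buffered + 2)
        if buffered > 0:
            buffered -= 1        # play one segment
            stalled = False
        elif not stalled:
            stalls += 1          # a new interruption begins
            stalled = True
    return stalls

timeline = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]  # 1 = coverage, 0 = handover/coverage hole
print(interruptions(timeline, 1), interruptions(timeline, 4))  # 2 0
```

The buffered proxy rides out the coverage holes that stall the direct client, at the cost of the extra delay the prefetched segments introduce, mirroring the trade-off reported in the abstract.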
15. Limits to anonymity when using credentials
- Author
-
Chris J. Mitchell and Andreas Pashalidis
- Subjects
Focus (computing), Computer science, Faculty of Science\Mathematics, Research Groups and Centres\Information Security\Information Security Group, Internet privacy, Computer security, Credential, Timing attack, Heuristics, Anonymity - Abstract
This paper identifies certain privacy threats that apply to anonymous credential systems. The focus is on timing attacks that apply even if the system is cryptographically secure. The paper provides some simple heuristics that aim to mitigate the exposure to the threats and identifies directions for further research.
- Published
- 2006
16. Exploring Functional Acceleration of OpenCL on FPGAs and GPUs Through Platform-Independent Optimizations
- Author
-
Umar Ibrahim Minhas, Georgios Karakonstantis, and Roger Woods
- Subjects
Design space exploration, Computer science, Parallel computing, Theoretical Computer Science, Software portability, Code (cryptography), SDG 7 - Affordable and Clean Energy, Field-programmable gate array, Throughput (business), FPGA, Energy efficiency, OpenCL, GPU accelerators, Multiplication, Compiler, Computer Science (all) - Abstract
OpenCL has been proposed as a means of accelerating functional computation using FPGA and GPU accelerators. Although it provides ease of programmability and code portability, questions remain about performance portability and the underlying vendor compilers' ability to generate efficient implementations without user-defined, platform-specific optimizations. In this work, we systematically evaluate this by formalizing a design space exploration strategy using only platform-independent micro-architectural and application-specific optimizations. The optimizations are then applied across Altera FPGA, NVIDIA GPU and ARM Mali GPU platforms for three computing examples, namely matrix-matrix multiplication, binomial-tree option pricing and 3-dimensional finite difference time domain. Our strategy enables a fair comparison across platforms in terms of throughput and energy efficiency by using the same design effort. Our results indicate that the FPGA provides better performance portability in terms of the achieved percentage of the device's peak performance (68%) compared to the NVIDIA GPU (20%) and also achieves better energy efficiency (up to 1.4×) for some of the considered cases, without requiring in-depth hardware design expertise.
- Published
- 2018
17. An LSH-Based Model-Words-Driven Product Duplicate Detection Method
- Author
-
Max van Keulen, Diederik Mathol, Thomas van Noort, Aron Hartveld, Kim Schouten, Thomas Plaatsman, Flavius Frasincar, Econometrics, and Business Intelligence
- Subjects
Similarity (geometry), Computer science, Minor (linear algebra), Process (computing), Binary number, Locality-sensitive hashing, Reduction (complexity), Product (mathematics), Data mining, Focus (optics) - Abstract
The online shopping market is growing rapidly in the 21st century, leading to a huge number of duplicate products being sold online. An important component for aggregating online products is duplicate detection, although this is a time-consuming process. In this paper, we focus on reducing the number of possible duplicates that can be used as an input for the Multi-component Similarity Method (MSM), a state-of-the-art duplicate detection solution. To find the candidate pairs, Locality Sensitive Hashing (LSH) is employed. A previously proposed LSH-based algorithm makes use of binary vectors based on the model words in the product titles. This paper proposes several extensions to this, by performing advanced data cleaning and additionally using information from the key-value pairs. Compared to MSM, the MSMP+ method proposed in this paper leads to a minor reduction of 6% in the F1-measure whilst reducing the number of needed computations by 95%.
- Published
- 2018
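The LSH step described above, hashing signatures in bands so that similar products collide, can be sketched with MinHash over model words. The tokenisation, signature length, and product listings below are invented; listings with identical model-word sets always collide, while near-duplicates collide only with high probability.

```python
import hashlib
from collections import defaultdict
from itertools import combinations

def minhash(tokens, num_hashes=16):
    """One signature entry per salted hash: the min over the token set."""
    return tuple(min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
                     for t in tokens) for i in range(num_hashes))

def candidate_pairs(products, bands=8):
    """LSH banding: products sharing any band of their signature become candidates."""
    sigs = {pid: minhash(words) for pid, words in products.items()}
    rows = len(next(iter(sigs.values()))) // bands
    buckets = defaultdict(set)
    for pid, sig in sigs.items():
        for b in range(bands):
            buckets[(b, sig[b * rows:(b + 1) * rows])].add(pid)
    return {pair for group in buckets.values() if len(group) > 1
            for pair in combinations(sorted(group), 2)}

# "model words" extracted from product titles (mixed alphanumeric tokens)
products = {"shop1/tv1": {"UN46ES6100", "46inch", "LED"},
            "shop2/tv1": {"UN46ES6100", "46inch", "LED"},
            "shop1/cam": {"EOS70D", "20.2MP"}}
pairs = candidate_pairs(products)
print(sorted(pairs))  # only the two UN46ES6100 listings become a candidate pair
```

Only the candidate pairs would then be handed to the expensive MSM similarity computation, which is where the 95% reduction in computations comes from.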
18. A Multilevel Approach to Sentiment Analysis of Figurative Language in Twitter
- Author
-
Paolo Rosso, Sivaji Bandyopadhyay, Soumadeep Mazumdar, Dipankar Das, and Braja Gopal Patra
- Subjects
Irony, Metaphor, Computer science, Figurative text, Literal and figurative language, Sentiment analysis, Sentiment abruptness measure, Sarcasm, Cosine similarity, Variation (linguistics), Artificial intelligence, Natural language processing, Natural language, LENGUAJES Y SISTEMAS INFORMATICOS, Meaning (linguistics) - Abstract
[EN] A commendable amount of work has been attempted in the field of sentiment analysis or opinion mining from natural language texts and Twitter texts. One of the main goals in such tasks is to assign polarities (positive or negative) to a piece of text. At the same time, one of the important as well as difficult issues is how to assign the degree of positivity or negativity to certain texts. The answer becomes more complex when we perform a similar task on figurative language texts collected from Twitter. Figurative language devices such as irony and sarcasm contain an intentional secondary or extended meaning hidden within the expressions. In this paper we present a novel approach to identify the degree of sentiment (fine-grained on an 11-point scale) for figurative language texts. We used several semantic features such as sentiment and intensifiers, and we introduced sentiment abruptness, which measures the variation of sentiment from positive to negative or vice versa. We trained our systems at multiple levels to achieve a maximum cosine similarity of 0.823 and a minimum mean square error of 2.170., The work reported in this paper is supported by a grant from the project "CLIA System Phase II" funded by Department of Electronics and Information Technology (DeitY), Ministry of Communications and Information Technology (MCIT), Government of India. The work of the fourth author is also supported by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAPATER (PrometeoII/2014/030).
- Published
- 2018
- Full Text
- View/download PDF
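The sentiment abruptness feature introduced above, measuring swings from positive to negative sentiment, might be formalised as the summed jump size at sign changes over a sequence of token-level scores. This is a guessed formalisation with invented scores, not the authors' exact definition.

```python
def sentiment_abruptness(scores):
    """Summed jump size at sign changes; figurative tweets (irony, sarcasm)
    tend to swing abruptly between positive and negative sentiment."""
    return sum(abs(b - a) for a, b in zip(scores, scores[1:])
               if (a > 0) != (b > 0))

literal = [0.5, 0.6, 0.4, 0.5]   # steadily positive token sentiment
ironic = [0.8, 0.7, -0.9, 0.6]   # "Great, another delay..."-style swing
print(sentiment_abruptness(literal), round(sentiment_abruptness(ironic), 2))  # 0 3.1
```

A high abruptness value would then push the fine-grained prediction toward the negative end of the 11-point scale for ironic tweets.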
19. Cross-language transfer of semantic annotation via targeted crowdsourcing: task design and evaluation
- Author
-
Marcos Calvo, Ioannis Klasinas, Arindam Ghosh, Evgeny A. Stepanov, Ali Orkan Bayer, Emilio Sanchis, Shammur Absar Chowdhury, and Giuseppe Riccardi
- Subjects
Linguistics and Language, Computer science, Library and Information Sciences, Ontology (information science), Temporal annotation, Crowdsourcing, Language and Linguistics, Education, Annotation, Evaluation, Parsing, Information retrieval, Semantic annotation, Ontology, Cross-language transfer, Artificial intelligence, Computational linguistics, Natural language processing, LENGUAJES Y SISTEMAS INFORMATICOS, Spoken language - Abstract
[EN] Modern data-driven spoken language systems (SLS) require manual semantic annotation for training spoken language understanding parsers. Multilingual porting of SLS demands significant manual effort and language resources, as this manual annotation has to be replicated. Crowdsourcing is an accessible and cost-effective alternative to traditional methods of collecting and annotating data. The application of crowdsourcing to simple tasks has been well investigated. However, complex tasks, like cross-language semantic annotation transfer, may generate low judgment agreement and/or poor performance. The most serious issue in cross-language porting is the absence of reference annotations in the target language; thus, crowd quality control and the evaluation of the collected annotations are difficult. In this paper we investigate targeted crowdsourcing for semantic annotation transfer that delegates to crowds a complex task such as segmenting and labeling concepts taken from a domain ontology, with evaluation using source-language annotation. To test the applicability and effectiveness of the crowdsourced annotation transfer, we have considered the case of close and distant language pairs: Italian-Spanish and Italian-Greek. The corpora annotated via crowdsourcing are evaluated against source and target language expert annotations. We demonstrate that the two evaluation references (source and target) highly correlate with each other, thus drastically reducing the need for target language reference annotations., This research is partially funded by the EU FP7 PortDial Project No. 296170, FP7 SpeDial Project No. 611396, and Spanish contract TIN2014-54288-C4-3-R. The work presented in this paper was carried out while the author was affiliated with Universitat Politecnica de Valencia.
- Published
- 2018
20. Structural Feature Selection for Event Logs
- Author
-
Teemu Lehto, Markku Hinkka, Keijo Heljanko, Alexander Jung, Teniente, E, Weidlich, M, and Helsinki Institute for Information Technology
- Subjects
0301 basic medicine ,FOS: Computer and information sciences ,Business process ,Computer science ,Process mining ,Context (language use) ,Feature selection ,Machine Learning (stat.ML) ,02 engineering and technology ,Machine learning ,computer.software_genre ,Machine Learning (cs.LG) ,Computer Science - Software Engineering ,03 medical and health sciences ,Computer Science - Databases ,Statistics - Machine Learning ,0202 electrical engineering, electronic engineering, information engineering ,Automatic business process discovery ,Cluster analysis ,business.industry ,Event (computing) ,Process mining ,Prediction ,Databases (cs.DB) ,Classification ,113 Computer and information sciences ,Software Engineering (cs.SE) ,Computer Science - Learning ,Statistical classification ,030104 developmental biology ,Clustering ,Feature selection ,020201 artificial intelligence & image processing ,Artificial intelligence ,Root cause analysis ,business ,computer
We consider the problem of classifying business process instances based on structural features derived from event logs. The main motivation is to provide machine learning based techniques with quick response times for interactive computer-assisted root cause analysis. In particular, we create structural features from process mining, such as activity and transition occurrence counts and the ordering of activities, to be evaluated as potential features for classification. We show that adding such structural features increases the amount of available information, thus potentially increasing classification accuracy. However, there is an inherent trade-off, as using too many features leads to excessively long run-times for machine learning classification models. One way to improve a machine learning algorithm's run-time is to select only a small number of features with a feature selection algorithm. However, the run-time required by the feature selection algorithm must also be taken into account, and classification accuracy should not suffer too much from the feature selection. The main contributions of this paper are as follows: First, we propose and compare six different feature selection algorithms by means of an experimental setup comparing their classification accuracy and achievable response times. Second, we discuss the potential use of feature selection results for computer-assisted root cause analysis, as well as the properties of different types of structural features in the context of feature selection., Comment: Extended version of a paper published in the proceedings of the BPM 2017 workshops
- Published
- 2018
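The structural features the abstract describes (activity occurrence counts plus direct transitions) can be sketched as follows. This is an illustrative guess at the encoding; the paper's exact feature construction is not reproduced here.

```python
from collections import Counter

def structural_features(trace):
    """Encode one process instance (a list of activity names) as
    activity occurrence counts plus direct-transition counts.
    Mirrors the feature kinds mentioned in the abstract; the
    paper's actual encoding may differ."""
    feats = Counter(trace)               # activity occurrence counts
    feats.update(zip(trace, trace[1:]))  # direct-follows transition counts
    return dict(feats)
```

For the trace `["A", "B", "A"]` this yields counts `A: 2, B: 1` and transition counts `(A, B): 1, (B, A): 1`; a feature selection algorithm would then prune this (potentially very large) feature space before classification.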
21. Discovering High-Utility Itemsets at Multiple Abstraction Levels
- Author
-
Giuseppe Ricupero, Luca Cagliero, Paolo Garza, and Silvia Chiusano
- Subjects
generalized itemset mining ,Computer science ,knowledge discovery ,Affinity analysis ,02 engineering and technology ,data mining ,Viewpoints ,computer.software_genre ,Synthetic data ,High-utility itemset mining, generalized itemset mining, data mining, knowledge discovery ,Knowledge extraction ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Profiling (information science) ,020201 artificial intelligence & image processing ,Granularity ,Data mining ,High-utility itemset mining ,computer - Abstract
High-Utility Itemset Mining (HUIM) is a relevant data mining task. The goal is to discover recurrent combinations of items characterized by high profit from transactional datasets. HUIM has a wide range of applications, among which are market basket analysis and service profiling. Based on the observation that items can be clustered into domain-specific categories, a parallel research issue is generalized itemset mining. It entails generating correlations among data items at multiple abstraction levels. The extraction of multiple-level patterns affords new insights into the analyzed data from different viewpoints. This paper aims at discovering a novel pattern that combines the expressiveness of generalized and high-utility itemsets. According to a user-defined taxonomy, items are first aggregated into semantically related categories. Then, a new type of pattern, namely the Generalized High-Utility Itemset (GHUI), is extracted. It represents a combination of items at different granularity levels characterized by high profit (utility). While profitable combinations of item categories provide interesting high-level information, GHUIs at lower abstraction levels represent more specific correlations among profitable items. A single-phase algorithm is proposed to efficiently discover utility itemsets at multiple abstraction levels. The experiments, which were performed on both real and synthetic data, demonstrate the effectiveness and usefulness of the proposed approach.
- Published
- 2017
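The utility measure underlying HUIM can be sketched as below. This shows only the standard utility definition the abstract builds on, not the paper's single-phase mining algorithm or the taxonomy-based generalization.

```python
def itemset_utility(itemset, transactions):
    """Standard HUIM utility of an itemset: the sum, over every
    transaction that contains all of its items, of the per-item
    utilities (e.g. quantity * unit profit) in that transaction."""
    total = 0
    for tx in transactions:  # tx maps item -> utility of that item in the transaction
        if all(item in tx for item in itemset):
            total += sum(tx[item] for item in itemset)
    return total
```

For example, with transactions `[{"a": 3, "b": 2}, {"a": 1, "c": 4}]`, the itemset `{"a"}` has utility 4 and `{"a", "b"}` has utility 5; a GHUI would compute the same measure after items are rolled up into taxonomy categories.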
22. Slicing concurrent constraint programs
- Author
-
Carlos Olarte, Catuscia Palamidessi, Maurizio Gabbrielli, Moreno Falaschi, Department of Mathematics and Computer Science / Dipartimento di Scienze Matematiche e Informatiche 'Roberto Magari' (DSMI), Università degli Studi di Siena = University of Siena (UNISI), Foundations of Component-based Ubiquitous Systems (FOCUS), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Dipartimento di Informatica - Scienza e Ingegneria [Bologna] (DISI), Alma Mater Studiorum Università di Bologna [Bologna] (UNIBO)-Alma Mater Studiorum Università di Bologna [Bologna] (UNIBO), Department of Computer Science and Engineering [Bologna] (DISI), Alma Mater Studiorum Università di Bologna [Bologna] (UNIBO), Universidade Federal do Rio Grande do Norte [Natal] (UFRN), Microsoft Research - Inria Joint Centre (MSR - INRIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Microsoft Research Laboratory Cambridge-Microsoft Corporation [Redmond, Wash.], Concurrency, Mobility and Transactions (COMETE), Inria Saclay - Ile de France, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X), Manuel V. Hermenegildo, Pedro Lopez-Garcia, Falaschi, Moreno, Gabbrielli, Maurizio, Olarte, Carlo, Palamidessi, Catuscia, Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-École polytechnique (X)-Centre National de la Recherche Scientifique (CNRS)-Inria Saclay - Ile de France, and Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)
- Subjects
FOS: Computer and information sciences ,Computer Science - Logic in Computer Science ,Computer science ,Concurrency ,media_common.quotation_subject ,02 engineering and technology ,computer.software_genre ,Task (project management) ,Theoretical Computer Science ,0202 electrical engineering, electronic engineering, information engineering ,Constraint programming ,Concurrent Constraint Programming, Program slicing, Debugging ,Program slicing ,media_common ,TRACE (psycholinguistics) ,Computer Science - Programming Languages ,[INFO.INFO-PL]Computer Science [cs]/Programming Languages [cs.PL] ,Programming language ,Computer Science (all) ,[INFO.INFO-LO]Computer Science [cs]/Logic in Computer Science [cs.LO] ,020207 software engineering ,Logic in Computer Science (cs.LO) ,Constraint (information theory) ,Debugging ,020201 artificial intelligence & image processing ,State (computer science) ,Concurrent Constraint Programming ,computer ,Programming Languages (cs.PL) - Abstract
Concurrent Constraint Programming (CCP) is a declarative model for concurrency in which agents interact by telling and asking constraints (pieces of information) in a shared store. Some previous works have developed (approximated) declarative debuggers for CCP languages. However, the task of debugging concurrent programs remains difficult. In this paper we define a dynamic slicer for CCP and show it to be a useful companion tool for existing debugging techniques. Our technique starts by considering a partial computation (a trace) that shows the presence of bugs. Often, the quantity of information in such a trace is overwhelming, and the user gets easily lost, since she cannot focus on the sources of the bugs. Our slicer allows for marking part of the state of the computation and assists the user in eliminating most of the redundant information in order to highlight the errors. We show that this technique can be tailored to timed variants of CCP. We also develop a prototypical implementation, freely available for experimentation., Comment: Pre-proceedings paper presented at the 26th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2016), Edinburgh, Scotland UK, 6-8 September 2016 (arXiv:1608.02534)
- Published
- 2017
23. Evidence analysis method using Bloom filter for MANET forensics
- Author
-
Takashi Mishina, Yoh Shiraishi, and Osamu Takahashi
- Subjects
Network forensics ,Computer science ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Evidence analysis ,Mobile ad hoc network ,Bloom filter ,Computer security ,computer.software_genre ,Forensic science ,ComputingMilieux_COMPUTERSANDEDUCATION ,business ,computer ,Computer network - Abstract
Lecture Notes in Computer Science. Various security weaknesses have been identified in mobile ad-hoc networks (MANET). The paper focuses on MANET forensics, whereby a third party can prove that an attack occurred by collecting and analyzing evidence about it. The paper describes such a MANET forensics analysis method using a Bloom filter.
- Published
- 2010
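The Bloom filter at the heart of this method can be sketched as below. This is a generic implementation of the data structure only; the paper's evidence-collection and analysis scheme around it is not reproduced here.

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set: membership tests may
    yield false positives but never false negatives."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

In a forensic setting of the kind the abstract describes, a node could insert digests of forwarded packets and later answer "was this packet seen?" compactly; the no-false-negatives property is what makes the structure attractive as evidence.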
24. Feature-space transformation improves supervised segmentation across scanners
- Author
-
van Opbroek, Annegreet, Achterberg, Hakim C., de Bruijne, Marleen, Bhatia, Kanwal K., Lombaert, Herve, Radiology & Nuclear Medicine, and Medical Informatics
- Subjects
Computer science ,business.industry ,Feature vector ,Pattern recognition ,computer.software_genre ,Transformation (function) ,Voxel ,Feature (computer vision) ,Segmentation ,Computer vision ,Artificial intelligence ,Transfer of learning ,business ,computer - Abstract
Image-segmentation techniques based on supervised classification generally perform well on the condition that training and test samples have the same feature distribution. However, if training and test images are acquired with different scanners or scanning parameters, their feature distributions can be very different, which can hurt the performance of such techniques. We propose a feature-space-transformation method to overcome these differences in feature distributions. Our method learns a mapping of the feature values of training voxels to values observed in images from the test scanner. This transformation is learned from unlabeled images of subjects scanned on both the training scanner and the test scanner. We evaluated our method on hippocampus segmentation on 27 images of the Harmonized Hippocampal Protocol (HarP), a heterogeneous dataset consisting of 1.5T and 3T MR images. The results showed that our feature space transformation improved the Dice overlap of segmentations obtained with an SVM classifier from 0.36 to 0.85 when only 10 atlases were used and from 0.79 to 0.85 when around 100 atlases were used.
- Published
- 2015
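The mapping described above is learned from unlabeled images of subjects scanned on both scanners. A minimal per-feature linear version can be sketched as follows; note this simple least-squares form is an assumption for illustration, as the paper's actual transformation may be more elaborate.

```python
def fit_linear_map(src, dst):
    """Least-squares fit of dst ~ a * src + b for one feature, from
    paired measurements of the same voxels acquired on both the
    training scanner (src) and the test scanner (dst)."""
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(src, dst))
    var = sum((s - mean_s) ** 2 for s in src)
    a = cov / var
    b = mean_d - a * mean_s
    return a, b
```

Once fitted, the map is applied to the training voxels' feature values so that the classifier is trained in the test scanner's feature distribution.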
25. Security Analysis of CRT-Based Cryptosystems
- Author
-
Katsuyuki Okeya and Tsuyoshi Takagi
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Cryptography ,Computer security ,computer.software_genre ,Timing attack ,Power analysis ,Rabin cryptosystem ,Cryptosystem ,Hybrid cryptosystem ,Chosen-ciphertext attack ,Side channel attack ,Arithmetic ,Safety, Risk, Reliability and Quality ,business ,computer ,Software ,Information Systems - Abstract
A side channel attack (SCA) is a serious attack on the implementation of cryptosystems, which can recover the secret key using side channel information such as timing, power consumption, etc. Recently, Boneh et al. showed that SSL is vulnerable to SCA if the attacker gains access to the local network of the server. Therefore, public-key infrastructure eventually becomes a target of SCA. In this paper, we investigate the security of the RSA cryptosystem using the Chinese remainder theorem (CRT) against SCA. Novak first proposed a simple power analysis (SPA) against the CRT part using the difference between the message modulo p and modulo q. In this paper, we apply Novak's attack to other CRT-based cryptosystems, namely Multi-Prime RSA, Multi-Exponent RSA, the Rabin cryptosystem, and the HIME(R) cryptosystem. A Novak-type attack depends strictly on how the CRT is implemented. We examine the CRT-related operations of these cryptosystems and show that an extended Novak-type attack is effective on them. Moreover, we present a novel attack called the zero-multiplication attack. The attacker tries to guess the secret prime by producing ciphertexts that cause a multiplication with zero during decryption, which is easily detected by power analysis. Our experimental result shows that the timing with the zero multiplication is reduced by about 10% from the standard one. Finally, we propose countermeasures against these attacks. The proposed countermeasures are based on ciphertext blinding, but they require no inversion operation. The overhead of the proposed scheme is only about 1-5% of the whole decryption if the bit length of the modulus is 1,024.
- Published
- 2006
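CRT-based RSA decryption, and the condition the zero-multiplication attack provokes, can be illustrated with toy parameters (illustrative numbers only, not taken from the paper; real moduli are of course far larger).

```python
import math

# Toy RSA-CRT parameters (far too small for real use).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1) // math.gcd(p - 1, q - 1))

def crt_decrypt(c):
    """Decrypt via the Chinese remainder theorem: two half-size
    exponentiations followed by Garner recombination."""
    mp = pow(c, d % (p - 1), p)           # result modulo p
    mq = pow(c, d % (q - 1), q)           # result modulo q
    h = (pow(q, -1, p) * (mp - mq)) % p   # Garner coefficient
    return mq + h * q
```

The attack surface described in the abstract shows up directly here: a chosen ciphertext that is a multiple of p forces the mod-p branch to compute with zero (mp = 0), and that zero multiplication is what the adversary detects in the power trace to learn which prime divides the ciphertext.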
26. Further Security Analysis of XTR
- Author
-
Dong-Guk Han, Tsuyoshi Takagi, and Jongin Lim
- Subjects
Exponentiation ,Computer science ,business.industry ,CEILIDH ,Computer security ,computer.software_genre ,Public-key cryptography ,Power analysis ,Collision attack ,XTR ,Side channel attack ,Arithmetic ,business ,computer ,Key size - Abstract
In Crypto 2000 and 2003, Lenstra-Verheul and Rubin-Silverberg proposed the XTR public key system and the torus-based public key cryptosystem CEILIDH, respectively. The common main idea of XTR and CEILIDH is to shorten the bandwidth of transmitted data. Thanks to the performance comparison of CEILIDH and XTR by Granger et al., XTR is known to be an excellent alternative to either RSA or ECC in applications where computational power and memory capacity are both very limited, such as smart cards. Among the family of XTR algorithms, Improved XTR Single Exponentiation (XTR-ISE) is the most efficient one for computing a single exponentiation. However, few papers investigate side channel attacks on XTR-ISE, even though memory-constrained devices suffer most from vulnerability to side channel attacks. Chung-Hasan and Page-Stam tried to analyze XTR-ISE with known simple power analysis, but unfortunately their approaches were not practically feasible. Recently, Han et al. proposed a new collision attack on it with analysis complexity O(2^40) when the key size is 160 bits. In this paper we analyze XTR-ISE from another point of view, namely differential power analysis (DPA). One straightforward result is that XTR-ISE is immune to the original DPA. However, a non-trivial result is that an enhanced DPA proposed in this paper threatens XTR-ISE. Furthermore, we show several weak points in the structure of XTR-ISE. Our simulation results show that the proposed attack requires about 584 queries to the DPA oracle to recover the whole 160-bit secret value. This result shows that XTR-ISE is vulnerable to the proposed enhanced DPA.
- Published
- 2006
27. Artificial intelligent system for multimedia services in smart home environments
- Author
-
Jose M. Jimenez, Albert Rego, Pedro Luis Gonzalez Ramirez, and Jaime Lloret
- Subjects
Service (systems architecture) ,Computer Networks and Communications ,Computer science ,020209 energy ,02 engineering and technology ,computer.software_genre ,Field (computer science) ,User experience design ,Smart home ,Home automation ,Reinforcement learning ,0202 electrical engineering, electronic engineering, information engineering ,Computer communication networks ,Multimedia ,business.industry ,Deep learning ,020206 networking & telecommunications ,INGENIERIA TELEMATICA ,Classification ,Artificial intelligence ,business ,Internet of Things ,computer ,Software - Abstract
The Internet of Things (IoT) has introduced new applications and environments. The Smart Home provides new ways of communication and service consumption. In addition, Artificial Intelligence (AI) and deep learning have improved different services and tasks by automating them. In this field, reinforcement learning (RL) provides an unsupervised way to learn from the environment. In this paper, a new intelligent system based on RL and deep learning is proposed for Smart Home environments to guarantee good levels of QoE, focused on multimedia services. The system aims to reduce the impact on user experience when the classifying system achieves low accuracy. The experiments performed show that the proposed deep learning model achieves better accuracy than the KNN algorithm and that the RL system increases the QoE of the user by up to 3.8 on a scale of 10. This work has been partially supported by the "Ministerio de Economia y Competitividad" in the "Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento" within the project under Grant TIN2017-84802-C2-1-P. This work has also been partially funded by the Universitat Politècnica de València through the postdoctoral PAID-10-20 program.
- Published
- 2022
28. Towards a framework for the development of adaptable service-based applications
- Author
-
Stephen Lane, Qing Gu, Patricia Lago, Ita Richardson, SFI, Software and Sustainability (S2), Network Institute, Software & Services, and Information Management & Software Engineering
- Subjects
software process ,Social software engineering ,Software Engineering Process Group ,Database ,Computer science ,business.industry ,Software as a service ,Service design ,Software development ,maintenance process ,computer.software_genre ,service-based application life-cycle ,Service virtualization ,Management Information Systems ,SDG 17 - Partnerships for the Goals ,Hardware and Architecture ,Personal software process ,service-based application adaptation ,Data as a service ,business ,Software engineering ,computer ,Software ,Information Systems - Abstract
Service-oriented computing is a promising computing paradigm which facilitates the composition of loosely coupled and adaptable applications. Unfortunately, this new paradigm does not lend itself easily to traditional software engineering methods and principles due to the decentralised nature of software services. The goal of this paper is to identify a set of engineering activities that can be used to develop adaptable service-based applications. Rather than covering the entire service-based application development life-cycle, this paper focuses on adaptation-specific processes and activities and maps them to an existing high-level service-based application development life-cycle. Existing software engineering literature, as well as results from service engineering research, is reviewed for relevant activities. The result is an adaptation framework that can guide software engineers in developing adaptable service-based applications. © 2013 Springer-Verlag London.
- Published
- 2013
29. Building general purpose security services on trusted computing
- Author
-
Chris J. Mitchell, Shaohua Tang, Chunhua Chen, Chen, Liqun, Yung, Moti, and Zhu, Liehuang
- Subjects
Cloud computing security ,Computer science ,Faculty of Science\Mathematics ,Research Groups and Centres\Information Security\ Information Security Group ,Trusted Computing ,Computer security model ,Trusted Network Connect ,Computer security ,computer.software_genre ,Security information and event management ,Security service ,Distributed System Security Architecture ,Security convergence ,computer - Abstract
The Generic Authentication Architecture (GAA) is a standardised extension to the mobile telephony security infrastructures (including the Universal Mobile Telecommunications System (UMTS) authentication infrastructure) that supports the provision of generic security services to network applications. In this paper we propose one possible means for extending the widespread Trusted Computing security infrastructure using a GAA-like framework. This enables an existing security infrastructure to be used as the basis of a general-purpose authenticated key establishment service in a simple and uniform way, and also provides an opportunity for trusted computing aware third parties to provide novel security services. We also discuss trust issues and possible applications of GAA services.
- Published
- 2012
30. A General-Purpose Virtualization Service for HPC on Cloud Computing: An Application to GPUs
- Author
-
Giuliano Laccetti, Giuseppe Coviello, Florin Isaila, Giulio Giunta, Raffaele Montella, Javier Garcia Blas, R. Wyrzykowski, J. Dongarra, K. Karczewski, J. Waniewski, Montella, R., G., Coviello, G., Giunta, Laccetti, Giuliano, F., Isaila, and J., Garcia Blas
- Subjects
Split Driver ,Application virtualization ,Computer science ,Full virtualization ,Hardware virtualization ,GPGPU ,Storage virtualization ,Cloud Computing ,computer.software_genre ,Virtualization ,Service virtualization ,Virtual machine ,HPC ,Operating system ,computer ,Data virtualization - Abstract
This paper describes GVirtuS (Generic Virtualization Service), a framework for developing split drivers for cloud virtualization solutions. The main goal of GVirtuS is to provide tools for developing elastic computing abstractions for high-performance private and public computing clouds. In this paper we focus our attention on GPU virtualization. However, GVirtuS is not limited to accelerator-based architectures: a virtual high-performance parallel file system and an MPI channel are ongoing projects based on our split-driver virtualization technology.
- Published
- 2012
31. Evaluation of Link System between Repository and Researcher Database
- Author
-
Toshie Tanaka, Eisuke Ito, Sachio Hirokawa, Kensuke Baba, Masao Mori, and Emi Ishita
- Subjects
Service (systems architecture) ,Database ,access log ,Computer science ,library ,Institutional repository ,Reuse ,computer.software_genre ,Start up ,Metadata ,World Wide Web ,Web database ,Web system ,Link (knot theory) ,computer - Abstract
This paper evaluates the effect of a Web system that activates institutional repositories. An institutional repository is an important service of libraries in academic institutions. The authors developed a link system between the institutional repository and the researcher database of their university. The system reduces the effort required of researchers by reusing the metadata in the researcher database when registering their papers in the repository. The authors observed the access log of the repository before and after the start-up of the link system. The results show that the system increased the number of accesses; however, there was no significant change in the number of paper registrations.
- Published
- 2011
32. Challenges and trends in wireless ubiquitous computing systems
- Author
-
Anis Koubaa, Elhadi M. Shakshuki, Abdelfettah Belghith, and Repositório Científico do Instituto Politécnico do Porto
- Subjects
Ubiquitous robot ,Context-aware pervasive systems ,Ubiquitous computing ,Computer science ,Wireless ad hoc network ,Mobile computing ,Management Science and Operations Research ,computer.software_genre ,Autonomic computing ,Wireless ,Group key ,Multimedia ,business.industry ,Quality of service ,User Interfaces and Human Computer Interaction ,Energy consumption ,Computer Science Applications ,Hardware and Architecture ,Computer Science ,The Internet ,Personal Computing ,business ,computer ,Wireless sensor network ,Computer network - Abstract
In the last decade, the Internet paradigm has been evolving toward a new frontier with the emergence of ubiquitous and pervasive systems, including wireless sensor networks, ad hoc networks, RFID systems, and wireless embedded systems. In fact, while the initial purpose of the Internet was to interconnect computers to share digital data at large scale, the current tendency is to enable ubiquitous and pervasive computing to control everything anytime and at a large scale. This new paradigm has given rise to a new generation of networked systems, commonly known as the Internet of Things or Cyber-Physical Systems. The research community has actively investigated the underlying challenges pertaining to these systems, as they fundamentally differ from classical problems due to their inherent constraints. This special issue presents six papers covering various topics in personal and ubiquitous computing. The first paper, entitled ‘‘RiSeG: A Ring Based Secure Group Communication Protocol for Wireless Sensor Networks’’, deals with security in ubiquitous systems. The paper presents a new approach for secure group management in wireless sensor networks. The proposed approach is based on a logical ring architecture, which alleviates the group controller’s task of updating the group key. The proposed scheme also provides backward and forward secrecy, addresses the node-compromise attack, and gives a solution to detect and eliminate compromised nodes. The authors evaluated their scheme in terms of storage, computation, and communication costs and compared its behavior against the Logical Key Hierarchy (LKH) scheme. It has been shown that RiSeG requires less storage and reduces computation and communication costs at the group controller as compared with LKH. 
The paper goes beyond theoretical work and proposes a real-world implementation, which proved that RiSeG is applicable to WSNs and showed that the performance results in terms of execution time, energy consumption, and memory consumption are satisfactory. The second paper, entitled ‘‘Stability routing with constrained path length for improved routability in dynamic MANETs’’, addresses the problem of enhancing routing validity in mobile wireless multi-hop ad hoc networks. In proactive routing, routes are established and updated periodically, and consequently lose their pertinence as time ticks away from the start of their updating instants. The authors argue that routability, defined as the validity of established routes, stands among the most central metrics impacting network efficiency. Indeed, in dynamic networks, traffic routed through invalid routes not only amounts to a waste of valuable resources but will never be delivered to its destination. In the quest to improve routability in dynamic networks, the authors considered a two-constrained QoS routing problem with one superlative constraint and one comparative constraint. In its general form, this is an NP-hard problem. The authors proposed a novel exact algorithm and then instantiated the problem to solve the optimal integer weight-constrained path length problem.
- Published
- 2011
33. Meet-in-the-Middle Attacks on Reduced-Round XTEA
- Author
-
Gautham Sekar, Nicky Mouha, Bart Preneel, Vesselin Velichkov, and Kiayias, A
- Subjects
Triple DES ,CBC-MAC ,Running key cipher ,Computer science ,Tiny Encryption Algorithm ,XTEA ,Computer security ,computer.software_genre ,computer ,Stream cipher ,cosic ,Transposition cipher ,Block cipher - Abstract
The block cipher XTEA, designed by Needham and Wheeler, was published as a technical report in 1997. The cipher was a result of fixing some weaknesses in the cipher TEA (also designed by Wheeler and Needham), which was used in Microsoft's Xbox gaming console. XTEA is a 64-round Feistel cipher with a block size of 64 bits and a key size of 128 bits. In this paper, we present meet-in-the-middle attacks on twelve variants of the XTEA block cipher, where each variant consists of 23 rounds. Two of these require only 18 known plaintexts and a computational effort equivalent to testing about 2^117 keys, with a success probability of 1 - 2^-1025. Under the standard (single-key) setting, there is no reported attack on 23 or more rounds of XTEA that requires less time and fewer data than the above. This paper also discusses a variant of the classical meet-in-the-middle approach. All attacks in this paper are applicable to XETA as well, a block cipher that has not yet undergone public analysis. TEA, XTEA and XETA are implemented in the Linux kernel. Published in Topics in Cryptology - CT-RSA 2011, LNCS vol. 6558. © 2011 Springer-Verlag Berlin Heidelberg.
- Published
- 2011
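For reference, the XTEA cipher analyzed above is compact enough to sketch from its public technical-report specification (the paper's meet-in-the-middle attacks themselves are not reproduced here):

```python
MASK = 0xFFFFFFFF   # all arithmetic is modulo 2^32
DELTA = 0x9E3779B9  # key-schedule constant

def xtea_encrypt(block, key, cycles=32):
    """Encrypt one 64-bit block (two 32-bit words) under a 128-bit
    key (four 32-bit words). 32 cycles = 64 Feistel rounds."""
    v0, v1 = block
    s = 0
    for _ in range(cycles):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, cycles=32):
    """Invert xtea_encrypt by running the key schedule backwards."""
    v0, v1 = block
    s = (DELTA * cycles) & MASK
    for _ in range(cycles):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

Reduced-round variants of the kind the paper attacks are obtained simply by lowering `cycles` (23 Feistel rounds corresponds to truncating this schedule).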
34. Artificial evolution in computer aided design: from the optimization of parameters to the creation of assembly programs
- Author
-
Giovanni Squillero
- Subjects
Numerical Analysis ,Computer science ,business.industry ,Post-silicon verification ,Evolutionary algorithm ,CAD ,computer.software_genre ,Evolutionary computation. Microprocessors ,Speed paths ,Evolutionary computation ,Computer Science Applications ,Theoretical Computer Science ,Computational Mathematics ,Range (mathematics) ,Computational Theory and Mathematics ,Line (geometry) ,Computer Aided Design ,Software engineering ,business ,Heuristics ,computer ,Computer communication networks ,Software ,Simulation - Abstract
Evolutionary computation has seen little but steady use in the CAD community during the past 20 years. Nowadays, due to their overwhelming complexity, significant steps in the validation of microprocessors must be performed on silicon, i.e., by running experiments on physical devices after tape-out. This scenario has created new space for innovative heuristics. This paper presents a methodology based on an evolutionary algorithm that can be used to devise assembly programs suitable for a range of on-silicon activities. The paper describes how to take complex hardware characteristics and architectural details into account. The experimental evaluation performed on two high-end Intel microprocessors demonstrates the potential of this line of research.
- Published
- 2011
35. NTRULO: A Tunneling Architecture for Multimedia Conferencing over IP
- Author
-
L. Miniero, Alessandro Amirante, Simon Pietro Romano, Tobia Castaldi, R. Dunaytsev, Y. Koucheryavy, Romano, SIMON PIETRO, A., Amirante, T., Castaldi, and L., Miniero
- Subjects
Focus (computing) ,Multimedia ,Exploit ,business.industry ,Computer science ,Tunneling ,Multimedia Conferencing ,computer.software_genre ,NAT traversal ,Next-generation network ,The Internet ,Architecture ,business ,computer ,Network address translation ,Computer network - Abstract
In this paper we present a tunneling solution for multimedia conferencing in Next Generation Networks (NGNs). In particular, we focus on the problems that restrictive firewalls, proxies, and Network Address Translation (NAT) entities may introduce into multimedia sessions over the Internet, and on the need for HTTP-based tunneling when the standard conferencing protocols fail to traverse the network. The paper describes the client-server architecture we devised to achieve HTTP-based tunneling of the several protocols involved in multimedia conferencing, explaining the advantages and drawbacks of the proposed solution. Furthermore, we describe the additional effort we needed to devote to the standard HTTP tunneling approach in order to make conferencing-related protocols work with it. Finally, we use our XCON-compliant Meetecho conferencing platform as a complete implementation of our proposal, presenting the reader with some experimental results.
- Published
- 2010
36. Precise specification of design pattern structure and behaviour
- Author
-
Ashley Sterritt, Vinny Cahill, and Siobhán Clarke
- Subjects
Programming language ,Computer science ,Design pattern ,Specification language ,Loose coupling ,computer.software_genre ,Extensibility ,Specification pattern ,design-pattern specification languages ,Unified Modeling Language ,Sequence diagram ,Software design pattern ,Software system ,computer ,Software verification ,Object Constraint Language ,computer.programming_language - Abstract
Applying design patterns while developing a software system can improve its non-functional properties, such as extensibility and loose coupling. Precise specification of structure and behaviour communicates the invariants imposed by a pattern on a conforming implementation and enables formal software verification. Many existing design-pattern specification languages (DPSLs) focus on class structure alone, while those that do address behaviour suffer from a lack of expressiveness and/or imprecise semantics. In particular, in a review of existing work, three invariant categories were found to be inexpressible in state-of-the-art DPSLs: dependency, object state and data-structure. This paper presents Alas: a precise specification language that supports design-pattern descriptions including these invariant categories. The language is based on UML Class and Sequence diagrams with modified syntax and semantics. In this paper, the meaning of the presented invariants is formalized and relevant ambiguities in the UML Standard are clarified. We have evaluated Alas by specifying the widely-used Gang of Four pattern catalog and identified patterns that benefitted from the added expressiveness and semantics of Alas.
- Published
- 2010
37. Do you trust your phone?
- Author
-
Alfredo De Santis, Aniello Castiglione, and Roberto De Prisco
- Subjects
Spoofing attack ,Voice over IP ,business.industry ,Computer science ,Internet privacy ,Internet security ,Computer security ,computer.software_genre ,Phishing ,Mobile malware ,Caller ID ,Identity theft ,Next-generation network ,The Internet ,business ,computer - Abstract
Despite a promising start, Electronic Commerce has not fully taken off, mostly because of security issues in the communication infrastructures that are undermining its perceived trustworthiness. Some Internet security issues, such as malware, phishing, and pharming, are well known to the Internet community. These issues are, however, being transferred to the telephone networks thanks to the symbiotic relation between the two worlds. This interconnection is becoming so pervasive that we can really start thinking of a single network, which, in this paper, we refer to as the Interphonet. The main goal of this paper is to analyze some of the Internet security issues that are being transferred to the Interphonet and to identify new security issues specific to the Interphonet. In particular, we discuss mobile phone malware and identity theft, phishing with SMS, telephone pharming, the untraceability of phone calls that use VoIP, and Caller ID spoofing. We also briefly discuss countermeasures.
- Published
- 2009
38. The Append-only Web Bulletin Board
- Author
-
James Heather and David Lundin
- Subjects
Scheme (programming language) ,Computer science ,business.industry ,Electronic voting ,Hash function ,Certification ,Computer security ,computer.software_genre ,World Wide Web ,Bulletin board ,Order (business) ,Publishing ,Verifiable secret sharing ,business ,computer ,GeneralLiterature_REFERENCE(e.g.,dictionaries,encyclopedias,glossaries) ,computer.programming_language - Abstract
A large number of papers on verifiable electronic voting that have appeared in the literature in recent years have relied heavily on the availability of an append-only web bulletin board . Despite this widespread requirement, however, the notion of an append-only web bulletin board remains somewhat vague, and no method of constructing such a bulletin board has been proposed. This paper fills the gap. We identify the required properties of an append-only web bulletin board, and introduce the concept of certified publishing of messages to the board. We show how such a board can be constructed in order to satisfy the properties we have identified. Finally, we consider how to extend the scheme to make the web bulletin board robust and able to offer assurance to writers of the inclusion of their messages. Although the work presented here has been inspired and motivated by the requirements of electronic voting systems, the web bulletin board is sufficiently general to allow use in other contexts.
- Published
- 2009
39. Securing Real-time Sessions in an IMS-based Architecture
- Author
-
Paolo Cennamo, Francesco Toro, A. L. Robustelli, Maurizio Longo, Fabio Postiglione, and Antonio Fresa
- Subjects
Multimedia ,business.industry ,Computer science ,Testbed ,IP Multimedia Subsystem ,Wireless Multimedia Extensions ,computer.software_genre ,Radio access technology ,Next-generation network ,Data Protection Act 1998 ,The Internet ,business ,Key management ,computer ,Computer network - Abstract
The emerging all-IP mobile network infrastructures based on the 3rd Generation IP Multimedia Subsystem philosophy are characterised by radio access technology independence and ubiquitous connectivity for mobile users. Currently, great focus is being devoted to security issues, since most of the security threats presently affecting the public Internet domain, as well as upcoming ones, will be suffered by mobile users in the years to come. While a great deal of research activity, together with standardisation efforts and experimentation, is carried out on mechanisms for signalling protection, very few integrated frameworks for real-time multimedia data protection have been proposed in the context of the IP Multimedia Subsystem, and even fewer experimental results based on testbeds are available. In this paper, after a general overview of the security issues arising in an advanced IP Multimedia Subsystem scenario, a comprehensive infrastructure for real-time multimedia data protection, based on the adoption of the Secure Real-Time Protocol, is proposed; then, the development of a testbed incorporating such functionalities, including mechanisms for key management and cryptographic context transfer, and allowing the setup of Secure Real-Time Protocol sessions, is presented; finally, experimental results are provided together with quantitative assessments and comparisons of system performance for audio sessions with and without the adoption of the Secure Real-Time Protocol framework.
- Published
- 2008
40. UML-Based Modeling for What-If Analysis
- Author
-
Stefano Rizzi and Matteo Golfarelli
- Subjects
Computer science ,business.industry ,Commercial area ,Complex system ,Applications of UML ,Activity diagram ,Machine learning ,computer.software_genre ,Data warehouse ,Formalism (philosophy of mathematics) ,Unified Modeling Language ,Business intelligence ,Artificial intelligence ,Software engineering ,business ,computer ,computer.programming_language - Abstract
In order to evaluate beforehand the impact of a strategic or tactical move, decision makers need reliable predictive systems. What-if analysis satisfies this need by enabling users to simulate and inspect the behavior of a complex system under given hypotheses. A crucial issue in the design of what-if applications in the context of business intelligence is finding an adequate formalism to conceptually express the underlying simulation model. In this experience paper we report on how this can be accomplished by extending UML 2 with a set of stereotypes. Our proposal is centered on the use of activity diagrams enriched with object flows, aimed at expressing functional, dynamic, and static aspects in an integrated fashion. The paper is completed by examples taken from a real case study in the commercial area.
- Published
- 2008
41. BPMN: How Much Does It Cost? An Incremental Approach
- Author
-
Danilo Montesi and Matteo Magnani
- Subjects
Computational model ,business.industry ,Computer science ,Artifact-centric business process model ,Business process ,Business process modeling ,computer.software_genre ,Notation ,XPDL ,Business Process Model and Notation ,Business process management ,Data mining ,Software engineering ,business ,computer - Abstract
In this paper we propose some extensions of the Business Process Modeling Notation (BPMN) that make it possible to evaluate the overall cost of business process diagrams. BPMN is very expressive, and a general treatment of this problem is very complex. Therefore, it seems reasonable to define classes of business process diagrams capturing real processes and to develop efficient analysis methods for these classes. In the paper we define some relevant subsets of BPMN, extend them with the concept of cost, and provide computational models for each class, in most cases reducing them to existing problems for which efficient solutions already exist.
- Published
- 2007
42. Structural patterns for descriptive documents
- Author
-
Fabio Vitali, Antonina Dattolo, Angelo Di Iorio, Silvia Duca, and Antonio Angelo Feliziani
- Subjects
Patterns grammars descriptive schemas completeness ,computer.internet_protocol ,Programming language ,Structured content ,Computer science ,business.industry ,computer.software_genre ,Philosophy of language ,Rule-based machine translation ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Artificial intelligence ,business ,computer ,Formal representation ,Natural language processing ,XML ,Coding (social sciences) - Abstract
Combining expressiveness and plainness in the design of web documents is a difficult task. Validation languages are very powerful and designers are tempted to over-design specifications. This paper discusses an offbeat approach: describing any structured content of any document by only using a very small set of patterns, regardless of the format and layout of that document. The paper sketches out a formal analysis of some patterns, based on grammars and language theory. The study has been performed on XML languages and DTDs and has a twofold goal: coding empirical patterns in a formal representation, and discussing their completeness.
- Published
- 2007
43. Multi-agent and Data Mining Technologies for Situation Assessment in Security-related Applications
- Author
-
Vladimir Samoilov, Oleg Karsaev, and Vladimir Gorodetsky
- Subjects
Data model ,Computer science ,Data mining ,Architecture ,computer.software_genre ,computer ,Task (project management) ,Situation analysis - Abstract
The paper considers situation assessment, one of the foremost security-related problems. The specific classification and data mining issues associated with this task, and methods for their solution, are the subjects of the paper. In particular, the paper discusses a situation assessment data model specifying the situation, an approach to learning situation assessment, a generic architecture for multi-agent situation assessment systems, and software engineering issues. Detection of abnormal use of a computer network is the case study used to demonstrate the main research results.
- Published
- 2006
44. The Efficiency of the Rules’ Classification Based on the Cluster Analysis Method and Salton’s Method
- Author
-
Alicja Wakulicz-Deja and Agnieszka Nowak
- Subjects
Basis (linear algebra) ,Computer science ,business.industry ,Computer Science::Information Retrieval ,Document classification ,Process (computing) ,Inference ,computer.software_genre ,Set (abstract data type) ,Documentation ,Knowledge base ,Similarity (network science) ,Data mining ,business ,computer - Abstract
The aim of this paper is to compare a document classification method based on distance analysis (cluster analysis) with one based on coefficients of similarity (Salton's method). We are interested in whether the resulting groups of documents are identical (or similar) and whether new cases are classified in the same way. Positive results would give us a basis for using the similarity-coefficient method to classify sets of rules. Classification based on similarity coefficients is used to build structured document collections and to support retrieval in search engines, while cluster analysis is used to classify facts and rules in knowledge bases. It appears that a mechanism based on similarity coefficients can be very convenient for the inference process. This paper addresses the first issue, namely the comparison of rule classification based on the cluster analysis method and on Salton's method.
- Published
- 2006
45. Feature Selection by Combining Multiple Methods
- Author
-
Barak Chizi, Oded Maimon, and Lior Rokach
- Subjects
Computer science ,business.industry ,Dimensionality reduction ,Feature selection ,Pattern recognition ,Multiple methods ,Machine learning ,computer.software_genre ,Ensemble learning ,Random subspace method ,Artificial intelligence ,business ,computer ,Classifier (UML) ,Curse of dimensionality - Abstract
Feature selection is the process of identifying relevant features in the dataset and discarding everything else as irrelevant and redundant. Since feature selection reduces the dimensionality of the data, it enables the learning algorithms to operate more effectively and rapidly. In some cases, classification performance can be improved; in other instances, the obtained classifier is more compact and can be easily interpreted. There has been much work on feature selection methods for creating ensembles of classifiers; these works examine how feature selection can help an ensemble of classifiers gain diversity. This paper examines the opposite direction, i.e. whether ensemble methodology can be used to improve feature selection performance. In this paper we present a general framework for creating several feature subsets and then combining them into a single subset. Theoretical and empirical results presented in this paper validate the hypothesis that this approach can help in finding a better feature subset.
- Published
- 2006
46. Taming the Curse of Dimensionality in Kernels and Novelty Detection
- Author
-
Mark J. Embrechts, Paul F. Evangelista, and Boleslaw K. Szymanski
- Subjects
Computer Science::Machine Learning ,Clustering high-dimensional data ,Computer science ,business.industry ,Supervised learning ,Pattern recognition ,Intrusion detection system ,Semi-supervised learning ,Machine learning ,computer.software_genre ,Novelty detection ,Support vector machine ,symbols.namesake ,ComputingMethodologies_PATTERNRECOGNITION ,Gaussian function ,symbols ,Unsupervised learning ,Artificial intelligence ,business ,computer ,Subspace topology ,Curse of dimensionality - Abstract
The curse of dimensionality is a well-known but not entirely well-understood phenomenon. Too much data, in terms of the number of input variables, is not always a good thing. This is especially true when the problem involves unsupervised learning or supervised learning with unbalanced data (many negative observations but minimal positive observations). This paper addresses two issues involving high dimensional data. The first issue concerns the behavior of kernels in high dimensional data: it is shown that variance, especially when contributed by meaningless noisy variables, confounds learning methods. The second part of this paper illustrates methods to overcome dimensionality problems in unsupervised learning by utilizing subspace models. The modeling approach involves novelty detection with the one-class SVM.
- Published
- 2006
47. Ontology-Based Automatic Classification of Web Pages
- Author
-
Seong-Bae Park, Dong-Jin Kang, Sang-Jo Lee, Mu Hee Song, and Sooyeon Lim
- Subjects
Vocabulary ,Information retrieval ,Computer science ,Document classification ,media_common.quotation_subject ,Static web page ,Ontology (information science) ,computer.software_genre ,Terminology ,Web mining ,Web query classification ,Web page ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Ontology ,Library classification ,Web mapping ,Semantic Web Stack ,computer ,Data Web ,media_common - Abstract
The use of ontologies to provide a mechanism enabling machine reasoning has continuously increased during the last few years. This paper suggests an automated method for document classification using an ontology, which expresses the terminology information and vocabulary contained in Web documents by way of a hierarchical structure. Ontology-based document classification involves determining the document features that represent the Web documents most accurately, and classifying them into the most appropriate categories after analyzing their contents by using at least two pre-defined categories per given document feature. In this paper, Web documents are classified in real time, not with experimental data or a learning process, but by similarity calculations between the terminology information extracted from Web texts and ontology categories. This results in a more accurate document classification, since the meanings and relationships unique to each document are determined.
- Published
- 2006
48. An Adaptive PC to Mobile Web Contents Transcoding System Based on MPEG-21 Multimedia Framework
- Author
-
DaeHyuck Park, Kun-Jung Sim, Euisun Kang, Kim Jong Keun, and Young-Hwan Lim
- Subjects
Mobile identification number ,Multimedia ,Computer science ,Mobile station ,Mobile computing ,Mobile database ,Mobile search ,Mobile technology ,Mobile Web ,Web service ,computer.software_genre ,computer - Abstract
The purpose of this paper is to deliver Web content designed for PCs to various multi-platform devices such as PDAs and other portable devices. Conventional studies could not accommodate such a range of devices, so mobile content was created beforehand and then transmitted to a limited set of mobile devices. The critical problem, then, is to generate mobile content suitable for all kinds of mobile devices from PC Web content. This paper proposes a service system for transmitting wired Web content to various portable devices using the MPEG-21 Multimedia Framework. It does not create mobile content for each device; instead, it uses the DIDL of MPEG-21 as an intermediate language to express the structure, resources, and description of mobile content. In DIDL, multimedia resources are transcoded offline in advance, while the description part is converted in real time as soon as a service is requested by the end-user. Mobile content integrates the adapted resources with the appropriate description and is then transmitted. In addition, this paper proposes multi-level caching for reusing mobile content and describes the results from an experimental system.
- Published
- 2006
49. Introducing COGAIN — Communication by Gaze Interaction
- Author
-
Kari-Jouko Räihä, Howell Istance, Richard Bates, John Paulin Hansen, and M. Donegan
- Subjects
Multimedia ,Computer Networks and Communications ,Computer science ,Control (management) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,computer.software_genre ,Communications system ,Gaze ,Human-Computer Interaction ,InformationSystems_MODELSANDPRINCIPLES ,Human–computer interaction ,Assistive technology ,Control system ,Eye tracking ,Network of excellence ,Visual difficulty ,computer ,Software ,Information Systems - Abstract
This paper introduces the work of COGAIN ("communication by gaze interaction"), a European Network of Excellence that is working toward giving people with profound disabilities the opportunity to communicate with and control their environment by eye gaze. It shows the need for developing gaze-based communication systems, and illustrates the effectiveness of newly developed COGAIN eye gaze control systems with a series of case studies, each showing a different aspect of the benefits offered by gaze control. Finally, the paper puts forward a strong case for users, professionals and researchers to collaborate on developing gaze-based communication systems to enable and empower people with disabilities.
- Published
- 2006
50. Fault tolerance in distributed UNIX
- Author
-
Wolfgang Graetsch, Wolfgang Oberle, Wolfgang Blau, and Anita Borg
- Subjects
Unix ,File server ,Transmission (telecommunications) ,Computer science ,Software fault tolerance ,Operating system ,Fault tolerance ,Crash ,Fault (power engineering) ,computer.software_genre ,computer - Abstract
An initial design for a fault tolerant, distributed version of UNIX was presented in an earlier paper [2]. That design left a number of open questions in two particular areas: Fault tolerance for server processes through which peripherals are accessed; recovery after a crash including the re-backup of processes. Since then, the fundamental design involving three-way message transmission has remained unchanged. However, server fault tolerance has been redesigned and is now more consistent with the fault tolerance of normal user processes. Recovery and re-backup have been completed in a more efficient manner than previously envisioned. In addition, important changes in the implementation have occurred. In this paper, we review the original design, borrowing heavily from the earlier paper in sections 1–3, and explain additions and modifications in later sections.
- Published
- 2006