52 results for "Cérin"
Search Results
2. The EcoIndex metric, reviewed from the perspective of Data Science techniques
- Author
- Cérin, Christophe, Trystram, Denis, and Menouer, Tarek
- Published
- 2023
- Full Text
- View/download PDF
3. Towards a Methodology for the Characterization of IoT Data Sets of the Smart Building Sector
- Author
- Closson, Louis, Cérin, Christophe, Donsez, Didier, and Trystram, Denis (DATAMOVE, Inria Grenoble - Rhône-Alpes / Laboratoire d'Informatique de Grenoble (LIG), Université Grenoble Alpes, CNRS, Grenoble INP; ERODS, LIG; LIPN, CNRS / Université Sorbonne Paris Nord). Supported by a CIFRE grant (reference 2021/1336) and partially by the Multi-disciplinary Institute on Artificial Intelligence, MIAI @ Grenoble Alpes (ANR-19-P3IA-0003).
- Subjects
Enabling technologies for the IoT, Smart Building data set analyses, IoT data source characterization, Building Information Modelling, [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
- Abstract
The long-term objective of this paper is to provide decision-aid support to the technical manager of a smart building, in order to reduce the volume of data emitted by sensors inside the building and, more generally, to acquire knowledge about the data produced in the facility. As a first step, the paper proposes to characterize Internet-of-Things (IoT) data sets from the smart-building ecosystem. Describing data sets and building learning models over them are crucial steps in engineering studies, both for advancing critical analysis and for serving diverse research communities such as architects and data scientists. We examine two data sets collected at one location in the Grenoble area in France. We treat the building as an autonomic computing system, so the underlying model is the well-known MAPE-K methodology introduced by IBM. The paper mainly addresses the analysis component and the adjacent connector component of the MAPE-K model; the content and organization of this layer constitute the methodological contribution we put forward. Consequently, we automatically provide a complete set of practices and methods to pass on to the planning component of the MAPE-K model. We also sketch a semi-automatic way of reducing the number of measurements taken by sensors. In the background of our study, we aim to reduce the operational cost of taking measurements with a much more sober approach than the current one. We also discuss our main findings in depth. Finally, we provide insights and open questions for future work based on our experience.
- Published
- 2022
- Full Text
- View/download PDF
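The MAPE-K loop that entry 3 builds on (Monitor, Analyze, Plan, Execute over shared Knowledge) can be sketched as a sensor-rate reduction cycle. This is a minimal illustration only: the sensor names, thresholds, and rate-halving policy are assumptions, not taken from the paper.

```python
def monitor(sensors):
    """Monitor: collect the latest reading from each sensor."""
    return {s["id"]: s["value"] for s in sensors}

def analyze(readings, knowledge):
    """Analyze: flag sensors whose readings barely change between cycles;
    these are candidates for a reduced measurement rate."""
    flagged = []
    for sid, value in readings.items():
        last = knowledge["last"].get(sid)
        if last is not None and abs(value - last) < knowledge["epsilon"]:
            flagged.append(sid)
        knowledge["last"][sid] = value
    return flagged

def plan(flagged, knowledge):
    """Plan: halve the sampling rate of low-variability sensors (floored)."""
    return {sid: max(knowledge["rates"][sid] // 2, knowledge["min_rate"])
            for sid in flagged}

def execute(new_rates, knowledge):
    """Execute: push the new rates back into the shared knowledge base."""
    knowledge["rates"].update(new_rates)
    return knowledge["rates"]

knowledge = {"last": {}, "epsilon": 0.5, "min_rate": 1,
             "rates": {"t1": 60, "t2": 60}}  # samples per hour (illustrative)
for cycle in ([{"id": "t1", "value": 21.0}, {"id": "t2", "value": 19.0}],
              [{"id": "t1", "value": 21.1}, {"id": "t2", "value": 22.0}]):
    rates = execute(plan(analyze(monitor(cycle), knowledge), knowledge), knowledge)
```

After the second cycle, the stable sensor t1 is sampled half as often while the drifting sensor t2 keeps its rate.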
4. Scheduling Service Function Chains with Dependencies in the Cloud
- Author
- Mohammed Chahbar, Christophe Cérin, Amina Khedimi, and Tarek Menouer
- Subjects
Service (systems architecture), Schedule, Computer science, Distributed computing, Quality of service, Networking & telecommunications, Cloud computing, Provisioning, Virtualization, Scheduling (computing), Software-defined networking
- Abstract
Cloud services are now well established, thanks to the pioneering work of providers that offer, on premises, the predictability, continuity, and quality of service delivered by virtualization technologies. In this context, SDN (Software-Defined Networking) aims to provide tenant-controlled management of forwarding and different abstractions of the underlying network infrastructure to applications. The scheduling and placement of network functions in the cloud is a challenging task, in part because it also requires tedious provisioning and configuration steps. Even though this paper considers only the placement of network functions, and not their configuration, we face the general problem of defining, in an 'optimal' way, the placement of the network functions to be executed so that certain criteria are preserved. In this paper, we formulate an approach to schedule network functions according to their dependencies.
- Published
- 2020
- Full Text
- View/download PDF
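Scheduling service function chains "according to their dependencies", as entry 4 proposes, reduces at its core to ordering the functions so that every dependency runs first. A minimal sketch using Kahn's topological sort; the function names and dependency structure are illustrative, and the paper's actual formulation is richer (placement criteria, not just ordering).

```python
from collections import deque

def schedule_chain(functions, deps):
    """Order network functions so every dependency runs first (Kahn's
    algorithm). `deps` maps a function to the set of functions it needs."""
    indeg = {f: len(deps.get(f, set())) for f in functions}
    children = {f: [] for f in functions}
    for f, ds in deps.items():
        for d in ds:
            children[d].append(f)
    ready = deque(f for f in functions if indeg[f] == 0)
    order = []
    while ready:
        f = ready.popleft()
        order.append(f)
        for c in children[f]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(functions):
        raise ValueError("dependency cycle in the chain")
    return order

# illustrative chain: firewall before NAT, NAT before load balancer
order = schedule_chain(["lb", "nat", "fw"], {"nat": {"fw"}, "lb": {"nat"}})
```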
5. Smart Network Slices Scheduling in Cloud
- Author
- Christophe Cérin, Tarek Menouer, and Amina Khedimi
- Subjects
Random access memory, Conflicting objectives, Computer science, Service level, Smart network, Cloud computing, Computer security, Scheduling (computing)
- Abstract
Faced with the diversity of the public using clouds, computer scientists should pay attention to usage and facilitate access for non-experts, i.e., hide the internal artifacts and complexity of such distributed systems. Conversely, the cloud provider wants to optimize resource usage, for instance for billing or to keep the cloud as green as possible. These two conflicting objectives are addressed in this paper. We combine a smart view for specifying a user's Service Level Agreements (SLAs) with a strategy for placing the user's jobs, which represents the cloud provider's point of view. We put in place a multi-criteria approach, and the experimental results demonstrate the potential of our approach under different scenarios.
- Published
- 2020
- Full Text
- View/download PDF
6. Towards Pervasive Containerization of HPC Job Schedulers
- Author
- Nicolas Greneche, Christophe Cérin, and Tarek Menouer
- Subjects
Computer science, Distributed computing, Reservation, Containerization, Networking & telecommunications, Provisioning, Workload, Cloud computing, Supercomputer, Information systems, Management system
- Abstract
In cloud computing, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible". Adding elasticity to HPC (High Performance Computing) cluster management systems remains challenging even when such HPC systems are deployed in today's cloud environments. The difficulty is that an HPC job scheduler needs to rely on a fixed set of resources: every change of topology (adding or removing computing resources) leads to a global restart of the scheduler. This phenomenon is not a major drawback, because it provides a very effective way of sharing a fixed set of resources, but we think it could be complemented by a more elastic approach. Moreover, the elasticity issue should not be reduced to the scaling of resources; clouds also enable access to various technologies that enhance the services offered to users. In this paper, our approach is to use container technology to instantiate a tailored HPC environment based on the user's reservation constraints. We claim that the introduction and use of containers in HPC job schedulers allows better, more economical management of resources. From the use case of SLURM, we derive a methodology for the 'containerization' of HPC job schedulers that is pervasive, i.e., spreading widely throughout all layers of the job scheduler. We also provide initial experiments demonstrating that our containerized SLURM system is operational and promising.
- Published
- 2020
- Full Text
- View/download PDF
7. NESSMA: Network Slice Subnet Management Framework
- Author
- Mohammed Chahbar, Abdulhalim Dandoush, Gladys Diaz, Christophe Cérin, and Kamal Ghoumid
- Subjects
Standardization, Computer science, Orchestration (computing), Function (engineering), Subnet, Computer network
- Abstract
Network Slicing has become a mature technology in recent years thanks to the continuous efforts of standards organizations, academic units, and operators. However, current Network Slice (NS) demonstrations are not fully aligned with the ongoing standardization activities, particularly with the Network Slice Subnet Management Function (NSSMF) defined by 3GPP. This paper provides a novel framework for NS Subnet management, called NESSMA, that satisfies a list of requirements derived from an exhaustive analysis of up-to-date NS standards. The framework is designed to jointly manage NS Subnets and their supported services, with its integration with NFV-MANO in mind.
- Published
- 2019
- Full Text
- View/download PDF
8. SAFC: Scheduling and Allocation Framework for Containers in a Cloud Environment
- Author
- Christophe Cérin, Tarek Menouer, Congfeng Jiang, and Jonathan Rivalan
- Subjects
Cloud resources, Computer science, On the fly, Distributed computing, Testbed, Cloud computing, Scheduling (computing), Economic model
- Abstract
The intrinsic nature of cloud computing offers new opportunities for scheduling units of work as containers. In this paper we present a new Scheduling and Allocation Framework for Containers (SAFC) based on an economic model. The novelty of our framework is that it makes it possible to specify ranges on the resource demand of containers instead of a fixed amount of resources. Thus, the goal and the new development in this paper is a general framework that dynamically schedules containers and decides, on the fly, the number of resources to allocate, in order to optimize the use of cloud resources and maximize the number of containers executed. The key idea of the paper is to relax strict constraints on the requested number of resources and to let the system adjust the amount of resources according to the execution context, such as peaks of activity. The SAFC framework is evaluated using Linux Containers (LXC) on the Grid'5000 testbed.
- Published
- 2019
- Full Text
- View/download PDF
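The range-based resource requests of entry 8 can be sketched as a greedy placement that grants each container as much of its requested range as the chosen node can spare. The node names, first-fit policy, and "grant the maximum available" rule are assumptions for illustration, not the paper's actual strategy.

```python
def allocate(range_req, free_cpus):
    """Grant as many CPUs as the node can spare, clamped to the requested
    (lo, hi) range. Returns None when even the minimum cannot be met."""
    lo, hi = range_req
    if free_cpus < lo:
        return None
    return min(hi, free_cpus)

def schedule(containers, nodes):
    """Greedy pass: place each (name, (lo, hi)) request on the first node
    that can satisfy at least the lower bound of its range."""
    placement = {}
    for name, rng in containers:
        for node, free in nodes.items():
            grant = allocate(rng, free)
            if grant is not None:
                placement[name] = (node, grant)
                nodes[node] = free - grant
                break
    return placement

nodes = {"n1": 4, "n2": 8}                       # free CPU cores per node
placement = schedule([("web", (2, 6)), ("db", (2, 4))], nodes)
```

Here "web" asks for 2-6 cores and receives the 4 that n1 can spare; a fixed request of 6 would have been rejected outright, which is the flexibility the abstract argues for.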
9. Energy-Efficient Strategy for Placement of Online Services on Dynamic Availability Resources in Volunteer Cloud
- Author
- Hazem Fkaier, Mohamed Jemni, Christophe Cérin, and Omar Ben Maaouia
- Subjects
Computer science, Distributed computing, Cloud computing, Energy consumption, Elasticity (cloud computing), Virtual machine, Software deployment, Server, Live migration, Efficient energy use
- Abstract
Cloud computing is regarded as the latest generation of computation, relying on shared computing resources rather than local servers or personal devices to handle applications. On one side, computing resources can be dynamically allocated according to consumer requirements; on the other, virtual machine deployment preferences play an important role in improving resource use in the cloud. In this paper, we propose an online strategy for placing services on resources with dynamic availability, in order to reduce energy consumption. Dynamic deployment of services is a key issue in cloud computing, and many areas remain to be studied, especially for dynamic environments. Unlike the infrastructure topology of traditional clouds, i.e. the data centers to which users are connected, in a volunteer cloud both users and data centers can process requests. Indeed, volunteer nodes are useful for cloud elasticity, but they are not always available, and this dynamic availability is one of the biggest concerns for consumers. Live migration of services is a widely used technique for dynamically allocating resources in a volunteer cloud to overcome this issue. In this work, we investigate the problem of energy-efficient online task allocation in a volunteer cloud environment, where dynamic services can be allocated to a set of resources with dynamic availability. We propose an online heuristic called Dynamic Shortest Path Strategy (DSPS) that generates a good plan minimizing the energy consumption of applications. Our contribution respects various constraints such as availability, machine capacity, and the duplication factor of applications. A series of experiments is presented to validate the potential of our approach.
- Published
- 2018
- Full Text
- View/download PDF
10. Accelerating the Computation of Multi-Objectives Scheduling Solutions for Cloud Computing
- Author
- Mustapha Lebbah, Tarek Menouer, and Christophe Cérin
- Subjects
Computer science, Distributed computing, Computation, Random projection, Hash function, Cloud computing, High dimensional, Cluster analysis, Scheduling (computing)
- Abstract
This paper presents two practical Large Scale Multi-Objectives Scheduling (LSMOS) strategies for cloud computing environments. The goal is to address the problems of companies that manage a large cloud infrastructure with thousands of nodes and would like to optimize the scheduling of many requests submitted online by users. In our context, requests submitted by users are configured according to multi-objective criteria, such as the number of CPUs used and the memory size used. The novelty of our strategies is to select effectively, from the large set of nodes forming the cloud platform, a node to execute the user request that offers a good compromise among a large set of criteria. First, we show the limits, in terms of performance, of exact solutions. Second, we introduce approximate algorithms to deal with problems that are high-dimensional in the number of nodes and criteria. The two proposed scheduling strategies combine the exact Kung multi-objective decision algorithm with either the k-means clustering algorithm or the LSH hashing (random-projection-based) algorithm. The experiments with our new strategies demonstrate the potential of our approach under different scenarios.
- Published
- 2018
- Full Text
- View/download PDF
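The exact baseline of entry 10, Kung's multi-objective decision algorithm, computes the Pareto front of candidate nodes. For two minimized criteria, the base case of Kung's divide-and-conquer reduces to a sort-and-scan; a sketch with made-up node loads:

```python
def pareto_front(points):
    """Two-criteria Pareto front (both minimized): sort by the first
    criterion, then keep each point whose second criterion beats the best
    seen so far. This is the base case of Kung's algorithm."""
    best = float("inf")
    front = []
    for p in sorted(points):
        if p[1] < best:
            front.append(p)
            best = p[1]
    return front

# (cpu load, memory load) per candidate node -- illustrative numbers
nodes = [(0.9, 0.2), (0.3, 0.8), (0.5, 0.5), (0.6, 0.6)]
front = pareto_front(nodes)
```

Node (0.6, 0.6) is dominated by (0.5, 0.5) and drops out; any node on the remaining front is a defensible "good compromise" in the abstract's sense.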
11. EASE: Energy Efficiency and Proportionality Aware Virtual Machine Scheduling
- Author
- Congfeng Jiang, Bing Luo, Dongyang Ou, Youhuizi Li, Yeliang Qiu, Yumei Wang, Weisong Shi, Jian Wan, and Christophe Cérin
- Subjects
Computer science, Virtual machine scheduling, Workload, Scheduling (computing), Working range, Virtual machine, Server, Operating system, Data center, Efficient energy use
- Abstract
Servers differ in energy efficiency and energy proportionality (EP) due to their hardware configuration (i.e., CPU generation and memory installation) and workload. However, current virtual machine (VM) scheduling in virtualized environments saturates servers without considering their energy efficiency and EP differences. This article discusses EASE, an energy efficiency and proportionality aware VM scheduling approach. EASE first executes customized compute-intensive, memory-intensive, and hybrid benchmarks to calculate a server's energy efficiency and EP. It then schedules VMs onto servers so as to keep them working at their peak energy efficiency point (or in their optimal working range). This step improves the overall energy efficiency of the cluster and the data center. As a performance guarantee, EASE migrates VMs away from servers under highly contended conditions. Experimental results on real clusters show that power consumption can be reduced by 37.07%-49.98% in a homogeneous cluster, while the average completion time of compute-intensive VMs increases by only 0.31%-8.49%. On heterogeneous nodes, the power consumption of compute-intensive VMs can be reduced by 44.22% and job completion time by 53.80%.
- Published
- 2018
- Full Text
- View/download PDF
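Energy proportionality, as used in entry 11, is commonly quantified by comparing a server's measured power curve against the ideal curve that scales peak power linearly with utilization. A sketch of one standard formulation; the paper's exact metric may differ, and the sample curve is invented:

```python
def energy_proportionality(util, power):
    """EP = 1 - (area under measured power - area under ideal power)
             / (area under ideal power),
    where ideal power at utilization u is u * peak power. A perfectly
    proportional server scores 1; a flat power curve scores much lower.
    `util` in [0, 1] and `power` in watts, sampled at the same points."""
    peak = power[-1]                     # power at 100% utilization
    ideal = [u * peak for u in util]

    def area(ys):                        # trapezoid rule over the util axis
        return sum((ys[i] + ys[i + 1]) / 2 * (util[i + 1] - util[i])
                   for i in range(len(ys) - 1))

    return 1 - (area(power) - area(ideal)) / area(ideal)

util = [0.0, 0.5, 1.0]
power = [100.0, 180.0, 200.0]            # high idle power: poorly proportional
ep = energy_proportionality(util, power)
```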
12. Towards a Two Layers Based Scheduling Schema for Data Aware Strategies
- Author
- Christophe Cérin, Walid Saad, Tarek Menouer, and Souha Bejaoui
- Subjects
Computer science, Distributed computing, Data management, Networking & telecommunications, Cloud computing, Dynamic priority scheduling, Scheduling (computing), Service-level agreement, Workflow, Data access, Information systems
- Abstract
This paper discusses various aspects of data management in the context of executing scientific workflows in the cloud. We offer a service-based solution, i.e., Workflow as a Service, considering additional factors such as the control of data propagation and service level agreements. One important aspect is the optimization of resource allocation; to accomplish this, a dynamic allocation approach based on certain heuristics is adopted and tested. The aim of this paper is to introduce the basic concepts and some implementation details of a two-layer scheduling system that globally takes into account data access and space requirements. At the cloud level, we propose a scheduling strategy based on different Service Level Agreement (SLA) classes. The novelty of our strategy consists in using the SLA class of the user to provision the container that will execute the service, based on a dynamic computation of the number of resources. The user does not specify the exact amount of resources but a range. The key idea of the paper is to relax strict constraints on the number of resources for data management and to let the system adjust the number of resources according to the execution context. The two layers are validated through simulation and through emulation conducted on the Grid'5000 testbed in a heterogeneous context. The results of our experiments demonstrate the potential of our approaches as general means for better control of data movement and placement in the cloud when executing scientific workflows. We also provide insights into coupling the two proposed layers in a coherent way.
- Published
- 2018
- Full Text
- View/download PDF
13. Return of experience on the mean-shift clustering for heterogeneous architecture use case
- Author
- Mustapha Lebbah, Fouste Yuehgoh, Jean-Luc Gaudiot, and Christophe Cérin
- Subjects
Set (abstract data type), Software, Memory management, Computer science, Distributed computing, Big data, Hardware acceleration, Algorithm design, Architecture, Field-programmable gate array
- Abstract
The exponential growth in data size poses new challenges for computer scientists, giving rise to a new set of methodologies under the term Big Data. Many efficient machine learning algorithms have been proposed to cope with time and memory requirements. With hardware acceleration, moreover, multiple software instructions can be integrated and executed on a single hardware die, and current research aims at relieving the user of the burden of using multiple processor types. In this paper we report on our experience with a new way of implementing machine learning algorithms on heterogeneous hardware. To explore our vision, we use a parallel mean-shift algorithm developed at LIPN as a case study for investigating the issues in building efficient machine learning libraries for heterogeneous systems. The ultimate goal is to provide a core set of building blocks for machine learning programming that could serve either to build new applications on heterogeneous architectures or to control the evolution of the underlying platform. We thus examine the difficulties encountered during the implementation of the algorithm, with the aim of discovering methodologies for building systems based on heterogeneous hardware. We also identify issues and building blocks for solving concrete machine learning (ML) problems on the Chisel software stack we use for this purpose.
- Published
- 2017
- Full Text
- View/download PDF
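The mean-shift algorithm at the heart of entry 13 repeatedly moves each point to the mean of its neighbours within a bandwidth until the points collapse onto cluster modes. A 1-D, flat-kernel sketch; the paper's parallel, hardware-accelerated version is far more elaborate:

```python
def mean_shift(points, bandwidth=1.0, iters=20):
    """Flat-kernel mean shift in 1-D: each mode estimate moves to the mean
    of the data points within `bandwidth`; estimates converge to the modes
    (cluster centres) of the data."""
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neighbours = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neighbours) / len(neighbours))
        modes = new_modes
    return modes

data = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]      # two obvious groups
modes = mean_shift(data, bandwidth=1.0)
clusters = sorted({round(m, 3) for m in modes})
```

All six starting points drift to one of two modes, so the number of clusters falls out of the data rather than being fixed in advance, which is what makes mean shift attractive as a building block.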
14. A Novel Optimization Technique for Mastering Energy Consumption in Cloud Data Center
- Author
- Mohamed Jemni, Christophe Cérin, Hazem Fkaier, and Omar Ben Maaouia
- Subjects
Consumption (economics), Computer science, Heuristic (computer science), Distributed computing, Networking & telecommunications, Cloud computing, Energy consumption, Software deployment, Server, Carbon dioxide, Electric power, Efficient energy use
- Abstract
Cloud computing is one of the most popular technologies nowadays because of its wide utility and the benefits it brings to IT companies all over the world. However, faced with increasing user requests for computing services, cloud providers are encouraged to deploy large data centers, which consume very large amounts of energy and contribute to high operational costs. Among the effects, the carbon dioxide emission rate grows each day due to the huge amount of power consumed. Energy efficiency is therefore an important issue in cloud computing, mainly because of the electrical power required to run these systems and to cool them, and energy consumption has become a major concern for the widespread deployment of cloud data centers. The growing importance of parallel applications in the cloud introduces significant challenges in reducing the energy consumed by the hosting servers. This paper addresses the problem of placing independent applications on the physical servers (hosts) of a cloud infrastructure. We propose a novel heuristic that allocates applications so that total energy consumption is reduced. Our proposal respects various constraints, e.g., machine availability, capacity, and the duplication of applications. Experiments are presented to validate the potential of our approach.
- Published
- 2017
- Full Text
- View/download PDF
15. Scheduling and Resource Management Allocation System Combined with an Economic Model
- Author
- Christophe Cérin and Tarek Menouer
- Subjects
Distributed computing, Operations research, Computer science, Bin packing problem, Service level, Processor scheduling, Economic model, Resource management, Scheduling (computing)
- Abstract
This paper presents a new scheduling and resource management allocation system based on an economic model with different classes of SLAs (Service Level Agreements). The goal is to address the problems of companies that manage a private infrastructure of machines and would like to optimize the scheduling of several requests submitted online by users. Each request is an application executed using a set of computing resources. Our economic model has two SLA classes, a qualitative one and a quantitative one. The qualitative class represents the satisfaction-time criterion, i.e., the time a user waits before the execution of their request; the quantitative class represents the number of resources that must be allocated to execute the request. As a first contribution, our system dynamically allocates, for each selected request, a set of computing cores according to the quantitative SLA class and the load of the parallel machines across the infrastructure. To choose the machine that will execute a selected request, we use a Bin Packing heuristic to minimize the number of machines in use and reduce the cost of the infrastructure. As a second contribution, simulations of our system conducted on Prezi and Google Cloud data traces demonstrate the potential of our approach under different scenarios.
- Published
- 2017
- Full Text
- View/download PDF
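Entry 15 selects the machine for each request with a bin packing heuristic so as to minimize the number of machines in use. The abstract does not say which heuristic; First-Fit Decreasing is a standard choice and illustrates the idea (request sizes are invented):

```python
def first_fit_decreasing(requests, capacity):
    """First-Fit Decreasing: sort requests by core demand, place each on
    the first machine with enough free cores, open a new machine otherwise.
    Fewer open machines means lower infrastructure cost."""
    machines = []       # remaining free cores per open machine
    placement = []      # (request name, machine index)
    for name, cores in sorted(requests, key=lambda r: -r[1]):
        for i, free in enumerate(machines):
            if cores <= free:
                machines[i] -= cores
                placement.append((name, i))
                break
        else:                                   # no machine fits: open one
            machines.append(capacity - cores)
            placement.append((name, len(machines) - 1))
    return placement, len(machines)

reqs = [("a", 4), ("b", 3), ("c", 2), ("d", 5)]   # cores per request
placement, used = first_fit_decreasing(reqs, capacity=8)
```

The four requests (14 cores total) fit on two 8-core machines, which is optimal here; a naive first-come placement could have opened a third.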
16. Automated Planner for Energy-Efficient Buildings
- Author
- Yanik Ngoko and Christophe Cérin
- Subjects
Emulation, Computer science, Energy, Collective intelligence, Optimal control, Planner, Industrial engineering, Server, Scalability, Coordination game, Efficient energy use
- Abstract
In this paper, we consider the energy-efficient coordination of a set of appliances in a smart building. We introduce a theoretical formulation of the coordination problem and an Integer Linear Programming model for its resolution. Our formalization is complemented by an analysis of the properties and limits of the model. We also define a practical smart-building setting in which our formalization holds. Finally, we conduct several experiments with realistic data and perform a scalability and sensitivity analysis. The experimental results correspond to emulation on an industrial infrastructure for smart buildings, that of Qarnot computing, and are obtained using the company's API. They show that we can quickly solve the problem for small and medium-sized buildings and for realistic settings. They also open interesting questions regarding the optimal model of control in future intelligent buildings: should residents let a collective intelligence decide on the optimal control of their appliances, or is it more appropriate that each user decides for themselves?
- Published
- 2017
- Full Text
- View/download PDF
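The Integer Linear Programming model of entry 16 is not given in the abstract; its flavour can be conveyed by a toy version solved by exhaustive search, where unit-length appliance jobs are assigned to time slots to minimize energy cost under a capacity constraint. All numbers and the problem shape are illustrative assumptions:

```python
from itertools import product

def plan(appliances, slots, price, max_per_slot):
    """Toy coordination problem: choose one start slot per unit-length
    appliance job, minimizing total energy cost (kWh * price per slot),
    with at most `max_per_slot` appliances running at once. Solved by
    brute force here; the paper uses an ILP solver instead."""
    best = None
    for starts in product(range(slots), repeat=len(appliances)):
        if any(starts.count(t) > max_per_slot for t in range(slots)):
            continue                              # capacity violated
        cost = sum(appliances[i] * price[starts[i]]
                   for i in range(len(appliances)))
        if best is None or cost < best[0]:
            best = (cost, starts)
    return best

# two 1 kWh appliances, three slots, at most one appliance per slot
cost, starts = plan([1.0, 1.0], slots=3,
                    price=[0.30, 0.10, 0.20], max_per_slot=1)
```

The planner pushes both jobs into the two cheapest slots, the same shifting-to-cheap-periods behaviour the ILP formalizes at building scale.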
17. A New Docker Swarm Scheduling Strategy
- Author
- Walid Saad, Wiem Ben Abdallah, Christophe Cérin, and Tarek Menouer
- Subjects
Multi-core processor, Service-level agreement, Emulation, Computer science, Computation, Distributed computing, Load modeling, Swarm behaviour, Scheduling (computing)
- Abstract
This paper presents our initial ideas for a new scheduling strategy integrated into the Docker Swarm scheduler. The aim is to introduce the basic concepts and implementation details of a new scheduling strategy based on different Service Level Agreement (SLA) classes. This strategy is proposed to address the problems of companies that manage a private infrastructure of machines and would like to optimize the scheduling of several requests submitted online by users, where each request is a demand to create a container. Currently, Docker Swarm has three basic scheduling strategies (spread, binpack, and random), each of which executes a container with a fixed number of resources. The novelty of our strategy consists in using the SLA class of the user to provision the container that must execute the service, based on a dynamic computation of the number of CPU cores to allocate to the container according to the user's SLA class and the load of the parallel machines in the infrastructure. Our new strategy is tested, by emulation, on different parts of our general framework, and the tests demonstrate the potential of our approach for further development.
- Published
- 2017
- Full Text
- View/download PDF
18. GC-CR: A Decentralized Garbage Collector Component for Checkpointing in Clouds
- Author
- Christophe Cérin, Thouraya Louati, Mohamed Jemni, and Heithem Abbes
- Subjects
Computer science, Replica, Distributed computing, Networking & telecommunications, Context (language use), Fault tolerance, Cloud computing, Virtualization, Replication (computing), Container (abstract data type), Operating system, Garbage collection
- Abstract
Infrastructure-as-a-Service container-based virtualization technology is gaining significant interest in industry as an alternative platform for running distributed applications. With the increasing scale of cloud computing architectures, faults are becoming a frequent occurrence, and Checkpoint-Restart is a key method for surviving failures in this context. However, there is a need to reduce the amount of checkpointing data, as the cloud is based on the pay-as-you-go model. This paper addresses the issue of garbage collection in LXCloud-CR and contributes a novel decentralized garbage collection component, GC-CR. LXCloud-CR, a decentralized Checkpoint-Restart model, is able to take snapshots of Linux Container instances and uses replication to increase snapshot availability, with a versioning scheme for each replica. The disadvantage is that versioning raises snapshot availability issues as the number of useless files grows. GC-CR is a decentralized garbage collection (checkpoint deletion) component that identifies and eliminates old snapshot versions from the system in order to free storage space. Large-scale experiments on the Grid'5000 testbed demonstrate the benefits of our proposal: the results validate our model and show a significant reduction in storage space consumption.
- Published
- 2017
- Full Text
- View/download PDF
19. Challenges of Translating HPC Codes to Workflows for Heterogeneous and Dynamic Environments
- Author
- Christophe Cérin, Imad Kissami, Fayssal Benkhaldoun, and Walid Saad
- Subjects
Computer science, Distributed computing, Cloud computing, Supercomputer, Workflow engine, Scheduling (computing), Workflow, Graph (abstract data type), Distributed, Parallel, and Cluster Computing (cs.DC), Edge computing
- Abstract
In this paper we share our experience in transforming a parallel code for a Computational Fluid Dynamics (CFD) problem into a parallel version for the RedisDG workflow engine. This system is able to capture heterogeneous and highly dynamic environments thanks to opportunistic scheduling strategies. We show how to move to the field of "HPC as a Service" in order to use heterogeneous platforms. Through the CFD use case, we explain how to transform the parallel code, and we exhibit the challenges of 'unfolding' the task graph dynamically in order to improve the overall performance (in a broad sense) of the workflow engine. We discuss in particular the impact of such a dynamic feature on the workflow engine. This paper argues that new models for High Performance Computing are possible, provided we reconsider our thinking in light of the potential of new paradigms such as cloud and edge computing.
- Published
- 2017
- Full Text
- View/download PDF
20. An Edge Computing Platform for the Detection of Acoustic Events
- Author
-
Christophe Cérin and Yanik Ngoko
- Subjects
Engineering ,ALARM ,business.industry ,Server ,Embedded system ,Real-time computing ,Reference architecture ,business ,Edge computing ,Data modeling - Abstract
We introduce the Qarnot platform, a new edge computing platform for the design of smart buildings. Our proposition is based on a new model of servers that also serve as heaters. As a use case, we consider the recognition of acoustic events. We describe a reference architecture for the processing of acoustic flows in the Qarnot platform, and we present experimental results on the recognition of alarm sounds.
- Published
- 2017
- Full Text
- View/download PDF
21. Reducing the Number of Comatose Servers: Automatic Tuning as an Opportunistic Cloud-Service
- Author
-
Yanik Ngoko and Christophe Cérin
- Subjects
060201 languages & linguistics ,Service (systems architecture) ,Exploit ,business.industry ,Computer science ,Software as a service ,Cloud computing ,06 humanities and the arts ,02 engineering and technology ,computer.software_genre ,Software ,Green computing ,Server ,0602 languages and literature ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,020201 artificial intelligence & image processing ,Computational problem ,business ,computer ,Computer network - Abstract
This paper deals with reducing the number of comatose servers. The characteristic of such a server is to consume electricity while not delivering useful information services. According to recent studies, up to 30% of servers (including those in datacenters) are comatose. The existence of these servers lowers the interest in clouds for green computing. Our paper assumes a cloud provider whose services are operated on a minimal number of dedicated servers in a datacenter; we also assume that this provider delivers software as a service (SaaS). In order to reduce the number of dedicated servers that could become comatose, we propose to automatically generate and run auto-tuning tasks that, during servers' idle time, learn to calibrate the execution of the software delivered by the provider. Our proposition follows the goal of delivering auto-tuning as an opportunistic cloud service. For this purpose we introduce an opportunistic auto-tuning service (QTuning) that exploits servers' idle time. The service is based on a theoretical model that includes two computational problems: an exploration problem and an exploitation problem. In the former, the goal is to evaluate the performance of a software package in different configurations. In the latter, given a set of performance results for a given software package, the goal is to compute the best configurations with which to run it. Our second contribution is to propose and evaluate solutions for the exploitation problem.
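A minimal sketch of the exploitation step, assuming the exploration phase produced simple (configuration, runtime) measurements; the configuration names and numbers below are invented for illustration, not QTuning's actual model:

```python
def best_configuration(perf_results):
    """Given measured (configuration, runtime) pairs for one software
    package, return the configuration with the lowest average runtime."""
    totals, counts = {}, {}
    for config, runtime in perf_results:
        totals[config] = totals.get(config, 0.0) + runtime
        counts[config] = counts.get(config, 0) + 1
    # exploitation: pick the configuration that performed best on average
    return min(totals, key=lambda c: totals[c] / counts[c])

measured = [("4-threads", 12.0), ("8-threads", 9.5),
            ("4-threads", 11.0), ("8-threads", 10.5)]
print(best_configuration(measured))  # → 8-threads
```

The interesting part of the paper is deciding *which* configurations to measure in the first place, under an idle-time budget; this sketch only covers the final selection.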
- Published
- 2017
- Full Text
- View/download PDF
22. Distributed and in-Situ Machine Learning for Smart-Homes and Buildings: Application to Alarm Sounds Detection
- Author
-
Christophe Cérin, Amaury Durand, and Yanik Ngoko
- Subjects
Point (typography) ,business.industry ,Computer science ,Distributed computing ,Real-time computing ,Machine learning ,computer.software_genre ,Data modeling ,030507 speech-language pathology & audiology ,03 medical and health sciences ,ALARM ,Workflow ,Utility computing ,Server ,Orchestration ,Artificial intelligence ,Orchestration (computing) ,0305 other medical science ,business ,computer - Abstract
We consider the implementation of an in-situ machine learning system with the computing model promoted by Qarnot computing. Qarnot introduced a utility computing model in which servers are distributed in homes and offices, where they serve as heaters. The Qarnot servers also embed several sensors for temperature, humidity, CO2, etc. Qarnot offers an adequate platform to develop in-situ workflows for smart-home problems. To demonstrate this point, we consider a typical problem: the detection of alarm sounds. Our paper introduces a new orchestration system for in-situ workflows in the Qarnot platform. We also consider a general parallel framework for training alarm sound classifiers and derive an implementation that makes use of our orchestrator. Finally, we evaluate the implemented framework on different aspects, including the accuracy of the resulting classifiers and the runtime gain of the parallelization.
- Published
- 2017
- Full Text
- View/download PDF
23. Towards optimizing energy consumption in Cloud
- Author
-
Mohamed Jemni, Omar Ben Maaouia, Hazem Fkaier, and Christophe Cérin
- Subjects
Exploit ,business.industry ,Computer science ,Software deployment ,Quality of service ,Distributed computing ,Server ,Shortest path problem ,Information technology ,Cloud computing ,Energy consumption ,business - Abstract
The rapid growth in demand for Information Technology (IT) resources has led to the creation of large-scale data centers. These consume an enormous amount of electrical energy, resulting in high operating costs and carbon dioxide emissions. Moreover, modern Cloud computing environments should provide a high quality of service (QoS) to their customers. Thus, energy consumption has become a major concern for the widespread deployment of Cloud data centers. The growing importance of parallel applications in the Cloud introduces significant challenges in reducing the energy consumption of hosting servers. With the Cloud, customers can transparently access virtually unlimited resources. The volunteer computing paradigm has also become increasingly important: idle resources on personal machines are shared voluntarily by their owners. Cloud and volunteer paradigms have recently been seen as complementary technologies to better exploit the use of local resources. This paper surveys the most well-known research in the literature on minimizing energy consumption in traditional Cloud computing as well as in volunteer computing. A novel approach, based on searching for a shortest path in a graph, is also introduced.
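The shortest-path idea can be illustrated with a plain Dijkstra search over a graph whose edge weights stand for energy costs; the host graph and weights below are hypothetical, and the paper's actual graph formulation may differ:

```python
import heapq

def min_energy_path(graph, source, target):
    """Dijkstra's algorithm over a graph whose edge weights model the
    energy cost of moving work between nodes; returns (cost, path)."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical hosts with energy-weighted links between them.
hosts = {"A": [("B", 3.0), ("C", 1.0)],
         "C": [("B", 1.0)],
         "B": []}
print(min_energy_path(hosts, "A", "B"))  # → (2.0, ['A', 'C', 'B'])
```

The detour through C wins because its total energy cost (2.0) beats the direct link (3.0), which is the essence of modeling placement decisions as a shortest-path problem.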
- Published
- 2017
- Full Text
- View/download PDF
24. Message from DataCom 2015 Chairs
- Author
-
Geoffrey Fox, Yeh-Ching Chung, Hao Wang, Beniamino Di Martino, Christophe Cérin, Weizhe Zhang, DataCom 2015, Fox, Geoffrey, Chung, Yeh Ching, DI MARTINO, Beniamino, Cérin, Christophe, Zhang, Weizhe, and Wang, Hao
- Subjects
Signal processing ,Sociology and Political Science ,Multimedia ,Computer science ,Urban studies ,Information and Computer Science ,Computer Science Applications ,Computer Vision and Pattern Recognition ,Information System ,computer.software_genre ,Urban Studies ,Modeling and simulation ,Computer Networks and Communication ,Human–computer interaction ,Modeling and Simulation ,Signal Processing ,Media Technology ,Information system ,computer - Published
- 2015
- Full Text
- View/download PDF
25. Parallelization of the ADAPT 3D Streamer Propagation Code
- Author
-
Christophe Cérin, Gilles Scarella, Imad Kissami, and Fayssal Benkhaldoun
- Subjects
Partial differential equation ,business.industry ,Computer science ,Linear system ,010103 numerical & computational mathematics ,Parallel computing ,Computational fluid dynamics ,Solver ,01 natural sciences ,Computational science ,010101 applied mathematics ,Flow (mathematics) ,Direct methods ,0101 mathematics ,Poisson's equation ,business - Abstract
In order to run Computational Fluid Dynamics (CFD) codes on large-scale infrastructures, parallel computing has to be used because of the computationally intensive nature of the problems. In this paper we investigate the 3D version of the ADAPT platform, where we couple flow partial differential equations with a Poisson equation. This coupling leads to a linear system that we solve using direct methods. The implementation is conducted with the MUMPS parallel multifrontal direct solver and with METIS for mesh partitioning, to improve the overall performance of the framework. In this paper we specifically investigate how the mesh partitioning methods optimize the mesh cell distribution of the ADAPT solver.
- Published
- 2016
- Full Text
- View/download PDF
26. Improving the Quality of Online Search Services: On the Service Multi-selection Problem
- Author
-
Yanik Ngoko, Alfredo Goldman, and Christophe Cérin
- Subjects
Service (business) ,Operations research ,Computer science ,media_common.quotation_subject ,Parallel algorithm ,Reservation ,02 engineering and technology ,computer.software_genre ,020204 information systems ,Online search ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,Data mining ,Special case ,Integer programming ,computer ,Selection (genetic algorithm) ,media_common - Abstract
The global objective of this study is to propose solutions for improving the quality of online search services. We consider the special case where the processing of requests submitted to such services consists of querying several types of sub-services, followed by the composition of the outputs produced by the sub-services. This is the case, for instance, of a booking service that proposes travel packs including hotel reservation, car rental and flight booking. We propose to improve the quality of such services through the multi-selection problem. The goal in this problem is to select the subset of sub-services of each type to use in the processing of a search request. The selection must ensure that we maximize the quality of the results we can expect from the search. The multi-selection problem is close to the service selection problem; however, while in the latter we are interested in a unique sub-service per type, in the former we want to choose a subset of sub-services. Our paper introduces a theoretical formulation of the problem and demonstrates its NP-hardness. We also propose two approaches for its resolution. The first approach is based on Integer Linear Programming. The second combines a parallel algorithm portfolio with sampling techniques. Finally, we make a comparative evaluation of the approaches based on practical scenarios.
- Published
- 2016
- Full Text
- View/download PDF
27. An Automatic Tuning System for Solving NP-Hard Problems in Clouds
- Author
-
Denis Trystram, Christophe Cérin, Valentin Reis, Yanik Ngoko, Laboratoire d'Informatique de Paris-Nord (LIPN), Université Paris 13 (UP13)-Institut Galilée-Université Sorbonne Paris Cité (USPC)-Centre National de la Recherche Scientifique (CNRS), Qarnot Computing [Montrouge], Data Aware Large Scale Computing (DATAMOVE ), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Laboratoire d'Informatique de Grenoble (LIG ), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019]), Université Grenoble Alpes [2016-2019] (UGA [2016-2019]), and Université Sorbonne Paris Cité (USPC)-Institut Galilée-Université Paris 13 (UP13)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Theoretical computer science ,Active learning (machine learning) ,Computer science ,Process (computing) ,0102 computer and information sciences ,02 engineering and technology ,Resolution (logic) ,01 natural sciences ,Random search ,Statistical classification ,010201 computation theory & mathematics ,Active learning ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Algorithm design ,[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] ,Random search algorithm ,Boolean satisfiability problem - Abstract
International audience; Traditional automatic tuning systems are based on an exploration-exploitation tradeoff that consists of learning the behavior of the algorithm to tune on several benchmarks (exploration) and then using the learned behavior to solve new problem instances (exploitation). For NP-hard algorithms, this vision is questionable because of the potentially huge runtime of the exploration phase. In this paper, we introduce QTuning, a new automatic tuning system specially designed for NP-hard algorithms. Like traditional tuning systems, QTuning uses benchmarks, but during the learning process new benchmark entries can always be introduced or existing ones removed. Moreover, the system mixes the exploration and exploitation phases. The main contribution of this paper is to formulate the learning process in QTuning within an active learning framework. The framework is based on a classical observation made in optimization, namely the efficiency of random search in regret minimization. We improve our random search algorithm by including a machine learning classification approach and a set intersection problem. Finally, we discuss the experimental evaluation of the framework for the resolution of the satisfiability problem.
- Published
- 2016
- Full Text
- View/download PDF
28. The Promethee Method for Cloud Brokering with Trust and Assurance Criteria
- Author
-
Christophe Cérin, Christian Toinard, Yanik Ngoko, and Timothee Ravier
- Subjects
Service (systems architecture) ,Optimization problem ,Operations research ,Process (engineering) ,business.industry ,Computer science ,Quality of service ,Context (language use) ,Cloud computing ,Computer security ,computer.software_genre ,Data modeling ,Software deployment ,business ,computer ,Constraint (mathematics) ,Decision model - Abstract
In this paper we deal with the cloud brokering problem in the context of a multi-cloud infrastructure. The problem is by nature a multi-criterion optimization problem. The focus is put mainly (but not only) on the security/trust criterion, which is rarely considered in the literature. We use the well-known Promethee method to solve the problem, which is original in the context of cloud brokering. In other words, if we give a high priority to the secure deployment of a service, are we still able to satisfy all of the other required QoS constraints? Reciprocally, if we give a high priority to the RTT (Round-Trip Time) constraint for accessing the Cloud, are we still able to ensure a weak/medium/strong 'security level'? We decided to stay at a high level of abstraction for the problem formulation and to conduct experiments using real data. We believe that the design of the solution and the simulation tool we introduce in the paper are practical, thanks to the Promethee approach, which has been used for more than 25 years but never, to our knowledge, for solving Cloud optimization problems. We expect this study to be a first step towards better understanding potential constraints in terms of control over external cloud services, in order to implement them in a simple manner. The contributions of the paper are the modeling of an optimization problem with security constraints, the solution of the problem with the Promethee method, and an experimental study that plays with multiple constraints to measure the impact of each constraint on the solution. During this process, we also provide a sensitivity analysis of the Consensus Assessments Initiative Questionnaire by the Cloud Security Alliance (CSA). The analysis deals with the variety, balance and disparity of the questionnaire answers.
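A minimal sketch of a Promethee-II-style ranking with the 'usual' (strict-preference) criterion; the cloud offers, criteria and weights below are invented, and the paper's actual criteria and preference functions may well differ:

```python
def promethee_ranking(alternatives, weights):
    """Rank alternatives by net outranking flow (Promethee II sketch).

    Uses the 'usual' preference function: on each criterion, a is
    strictly preferred to b if its score is higher. `alternatives`
    maps a name to a list of criterion scores (higher is better);
    `weights` are the criterion weights, summing to 1.
    """
    names = list(alternatives)
    n = len(names)

    def pi(a, b):  # aggregated preference degree of a over b
        return sum(w for w, sa, sb in
                   zip(weights, alternatives[a], alternatives[b]) if sa > sb)

    # net flow = average (preference given - preference received)
    net_flow = {
        a: sum(pi(a, b) - pi(b, a) for b in names if b != a) / (n - 1)
        for a in names
    }
    return sorted(names, key=net_flow.get, reverse=True)

# Hypothetical offers scored on (security, 1/RTT, cost efficiency).
offers = {"cloud-1": [0.9, 0.4, 0.7],
          "cloud-2": [0.5, 0.8, 0.6],
          "cloud-3": [0.3, 0.9, 0.9]}
print(promethee_ranking(offers, weights=[0.5, 0.25, 0.25]))
```

With security weighted at 0.5, the most secure offer ranks first even though it has the worst RTT, which illustrates the trade-off question raised in the abstract.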
- Published
- 2015
- Full Text
- View/download PDF
29. Wide Area BonjourGrid as a Data Desktop Grid: Modeling and Implementation on Top of Redis
- Author
-
Mohamed Jemni, Leila Abidi, Heithem Abbes, Christophe Cérin, and Walid Saad
- Subjects
Computer science ,business.industry ,Distributed computing ,Data management ,Testbed ,Web-oriented architecture ,Cloud computing ,computer.software_genre ,Grid ,Grid computing ,Operating system ,Architecture ,business ,computer ,Protocol (object-oriented programming) - Abstract
Desktop Grid computing has been one of the success stories of recent years, using volunteer nodes that participate in projects. Now, with the emergence of Cloud Computing, the questions become: where to obtain resources, and how to coordinate them? Our assumption is that Desktop Grids will continue to survive if we are able to transform the old-fashioned client/server architecture into a new web-oriented architecture delivering services on demand. This paper revisits and extends the coordination protocol of BonjourGrid, a decentralized desktop grid system, based on the Publish-Subscribe paradigm and including a new tier for data management. The new protocol is designed according to a formal model using colored Petri nets. The protocol is verified and proved with CPN-Tools and implemented with Redis, a popular in-memory data store. We conducted experiments on the Grid'5000 testbed using 300 nodes. We analyze Redis performance and demonstrate that the extended version of the BonjourGrid system is fully operational.
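The Publish-Subscribe coordination idea can be sketched with an in-memory stand-in for Redis channels (the real implementation uses Redis itself; the channel names here are hypothetical):

```python
from collections import defaultdict

class PubSub:
    """In-memory stand-in for Redis publish/subscribe, illustrating the
    coordination style: masters and workers exchange messages on named
    channels instead of calling each other directly."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, channel, handler):
        self.handlers[channel].append(handler)

    def publish(self, channel, message):
        # deliver the message to every subscriber of the channel
        for handler in self.handlers[channel]:
            handler(message)

bus = PubSub()
log = []
bus.subscribe("worker/join", lambda msg: log.append(f"master saw {msg}"))
bus.publish("worker/join", "node-42")
print(log)  # → ['master saw node-42']
```

Decoupling senders from receivers this way is what lets the coordination protocol stay decentralized: no component needs a directory of its peers, only the channel names.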
- Published
- 2014
- Full Text
- View/download PDF
30. HPDIC Introduction and Committees
- Author
-
Yuqing Gao, Christophe Cérin, R.K. Shyamasundar, and Congfeng Jiang
- Subjects
World Wide Web ,Metadata ,Data visualization ,Database ,Distributed database ,Computer science ,Group method of data handling ,business.industry ,computer.software_genre ,business ,Throughput (business) ,computer - Published
- 2014
- Full Text
- View/download PDF
31. Towards Energy Efficient Allocation for Applications in Volunteer Cloud
- Author
-
Congfeng Jiang, Jian Wan, Yanik Ngoko, Paolo Gianessi, and Christophe Cérin
- Subjects
Energy management ,business.industry ,Computer science ,Distributed computing ,Provisioning ,Energy modeling ,Cloud computing ,Energy consumption ,Elasticity (cloud computing) ,Volunteer computing ,Resource allocation ,Unavailability ,business ,Efficient energy use - Abstract
We can view the topology of classical cloud infrastructures as data centers to which user machines are connected. In these architectures the computations are centered on a subset of the possible machines (the data centers). In our study, we propose an alternative view of clouds where both user machines and data centers are used for servicing requests. We refer to these clouds as volunteer clouds. Volunteer clouds offer potential advantages in elasticity and energy savings, but we also have to manage the unavailability of volunteer nodes. In this paper, we are interested in optimizing the energy consumed by the provisioning of applications in volunteer clouds. Given a set of applications requested by the cloud's clients for a window of time, the objective is to find the deployment plan that consumes the least energy. In comparison with many works in resource allocation, our specificity is in the management of the unavailability of volunteer nodes. We show that our core challenge can be formalized as an NP-hard and inapproximable problem. We then propose an ILP (Integer Linear Programming) model and various greedy heuristics for its resolution. Finally, we provide an experimental analysis of our proposal using realistic data and a realistic model of energy consumption. This work relies on modeling and simulation rather than emulation or experiments on real systems; however, the parameters and assumptions made for our simulations fit well with the knowledge generally accepted by researchers working on energy modeling and volunteer computing. Consequently, our work should be seen as a solid building block towards the implementation of allocation mechanisms in volunteer clouds.
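A greedy heuristic of the kind mentioned could look like the following first-fit sketch, where a node's expected unavailability inflates its energy score; the scoring rule and the data are illustrative assumptions, not the paper's exact heuristics:

```python
def greedy_placement(apps, nodes):
    """Place each application (largest CPU demand first) on the node
    with enough free capacity that adds the least expected energy.

    `apps` maps app name -> CPU demand; `nodes` maps node name ->
    (capacity, power_watts, availability in (0, 1]).
    """
    free = {name: cap for name, (cap, _, _) in nodes.items()}

    def energy_score(name):
        _, power, availability = nodes[name]
        # assumption: an unavailable volunteer forces re-runs,
        # so expected energy grows as availability shrinks
        return power / availability

    placement = {}
    for app, demand in sorted(apps.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in nodes if free[n] >= demand]
        if not candidates:
            return None  # no feasible placement under this heuristic
        chosen = min(candidates, key=energy_score)
        free[chosen] -= demand
        placement[app] = chosen
    return placement

apps = {"render": 4, "index": 2}
nodes = {"desk-1": (4, 80.0, 0.5), "desk-2": (8, 120.0, 0.9)}
print(greedy_placement(apps, nodes))
```

Note that the reliable 120 W node beats the flaky 80 W one under this scoring, which is exactly the kind of trade-off an unavailability-aware model must capture.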
- Published
- 2014
- Full Text
- View/download PDF
32. Modeling Energy Savings in Volunteers Clouds
- Author
-
Christophe Cérin, Jian Wan, Congfeng Jian, Paolo Gianessi, and Yanik Ngoko
- Subjects
Mathematical optimization ,business.industry ,Computer science ,Overhead (computing) ,Cloud computing ,Node (circuits) ,Energy consumption ,Computational problem ,business ,Time complexity ,Integer programming ,Efficient energy use - Abstract
In this paper we propose different models for the energy consumption of SlapOS, an existing cloud system. In this cloud, the data center comprises dedicated and volunteer machines; the latter are not always available. Our objective is to state how to plan the run of applications so as to minimize the global energy consumption, and we propose two models. In the first model, we assume that we have a finite number of homogeneous volunteer nodes on which our applications can be run. The objective is to determine which application to run on each node in order to minimize the overhead in energy consumption caused by these runs. We show that the key computational challenge in this problem consists in finding a feasible solution when one exists, and we propose a polynomial-time algorithm for it. In the second model, we assume that the volunteer nodes are heterogeneous. In this case, we again show how to find a feasible solution in polynomial time, but in comparison to the homogeneous case, the key computational problem to solve is NP-hard. We then propose an ILP (Integer Linear Programming) formulation to address it, and we evaluate it through various simulations of the SlapOS system in a realistic volunteer computing context.
- Published
- 2013
- Full Text
- View/download PDF
33. A Data Prefetching Model for Desktop Grids and the Condor Use Case
- Author
-
Mohamed Jemni, Christophe Cérin, Walid Saad, and Heithem Abbes
- Subjects
Grid computing ,Wide area ,Computer science ,Bandwidth limitation ,Distributed computing ,Testbed ,Operating system ,computer.software_genre ,Storage management ,computer ,Bottleneck ,Scheduling (computing) - Abstract
Data-aware scheduling improves the performance of data-intensive applications in distributed systems. Desktop Grids have been successfully used for solving scientific applications at low cost. However, since the resources are accessed through wide-area networks, a bottleneck arises from bandwidth limitations and the master-worker paradigm. Data prefetching has the potential to overlap the time spent exchanging data between nodes with computation. In this work, we propose a decentralized data prefetching approach for Bag-of-Tasks and DAG applications. Experiments using more than 200 machines on the Nancy site of the Grid'5000 testbed demonstrate that our model improves the performance of application execution.
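The overlap that prefetching enables can be sketched with a background thread that fetches the input of task i+1 while task i executes; `fetch` and `execute` are placeholder callbacks, and the real system is decentralized rather than a single pipeline:

```python
import threading
import queue

def run_with_prefetch(tasks, fetch, execute):
    """Overlap data transfers with computation: a background thread
    prefetches the next task's input while the current task runs."""
    ready = queue.Queue(maxsize=1)  # at most one prefetched input in flight

    def prefetcher():
        for task in tasks:
            ready.put((task, fetch(task)))  # blocks until consumer catches up
        ready.put(None)  # sentinel: no more tasks

    threading.Thread(target=prefetcher, daemon=True).start()

    results = []
    while (item := ready.get()) is not None:
        task, data = item
        results.append(execute(task, data))
    return results

out = run_with_prefetch(["t1", "t2", "t3"],
                        fetch=lambda t: f"data-for-{t}",
                        execute=lambda t, d: (t, d))
print(out)
```

When `fetch` is a slow wide-area transfer and `execute` is CPU-bound, the two proceed concurrently, which is where the elapsed-time savings come from.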
- Published
- 2013
- Full Text
- View/download PDF
34. HPDIC Introduction
- Author
-
Congfeng Jiang, Yuqing Gao, Christophe Cérin, and Jilin Zhang
- Subjects
Computer science ,Engineering ethics - Published
- 2013
- Full Text
- View/download PDF
35. Toward a data desktop grid computing based on BonjourGrid meta-middleware
- Author
-
Mohamed Jemni, Christophe Cérin, Heithem Abbes, and Walid Saad
- Subjects
Group method of data handling ,business.industry ,Computer science ,Distributed computing ,Data management ,Big data ,computer.software_genre ,Grid ,Grid computing ,Middleware (distributed applications) ,Scalability ,Operating system ,The Internet ,business ,computer - Abstract
Desktop Grid or Volunteer Computing systems form one of the biggest computing platforms by using idle resources over the Internet or over LANs. Desktop Grids have been successfully used for solving scientific applications around the world at low cost. However, the data requirements of e-Science applications increase dramatically with the emergence of data-intensive applications. Hence, data management becomes one of the major challenges of Desktop Grids. In this work, we describe important challenges that need to be met to implement a scalable data-intensive solution in Desktop Grid systems. In addition, we explore the state of the art in tools and frameworks for Big Data handling. The proposed solution will be integrated into the BonjourGrid Desktop Grid.
- Published
- 2013
- Full Text
- View/download PDF
36. EHA: The Extremely Heterogeneous Architecture
- Author
-
Jian-Jun Han, Won Woo Ro, Alfredo C. Salas, Chen Liu, Shaoshan Liu, Jean-Luc Gaudiot, and Christophe Cérin
- Subjects
Server farm ,Application-specific integrated circuit ,Computer science ,business.industry ,End user ,Embedded system ,Mobile computing ,Cloud computing ,Software architecture ,business ,Field-programmable gate array ,Mobile device - Abstract
The computer industry is moving towards two extremes: extremely high-performance, high-throughput cloud computing, and low-power mobile computing. Cloud computing, while providing high performance, is very costly. Google and Microsoft Bing spend billions of dollars each year to maintain their server farms, mainly due to high power bills. On the other hand, mobile computing is under a very tight energy budget, yet end users demand ever increasing performance on these devices. This trend indicates that conventional architectures are not able to deliver high performance and low power consumption at the same time, and we need a new architecture model to address the needs of both extremes. In this paper, we thus introduce our Extremely Heterogeneous Architecture (EHA) project: EHA is a novel architecture that incorporates both general-purpose and specialized cores on the same chip. The general-purpose cores take care of generic control and computation. The specialized cores, including GPUs, hard accelerators (ASIC accelerators), and soft accelerators (FPGAs), are designed for accelerating frequently used or heavyweight applications. When acceleration is not needed, the specialized cores are turned off to reduce power consumption. We demonstrate that EHA is able to improve performance through acceleration and at the same time reduce power consumption. Since EHA is a heterogeneous architecture, it is suitable for accelerating heterogeneous workloads on the same chip; data centers and clouds, for example, provide many services, including media streaming, searching, indexing, and scientific computations.
The ultimate goal of the EHA project is two-fold: first, to design a chip that is able to run different cloud services on it; through this design, we would be able to greatly reduce the cost, both recurring and non-recurring, of data centers/clouds. Second, to design a light-weight EHA that runs on mobile devices, providing end users with an improved experience even under tight battery budget constraints.
- Published
- 2012
- Full Text
- View/download PDF
37. Practical solutions for resilience in SlapOS
- Author
-
Christophe Cérin, Romain Courteaud, and Yingjie Xu
- Subjects
Leader election ,Computer science ,business.industry ,computer.internet_protocol ,Process (engineering) ,Software as a service ,Platform as a service ,Cloud computing ,Service-oriented architecture ,Computer security ,computer.software_genre ,Resource (project management) ,Grid computing ,business ,Software engineering ,Enterprise resource planning ,computer - Abstract
SlapOS is an open source operating system for distributed cloud computing based on the motto "everything is a process". SlapOS combines grid computing and Enterprise Resource Planning (ERP) to provide Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) through a simple, unified API which one can learn in a matter of minutes. SlapOS opens new perspectives for research in the area of resilience and security in the Cloud. In this paper we address the lack of resiliency at current IaaS providers, and we show how the combination of a simple leader election algorithm with a resource and monitoring system may help the SlapOS cloud system solve this issue. The approach exhibits technical and research directions for more elaborate studies.
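A simple leader election of the kind mentioned can be sketched as a bully-style rule (the highest reachable identifier wins); this is a generic illustration under invented node identifiers, not SlapOS's actual algorithm:

```python
def elect_leader(node_ids, alive):
    """Bully-style election sketch: among the nodes currently
    reachable (per the `alive` predicate), the one with the highest
    identifier becomes the leader."""
    reachable = [n for n in node_ids if alive(n)]
    if not reachable:
        raise RuntimeError("no reachable node to elect")
    return max(reachable)

cluster = [3, 7, 12, 25]
failed = {25}  # the previous leader has crashed
print(elect_leader(cluster, alive=lambda n: n not in failed))  # → 12
```

Re-running the election whenever the monitoring system reports a failure gives the system a new coordinator without any central authority, which is the resilience property the paper targets.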
- Published
- 2012
- Full Text
- View/download PDF
38. A Self-Configurable Desktop Grid System On-Demand
- Author
-
Heithem Abbes, Mohamed Jemni, Christophe Cérin, and Walid Saad
- Subjects
Data grid ,business.industry ,Computer science ,Data management ,Distributed computing ,computer.software_genre ,Grid ,Semantic grid ,Grid computing ,Server ,Middleware ,Operating system ,High-throughput computing ,business ,computer - Abstract
Desktop Grids have been used intensively to deploy applications at low cost. Application types are classified into high-throughput computing and high-throughput data management. Desktop Grids such as Condor, BOINC, XtremWeb and OurGrid provide a wide range of high-throughput computing systems. On the other hand, several data management systems, such as BitDew, GatorShare, Stork and GridFTP, have been developed to handle huge amounts of data. Users are restricted to exploiting Desktop Grid resources with a specific computing system and a specific data management system. To address this limitation, we propose in this work an extension of the BonjourGrid middleware to support different data management systems by exploiting existing data protocols and middleware. With BonjourGrid, each user of the same infrastructure can select the desired Desktop Grid middleware among the most popular ones to deploy and execute their applications. Users can now select, in addition to the computing system, the desired data management protocol in a transparent and decentralized manner. Experiments using BitDew and Stork on the Grid'5000 platform demonstrate that the new release of BonjourGrid performs well at a low overhead.
- Published
- 2012
- Full Text
- View/download PDF
39. Computing Properties of Large Scalable and Fault-Tolerant Logical Networks
- Author
-
Christophe Cérin, Yu Lei, and Michel Koskas
- Subjects
Runtime system ,Computer science ,Distributed computing ,Scalability ,Probabilistic logic ,Overlay network ,Graph theory ,Probabilistic analysis of algorithms ,Fault tolerance ,Network topology - Abstract
As the number of processors embedded in high performance computing platforms grows ever higher, it is vital to push developers to enhance the scalability of their codes in order to exploit all the resources of the platforms. This often requires new algorithms, techniques and methods for code development that add new properties to the application code: the presence of faults is no longer an occasional event but a constant challenge. Scalability and fault-tolerance issues are also present in hidden parts of any platform: the overlay network that must be built to control the application, and the runtime support for messaging, which is also required to be scalable and fault tolerant. In this paper, we focus on the computational challenges of experimenting with large-scale (many millions of nodes) logical topologies. We compute fault-tolerance properties of different variants of Binomial Graphs (BMG) that are generated at random. For instance, we exhibit interesting properties regarding the number of links needed to obtain some desired fault-tolerance properties, and we compare different metrics with the Binomial Graph structure as the reference structure. A software tool has been developed for this study, and we show experimental results with topologies containing 21000 nodes. We also explain the computational challenge of dealing with such large-scale topologies, and we introduce various probabilistic algorithms to solve the problems of computing the conventional metrics.
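A binomial graph and the most basic fault-tolerance check (connectivity after node failures) can be sketched as follows; the BFS reachability test shown is only the simplest of the metrics a study like this would compute:

```python
from collections import deque

def binomial_graph(n):
    """Build a binomial graph (BMG): node i on a ring of n nodes links
    to the nodes at distance +/- 2^k for every power of two below n."""
    adjacency = {i: set() for i in range(n)}
    for i in range(n):
        k = 1
        while k < n:
            adjacency[i].add((i + k) % n)
            adjacency[i].add((i - k) % n)
            k *= 2
    return adjacency

def is_connected(adjacency, removed=frozenset()):
    """BFS reachability test on the surviving nodes after failures."""
    survivors = set(adjacency) - set(removed)
    if not survivors:
        return False
    start = next(iter(survivors))
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for neighbor in adjacency[node]:
            if neighbor in survivors and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen == survivors

g = binomial_graph(16)
print(is_connected(g, removed={0, 1, 2}))  # → True
```

At millions of nodes an exhaustive check over all failure sets is out of reach, which is why the paper resorts to probabilistic algorithms; this sketch only tests one failure pattern at a time.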
- Published
- 2011
- Full Text
- View/download PDF
40. A Decentralized Model for Controlling Selfish Use for Desktop Grid Systems
- Author
-
Christophe Cérin, Heithem Abbes, and Bassem Oueghlani
- Subjects
SIMPLE (military communications protocol) ,Computer science ,business.industry ,Distributed computing ,Testbed ,Grid ,computer.software_genre ,Grid computing ,Middleware ,Economic model ,business ,computer ,Protocol (object-oriented programming) ,Computer network - Abstract
This paper proposes a decentralized model for controlling selfish use of machines (volunteers) in a Desktop Grid system where budgets are allocated to machines. A machine's budget may increase by participating in other applications, or decrease when the machine requests participants. We also propose a decentralized implementation of the model on top of a simple economic model. The decentralized protocol is built on top of any distributed system based on peer-to-peer technologies, for instance PastryGrid, which we use in the experiments. The latter belongs to the family of desktop grid middleware such as Boinc, Condor, XtremWeb and OurGrid. PastryGrid is able to execute distributed applications, with precedence between tasks, in a fully decentralized manner, selecting the nodes that execute tasks 'on the fly'. This work proposes (1) a fully distributed mechanism for the budget management of any peer, where any peer can play the role of a trade manager, and (2) a fully decentralized approach to deal with selfish behaviors. Experiments conducted on the Grid'5000 testbed demonstrate that our system is operational, and the results obtained confirm its efficiency in the face of selfish use.
- Published
- 2011
- Full Text
- View/download PDF
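The budget idea above can be sketched as a simple ledger: a peer earns credit when it executes tasks for others and spends credit when it submits its own, so a selfish peer that never contributes is eventually throttled. This is a minimal single-process illustration of the economic model, not the actual decentralized PastryGrid protocol; the class and method names are hypothetical.

```python
class BudgetLedger:
    """Toy per-peer budget accounting (illustrative; the real system
    distributes this state across peers with no central ledger)."""

    def __init__(self, initial=10.0):
        self.initial = initial
        self.balance = {}

    def _acct(self, peer):
        # Every peer starts with the same initial budget.
        return self.balance.setdefault(peer, self.initial)

    def submit(self, requester, worker, cost):
        """Requester pays the worker for executing one of its tasks."""
        if self._acct(requester) < cost:
            raise PermissionError("insufficient budget: selfish use throttled")
        self.balance[requester] = self._acct(requester) - cost
        self.balance[worker] = self._acct(worker) + cost
```

In the decentralized version described by the paper, any peer can play the trade-manager role, so the accounting itself must survive without a central point of control.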
41. SlapOS: A Multi-Purpose Distributed Cloud Operating System Based on an ERP Billing Model
- Author
-
Christophe Cérin, Jean-Paul Smets-Solanes, and Romain Courteaud
- Subjects
Computer science ,business.industry ,computer.internet_protocol ,Software as a service ,Distributed computing ,Cloud computing ,Service-oriented architecture ,computer.software_genre ,NoSQL ,Grid ,Business process management ,Grid computing ,Server ,Operating system ,business ,computer - Abstract
SlapOS is an open source grid operating system for distributed cloud computing based on the motto 'everything is a process'. SlapOS combines grid computing and Enterprise Resource Planning (ERP) to provide Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) through a simple, unified API which one can learn in a matter of minutes. Thanks to its unified approach and modular architecture, SlapOS has been used as a research testbed to benchmark NoSQL databases and to optimize process allocation over an intercontinental Cloud. SlapOS opens new perspectives for research in the area of resilience and security on the Cloud.
- Published
- 2011
- Full Text
- View/download PDF
42. A Petri-Net Model for the Publish-Subscribe Paradigm and Its Application for the Verification of the BonjourGrid Middleware
- Author
-
Christophe Cérin, Sami Evangelista, and Leila Abidi
- Subjects
Grid computing ,Computer science ,Asynchronous communication ,Middleware (distributed applications) ,Distributed computing ,Context (language use) ,Deadlock ,Petri net ,Grid ,computer.software_genre ,Protocol (object-oriented programming) ,computer - Abstract
In this article we focus on the modelling of the BonjourGrid protocol, which is based on the Publish-Subscribe (Pub-Sub) paradigm, a paradigm for asynchronous communication that is useful for implementing some approaches in distributed programming. The aim of this paper is to isolate the generic construction mechanisms of the publish-subscribe approach, and then to model and verify, based on those mechanisms, the BonjourGrid protocol that coordinates multiple instances of desktop grid middleware. We produce models using colored Petri nets in order to describe a specific modeling approach for the Pub-Sub paradigm. Such models are important, first, to formally verify the adequacy of BonjourGrid for the coordination of resources in desktop grids - for example by proving the absence of deadlocks in the BonjourGrid protocol - and second, to offer a 'composition' mechanism for integrating any protocol based on the Pub-Sub paradigm. These ideas are illustrated through the BonjourGrid case study and constitute a methodology for building Pub-Sub systems.
- Published
- 2011
- Full Text
- View/download PDF
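The generic mechanism the paper models is the decoupling at the heart of publish-subscribe: a publisher emits on a topic without knowing who, if anyone, consumes the message. A minimal in-process sketch of that paradigm (illustrative only; BonjourGrid itself builds on Bonjour/mDNS, and the paper's contribution is the Petri-net model, not this code):

```python
from collections import defaultdict

class PubSubBus:
    """Toy topic-based publish-subscribe bus."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # Subscribers register interest in a topic, not in a publisher.
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # The publisher never learns which subscribers (if any) ran;
        # this anonymity is what makes the coordination asynchronous.
        for callback in list(self._subs[topic]):
            callback(message)
```

In a Petri-net model of this pattern, topics become places, publications become tokens, and subscriber deliveries become transitions, which is what lets tools check properties such as deadlock freedom.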
43. Fault tolerance based on the publish-subscribe paradigm for the BonjourGrid middleware
- Author
-
Christophe Cérin, Walid Saad, Mohamed Jemni, and Heithem Abbes
- Subjects
Grid computing ,Computer science ,Server ,Software fault tolerance ,Distributed computing ,Testbed ,Message passing ,Operating system ,Fault tolerance ,BOINC Credit System ,computer.software_genre ,Virtualization ,computer - Abstract
How can the machines of all Boinc, Condor and XtremWeb projects be federated? If you believe in volunteer computing and want to share more than one project, then BonjourGrid may help. In previous work, we proposed a novel approach, called BonjourGrid, to orchestrate multiple instances of Institutional Desktop Grid middleware. It is our way to remove the risk of bottlenecks and failures, and to guarantee the continuity of services in a distributed manner. Indeed, BonjourGrid can create a specific environment for each user based on a computing system of his choice, such as XtremWeb, Condor or Boinc. This work investigates, first, the procedure to deploy Boinc and Condor on top of BonjourGrid and, second, proposes a fault-tolerant approach based on passive replication and virtualization to tolerate the crash of coordinators. The novelty resides in an integrated environment based on Bonjour (a publish-subscribe mechanism) for both the coordination protocol and the fault-tolerance issues. In particular, to our knowledge it is not common to describe and implement a fault-tolerant protocol according to the pub-sub paradigm. Experiments conducted on the Grid'5000 testbed first illustrate a comparative study between Boinc (respectively Condor) on top of BonjourGrid and a centralized system using Boinc (respectively Condor), and second prove the robustness of the fault-tolerance mechanism.
- Published
- 2010
- Full Text
- View/download PDF
44. Fault-tolerance for PastryGrid middleware
- Author
-
Mohamed Jemni, Yazid Missaoui, Heithem Abbes, and Christophe Cérin
- Subjects
Pastry ,Grid computing ,Computer science ,Robustness (computer science) ,Software fault tolerance ,Distributed computing ,Testbed ,Fault tolerance ,Single point of failure ,computer.software_genre ,computer ,Bottleneck - Abstract
This paper analyses the performance of a decentralized and fault-tolerant software layer for Desktop Grid resource management. The grid middleware under concern is named PastryGrid. Its main design principle is to eliminate the need for a centralized server, and therefore to remove the single point of failure and the bottleneck of existing Desktop Grids. PastryGrid (based on Pastry) supports the execution of distributed applications with precedence between tasks in a decentralized way. Indeed, each node can alternately play the role of client or server. Our main contribution is a fault-tolerance mechanism for the PastryGrid middleware. Since the management of PastryGrid is distributed over the participants without a central manager, its control becomes a challenging problem, especially when dealing with faults. The experimental results on the Grid'5000 testbed demonstrate that our decentralized fault-tolerant system is robust because it supports high fault rates.
- Published
- 2010
- Full Text
- View/download PDF
45. BonjourGrid: Orchestration of multi-instances of grid middlewares on institutional Desktop Grids
- Author
-
Mohamed Jemni, Heithem Abbes, and Christophe Cérin
- Subjects
Collaborative software ,Computer science ,business.industry ,Distributed computing ,computer.software_genre ,Grid ,Grid computing ,Middleware ,Overhead (computing) ,Orchestration (computing) ,business ,computer ,Protocol (object-oriented programming) ,Host (network) - Abstract
While the rapidly increasing number of users and applications running on Desktop Grid (DG) systems demonstrates its inherent potential, current DG implementations follow the traditional master-worker paradigm, and DG middlewares do not cooperate. To extend the DG architecture, we propose a novel system, called BonjourGrid, capable of 1) creating, for each user, a specific execution environment in a decentralized fashion and, 2) contrary to classical DG, of orchestrating multiple and various instances of Desktop Grid middlewares. This enables us to construct, on demand, specific execution environments (combinations of the XtremWeb, Condor and Boinc middlewares). BonjourGrid is a software layer that links a discovery service based on a publish/subscribe protocol with the upper layer of a Desktop Grid middleware, bridging the gap to a meta-grid. Our experimental evaluation proves that BonjourGrid is robust and able to orchestrate more than 400 instances of the XtremWeb middleware concurrently on a 1000-host cluster. This experiment demonstrates the concept of BonjourGrid as well as its potential, and shows that, compared to a classical Desktop Grid with one central master, BonjourGrid suffers only an acceptable, explainable overhead.
- Published
- 2009
- Full Text
- View/download PDF
46. Experimental Study of Thread Scheduling Libraries on Degraded CPU
- Author
-
Christophe Cérin, Mohamed Jemni, and Hazem Fkaier
- Subjects
POSIX Threads ,Multi-core processor ,Job shop scheduling ,Memory hierarchy ,CPU cache ,Computer science ,Processor scheduling ,Symmetric multiprocessor system ,Linux kernel ,Parallel computing ,Thread (computing) ,computer.software_genre ,Execution time ,Scheduling (computing) ,Memory management ,Work stealing ,Multithreading ,Operating system ,Cache ,computer - Abstract
In this paper, we compare four libraries for efficiently running threads when the performance of CPU cores is degraded. First, we are interested in the raw performance of the libraries when all CPU resources are available; second, we would like to measure how the scheduling strategy also impacts memory management, in order to revisit, in the future, scheduling strategies when we artificially degrade performance in advance. It is well known that work stealing, when done in an anarchic way, may lead to poor cache performance. It is also known that the migration of threads may induce penalties if migrations are too frequent. We study, at the processor level, the memory management in order to find trade-offs between the number of active threads an application should start and the memory hierarchy. Our implementations, coded with the different libraries, were compared against a Pthread version where the threads are scheduled by the Linux kernel and not by a specific tool. Our experimental results indicate that a scheduler may perfectly balance the load over cores while still negatively impacting execution time. We also put forward a relation between L1 cache misses, the number of steals and the execution time, which will allow us to focus on specific points to improve work-stealing schedulers in the future.
- Published
- 2008
- Full Text
- View/download PDF
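The work-stealing mechanism the abstract discusses can be sketched with per-worker deques: the owner pushes and pops at one end (LIFO, which tends to reuse warm cache lines), while idle thieves steal the oldest tasks from the other end. The following is a hypothetical single-threaded simulation for counting steals, not any of the four libraries compared in the paper.

```python
import random
from collections import deque

class WorkStealingDeque:
    """Per-worker task deque: owner works the tail, thieves the head."""

    def __init__(self, tasks=()):
        self._d = deque(tasks)

    def push(self, task):
        self._d.append(task)

    def pop(self):                     # owner side: newest task (LIFO)
        return self._d.pop() if self._d else None

    def steal(self):                   # thief side: oldest task (FIFO)
        return self._d.popleft() if self._d else None

    def __len__(self):
        return len(self._d)

def simulate(num_workers, total_tasks, seed=0):
    """All tasks start on worker 0; idle workers steal from random
    victims. Returns per-worker execution counts and the steal count."""
    rng = random.Random(seed)
    deques = [WorkStealingDeque() for _ in range(num_workers)]
    for t in range(total_tasks):
        deques[0].push(t)
    executed = [0] * num_workers
    steals = 0
    while any(len(d) for d in deques):
        for w, d in enumerate(deques):
            task = d.pop()
            if task is None:           # idle: attempt one steal
                task = deques[rng.randrange(num_workers)].steal()
                if task is not None:
                    steals += 1
            if task is not None:
                executed[w] += 1
    return executed, steals
```

Counting steals this way mirrors the paper's observation: each steal moves a task to a different core, and the relation between steal counts and L1 cache misses is what the experimental study measures on real hardware.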
47. Improving Parallel Execution Time of Sorting on Heterogeneous Clusters
- Author
-
Mohamed Jemni, Michel Koskas, Hazem Fkaier, and Christophe Cérin
- Subjects
Constant factor ,Sorting problem ,Computational complexity theory ,Computer science ,Distributed computing ,Computation ,Parallel algorithm ,Symmetric multiprocessor system ,Parallel computing ,Load balancing (computing) ,Execution time - Abstract
The aim of the paper is to introduce techniques to optimize the parallel execution time of sorting on heterogeneous platforms (processor speeds related by a constant factor). We develop a constant-time technique for mastering processor load balancing and execution time in a heterogeneous environment. We develop an analytical model for the parallel execution time, supported by preliminary experimental results in the case of a 2-processor system. The computation of the solution is independent of the problem size; consequently, there is no overhead with respect to the sorting problem.
- Published
- 2004
- Full Text
- View/download PDF
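The core load-balancing idea when processor speeds differ by constant factors is to size each processor's share proportionally to its speed, so all processors finish their local work at roughly the same time. A minimal sketch of that proportional split (an illustration of the principle, not the paper's analytical model; the function name is hypothetical):

```python
def proportional_split(n, speeds):
    """Assign n records to processors so that processor i receives
    about n * speeds[i] / sum(speeds) of them."""
    total = sum(speeds)
    shares = [(n * s) // total for s in speeds]
    # Integer division may leave a few records unassigned; hand the
    # remainder to the fastest processors first.
    remainder = n - sum(shares)
    by_speed = sorted(range(len(speeds)), key=lambda i: -speeds[i])
    for i in range(remainder):
        shares[by_speed[i % len(by_speed)]] += 1
    return shares
```

Because the split is computed from the speed ratios alone, it takes constant time regardless of n, matching the abstract's claim that the solution is independent of the problem size.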
48. An out-of-core sorting algorithm for clusters with processors at different speed
- Author
-
Christophe Cérin
- Subjects
Bitonic sorter ,Sorting algorithm ,Computer science ,Factor (programming language) ,Multiplicative function ,Integer sorting ,Resource allocation ,Context (language use) ,Out-of-core algorithm ,Parallel computing ,computer ,computer.programming_language - Abstract
The paper deals with the problem of parallel external integer sorting in the context of a class of heterogeneous clusters. We explore some techniques inherited from the homogeneous and in-core cases to show how they can be deployed for clusters with processor performances related by a multiplicative factor.
- Published
- 2002
- Full Text
- View/download PDF
49. Algorithms for stable sorting to minimize communications in networks of workstations and their implementations in BSP
- Author
-
Christophe Cérin and Jean-Luc Gaudiot
- Subjects
Ethernet ,Workstation ,Computer science ,law ,Programming paradigm ,Parallel algorithm ,Parallel computing ,Shellsort ,Load balancing (computing) ,Myrinet ,Algorithm ,Quicksort ,law.invention - Abstract
We introduce a novel approach to producing BSP (Bulk Synchronous Parallel model) programs, and we show their efficiency by implementing the stable sorting problem on clusters of PCs. Experimental results on PCs with Ethernet and Myrinet cards are compared with implementations on an SGI 2000. The algorithms presented in the paper are either developed under the theoretical framework of the Regular Sampling technique, which guarantees good load-balancing properties, or are inspired by that technique in order to decrease the sequential work of each processor compared to Regular Sampling, but impose no (theoretical) bound on load balancing. The main sequential blocks of code used in the algorithms for local sorting are derivatives of Shellsort (which is stable) and a new code based on Quicksort (which is not stable) plus a property on real numbers that is used for stable sorting under the framework of BSR (Broadcast with Selective Reduction).
- Published
- 1999
- Full Text
- View/download PDF
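The Regular Sampling technique the abstract relies on is the basis of Parallel Sorting by Regular Sampling (PSRS): sort p blocks locally, gather p regular samples per block, pick p-1 pivots from the samples, then let processor j merge the j-th piece of every block. A sequential simulation of that scheme (a sketch of the general PSRS technique, assuming len(data) well above p*p, not the paper's BSP implementation):

```python
import bisect

def psrs(data, p):
    """Parallel Sorting by Regular Sampling, simulated sequentially."""
    n = len(data)
    # Phase 1: partition into p blocks and sort each locally.
    blocks = [sorted(data[i * n // p:(i + 1) * n // p]) for i in range(p)]
    # Phase 2: each block contributes p regularly spaced samples.
    samples = sorted(b[(len(b) * k) // p] for b in blocks for k in range(p))
    # Phase 3: choose p-1 pivots from the p*p gathered samples.
    pivots = samples[p::p][:p - 1]
    # Phase 4: split every block at the pivots; "processor" j collects
    # and merges the j-th piece of each block, so the keys end up
    # globally partitioned across processors.
    cuts = [[bisect.bisect_right(b, pv) for pv in pivots] for b in blocks]
    result = []
    for j in range(p):
        piece = []
        for b, c in zip(blocks, cuts):
            lo = c[j - 1] if j > 0 else 0
            hi = c[j] if j < p - 1 else len(b)
            piece.extend(b[lo:hi])
        result.extend(sorted(piece))
    return result
```

Regular sampling guarantees that no processor receives more than about twice its fair share of keys, which is the load-balancing bound the abstract refers to; the paper's variants trade away that bound to reduce per-processor sequential work.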
50. A Petri-Net Model for the Publish-Subscribe Paradigm and Its Application for the Verification of the BonjourGrid Middleware
- Author
-
Abidi, Leila, primary, Cérin, Christophe, additional, and Evangelista, Sami, additional
- Published
- 2011
- Full Text
- View/download PDF