13 results for "Riccardo Zappi"
Search Results
2. First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1
- Author
-
D. Andreotti, A. Sartirana, A. Cavalli, P. P. Ricci, S. Dal Pra, B. Martelli, L. dell'Agnello, Elisabetta Ronchieri, A. Prosperini, Riccardo Zappi, V. Sapunenko, A.C. Forti, L. Li Gioi, Daniele Bonacorsi, D. Gregori, C. Grandi, and V. Vagnoni
- Subjects
File system, Engineering, Large Hadron Collider, Database, business.industry, Scale test, Context (language use), computer.software_genre, Tier 1 network, Software deployment, Computer data storage, Operating system, IBM, business, computer
- Abstract
A brand new Mass Storage System solution called “Grid-Enabled Mass Storage System” (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System (GPFS) by IBM, and on the Tivoli Storage Manager (TSM) by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG “Scale Test for the Experiment Program” (STEP’09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are reviewed. The experiences in using GEMSS at CNAF while preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.
- Published
- 2011
- Full Text
- View/download PDF
3. An Efficient Grid Data Access with StoRM
- Author
-
Elisabetta Ronchieri, Riccardo Zappi, A.C. Forti, and Antonia Ghiselli
- Subjects
File system, Storage area network, Data grid, Grid computing, Computer science, Storage Resource Broker, Grid file, Data_FILES, Operating system, Lustre (file system), Disk storage, computer.software_genre, computer
- Abstract
In production data Grids, high-performance disk storage solutions using parallel file systems are becoming increasingly important to provide the reliability and high-speed I/O operations needed by High Energy Physics analysis farms. Today, Storage Area Network solutions are commonly deployed at Large Hadron Collider data centres, and parallel file systems such as GPFS and Lustre provide reliable, high-speed native POSIX I/O operations in a parallel fashion. In this paper, we describe StoRM, a Grid middleware component implementing the standard Storage Resource Manager v2.2 interface. Its architecture fully exploits the potential offered by the underlying cluster file system. Indeed, it enables and encourages the use of the native POSIX file protocol (i.e. "file://"), allowing a managed Storage Element to improve job efficiency in data access. A job running on a worker node can directly access the Storage Element managed by StoRM as if it were a local disk, instead of transferring data from Storage Elements to the local disk (see the sketch after this entry).
- Published
- 2011
- Full Text
- View/download PDF
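The direct-access pattern described in this abstract can be illustrated with a minimal sketch. Everything here is assumed for illustration: `srm_prepare_to_get` is a hypothetical stand-in for an SRM v2.2 srmPrepareToGet call, and the paths are placeholders; the point is only that the returned transfer URL uses the `file://` protocol, so the job opens the file in place instead of copying it.

```python
import os
from urllib.parse import urlparse

def srm_prepare_to_get(surl: str) -> str:
    """Hypothetical stand-in for an SRM v2.2 srmPrepareToGet call.

    A real client would contact the StoRM endpoint and receive a
    transfer URL (TURL); on a StoRM-managed GPFS or Lustre mount the
    TURL can use the native 'file' protocol.
    """
    # Illustrative only: pretend StoRM mapped the SURL to a local path.
    return "file:///storage/cms" + urlparse(surl).path

surl = "srm://storm.example.org:8444/cms/data/run001/events.root"
turl = srm_prepare_to_get(surl)

# Because the TURL is file://, the job reads the Storage Element
# like a local disk: no copy to the worker node's scratch area.
local_path = urlparse(turl).path
if os.path.exists(local_path):
    with open(local_path, "rb") as f:
        header = f.read(64)
```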
4. Activities and performance optimization of the Italian computing centers supporting the ATLAS experiment
- Author
-
Attilio Andreazza, Luca Vaccarossa, Lamberto Luminari, Silvia Resconi, Luca dell'Agnello, L. Magnoni, Gianpaolo Carlino, Simone Campana, Leonardo Merola, David Rebatto, Alessandra Doria, B. Martelli, Alessandro Brunengo, Massimo Pistolese, Lorenzo Rinaldi, Dario Barberis, Riccardo Zappi, Alessandro Di Girolamo, A.C. Forti, Agnese Martini, Daniela Anzellotti, Elisa Musto, Claudia Ciocca, Alessandro De Salvo, A. Italiano, Maria Lorenza Ferrer, Elisabetta Vilucchi, Davide Salomoni, Laura Perini, Maria Curatolo, and Mirko Corosu
- Subjects
GRID COMPUTING, Database, Atlas (topology), business.industry, Computer science, Control reconfiguration, Cloud computing, ATLAS, computer.software_genre, Failover, Oracle, Server, Resource allocation, LHC, Daemon, business, computer
- Abstract
In this work we present the activities and performance optimization of the Italian computing centres supporting the ATLAS experiment, which form the so-called Italian Cloud. We describe the activities of the ATLAS Italian Tier-2 Federation within the ATLAS computing model and present some original Italian contributions. We describe StoRM, a new Storage Resource Manager developed by INFN, deployed as a replacement of CASTOR at CNAF, the Italian Tier-1, and under test at the Tier-2 centres. We also show the failover solution for the ATLAS LFC, based on Oracle Data Guard, load-balancing DNS and LFC daemon reconfiguration, realized between CNAF and the Tier-2 in Rome. Finally, we describe the sharing of resources between analysis and production, recently implemented in the ATLAS Italian Cloud with the Job Priority mechanism.
- Published
- 2009
- Full Text
- View/download PDF
5. A novel approach for mass storage data custodial
- Author
-
L. Magnoni, M. Mazzucato, Elisabetta Ronchieri, Riccardo Zappi, V. Sapunenko, D. Vitlacil, B. Martelli, L. dell'Agnello, V. Vagnoni, P. P. Ricci, D. Gregori, A. Carbone, and Antonia Ghiselli
- Subjects
File system, Storage area network, Large Hadron Collider, Grid computing, Magnetic tape data storage, Database, Computer science, Interface (computing), Operating system, Petabyte, computer.software_genre, computer, Mass storage
- Abstract
The mass storage challenge for the Large Hadron Collider (LHC) experiments remains a critical issue for the various Tier-1 computing centres and the Tier-0 centre involved in the custodial storage and analysis of the data produced by the experiments. In particular, the requirements for the tape mass storage systems are quite demanding, amounting to several petabytes of data that should be available for near-line access at any time. Besides the solutions already widely employed by the High Energy Physics community, an interesting new option has recently emerged, based on the interaction between the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM) by IBM. The new features introduced in GPFS version 3.2 make it possible to interface GPFS with tape storage managers. We implemented such an interface for TSM, and performed various performance studies on a pre-production system (a sketch of the policy-driven migration idea follows this entry). Together with the StoRM SRM interface, developed as a joint collaboration between INFN-CNAF and ICTP-Trieste, this solution can fulfill all the requirements of a Tier-1 WLCG centre. The first StoRM-GPFS-TSM based system has now entered its production phase at CNAF, and is presently adopted by the LHCb experiment. We describe the implementation of the interface and the prototype test-bed, and discuss the results of some tests.
- Published
- 2008
- Full Text
- View/download PDF
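A minimal sketch of the migration idea mentioned in the abstract, under stated assumptions: GPFS ILM policies can invoke an external script on a list of files selected for migration, and that script hands each file to the tape manager. The file-list format and the `dsmmigrate` invocation below are assumptions for illustration, not the authors' actual interface.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a GPFS external-pool migration callback.

GPFS 3.2 ILM policies can hand an external script a list of files
selected for migration. This sketch assumes a TSM command named
'dsmmigrate' is available on the node; treat both the file-list
format and the command invocation as illustrative.
"""
import subprocess
import sys

def migrate_file_list(list_path: str) -> int:
    """Migrate each listed file to tape; return the failure count."""
    failures = 0
    with open(list_path) as file_list:
        for line in file_list:
            # Assume one candidate path per line (real GPFS file lists
            # carry extra bookkeeping fields that would need parsing).
            path = line.strip()
            if not path:
                continue
            result = subprocess.run(["dsmmigrate", path])
            if result.returncode != 0:
                failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if migrate_file_list(sys.argv[1]) else 0)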
6. StoRM: A flexible solution for Storage Resource Manager in grid
- Author
-
Riccardo Zappi, L. Magnoni, and Antonia Ghiselli
- Subjects
Database, Computer science, business.industry, Storage Resource Broker, Distributed computing, Cloud computing, Information repository, computer.software_genre, Object storage, Storage area network, Converged storage, Scalability, Resource management, business, computer
- Abstract
Scientific data-intensive applications generate ever-increasing volumes of data that need to be stored, managed, and shared between geographically distributed communities. Data centres typically provide tens of petabytes of storage space through a large variety of heterogeneous storage and file systems. However, storage systems shared by applications need a common data access mechanism which allocates storage space dynamically, manages stored content, and automatically removes unused data to avoid clogging data stores. To accommodate these needs, the concept of the Storage Resource Manager (SRM) was devised in the context of a project that involved High Energy Physics (HEP) and Nuclear Physics (NP). The SRM interface specification was defined by, and evolved within, an international collaboration in the context of the Open Grid Forum (OGF). The SRM interface provides the technology needed to share geographically distributed heterogeneous storage resources through an effective, common interface, regardless of the type of back-end system being used. By implementing the SRM interface, grid storage services provide a consistent, homogeneous interface to the Grid for managing storage resources, as well as advanced functionality such as dynamic space allocation and file management on shared storage systems (see the sketch after this entry). Within the Worldwide LHC Computing Grid project there are more than five interoperating implementations of SRM services, each with its own peculiarities. In this paper, we describe the flexibility of the StoRM service, an implementation of the Storage Resource Manager interface version 2.2. StoRM is designed to foster the adoption of cluster file systems and, thanks to its marked flexibility, can be used in small data centres with limited staff to administer yet another grid service, while remaining capable of growing in terms of managed storage and workload. StoRM can be used to manage storage resources with any kind of POSIX file system in a transparent way. As a demonstration of StoRM's flexibility, the paper describes how applications scheduled via the Grid can access files on a file system directly via POSIX calls, how StoRM can be deployed in a clustered configuration to address scalability needs, and finally how StoRM can also be used to manage storage classes based on cloud storage, such as the Amazon Simple Storage Service (S3).
- Published
- 2008
- Full Text
- View/download PDF
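To make the "dynamic space allocation and file management" functions concrete, here is a hedged sketch of the call sequence an SRM v2.2 client might perform. `SrmClient` and its method signatures are invented stand-ins; only the operation names (srmReserveSpace, srmPrepareToPut, srmPutDone) come from the specification.

```python
"""Illustrative sketch of an SRM v2.2 client call sequence; the class
and its return values are assumptions, not a real client library."""

class SrmClient:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def reserve_space(self, size_bytes: int, retention: str) -> str:
        # srmReserveSpace: dynamic space allocation on the back end.
        return "space-token-0001"  # illustrative token

    def prepare_to_put(self, surl: str, space_token: str) -> str:
        # srmPrepareToPut: returns a transfer URL bound to the token.
        return surl.replace("srm://", "gsiftp://")

    def put_done(self, surl: str) -> None:
        # srmPutDone: marks the upload complete so the file is managed.
        pass

client = SrmClient("https://storm.example.org:8444")
token = client.reserve_space(10 * 1024**3, retention="REPLICA")
turl = client.prepare_to_put(
    "srm://storm.example.org/atlas/user/out.root", token)
# ...transfer the data to 'turl' with GridFTP or file:// access...
client.put_done("srm://storm.example.org/atlas/user/out.root")
```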
7. An Analysis of Security Services in Grid Storage Systems
- Author
-
L. Magnoni, Antonia Ghiselli, Federico Stagni, Angelos Bilas, Jesus Luna, Riccardo Zappi, A.C. Forti, Michail D. Flouris, and Manolis Marazakis
- Subjects
Semantic grid, Cloud computing security, Grid computing, Data grid, Security service, Computer science, Computer security model, computer.software_genre, Computer security, computer, Security testing, Security information and event management
- Abstract
With the wide-spread deployment of Data Grid installations and rapidly increasing data volumes, storage services are becoming a critical aspect of the Grid infrastructure. Due to the distributed and shared nature of the Grid, security issues related to state-of-the-art data storage services need to be studied thoroughly to identify potential vulnerabilities and attack vectors. In this paper, motivated by a typical use case for Data Grid storage, we apply an extended framework for analyzing and evaluating its security from the point of view of the data and metadata, taking into consideration the security capabilities provided by both the underlying Grid infrastructure and commonly deployed Grid storage systems. For a comprehensive analysis of the latter, we identify three important elements: the players involved, the underlying trust assumptions, and the dependencies on specific security primitives. This analysis leads to the identification of a set of potential security gaps, risks, and even redundant security features found in a typical Data Grid. These results are now the starting point for our ongoing research on policies and mechanisms able to provide a fair balance between security and performance for Data Grid storage services.
- Published
- 2008
- Full Text
- View/download PDF
8. Review of Security Models Applied to Distributed Data Access
- Author
-
Federico Stagni, Antonia Ghiselli, and Riccardo Zappi
- Subjects
Security, Grid, Computer access control, Computer science, Data management, XACML, Data security, Access control, Computer security, computer.software_genre, Asset (computer security), Security information and event management, Distributed System Security Architecture, Data integrity, computer.programming_language, Authentication, Cloud computing security, Data grid, business.industry, Authorization, Information security, Computer security model, Data access, Security service, Network Access Control, Network security policy, business, computer, Computer network
- Abstract
In this paper, we explore the technologies behind the security models applied to distributed data access in a Grid environment. Our goal is to study a security model providing data integrity, confidentiality, authentication and authorization for VO users. We split the process for data access into three levels: Grid authentication, Grid authorization, and local enforcement (see the sketch after this entry). For each level, we introduce at least one possible technological solution. Finally, we present our vision of an SOA-oriented security framework. This work was developed as part of the CoreGRID Network of Excellence, for the Institute on Knowledge and Data Management.
- Published
- 2007
- Full Text
- View/download PDF
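A minimal sketch of the three-level model the abstract describes, with all policy content invented for illustration: a request passes Grid authentication, then Grid authorization (an XACML-style decision is simulated here with a plain dictionary), then local enforcement.

```python
"""Minimal sketch of the three-level access model; every policy
detail below is an assumption for illustration."""
from dataclasses import dataclass

@dataclass
class Request:
    user_dn: str   # subject from the Grid certificate
    vo: str        # Virtual Organisation asserted by, e.g., VOMS
    path: str
    action: str    # "read" or "write"

def grid_authenticate(req: Request) -> bool:
    # Level 1: accept only subjects from a trusted namespace
    # (a real system verifies the X.509 proxy chain instead).
    return req.user_dn.startswith("/DC=org/")

def grid_authorize(req: Request) -> bool:
    # Level 2: VO-level policy, standing in for an XACML-style PDP.
    vo_policy = {"cms": {"read", "write"}, "atlas": {"read"}}
    return req.action in vo_policy.get(req.vo, set())

def local_enforce(req: Request) -> bool:
    # Level 3: map to a local account and rely on POSIX permissions;
    # simulated here with a path prefix check.
    return not req.path.startswith("/storage/private/")

def allow(req: Request) -> bool:
    return (grid_authenticate(req) and grid_authorize(req)
            and local_enforce(req))

print(allow(Request("/DC=org/DC=cern/CN=alice", "cms",
                    "/storage/cms/data.root", "read")))  # True
```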
9. Performance Studies of the StoRM Storage Resource Manager
- Author
-
L. Magnoni, Riccardo Zappi, V. Sapunenko, A.C. Forti, E. Lanciotti, L. dell'Agnello, V. Vagnoni, M. Mazzucato, R. Santinelli, A. Carbone, and Antonia Ghiselli
- Subjects
File system, Software suite, Computer science, Distributed computing, computer.software_genre, Storage area network, Grid computing, Software deployment, POSIX, Scalability, Data_FILES, Operating system, Lustre (file system), computer
- Abstract
High-performance disk-storage solutions based on parallel file systems are becoming increasingly important for fulfilling the large I/O throughput required by high-energy physics applications. Storage area networks (SAN) are commonly employed at the Large Hadron Collider data centres, and SAN-oriented parallel file systems such as GPFS and Lustre provide high scalability and availability by aggregating many data volumes served by multiple disk servers into a single POSIX file system hierarchy. Since these file systems do not come with the storage resource manager (SRM) interface necessary to access and manage the data volumes in a grid environment, a specific project called StoRM has been developed to provide them with the necessary SRM capabilities. In this paper we describe the deployment of a StoRM instance configured to manage a GPFS file system. A software suite has been developed to perform stress tests of functionality and throughput on StoRM (a toy example of such a throughput probe follows this entry). We present the results of these tests.
- Published
- 2007
- Full Text
- View/download PDF
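A toy version of the kind of throughput probe a stress-test suite like this might contain, assuming a StoRM-managed GPFS mount at a placeholder path: several concurrent clients write fixed-size files and the aggregate rate is reported. This is a sketch under those assumptions, not the paper's actual software suite.

```python
"""Toy aggregate-throughput probe; MOUNT, FILE_MB and CLIENTS are
placeholders, not the paper's test configuration."""
import os
import time
from concurrent.futures import ThreadPoolExecutor

MOUNT = "/gpfs/storm_testbed"   # assumed StoRM-managed GPFS mount
FILE_MB = 256                   # size of each test file in MiB
CLIENTS = 8                     # number of concurrent writers

def write_one(i: int) -> int:
    """Write one test file of FILE_MB MiB; return the MiB written."""
    path = os.path.join(MOUNT, f"stress_{i}.dat")
    chunk = os.urandom(1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(FILE_MB):
            f.write(chunk)
    return FILE_MB

start = time.time()
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    total_mb = sum(pool.map(write_one, range(CLIENTS)))
elapsed = time.time() - start
print(f"aggregate: {total_mb / elapsed:.1f} MiB/s over {CLIENTS} clients")
```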
10. StoRMon: an event log analyzer for Grid Storage Element based on StoRM
- Author
-
Stefano Dal Pra, Riccardo Zappi, Michele Dibenedetto, and Elisabetta Ronchieri
- Subjects
History, Service (systems architecture), Engineering, Data grid, Database, business.industry, Event (computing), Data management, Grid file, GridFTP, Grid, computer.software_genre, Computer Science Applications, Education, business, computer, Administrative domain
- Abstract
Managing a collaborative production Grid infrastructure requires identifying and handling, in a timely manner, every issue that might arise. Currently, the most complex problem of the data Grid infrastructure relates to data management, because of its distributed nature. To ensure that problems are quickly addressed and solved, each site should contribute to the solution by providing any useful information about the services that run in its administrative domain. To be effective, Grid site administrators must often collect, organize and examine the scattered log events produced by every service and component of the Storage Element. This paper focuses on the problem of gathering the event logs of a Grid Storage Element and describes the design of a new service, called StoRMon. StoRMon collects, archives, analyzes and reports on the event logs produced by each service of the Storage Element during the execution of its tasks (see the sketch after this entry). The data and the processed information are made available to site administrators through a single contact point, mainly to ease the identification of security incidents, fraudulent activity, and operational issues. The new service has been applied to a Grid Storage Element based on StoRM, GridFTP and YAMSS, where it collects the usage data of the StoRM, transfer and hierarchical storage services.
- Published
- 2011
- Full Text
- View/download PDF
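A sketch of the collect-and-report idea behind StoRMon. The log-line format, service names and paths below are assumptions for illustration, not StoRMon's actual design.

```python
"""Sketch of collecting scattered service logs into one contact
point; the assumed line shape is '<timestamp> <LEVEL> <message>'."""
import re
from collections import Counter

LINE_RE = re.compile(r"^(\S+ \S+)\s+(\w+)\s+(.*)$")

def collect(log_paths: dict[str, str]) -> list[dict]:
    """Parse each service's log file into a flat list of events."""
    events = []
    for service, path in log_paths.items():
        with open(path, errors="replace") as f:
            for line in f:
                m = LINE_RE.match(line)
                if m:
                    ts, level, msg = m.groups()
                    events.append({"service": service, "ts": ts,
                                   "level": level, "msg": msg})
    return events

def report(events: list[dict]) -> Counter:
    # Single contact point: error counts per service across the SE.
    return Counter(e["service"] for e in events if e["level"] == "ERROR")

# Usage (paths are placeholders):
# evts = collect({"storm": "/var/log/storm/storm.log",
#                 "gridftp": "/var/log/gridftp.log"})
# print(report(evts))
```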
11. StoRM-GPFS-TSM: A new approach to hierarchical storage management for the LHC experiments
- Author
-
P. P. Ricci, A. Prosperini, M. Mazzucato, L. Magnoni, Antonia Ghiselli, Riccardo Zappi, V. Sapunenko, D. Gregori, Elisabetta Ronchieri, V. Vagnoni, A. Cavalli, L. dell'Agnello, B. Martelli, and D. Vitlacil
- Subjects
File system, History, Engineering, Large Hadron Collider, Magnetic tape data storage, Database, business.industry, Interface (computing), Testbed, computer.software_genre, Computer Science Applications, Education, Mass storage, Hierarchical storage management, Operating system, IBM, business, computer
- Abstract
The mass storage challenge for the experiments at the Large Hadron Collider (LHC) remains a critical issue for the various Tier-1 computing centres and the Tier-0 centre involved in the custodial storage and analysis of the data produced by the experiments. In particular, the requirements for the tape mass storage systems are quite demanding, amounting to about 15 PB of data produced annually that should be available for near-line access at any time. Besides the solutions already widely employed by the High Energy Physics community, INFN-CNAF has in the last year adopted a solution based on the combination of the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM) by IBM, with StoRM, developed at INFN, as the Storage Resource Manager (SRM) interface. The new features available in GPFS version 3.2 make it possible in general to interface GPFS with any tape storage manager. We implemented such an interface for TSM, and performed various performance studies on a prototype testbed system. The first StoRM/GPFS/TSM based system is already in production at CNAF for the T1D1 storage class and is used by the LHCb experiment (the storage-class semantics are sketched after this entry). We are currently performing new tests to exploit the features implemented in the new version of TSM, 6.1, which focus on high-performance, optimized access to the tape backend. We describe the implementation of the interface and the details of the prototype testbed for the T1D0 storage class, and we discuss the results of the LHCb production system.
- Published
- 2010
- Full Text
- View/download PDF
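The T1D0 and T1D1 labels above follow the WLCG TxDy storage-class convention (x custodial tape copies, y permanent disk copies). A small illustrative encoding of the main operational consequence, disk eviction, assuming that reading:

```python
"""Illustrative encoding of WLCG TxDy storage classes; the dictionary
shape is an assumption made for this sketch."""
STORAGE_CLASSES = {
    "T1D0": {"tape_copies": 1, "disk_pinned": False},  # disk is a cache
    "T1D1": {"tape_copies": 1, "disk_pinned": True},   # disk copy stays
}

def may_evict_from_disk(storage_class: str, on_tape: bool) -> bool:
    """A T1D0 replica may leave disk once safely on tape; a T1D1
    replica must keep its disk copy regardless."""
    sc = STORAGE_CLASSES[storage_class]
    return on_tape and not sc["disk_pinned"]

assert may_evict_from_disk("T1D0", on_tape=True)
assert not may_evict_from_disk("T1D0", on_tape=False)
assert not may_evict_from_disk("T1D1", on_tape=True)
```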
12. Storage management solutions and performance tests at the INFN Tier-1
- Author
-
L. Magnoni, D. Vitlacil, Marco Bencivenni, Davide Salomoni, B. Martelli, Donato De Girolamo, A. D'Apice, G. Lo Re, S. Zani, F. Furano, D. Galli, A. Carbone, R. Veraldi, R. Santinelli, P. P. Ricci, Antonia Ghiselli, A. Fella, M. Mazzucato, L. dell'Agnello, U. Marconi, A. Italiano, M. Donatelli, Riccardo Zappi, V. Sapunenko, A.C. Forti, F. Rosso, Andrea Chierici, V. Vagnoni, Giacinto Donvito, and E. Lanciotti
- Subjects
File system, DATA PROCESSING AND ANALYSIS, STORAGE RESOURCE MANAGEMENT, History, Engineering, business.industry, Context (language use), computer.software_genre, Computer Science Applications, Education, Storage area network, Fibre Channel, Data access, Gigabit, Server, Scalability, PARALLEL FILE SYSTEM, Operating system, LARGE SCALE STORAGE INFRASTRUCTURES, STORAGE AREA NETWORK, business, computer
- Abstract
Performance, reliability and scalability in data access are key issues in the context of HEP data processing and analysis applications. In this paper we present the results of a large scale performance measurement performed at the INFN-CNAF Tier-1, employing some storage solutions presently available for HEP computing, namely CASTOR, GPFS, Scalla/Xrootd and dCache. The storage infrastructure was based on Fibre Channel systems organized in a Storage Area Network, providing 260 TB of total disk space, and 24 disk servers connected to the computing farm (280 worker nodes) via Gigabit LAN. We also describe the deployment of a StoRM SRM instance at CNAF, configured to manage a GPFS file system, presenting and discussing its performances.
- Published
- 2008
- Full Text
- View/download PDF
13. Storage resource manager version 2.2: design, implementation, and testing experience
- Author
-
Ezio Corso, Tigran Mkrtchan, Patrick Fuhrmann, Alex Sim, David Smith, Dimitry Litvintsev, Riccardo Zappi, Arie Shoshani, T. Perelmutov, Jean Philippe Baud, Sophie Lemaitre, Paolo Tedesco, Flavia Donno, Don Petravick, Giuseppe Lo Presti, Birger Koblitz, Gavin McCance, L. Magnoni, Junmin Gu, Rémi Mollon, Shaun De Witt, Vijaya Natarajan, Paolo Badino, Lana Abadie, and Maarten Litmaath
- Subjects
History, Engineering, Large Hadron Collider, Database, business.industry, Interface (computing), Petabyte, computer.software_genre, Computing and Computers, Computer Science Applications, Education, Consistency (database systems), Test suite, business, Worldwide LHC Computing Grid, Protocol (object-oriented programming), Implementation, computer
- Abstract
Storage services are crucial components of the Worldwide LHC Computing Grid infrastructure, which spans more than 200 sites and serves computing and storage resources to the High Energy Physics LHC communities. Up to tens of petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very efficient interface to the various storage solutions adopted by the WLCG sites. In this work we report on the experience acquired during the definition of the Storage Resource Manager v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. At the moment five different storage solutions implement the SRM v2.2 interface: BeStMan (LBNL), CASTOR (CERN and RAL), dCache (DESY and FNAL), DPM (CERN), and StoRM (INFN and ICTP). After a detailed internal review of the protocol, various test suites were written, and the most effective set of tests was identified: the S2 test suite from CERN and the SRM-Tester test suite from LBNL. These test suites have helped verify the consistency and coherence of the proposed protocol and validate the existing implementations (a sketch of such a cross-implementation check follows this entry). We conclude our work by describing the results achieved.
- Published
- 2008
- Full Text
- View/download PDF
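A hedged sketch of the cross-implementation consistency checking the abstract describes: the same probe is run against every SRM endpoint and the answers are compared. The endpoints are placeholders, and `ping` is a stand-in for the real srmPing operation that suites like S2 or SRM-Tester would issue over the wire.

```python
"""Sketch of a conformance check across SRM implementations; the
endpoints are placeholders and 'ping' is a hypothetical stand-in
for the srmPing operation of a real SRM client."""
ENDPOINTS = {
    "StoRM":  "httpg://storm.example.org:8444/srm/managerv2",
    "dCache": "httpg://dcache.example.org:8443/srm/managerv2",
    "CASTOR": "httpg://castor.example.org:8443/srm/managerv2",
}

def ping(endpoint: str) -> str:
    # Stand-in: a real test would issue srmPing and return the
    # protocol version string reported by the service.
    return "v2.2"

def check_versions() -> dict[str, bool]:
    # Consistency check: every implementation must report v2.2.
    return {name: ping(url) == "v2.2" for name, url in ENDPOINTS.items()}

print(check_versions())
```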