18 results for "Roberto Baldoni"
Search Results
2. Survey of machine learning techniques for malware analysis
- Author
-
Roberto Baldoni, Daniele Ucci, and Leonardo Aniello
- Subjects
FOS: Computer and information sciences, Computer Science - Cryptography and Security (cs.CR), General Computer Science, machine learning, malware analysis, malware analysis economics, malware, portable executable, benchmark - Abstract
Coping with malware is getting more and more challenging, given its relentless growth in complexity and volume. One of the most common approaches in the literature is the use of machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview of the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e., for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs. (55 pages, 4 figures, 8 tables. Forthcoming in Computers & Security.)
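As a loose illustration of the feature-plus-algorithm pipeline the survey systematizes, the sketch below builds a toy static feature (a byte histogram) and a nearest-centroid classifier. All names, training samples, and the feature choice are illustrative assumptions, not taken from the surveyed papers, which use far richer features (API calls, opcode n-grams, PE header fields) and learners.

```python
from collections import Counter
import math

def byte_histogram(data, bins=16):
    """Normalized histogram of byte values -- a toy 'static' feature vector."""
    counts = [0] * bins
    for b in data:
        counts[b * bins // 256] += 1
    total = max(len(data), 1)
    return [c / total for c in counts]

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Nearest-centroid: return the label whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy training corpus: 'benign' samples are ASCII-heavy, 'malicious' are high-byte heavy.
benign = [bytes(range(32, 127)) * 3, b"hello world, plain text payload"]
malicious = [bytes([0xE8, 0xFF, 0x90, 0xCC] * 40), bytes([0xDE, 0xAD, 0xBE, 0xEF] * 30)]
centroids = {
    "benign": centroid([byte_histogram(s) for s in benign]),
    "malicious": centroid([byte_histogram(s) for s in malicious]),
}
print(classify(byte_histogram(b"some ordinary ascii text"), centroids))
```

The interesting part for the survey's taxonomy is that the objective (label), the features (histogram), and the algorithm (nearest centroid) are three independently swappable choices.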
- Published
- 2019
3. High frequency batch-oriented computations over large sliding time windows
- Author
-
Roberto Baldoni, Leonardo Querzoni, and Leonardo Aniello
- Subjects
Computer Networks and Communications, Hardware and Architecture, Software, batch processing, big data, complex event processing, event processing, data analytics, time window based computations, time windows, workflow - Abstract
Today’s business workflows are very likely to include batch computations that periodically analyze subsets of data within specific time ranges to provide strategic information for stakeholders and other interested parties. The frequency of these batch computations provides an effective measure of the freshness of the data analytics available to decision makers. Nevertheless, the typical amounts of data to elaborate in a batch are so large that a computation can take a very long time. Considering that usually a new batch starts only when the previous one has completed, the frequency of such batches can thus be very low. In this paper we propose a model for batch processing based on overlapping sliding time windows that makes it possible to increase the frequency of batches. The model is well suited to scenarios (e.g., finance, security, etc.) characterized by large data volumes, observation windows in the order of hours (or days), and frequent updates (in the order of seconds). The model introduces multiple metrics aimed at reducing the latency between the end of a computation time window and the availability of results, thus increasing the frequency of the batches. These metrics specifically take into account the organization of input data to minimize its impact on such latency. The model is then instantiated on the well-known Hadoop platform, a batch processing engine based on the MapReduce paradigm, and a set of strategies for efficiently arranging input data is described and evaluated.
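The overlap between consecutive windows is what such a model can exploit: if the window is pre-aggregated into small panes, each new batch only needs the newest pane plus aggregates already computed for earlier ones, instead of rescanning the whole window. A minimal sketch of this pane-based reuse (the class name and the sum aggregate are illustrative assumptions, not the paper's API):

```python
from collections import deque

class PanedSlidingSum:
    """Sliding-window sum over overlapping windows built from small 'panes'.

    Pre-aggregating input into panes lets each new window reuse the work
    of the previous one, which is the key to raising batch frequency
    over large time windows."""
    def __init__(self, window_panes):
        self.window_panes = window_panes  # how many panes form one window
        self.panes = deque()
        self.total = 0

    def close_pane(self, pane_sum):
        """Feed the aggregate of one elapsed pane; return the current window sum."""
        self.panes.append(pane_sum)
        self.total += pane_sum
        if len(self.panes) > self.window_panes:
            self.total -= self.panes.popleft()  # slide: drop the oldest pane
        return self.total

w = PanedSlidingSum(window_panes=3)
print([w.close_pane(s) for s in [5, 7, 2, 10, 1]])  # -> [5, 12, 14, 19, 13]
```

Each batch here costs O(1) per pane rather than O(window size), mirroring the latency reduction the model targets.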
- Published
- 2015
4. Fault-tolerant oblivious assignment with m slots in synchronous systems
- Author
-
Giuseppe Antonio Di Luna, Roberto Baldoni, Silvia Bonomi, and Giuseppe Ateniese
- Subjects
distributed systems, distributed coordination abstractions, failures, fault tolerance, mutual exclusion, secure computations, cloud computing, Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software - Abstract
Preserving anonymity and privacy of customer actions within a complex software system, such as a cloud computing system, is one of the main issues that should be addressed to boost private computation outsourcing. In this paper, we propose a coordination paradigm, namely oblivious assignment with m slots of a resource R (with m ≥ 1), allowing processes to compete in order to get a slot of R, while ensuring at the same time both fairness in the assignment of resource slots and that no process learns which slot of R is assigned to a specific process. We present a distributed algorithm solving oblivious assignment with m slots within a distributed system, assuming (1) a bounded number of crash failures f, (2) the existence of at least f + 2 honest processes, and (3) m ≤ n (where n is the number of processes). The algorithm is based on a rotating token paradigm and its correctness is formally proved. A probabilistic analysis of the average waiting time before getting a slot is also provided.
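A rotating-token round of slot assignment can be sketched as follows. This toy models only the token circulation and the slot draw; the obliviousness (hiding which slot went to whom) and the crash-failure handling of the actual protocol rely on cryptographic and fault-tolerance machinery omitted here, and the function name is hypothetical.

```python
import random

def rotating_token_assignment(processes, m, seed=None):
    """Rotating-token sketch: the token visits processes in ring order, and a
    competing process draws one of the remaining m slots while it holds the
    token. In the real protocol only the drawing process learns its slot."""
    rng = random.Random(seed)
    free_slots = list(range(m))
    assignment = {}
    for p in processes:          # token circulates around the ring
        if not free_slots:       # no slots left: remaining processes wait
            break
        slot = free_slots.pop(rng.randrange(len(free_slots)))
        assignment[p] = slot     # kept private to p in the real protocol
    return assignment

print(rotating_token_assignment(["p1", "p2", "p3"], m=3, seed=42))
```

Fairness here is trivial (each token holder gets exactly one slot per round); the hard part the paper solves is doing this without revealing the slot values.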
- Published
- 2014
5. A protocol for implementing byzantine storage in churn-prone distributed systems
- Author
-
Silvia Bonomi, Roberto Baldoni, and Amir Soltani Nezhad
- Subjects
safe register, byzantine fault tolerance, churn, dynamic distributed systems, eventually synchronous distributed system, distributed data store, General Computer Science, Theoretical Computer Science - Abstract
A distributed storage service is one of the main abstractions provided to the developers of distributed applications, due to its capability to hide the complexity generated by the messages exchanged between processes. Many protocols have been proposed to build byzantine-fault-tolerant storage services on top of a message-passing system, but they do not consider the possibility of servers joining and leaving the computation (the churn phenomenon). This phenomenon, if not properly mastered, can either block protocols or violate the safety of the storage. In this paper, we address the problem of building a safe register storage resilient to byzantine failures in a distributed system affected by churn. A protocol implementing a safe register in an eventually synchronous system is proposed, and some feasibility constraints on the arrival and departure of processes are given. The protocol is proved correct under the assumption that the constraint on the churn is satisfied.
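The standard way such protocols tolerate up to f byzantine servers on a read is to accept only value/timestamp pairs vouched for by at least f + 1 replies, so no coalition of f liars can forge a value. A minimal sketch of that filtering step (names and the reply format are illustrative; the churn handling that is the paper's actual contribution is not modeled):

```python
from collections import Counter

def byzantine_read(replies, f):
    """Safe-register read sketch: collect (value, timestamp) replies from
    servers and accept the newest pair reported by at least f + 1 of them."""
    votes = Counter(replies)
    valid = [pair for pair, n in votes.items() if n >= f + 1]
    if not valid:
        return None  # not enough agreement yet; a real protocol keeps waiting
    return max(valid, key=lambda pair: pair[1])  # newest trustworthy timestamp

# ("B", 9) is reported by 2 servers, enough to survive f = 1 byzantine server.
replies = [("A", 3), ("A", 3), ("B", 9), ("A", 3), ("B", 9)]
print(byzantine_read(replies, f=1))  # -> ('B', 9)
```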
- Published
- 2013
6. Virtual Tree: A robust architecture for interval valid queries in dynamic distributed systems
- Author
-
Roberto Baldoni, Leonardo Querzoni, Silvia Bonomi, and Adriano Cerocchi
- Subjects
peer-to-peer systems, overlay networks, dynamic distributed systems, distributed query answering, node clustering, Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software - Abstract
This paper studies the problem of answering aggregation queries, satisfying the interval validity semantics, in a distributed system prone to continuous arrival and departure of participants. The interval validity semantics states that the query answer must be calculated considering contributions of at least all processes that remained in the distributed system for the whole query duration. Satisfying this semantics in systems experiencing unbounded churn is impossible due to the lack of connectivity and path stability between processes. This paper presents a novel architecture, namely Virtual Tree, for building and maintaining a structured overlay network with guaranteed connectivity and path stability in settings characterized by a bounded churn rate. The architecture includes a simple query answering algorithm that provides interval valid answers. The overlay network generated by the Virtual Tree architecture is a tree-shaped topology with virtual nodes constituted by clusters of processes and virtual links constituted by multiple communication links connecting processes located in adjacent virtual nodes. We formally prove a bound on the churn rate for interval valid queries in a distributed system where communication latencies are bounded by a constant unknown to processes. Finally, we carry out an extensive experimental evaluation that shows the degree of robustness of the overlay network generated by the Virtual Tree architecture under different churn rates.
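The query side reduces to a bottom-up aggregation over the tree of virtual nodes, with each cluster contributing the values of its member processes. A minimal sketch (the sum aggregate and the dictionary encoding of the tree are illustrative assumptions; the clustering and churn machinery is omitted):

```python
def aggregate(tree, node, values):
    """Bottom-up aggregation (here: sum) over a tree of virtual nodes,
    where each virtual node is a cluster of processes contributing values."""
    total = sum(values[node])               # contributions of this cluster
    for child in tree.get(node, []):        # recurse into child virtual nodes
        total += aggregate(tree, child, values)
    return total

# Virtual nodes are clusters; each maps to its members' contributed values.
tree = {"root": ["v1", "v2"], "v1": ["v3"]}
values = {"root": [1], "v1": [2, 2], "v2": [5], "v3": [10]}
print(aggregate(tree, "root", values))  # -> 20
```

The clustering is what buys robustness: a virtual link survives as long as any pair of processes in adjacent clusters can still communicate.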
- Published
- 2013
7. Dynamic quorums for DHT-based enterprise infrastructures
- Author
-
Ricardo Jiménez-Peris, Roberto Baldoni, Antonino Virgillito, Leonardo Querzoni, and Marta Patiño-Martínez
- Subjects
p2p systems, quorum systems, hierarchical grid, hierarchical majority, distributed hash table, replication, file sharing, scalability, mutual exclusion, Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software - Abstract
Peer-to-peer (P2P) systems have become a popular technique for designing large-scale distributed applications in unmanaged inter-domain settings, such as file sharing or chat systems, thanks to their capability to self-organize and evenly split the load among peers. Recently, enterprises owning a large IT hardware and software infrastructure have started looking at these P2P technologies as a means both to reduce costs and to help their technical divisions manage a huge number of devices characterized by a high level of cooperation and a relatively low churn. Gaining quick exclusive access to the system for maintenance or auditing purposes in these enterprise infrastructures is a fundamental operation to be implemented. Conversely, this kind of operation is usually not an issue in the previously mentioned inter-domain setting, where peers are inherently independent and cannot be managed. In the context of classical distributed applications, quorum systems have been considered a major building block for implementing many paradigms, from distributed mutual exclusion to data replication management. In this paper, we explore how to architect decentralized protocols implementing quorum systems in cooperative P2P networks based on Distributed Hash Tables (DHTs). Our results show that quorum systems taken "as is" from the literature and directly applied to such networks are not scalable, due to the high load imposed on the underlying network. This paper introduces some design principles, for both quorum systems and the protocols using them, that boost their scalability and performance. These design principles consist of a dynamic and decentralized selection of quorums and the exposition and exploitation of the internals of the DHT. As a third design principle, it is also shown how to redesign quorum systems to enable efficient decentralization.
We show that by combining these design principles in a cooperative environment with relatively low churn it is possible to minimize both the load imposed on the system, in terms of sites contacted to obtain a quorum, and the latency of quorum acquisition.
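One way to make quorum selection dynamic and decentralized, in the spirit of the design principles above, is to derive quorum membership deterministically from the resource key, so every peer computes the same quorum with no coordinator. A hashing-based sketch (an illustrative construction, not the paper's actual one):

```python
import hashlib

def quorum_members(resource, n_sites, quorum_size):
    """Decentralized quorum selection sketch: rank sites by a hash of
    (resource key, site id), so any process derives the same quorum for a
    given resource without contacting a central coordinator (DHT-style)."""
    ranked = sorted(
        range(n_sites),
        key=lambda s: hashlib.sha256(f"{resource}:{s}".encode()).hexdigest(),
    )
    return ranked[:quorum_size]

# A majority quorum (6 of 10): any two quorums for the same resource are
# identical, and any two majorities necessarily intersect.
q1 = quorum_members("lock-42", n_sites=10, quorum_size=6)
q2 = quorum_members("lock-42", n_sites=10, quorum_size=6)
print(q1 == q2, len(q1))
```

Different resources hash to different quorums, spreading load across sites instead of hammering one fixed quorum.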
- Published
- 2008
8. A classification of total order specifications and its application to fixed sequencer-based implementations
- Author
-
Roberto Baldoni, Stefano Cimmino, and Carlo Marchetti
- Subjects
distributed algorithms, agreement problems, atomic broadcast, distributed systems, fault-tolerance, global ordering, group communication, mapping implementations into specifications, message passing, specification hierarchy, taxonomy, total order broadcast, Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software - Abstract
During the last two decades the design and development of total order (TO) communications has been one of the main research topics in dependable distributed computing. The huge amount of research work has produced several TO specifications and a wide variety of TO implementations with different guarantees whose differences are often left hidden or unclear. This paper presents a systematic classification of six distinct TO specifications based on a well-defined formal framework. The classification allows us (i) to define in a formal way the differences among the behaviors of faulty and correct processes admitted by each specification, and (ii) to easily match TO implementations with respect to their enforced specification. The classification is applied to study the properties of eight variations of TO implementations based on a fixed sequencer given in a well-known context, namely primary component group communication systems.
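The fixed-sequencer family the paper studies can be sketched in a few lines: a distinguished process stamps each broadcast with a global sequence number, and receivers deliver in gap-free sequence order even if the network reorders messages. Class names are illustrative, and the sketch ignores the failure cases whose handling is exactly what differentiates the specifications the paper classifies.

```python
class FixedSequencer:
    """Fixed-sequencer TO sketch: one distinguished process assigns a
    global sequence number to every broadcast message."""
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        seq = self.next_seq
        self.next_seq += 1
        return (seq, msg)

class Receiver:
    """Delivers messages in gap-free sequence order, buffering out-of-order
    arrivals, which yields a total order across all receivers."""
    def __init__(self):
        self.buffer = {}
        self.expected = 0
        self.delivered = []

    def receive(self, seq, msg):
        self.buffer[seq] = msg                 # may arrive out of order
        while self.expected in self.buffer:    # deliver next in-sequence msgs
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

seq = FixedSequencer()
stamped = [seq.order(m) for m in ["a", "b", "c"]]
r = Receiver()
for s, m in [stamped[2], stamped[0], stamped[1]]:  # network reorders
    r.receive(s, m)
print(r.delivered)  # -> ['a', 'b', 'c']
```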
- Published
- 2006
9. A least flow-time first load sharing approach for distributed server farm
- Author
-
James Broberg, Albert Y. Zomaya, Roberto Baldoni, and Zahir Tari
- Subjects
scheduling policies, load balancing, load sharing, task assignment, heavy-tailed workloads, server farm, Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software - Abstract
The most critical property exhibited by a heavy-tailed workload distribution (found in many WWW workloads) is that a very small fraction of tasks makes up a large fraction of the workload, making the load very difficult to distribute in a distributed system. Load balancing and load sharing are the two predominant load distribution strategies used in such systems. Load sharing generally has better response time than load balancing because the latter can exhibit excessive overheads in selecting servers and partitioning tasks. We therefore further explored the least-loaded-first (LLF) load sharing approach and found two important limitations: (a) LLF does not consider the order of processing, and (b) when it assigns a task, LLF does not consider the processing capacity of servers. The high task-size variation that exists in heavy-tailed workloads often causes smaller tasks to be severely delayed by large tasks. This paper proposes a size-based approach, called least flow-time first (LFF-SIZE), which reduces the delay caused by size variation while maintaining a balanced load in the system. LFF-SIZE takes the relative processing time of a task into account and dynamically assigns a task to the fittest server, i.e., the one with a lighter load and higher processing capacity. LFF-SIZE also uses a multi-section queue to separate larger tasks from smaller ones. This arrangement effectively reduces the delay of smaller tasks by larger ones, as small tasks are given a higher priority. Performance experiments on the LFF-SIZE implementation show a substantial improvement over existing load sharing and static size-based approaches under realistic heavy-tailed workloads.
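The core dispatching idea, assigning each task to the server where its estimated flow time is smallest given queued work and server capacity, can be sketched as below. The multi-section queue that shields small tasks from large ones is omitted, and names are illustrative rather than the paper's implementation.

```python
def lff_size_dispatch(tasks, capacities):
    """LFF-SIZE-flavoured sketch: send each task to the server minimizing
    its estimated flow time, i.e. (queued work + task size) / capacity."""
    loads = [0.0] * len(capacities)
    placement = []
    for size in tasks:
        best = min(range(len(capacities)),
                   key=lambda s: (loads[s] + size) / capacities[s])
        loads[best] += size          # the chosen server's queue grows
        placement.append(best)
    return placement, loads

# Server 1 is twice as fast as server 0, so it attracts the two huge tasks
# of this heavy-tailed batch, leaving the small tasks unobstructed on server 0.
placement, loads = lff_size_dispatch([8, 1, 1, 20, 2], capacities=[1.0, 2.0])
print(placement)  # -> [1, 0, 0, 1, 0]
```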
- Published
- 2005
10. The DaQuinCIS architecture: a platform for exchanging and improving data quality in cooperative information systems
- Author
-
Massimo Mecella, Roberto Baldoni, Antonino Virgillito, Monica Scannapieco, and Carlo Marchetti
- Subjects
cooperative information system, data integration, data quality, publish & subscribe, xml model, information quality, data governance, Hardware and Architecture, Software, Information Systems - Abstract
In cooperative information systems, the quality of data exchanged and provided by different data sources is extremely important. A lack of attention to data quality can allow low-quality data to spread all over the cooperative system. At the same time, improvement can be based on comparing data, correcting them, and thus disseminating high-quality data. In this paper, we present an architecture for managing data quality in cooperative information systems, focusing on two specific modules, the Data Quality Broker and the Quality Notification Service. The Data Quality Broker allows for querying and improving data quality values. The Quality Notification Service is specifically targeted at the dissemination of changes in data quality values.
- Published
- 2004
11. Consensus in Byzantine asynchronous systems
- Author
-
Lenaik Tangui, Jean-Michel Hélary, Roberto Baldoni, and Michel Raynal
- Subjects
asynchronous systems, byzantine failures, consensus problem, failure detectors, fault-tolerance, Theoretical Computer Science, Computational Theory and Mathematics, Discrete Mathematics and Combinatorics - Abstract
This paper presents a consensus protocol resilient to Byzantine failures. It uses signed and certified messages and is based on two underlying failure detection modules. The first is a muteness failure detection module of the class ♢M. The second is a reliable Byzantine behaviour detection module. More precisely, the first module detects processes that stop sending messages, while processes exhibiting other non-correct (i.e., Byzantine) behaviours are detected by the second module. The protocol is resilient to F faulty processes, with F ⩽ min(⌊(n−1)/2⌋, C), where C is the maximum number of faulty processes that can be tolerated by the underlying certification service. The approach used to design the protocol is new. While usual Byzantine consensus protocols are based on failure detectors to detect processes that stop communicating, none of them use a module to detect Byzantine behaviour (this detection is not isolated from the protocol, which makes the protocol difficult to understand and prove correct). In addition to this modular approach and to a consensus protocol for Byzantine systems, the paper presents a finite state automaton-based implementation of the Byzantine behaviour detection module. Finally, the modular approach followed in this paper can be used to solve other problems in asynchronous systems experiencing Byzantine failures.
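The muteness module's job can be illustrated with a simple timeout rule: suspect any process from which nothing has been heard for too long. As the paper stresses, this catches only silence; other Byzantine behaviours require the separate detection module. A toy sketch (the function name and fixed timeout are assumptions):

```python
def muteness_suspects(last_heard, now, timeout):
    """Muteness-detector sketch: suspect every process whose last message
    is older than `timeout`. This models only the 'stopped sending' case;
    detecting arbitrary byzantine behaviour needs a separate module."""
    return {p for p, t in last_heard.items() if now - t > timeout}

# p2 has been silent for 9 time units, beyond the 4-unit timeout.
print(muteness_suspects({"p1": 10.0, "p2": 3.0, "p3": 9.5},
                        now=12.0, timeout=4.0))  # -> {'p2'}
```

In an asynchronous system such a timeout can be wrong, which is why the detector class (♢M) only guarantees eventual accuracy properties rather than immediate ones.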
- Published
- 2003
12. On the minimal information to encode timestamps in distributed computations
- Author
-
Roberto Baldoni and Giovanna Melideo
- Subjects
causal pasts, causality relation, distributed computing, timestamps, timestamping, vector clocks, concurrency, Computer Science Applications, Theoretical Computer Science, Signal Processing, Information Systems - Abstract
Timestamping protocols are used to capture the causal order or the concurrency of events in asynchronous distributed computations. In this paper we answer the open problem posed by Schwarz and Mattern [Distrib. Comput. 7 (3) (1994) 149-174] about the minimum amount of information managed by protocols which represent causality in an isomorphic way. We point out that to encode each timestamp, an amount of non-structured information (i.e., a number of bits) of ⌈log₂((m+1)^n − Σ_{k=3}^{n} C(n,k)(2^k − 35))⌉ bits is necessary.
- Published
- 2002
13. Impossibility of scalar clock-based communication-induced checkpointing protocols ensuring the RDT property
- Author
-
Roberto Baldoni, Jean-Michel Hélary, Achour Mostefaoui, and Michel Raynal
- Subjects
domino effect, distributed computing, communication protocols, impossibility, Computer Science Applications, Theoretical Computer Science, Signal Processing, Information Systems - Abstract
Communication-induced checkpointing protocols constitute an interesting approach to the on-line determination of checkpoint and communication patterns enjoying desirable properties such as domino-effect freedom. They do not add control messages to the computation, but instead may attach control information to computation messages. Among these protocols, scalar clock-based protocols are particularly attractive, as they use a single integer as control information. An interesting property of checkpoint and communication patterns is Rollback-Dependency Trackability (RDT), which ensures that all local checkpoint dependencies are trackable on the fly. It would therefore be nice to design scalar clock-based communication-induced checkpointing protocols providing the RDT property, a previously open question. This paper shows that the design of such protocols is impossible.
- Published
- 2001
14. Rollback-Dependency Trackability: A Minimal Characterization and Its Protocol
- Author
-
Roberto Baldoni, Michel Raynal, and Jean-Michel Hélary
- Subjects
rollback, fault tolerance, concurrency, distributed computing, Computational Theory and Mathematics, Computer Science Applications, Theoretical Computer Science, Information Systems - Abstract
Considering a checkpoint and communication pattern, the rollback-dependency trackability (RDT) property stipulates that there is no hidden dependency between local checkpoints. In other words, if there is a dependency between two checkpoints due to a noncausal sequence of messages (Z-path), then there exists a causal sequence of messages (C-path) that doubles the noncausal one and that establishes the same dependency. This paper introduces the notion of RDT-compliance. A property defined on Z-paths is RDT-compliant if the causal doubling of Z-paths having this property is sufficient to ensure RDT. Based on this notion, the paper provides examples of such properties. Moreover, these properties are visible, i.e., they can be tested on the fly. One of these properties is shown to be minimal with respect to visible and RDT-compliant properties. In other words, this property defines a minimal visible set of Z-paths that have to be doubled for the RDT property to be satisfied. Then, a family of communication-induced checkpointing protocols that ensure on-the-fly RDT properties is considered. Assuming processes take local checkpoints independently (called basic checkpoints), protocols of this family direct them to take on-the-fly additional local checkpoints (called forced checkpoints) in order that the resulting checkpoint and communication pattern satisfies the RDT property. The second contribution of this paper is a new communication-induced checkpointing protocol P. This protocol, based on a condition derived from the previous characterization, tracks a minimal set of Z-paths and breaks those not perceived as being doubled. Finally, a set of communication-induced checkpointing protocols are derived from P. Each of these derivations considers a particular weakening of the general condition used by P. It is interesting to note that some of these derivations produce communication-induced checkpointing protocols that have already been proposed in the literature.
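For context, communication-induced protocols of this family piggyback control information on application messages and take forced checkpoints when a delivery would create a dangerous pattern. The sketch below shows the simplest scalar-clock forcing rule (checkpoint before delivering a message carrying a larger clock); it is illustrative only and far weaker than the minimal condition the paper derives.

```python
class CicProcess:
    """Communication-induced checkpointing sketch with a scalar forcing
    rule: take a forced checkpoint before delivering a message whose
    piggybacked clock exceeds the local one. This is a simplified
    pattern-breaking rule, not the paper's minimal RDT condition."""
    def __init__(self):
        self.clock = 0
        self.checkpoints = 1   # the initial basic checkpoint

    def basic_checkpoint(self):
        """A checkpoint the process takes independently."""
        self.clock += 1
        self.checkpoints += 1

    def send(self):
        return self.clock      # piggyback the scalar clock on the message

    def receive(self, piggybacked):
        if piggybacked > self.clock:
            self.checkpoints += 1      # forced checkpoint before delivery
            self.clock = piggybacked

p, q = CicProcess(), CicProcess()
p.basic_checkpoint()       # p's clock moves ahead of q's
q.receive(p.send())        # q is forced to checkpoint before delivering
print(q.checkpoints)       # -> 2
```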
- Published
- 2001
15. On the No-Z-Cycle Property in Distributed Executions
- Author
-
Bruno Ciciani, Francesco Quaglia, and Roberto Baldoni
- Subjects
distributed algorithms, distributed systems, causality, checkpointing, global states, fault-tolerance, theory of distributed computing, Computer Networks and Communications, Applied Mathematics, Computational Theory and Mathematics, Theoretical Computer Science - Abstract
Given a checkpoint and communication pattern of a distributed execution, the No-Z-Cycle (NZC) property states that a dependency between a checkpoint and itself does not exist. In other words, there is no noncausal sequence of messages that starts after a checkpoint and terminates before that checkpoint. From an operational point of view, this property corresponds to the fact that each checkpoint belongs to at least one consistent global checkpoint, so it could be used, for example, for restarting a distributed application after the occurrence of a failure. In this paper we derive a characterization of the NZC property (previously an open problem). It identifies a subset of Z-cycles, namely core Z-cycles (CZCs), that has to be empty in order for the checkpoint and communication pattern of the execution to satisfy the NZC property. Then, we present a communication-induced checkpointing protocol that prevents CZCs on the fly. This protocol actually removes the causal part common to any CZC. Finally, we propose a taxonomy of communication-induced checkpointing protocols that ensure the NZC property.
- Published
- 2000
16. Exploiting intra-object dependencies in parallel simulation
- Author
-
Francesco Quaglia and Roberto Baldoni
- Subjects
discrete event simulation, distributed systems, causality, parallel processing, rollback-recovery, optimistic synchronization, concurrency, time warp, Computer Science Applications, Theoretical Computer Science, Signal Processing, Information Systems - Abstract
This paper introduces the notion of weak causality, which models intra-object parallelism in parallel discrete event simulation. In this setting, a run where events are executed at each object according to their timestamps is a correct run. The weak causality relation makes it possible to define the largest subset of all runs of a simulation that are equivalent to the timestamp-based run. Finally, we describe an application of weak causality to optimistic synchronization (Time Warp) by introducing a synchronization protocol that reduces the number of rollbacks and their extent.
- Published
- 1999
17. On the Correctness of Goscinski's Algorithm
- Author
-
Bruno Ciciani, Giacomo Cioffi, and Roberto Baldoni
- Subjects
correctness, mutual exclusion, causality, Computer Networks and Communications, Artificial Intelligence, Hardware and Architecture, Software, Theoretical Computer Science - Abstract
In this paper, the correctness of the mutual exclusion algorithm proposed by Goscinski (J. Parallel Distrib. Comput. 9(7), 77-82 (1990)), hereafter G, is discussed and its features are compared with other token-based algorithms already published. In particular, we show that G works correctly only over a communication system that guarantees a total ordering of messages, and is otherwise incorrect. We further give a modified version of G, hereafter BCC, and show that BCC is actually a simple modification of the Suzuki-Kasami algorithm (ACM Trans. Comput. Systems 3(5), 344-349 (1985)).
- Published
- 1995
18. Distributed algorithms for multiple entries to a critical section with priority
- Author
-
Bruno Ciciani and Roberto Baldoni
- Subjects
critical section, mutual exclusion, priority, distributed algorithms, serialization, priority inheritance, priority ceiling protocol, Computer Science Applications, Theoretical Computer Science, Signal Processing, Information Systems - Abstract
The distributed Mutual Exclusion (M.E.) algorithms for multiple entries to a critical section (C.S.) proposed in previous publications adopt an FCFS discipline to serialize M.E. requests, i.e., the ordering of the M.E. requests is done at their virtual generation time or physical arrival time. The insertion of priority disciplines such as Short-Job-First, Head-Of-Line, Shortest-Remaining-Job-First, etc., could be useful in many applications to optimize some performance indices. A drawback is that priority disciplines are prone to starvation. This paper shows how to insert a priority-based serialization discipline in token-based M.E. algorithms for multiple entries to a C.S. avoiding starvation. Moreover, we investigate its implementation overhead in the algorithm and the number of messages per C.S. entry.
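A common anti-starvation device for priority disciplines is aging: a waiting request's effective priority improves with the time it has spent waiting, so even low-priority requests are eventually served. A minimal sketch of such a serialization queue (the aging rule and all names are illustrative, not necessarily the paper's discipline; lower values mean higher priority):

```python
class PriorityRequestQueue:
    """Priority-serialized C.S. requests with aging to avoid starvation.
    Effective priority = base priority - aging credit for waiting time,
    so a low-priority request cannot be bypassed forever."""
    def __init__(self, aging=1.0):
        self.aging = aging
        self.now = 0                 # logical time, advanced per operation
        self.waiting = []            # (base_priority, arrival_time, pid)

    def request(self, priority, pid):
        self.waiting.append((priority, self.now, pid))
        self.now += 1

    def grant(self):
        """Serve the request with the best (lowest) effective priority."""
        self.now += 1
        best = min(self.waiting,
                   key=lambda r: r[0] - self.aging * (self.now - r[1]))
        self.waiting.remove(best)
        return best[2]

q = PriorityRequestQueue()
q.request(5, "low")    # low priority (larger value), but arrives first
q.request(1, "high")
print(q.grant(), q.grant())  # -> high low
```

In a token-based algorithm this queue would travel with the token, so every holder serializes pending requests under the same discipline.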
- Published
- 1994