21 results for "dependency tracking"
Search Results
2. Balancing Tracking Granularity and Parallelism in Many-Task Systems: The Horizons Approach
- Author
- Thoman, Peter and Salzmann, Philip
- Published
- 2024
3. Improving Data Scientist Efficiency with Provenance.
- Author
- Jingmei Hu, Jiwon Joung, Jacobs, Maia, Gajos, Krzysztof Z., and Seltzer, Margo I.
- Subjects
DEBUGGING, COGNITIVE load, DATA analysis, SOFTWARE engineering, PROGRAMMING languages
- Abstract
Data scientists frequently analyze data by writing scripts. We conducted a contextual inquiry with interdisciplinary researchers, which revealed that parameter tuning is a highly iterative process and that debugging is time-consuming. As analysis scripts evolve and become more complex, analysts have difficulty conceptualizing their workflow. In particular, after editing a script, it becomes difficult to determine precisely which code blocks depend on the edit. Consequently, scientists frequently re-run entire scripts instead of re-running only the necessary parts. We present ProvBuild, a tool that leverages language-level provenance to streamline the debugging process by reducing programmer cognitive load and decreasing subsequent runtimes, leading to an overall reduction in elapsed debugging time. ProvBuild uses provenance to track dependencies in a script. When an analyst debugs a script, ProvBuild generates a simplified script that contains only the information necessary to debug a particular problem. We demonstrate that debugging the simplified script lowers a programmer's cognitive load and permits faster re-execution when testing changes. The combination of reduced cognitive load and shorter runtime reduces the time necessary to debug a script. We quantitatively and qualitatively show that even though ProvBuild introduces overhead during a script's first execution, it is a more efficient way for users to debug and tune complex workflows. ProvBuild demonstrates a novel use of language-level provenance, in which it is used to proactively improve programmer productivity rather than merely providing a way to retroactively gain insight into a body of code.
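The slicing idea in this abstract lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical block names and a declared reads/writes interface (ProvBuild's actual language-level provenance capture is far richer): each block declares what it reads and writes, and after an edit only the transitive dependents re-run against cached state.

```python
# Minimal sketch of provenance-style dependency slicing (hypothetical names;
# not ProvBuild's actual implementation). Each block declares the variables
# it reads and writes; editing a block re-runs only its transitive dependents.

class Block:
    def __init__(self, name, reads, writes, fn):
        self.name, self.reads, self.writes, self.fn = name, set(reads), set(writes), fn

def affected_blocks(blocks, edited):
    """Return blocks that must re-run after `edited` changes, in script order."""
    dirty_vars, to_run = set(), []
    for b in blocks:
        if b.name == edited or b.reads & dirty_vars:
            to_run.append(b)
            dirty_vars |= b.writes
    return to_run

env = {}
pipeline = [
    Block("load",  [],        ["df"],    lambda e: e.update(df=[1, 2, 3])),
    Block("clean", ["df"],    ["clean"], lambda e: e.update(clean=[x * 2 for x in e["df"]])),
    Block("plot",  ["clean"], [],        lambda e: print("plot", e["clean"])),
    Block("stats", ["df"],    [],        lambda e: print("mean", sum(e["df"]) / len(e["df"]))),
]

# First execution runs everything and caches results in `env`.
for b in pipeline:
    b.fn(env)

# After editing "clean", only "clean" and "plot" re-run; "load" and "stats"
# are served from the cached environment.
for b in affected_blocks(pipeline, edited="clean"):
    b.fn(env)
```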
- Published
- 2020
4. Parallel Query Systems: Demand-Driven Incremental Compilers
- Author
- Nolander, Christofer
- Abstract
Query systems were recently introduced as an architecture for constructing compilers and have been shown to enable fast and efficient incremental compilation, where results from previous builds are reused to accelerate future builds. With this architecture, a compiler is composed of several queries, each of which extracts a small piece of information about the source program. For example, one query might determine the type of a variable, and another the list of functions defined in some file. The dependencies of a query, which include other queries or files on disk, are automatically recorded at runtime. With these dependencies, query systems can detect changes in their inputs and incorporate them into the final output, while reusing old results from queries which have not changed. This reduces the amount of work needed to recompile code, which saves both time and energy. We present a new parallel execution model for query systems using work-stealing, which dynamically balances the workload across multiple threads. This is facilitated by various augmentations to existing algorithms to allow concurrent operations. Furthermore, we introduce a novel data structure that accelerates incremental compilation for common use cases. We evaluated the impact of these augmentations by implementing a compiler frontend capable of parsing and type-checking the Go programming language. We demonstrate a 10x reduction in compile times using the parallel execution model. Finally, under certain common conditions, we show a 5x reduction in incremental compile times compared to the state-of-the-art.
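The core mechanism described here, recording a query's dependencies automatically as it executes so unchanged results can be reused, fits in a short sketch. The following is a toy single-threaded version with a hypothetical API; the thesis adds work-stealing parallelism and finer-grained revalidation.

```python
# Toy demand-driven query system with automatic dependency recording
# (hypothetical API; real engines are far more involved).

class QueryEngine:
    def __init__(self):
        self.inputs = {}      # input name -> (value, revision when last changed)
        self.memo = {}        # (query, args) -> {value, inputs, computed_at}
        self.revision = 0
        self._active = []     # stack of input-dependency sets being recorded

    def set_input(self, name, value):
        self.revision += 1
        self.inputs[name] = (value, self.revision)

    def input(self, name):
        if self._active:
            self._active[-1].add(name)   # record the dependency automatically
        return self.inputs[name][0]

    def query(self, fn, *args):
        key = (fn.__name__, args)
        m = self.memo.get(key)
        if m and all(self.inputs[n][1] <= m["computed_at"] for n in m["inputs"]):
            if self._active:
                self._active[-1] |= m["inputs"]
            return m["value"]            # reuse: none of its inputs changed
        self._active.append(set())
        value = fn(self, *args)
        deps = self._active.pop()
        if self._active:
            self._active[-1] |= deps     # propagate deps to the calling query
        self.memo[key] = {"value": value, "inputs": deps, "computed_at": self.revision}
        return value

def parse(db, file):
    return db.input(file).split()

def count_defs(db, file):
    return sum(1 for tok in db.query(parse, file) if tok == "func")

db = QueryEngine()
db.set_input("main.go", "func main func helper")
print(db.query(count_defs, "main.go"))   # computed: 2
db.set_input("util.go", "func util")
print(db.query(count_defs, "main.go"))   # reused: main.go is unchanged
```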
- Published
- 2023
5. StarFlow: A Script-Centric Data Analysis Environment
- Author
- Angelino, Elaine, Yamins, Daniel, and Seltzer, Margo, in McGuinness, Deborah L., Michaelis, James R., and Moreau, Luc (editors)
- Published
- 2010
6. CHECKPOINTING WITH MINIMAL RECOVERY IN ADHOCNET BASED TMR
- Author
- Sarmistha Neogy
- Subjects
rollback recovery, adhoc networks, dependency tracking, checkpointing, triple modular redundancy
- Abstract
This paper describes a two-fold approach towards utilizing Triple Modular Redundancy (TMR) in a Wireless Adhoc Network (AdocNet). A distributed checkpointing and recovery protocol is proposed. The protocol eliminates useless checkpoints and helps select only the dependent processes in the concerned checkpointing interval for recovery. A process starts recovery from its last checkpoint only if it finds that it is dependent (directly or indirectly) on the faulty process. The recovery protocol also prevents the occurrence of missing or orphan messages. In AdocNet, a set of three nodes (connected to each other) is considered to form a TMR set, designated as main, primary and secondary. A main node in one set may serve as primary or secondary in another. Computation is not triplicated, but the checkpoint taken by the main is duplicated in its primary so that the primary can continue if the main fails. The primary's checkpoint is in turn duplicated in the secondary in case the primary fails too.
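The selection rule in this abstract (only processes that depend, directly or indirectly, on the faulty one recover) can be illustrated in a few lines. A minimal sketch, assuming a toy message-delivery interface rather than the paper's protocol:

```python
# Sketch of the dependency idea behind selective rollback (hypothetical
# structure, not the paper's protocol): a process that received a message in
# the current checkpoint interval becomes dependent on the sender, and only
# processes transitively dependent on the faulty one must recover.

deps = {p: set() for p in "ABCD"}   # deps[p] = processes p directly depends on

def deliver(sender, receiver):
    deps[receiver].add(sender)      # receiving a message creates a dependency

def must_recover(faulty):
    """Processes that depend (directly or indirectly) on the faulty process."""
    out, frontier = {faulty}, [faulty]
    while frontier:
        p = frontier.pop()
        for q, ds in deps.items():
            if p in ds and q not in out:
                out.add(q)
                frontier.append(q)
    return out

deliver("A", "B")            # B depends on A
deliver("B", "C")            # C depends on B (and so, indirectly, on A)
print(must_recover("A"))     # {'A', 'B', 'C'} (set order may vary); D keeps running
```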
- Published
- 2022
7. Bivariate dependency tracking in interval arithmetic.
- Author
- Gray, Ander, de Angelis, Marco, Patelli, Edoardo, and Ferson, Scott
- Subjects
NONLINEAR differential equations, COMPUTER software execution, BOOLEAN matrices, ORDINARY differential equations, PROBABILITY theory
- Abstract
We propose a correlated bivariate interval arithmetic which allows an initial dependence to be propagated, as well as the tracking of complicated non-linear dependencies arising from a computer program's execution. For this task, we extend several familiar concepts from probability theory to intervals, including bivariate copulas, conditioning, inference, and vine copulas. The interval copulas, which we call interval relations, may take any shape, and are represented by Boolean matrices defining where two intervals jointly exist or not. We use set conditioning to define an efficient correlated interval arithmetic, which may be used to find the input–output relations of operations. A key component of the presented arithmetic is interval relation networks, interval analogues to vine copulas, which store the interval relations throughout a program's execution and use set inference to determine any unknown relations. The presented network inference can give a robust outer approximation to the exact multivariate interval dependency, which is found by projecting each pairwise bivariate relation into higher dimensions. Although some higher-dimensional information is lost in this process, the bivariate projections are often sufficient to stop interval bounds becoming excessively wide. This extension allows intervals to be rigorously and tightly propagated in deterministic engineering codes in an automatic fashion, and we apply the arithmetic to several engineering dynamics problems, including a non-linear ordinary differential equation.
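The problem this arithmetic targets is easy to demonstrate: without dependency information, interval arithmetic treats repeated variables as independent and inflates bounds. A minimal sketch of the phenomenon follows; it is not the authors' Boolean-matrix relation machinery, just the motivating failure and the tightening that dependency knowledge enables.

```python
# The "dependency problem" of naive interval arithmetic: both operands of
# x - x are treated as if they could vary independently.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # Naive rule: assume the operands are unrelated.
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 3)
print(x - x)          # [-2, 2]: the repeated variable inflates the bounds

# Knowing the operands are perfectly dependent (the relation is the identity)
# collapses the result to the exact answer:
def sub_dependent(a, b, identical=False):
    return Interval(0, 0) if identical else a - b

print(sub_dependent(x, x, identical=True))   # [0, 0]
```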
- Published
- 2023
8. COVID-19 Knowledge Resource Categorization and Tracking: Conceptual Framework Study
- Author
- Jamil Hussain, Muhammad Afzal, Jaehun Bang, Sungyoung Lee, and Maqbool Hussain
- Subjects
Knowledge management, Computer science, Medical informatics, digital health, Health Informatics, information technology, Humans, Resource management, Action research, Pandemics, Consumer Health Information, SARS-CoV-2, pandemic, dashboards, information organization, COVID-19, interactive dashboard, Metadata, Workflow, knowledge graphs, Conceptual framework, Health Resources, dependency tracking, tracing information
- Abstract
Background: Since the declaration of COVID-19 as a global pandemic by the World Health Organization, the disease has gained momentum with every passing day. Various private and government sectors of different countries allocated funding for research in multiple capacities. A significant portion of efforts has been devoted to information technology and service infrastructure development, including research on developing intelligent models and techniques for alerts, monitoring, early diagnosis, prevention, and other relevant services. As a result, many information resources have been created globally and are available for use. However, a defined structure to organize these resources into categories based on the nature and origin of the data is lacking. Objective: This study aims to organize COVID-19 information resources into a well-defined structure to facilitate the easy identification of a resource, tracking information workflows, and to provide a guide for a contextual dashboard design and development. Methods: A sequence of action research was performed that involved a review of COVID-19 efforts and initiatives on a global scale during the year 2020. Data were collected according to the defined structure of primary, secondary, and tertiary categories. Various techniques for descriptive statistical analysis were employed to gain insights into the data to help develop a conceptual framework to organize resources and track interactions between different resources. Results: Investigating diverse information at the primary, secondary, and tertiary levels enabled us to develop a conceptual framework for COVID-19–related efforts and initiatives. The framework of resource categorization provides a gateway to access global initiatives with enriched metadata, and assists users in tracking the workflow of tertiary, secondary, and primary resources with relationships between various fragments of information. The results demonstrated mapping initiatives at the tertiary level to the secondary level and then to the primary level to reach firsthand data, research, and trials. Conclusions: Adopting the proposed three-level structure allows for a consistent organization and management of existing COVID-19 knowledge resources and provides a roadmap for classifying future resources. This study is one of the earliest studies to introduce an infrastructure for locating and placing the right information at the right place. By implementing the proposed framework according to the stated guidelines, this study allows for the development of applications such as interactive dashboards to facilitate the contextual identification and tracking of interdependent COVID-19 knowledge resources.
- Published
- 2021
9. Projecting interval uncertainty through the discrete Fourier transform: An application to time signals with poor precision.
- Author
- Behrendt, Marco, de Angelis, Marco, Comerford, Liam, Zhang, Yuanjin, and Beer, Michael
- Subjects
SIGNAL frequency estimation, MONTE Carlo method, INTERVAL analysis, AMPLITUDE estimation, DISCRETE Fourier transforms
- Abstract
The discrete Fourier transform (DFT) is often used to decompose a signal into a finite number of harmonic components. The efficient and rigorous propagation of the error present in a signal through the transform can be computationally challenging. Real data is always subject to imprecision because of measurement uncertainty. For example, such uncertainty may come from sensors whose precision is affected by degradation, or simply from digitisation. On many occasions, only error bounds on the signal may be known, thus it may be necessary to automatically propagate the error bounds without making additional artificial assumptions. This paper presents a method that can automatically propagate interval uncertainty through the DFT while yielding the exact bounds on the Fourier amplitude and on an estimation of the Power Spectral Density (PSD) function. The method allows technical analysts to project interval uncertainty – present in the time signals – to the Fourier amplitude and PSD function without making assumptions about the dependence and the distribution of the error over the time steps. Thus, it is possible to calculate and analyse system responses in the frequency domain without conducting extensive Monte Carlo simulations nor running expensive optimisations in the time domain. The applicability of this method in practice is demonstrated by a technical application. It is also shown that conventional Monte Carlo methods severely underestimate the uncertainty.
• A novel algorithm for propagating interval signals through the DFT is presented.
• Interval arithmetic is utilised to obtain exact bounds on the Fourier amplitude.
• The problem of repeated variables arising in the amplitude calculation is addressed.
• It is shown that MC underestimates the uncertainty in the Fourier amplitude bounds.
• The efficiency is tested against an example involving dynamic structural analysis.
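Each DFT coefficient's real and imaginary parts are linear in the samples, so endpoint reasoning already yields their exact bounds; the repeated-variable problem in the amplitude is the hard part the paper addresses. A sketch under those assumptions (the amplitude step below is merely conservative, unlike the paper's exact bounds):

```python
import math

def interval_dist0(lo, hi):
    """Distance from 0 to the interval [lo, hi] (0 if it contains 0)."""
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

def dft_coeff_bounds(samples, k):
    """samples: list of (lo, hi) pairs; returns (lo, hi) bounds on |X_k|."""
    n = len(samples)
    re_lo = re_hi = im_lo = im_hi = 0.0
    for t, (lo, hi) in enumerate(samples):
        c = math.cos(-2.0 * math.pi * k * t / n)
        s = math.sin(-2.0 * math.pi * k * t / n)
        # Re and Im are linear in each sample, so their exact bounds come
        # from picking the endpoint according to each coefficient's sign.
        re_lo += c * (lo if c >= 0 else hi)
        re_hi += c * (hi if c >= 0 else lo)
        im_lo += s * (lo if s >= 0 else hi)
        im_hi += s * (hi if s >= 0 else lo)
    # Conservative amplitude bounds from the Re/Im box (the paper is tighter):
    amp_lo = math.hypot(interval_dist0(re_lo, re_hi), interval_dist0(im_lo, im_hi))
    amp_hi = math.hypot(max(abs(re_lo), abs(re_hi)), max(abs(im_lo), abs(im_hi)))
    return amp_lo, amp_hi

signal = [(0.9, 1.1), (-0.1, 0.1), (-1.1, -0.9), (-0.1, 0.1)]  # noisy cosine
print(dft_coeff_bounds(signal, k=1))   # approximately (1.8, 2.2)
```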
- Published
- 2022
10. Securing information flow via dynamic capture of dependencies.
- Author
- Shroff, Paritosh, Smith, Scott F., and Thober, Mark
- Subjects
INFORMATION technology, COMPUTER security, SECURITY systems, DATA protection, DYNAMICS, COMPUTER science
- Abstract
Although static systems for information flow security are well studied, few works address runtime information flow monitoring. Runtime information flow control offers distinct advantages in precision and in the ability to support dynamically defined policies. To this end, we develop a new runtime information flow system based on the runtime tracking of indirect dependencies between program points. Our system tracks both direct and indirect information flows, and noninterference results are proved.
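The distinction between direct and indirect flows is the crux here. A toy sketch of runtime label tracking follows, assuming a hypothetical Labeled wrapper and an explicit branch-context stack; the paper's system works at the level of program points and proves noninterference, which this sketch does not attempt.

```python
# Toy runtime information-flow tracking with direct and indirect flows.

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label   # label: set of taint names

PC = [set()]                                    # stack of branch-context labels

def add(a, b):
    # Direct flow: result carries both operand labels plus the current pc.
    return Labeled(a.value + b.value, a.label | b.label | PC[-1])

def branch_on(cond):
    PC.append(PC[-1] | cond.label)              # entering a branch taints the pc

def end_branch():
    PC.pop()

secret = Labeled(1, {"secret"})
public = Labeled(2, set())

direct = add(secret, public)
print(direct.value, direct.label)               # 3 {'secret'}: direct flow

out = Labeled(0, set())
branch_on(secret)                               # control depends on the secret
out = add(out, Labeled(1, set()))               # assignment under tainted pc
end_branch()
print(out.value, out.label)                     # 1 {'secret'}: indirect flow
```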
- Published
- 2008
11. Efficient dependency tracking for relevant events in concurrent systems.
- Author
- Agarwal, Anurag and Garg, Vijay
- Subjects
COMPUTER multitasking, SCALABILITY, COMPUTER networks, SIMULTANEOUS multithreading processors, JAVA programming language, PARTIALLY ordered sets, DATA structures
- Abstract
In a concurrent system with N processes, vector clocks of size N are used for tracking dependencies between the events. Using vectors of size N leads to scalability problems. Moreover, the association of components with processes makes vector clocks cumbersome and inefficient for systems with a dynamic number of processes. We present a class of logical clock algorithms, called chain clocks, for tracking dependencies between relevant events, based on generalizing a process to any chain in the computation poset. Chain clocks are generally able to track dependencies using fewer than N components and also adapt automatically to systems with a dynamic number of processes. We compared the performance of the Dynamic Chain Clock (DCC) with the vector clock for multithreaded programs in Java. With 1% of total events being relevant, DCC requires 10 times fewer components than the vector clock and the timestamp traces are smaller by a factor of 100. For the same case, although DCC requires shared data structures, it is still 10 times faster than the vector clock in our experiments. We also study the class of chain clocks which perform optimally for posets of small width and show that a single algorithm cannot perform optimally for posets of small width as well as large width.
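For readers unfamiliar with the baseline, the vector-clock rules that chain clocks generalize fit in a few lines. A minimal sketch; the chain-clock generalization, which assigns components to chains of the computation poset instead of processes, is not reproduced here.

```python
# Classic vector-clock rules for N processes (the baseline chain clocks improve on).

N = 3
clock = [[0] * N for _ in range(N)]   # clock[p] = process p's vector

def local_event(p):
    clock[p][p] += 1

def send(p):
    local_event(p)
    return list(clock[p])             # timestamp piggybacked on the message

def receive(p, ts):
    clock[p] = [max(a, b) for a, b in zip(clock[p], ts)]   # component-wise max
    local_event(p)

def happened_before(ts1, ts2):
    return all(a <= b for a, b in zip(ts1, ts2)) and ts1 != ts2

m = send(0)                                    # event e1 on P0
receive(1, m)                                  # event e2 on P1
print(happened_before([1, 0, 0], clock[1]))    # True: e1 -> e2
```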
- Published
- 2007
12. Computer arithmetic for probability distribution variables
- Author
- Li, Weiye and Mac Hyman, James
- Subjects
UNCERTAINTY (Information theory), COMPUTER simulation, PROBABILITY theory, RANDOM variables, DISTRIBUTION (Probability theory)
- Abstract
The uncertainty in the variables and functions in computer simulations can be quantified by probability distributions and the correlations between the variables. We augment the standard computer arithmetic operations and the interval arithmetic approach to include the probability distribution variable (PDV) as a basic data type. A probability distribution variable is a random variable that is usually characterized by generalized probabilistic discretization. The correlations or dependencies between PDVs that arise in a computation are automatically calculated and tracked. These correlations are used by the computer arithmetic rules to achieve a convergent approximation of the probability distribution function of a PDV and to guarantee that the derived bounds include the true solution. In many calculations, the calculated uncertainty bounds for PDVs are much tighter than they would have been had the dependencies been ignored. We describe the new PDV arithmetic and verify the effectiveness of the approach in accounting for the creation and propagation of uncertainties in a computer program due to uncertainties in the initial data.
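Why tracked dependencies tighten bounds can be seen with a toy substitute: represent each quantity by aligned samples so that dependence between derived values is carried along implicitly. This is a sampling illustration only; the paper's PDV discretization computes guaranteed bounds rather than estimates.

```python
# Illustration of why tracking dependence tightens uncertainty bounds
# (aligned Monte Carlo samples standing in for the paper's PDV machinery).

import random

random.seed(0)
xs = [random.uniform(1, 3) for _ in range(100_000)]   # samples of a quantity x

# y = x - x with the dependency tracked: identically zero.
dep = [x - x for x in xs]

# The same expression with the dependency ignored (independent copies of x):
xs2 = [random.uniform(1, 3) for _ in range(100_000)]
indep = [a - b for a, b in zip(xs, xs2)]

print(min(dep), max(dep))         # 0.0 0.0
print(min(indep), max(indep))     # roughly -2 .. 2: spurious width
```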
- Published
- 2004
13. COVID-19 Knowledge Resource Categorization and Tracking: Conceptual Framework Study.
- Author
- Afzal, Muhammad, Hussain, Maqbool, Hussain, Jamil, Bang, Jaehun, and Lee, Sungyoung
- Subjects
COVID-19 pandemic, INFORMATION superhighway, COVID-19, INFRASTRUCTURE (Economics), INFORMATION resources, KNOWLEDGE graphs, INFORMATION technology
- Abstract
Background: Since the declaration of COVID-19 as a global pandemic by the World Health Organization, the disease has gained momentum with every passing day. Various private and government sectors of different countries allocated funding for research in multiple capacities. A significant portion of efforts has been devoted to information technology and service infrastructure development, including research on developing intelligent models and techniques for alerts, monitoring, early diagnosis, prevention, and other relevant services. As a result, many information resources have been created globally and are available for use. However, a defined structure to organize these resources into categories based on the nature and origin of the data is lacking. Objective: This study aims to organize COVID-19 information resources into a well-defined structure to facilitate the easy identification of a resource, tracking information workflows, and to provide a guide for a contextual dashboard design and development. Methods: A sequence of action research was performed that involved a review of COVID-19 efforts and initiatives on a global scale during the year 2020. Data were collected according to the defined structure of primary, secondary, and tertiary categories. Various techniques for descriptive statistical analysis were employed to gain insights into the data to help develop a conceptual framework to organize resources and track interactions between different resources. Results: Investigating diverse information at the primary, secondary, and tertiary levels enabled us to develop a conceptual framework for COVID-19-related efforts and initiatives. The framework of resource categorization provides a gateway to access global initiatives with enriched metadata, and assists users in tracking the workflow of tertiary, secondary, and primary resources with relationships between various fragments of information. The results demonstrated mapping initiatives at the tertiary level to the secondary level and then to the primary level to reach firsthand data, research, and trials. Conclusions: Adopting the proposed three-level structure allows for a consistent organization and management of existing COVID-19 knowledge resources and provides a roadmap for classifying future resources. This study is one of the earliest studies to introduce an infrastructure for locating and placing the right information at the right place. By implementing the proposed framework according to the stated guidelines, this study allows for the development of applications such as interactive dashboards to facilitate the contextual identification and tracking of interdependent COVID-19 knowledge resources.
- Published
- 2021
14. A Module-System Discipline for Model-Driven Software Development
- Author
- Sebastian Erdweg and Klaus Ostermann
- Subjects
Dependency (UML), Programming language, Computer science, Semantics (computer science), Model transformation, model-driven software development, Software development, Software Engineering (cs.SE), Scripting language, module system, software components, dependency tracking, domain-specific languages, Programming Languages (cs.PL)
- Abstract
Model-driven development is a pragmatic approach to software development that embraces domain-specific languages (DSLs), where models correspond to DSL programs. A distinguishing feature of model-driven development is that clients of a model can select from an open set of alternative semantics of the model by applying different model transformations. However, in existing model-driven frameworks, dependencies between models, model transformations, and generated code artifacts are either implicit or globally declared in build scripts, which impedes modular reasoning, separate compilation, and programmability in general. We propose the design of a new module system that incorporates models and model transformations as modules. A programmer can apply transformations in import statements, thus declaring a dependency on generated code artifacts. Our design enables modular reasoning and separate compilation by preventing hidden dependencies, and it supports mixing modeling artifacts with conventional code artifacts as well as higher-order transformations. We have formalized our design and the aforementioned properties and have validated it by an implementation and case studies that show that our module system successfully integrates model-driven development into conventional programming languages.
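The key design point, declaring a dependency on a generated artifact by applying a transformation in the import itself, can be mimicked in miniature. A toy sketch with a hypothetical import_model API; the paper formalizes this as a genuine module system with separate compilation.

```python
# Toy sketch of "transformations in import statements" (hypothetical API).

models = {"shapes.model": {"entities": ["Circle", "Square"]}}
dependencies = []                       # (importer, model, transformation) edges

def to_classes(model):
    """A model-to-code transformation: generate a class per entity."""
    return {e: type(e, (), {}) for e in model["entities"]}

def import_model(importer, name, transform):
    # The import applies the transformation AND records an explicit
    # dependency on the generated artifact, instead of hiding it in a build script.
    dependencies.append((importer, name, transform.__name__))
    return transform(models[name])

classes = import_model("main.py", "shapes.model", to_classes)
print(sorted(classes))                  # ['Circle', 'Square']
print(dependencies)                     # [('main.py', 'shapes.model', 'to_classes')]
```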
- Published
- 2017
15. A Module-System Discipline for Model-Driven Software Development
- Author
- Erdweg, S.T. and Ostermann, Klaus
- Abstract
Model-driven development is a pragmatic approach to software development that embraces domain-specific languages (DSLs), where models correspond to DSL programs. A distinguishing feature of model-driven development is that clients of a model can select from an open set of alternative semantics of the model by applying different model transformations. However, in existing model-driven frameworks, dependencies between models, model transformations, and generated code artifacts are either implicit or globally declared in build scripts, which impedes modular reasoning, separate compilation, and programmability in general. We propose the design of a new module system that incorporates models and model transformations as modules. A programmer can apply transformations in import statements, thus declaring a dependency on generated code artifacts. Our design enables modular reasoning and separate compilation by preventing hidden dependencies, and it supports mixing modeling artifacts with conventional code artifacts as well as higher-order transformations. We have formalized our design and the aforementioned properties and have validated it by an implementation and case studies that show that our module system successfully integrates model-driven development into conventional programming languages.
- Published
- 2017
16. Checkpointing with Minimal Recovery in Adhoc Net Based TMR
- Author
- Sarmistha Neogy
- Subjects
Triple modular redundancy, Computer science, Wireless ad hoc network, Node (networking), Distributed computing, Process (computing), checkpointing, rollback recovery, Wireless, dependency tracking, Networking and Internet Architecture (cs.NI), Distributed, Parallel, and Cluster Computing (cs.DC)
- Abstract
This paper describes a two-fold approach towards utilizing Triple Modular Redundancy (TMR) in a Wireless Adhoc Network (AdocNet). A distributed checkpointing and recovery protocol is proposed. The protocol eliminates useless checkpoints and helps select only the dependent processes in the concerned checkpointing interval for recovery. A process starts recovery from its last checkpoint only if it finds that it is dependent (directly or indirectly) on the faulty process. The recovery protocol also prevents the occurrence of missing or orphan messages. In AdocNet, a set of three nodes (connected to each other) is considered to form a TMR set, designated as main, primary and secondary. A main node in one set may serve as primary or secondary in another. Computation is not triplicated, but the checkpoint taken by the main is duplicated in its primary so that the primary can continue if the main fails. The primary's checkpoint is in turn duplicated in the secondary in case the primary fails too. (Comment: International Journal of UbiComp (IJU), Vol.6, No.4, October 2015)
- Published
- 2015
17. Vulnerability dependencies in antivirus software
- Author
- Kimmo Halunen, Juha Röning, Pekka Pietikäinen, Marko Laakso, K. Askola, Rauli Puuperä, and Juhani Eronen
- Subjects
Vulnerability dependencies, Computer science, Vulnerability management, Semantic data model, Computer security, Antivirus vulnerabilities, Dependency tracking, Critical infrastructure, Computer virus, Software, Communications protocol
- Abstract
In this paper we present an application of the MATINE method for investigating dependencies in antivirus (AV) software and some vulnerabilities arising from these dependencies. Previously, this method has been used effectively to find vulnerabilities in network protocols. Because AV software is as vulnerable as any other software and has a great security impact, we decided to use this method to find vulnerabilities in AV software. These findings may have implications for critical infrastructure, as the use of AV is often considered obligatory. The results were obtained by gathering semantic data on AV vulnerabilities, analyzing the data, and performing content analysis of media follow-up. The results indicate that different aspects of AV software should be observed in the context of critical infrastructure planning and management.
- Published
- 2008
18. Automatic dependency tracking in modern software configuration management systems
- Author
- Sajnani, Bharat Mohan
- Subjects
Software configuration management, Dependency tracking
- Abstract
In today's configuration management systems, it is essential to support multiple projects, branches, and versions of the source code. As an institution's build system becomes more complex, management of the source code becomes very time-consuming if the build system is not mature enough. This puts pressure on several entities, especially the build and release management teams, as well as the quality assurance and product testing teams. This thesis introduces a technique to obtain dependency information from build systems, mapping executables to source files and analyzing their dependencies. Dependency information can be used to reduce the testing effort, automate the entire sustaining production process (service packs and patches) for customers, prevent test escapes, highlight the relationships between various components of a system, and provide product and system model views of the internal representation of a product's architecture. This thesis proposes a method to gather dependency information of a system by extracting certain metadata from executables. After some in-depth research, we identified that source files can be initially injected with certain information that remains embedded even after a system has been built into executables. This information can then be extracted and compiled to create dependency information that maps executables, static and dynamic shared libraries, as well as software and enterprise archives, to the source files that were used to build them.
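The injection-and-extraction idea is simple to prototype. A sketch with a hypothetical marker format; the thesis works with real build systems and object formats rather than this simulation.

```python
# Sketch of injected build metadata (hypothetical "SRCDEP:" marker format):
# each source file embeds a tagged identifier that survives compilation as a
# string constant, so scanning the binary recovers the sources that built it.

import re

MARKER = b"SRCDEP:"

def inject(source_text, filename):
    """Prepend a marker constant to a C-like source file before building."""
    return f'static const char *__dep = "SRCDEP:{filename}";\n' + source_text

def scan_executable(binary_bytes):
    """Recover the source files whose markers appear in the built binary."""
    return sorted(set(re.findall(MARKER + rb"([\w./-]+)", binary_bytes)))

# Simulated binary contents after a build of two injected sources:
fake_binary = b"\x7fELF...SRCDEP:src/main.c\x00...SRCDEP:src/util.c\x00..."
print(scan_executable(fake_binary))   # [b'src/main.c', b'src/util.c']
```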
- Published
- 2004
19. Locks and Barriers in Checkpointing and Recovery
- Author
- Ramamurthy Badrinath (IIT Kharagpur) and C. Morin (IRISA / Inria, Université de Rennes 1)
- Subjects
LOCK, BARRIER, CHECKPOINTING, DEPENDENCY TRACKING, BACKWARD ERROR RECOVERY, SYNCHRONIZATION, CLUSTER, KERRIGHED, Distributed computing, Message passing, Shared memory
- Abstract
Dependency tracking between communicating tasks is an important concept in backward error recovery for parallel applications. One can extend the traditional dependency-tracking model for message-passing systems to track dependencies between shared memory and task-private states in shared-memory applications. The objective of this paper is to analyze the issues raised by locks and barriers in parallel applications, so that tasks can be checkpointed at any time (even when holding or waiting for locks and barriers). In particular, we attempt to extend earlier dependency-tracking mechanisms to locks and barriers. We address both coordinated and uncoordinated checkpointing schemes.
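One way to picture extending dependency tracking to locks: a task that acquires a lock last released by another task becomes dependent on it for recovery purposes. A minimal sketch of that bookkeeping (hypothetical, not the paper's Kerrighed mechanism):

```python
# Recording recovery dependencies created through lock hand-offs.

last_release = {}                 # lock -> task that last released it
deps = {}                         # task -> set of tasks it depends on

def acquire(task, lock):
    prev = last_release.get(lock)
    if prev is not None and prev != task:
        deps.setdefault(task, set()).add(prev)   # dependency through the lock

def release(task, lock):
    last_release[lock] = task

acquire("T1", "L")
release("T1", "L")
acquire("T2", "L")                # T2 now depends on T1's state
print(deps)                       # {'T2': {'T1'}}
```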
- Published
- 2004
20. Common Mechanisms for supporting fault tolerance in DSM and message passing systems
- Author
- Badrinath, Ramamurthy and Morin, Christine (IRISA / Inria, Université de Rennes 1)
- Subjects
FAULT TOLERANCE, ROLLBACK RECOVERY, DEPENDENCY TRACKING, DISTRIBUTED SYSTEMS, SHARED MEMORY, KERRIGHED, CHECKPOINTING, MESSAGE PASSING, CLUSTER
- Abstract
Backward error recovery involving checkpointing and restart of tasks is an important component of any system providing fault tolerance to applications distributed over a network. A central problem in checkpointing and recovery is the ability to track dependencies and arrive at a consistent global checkpoint. Traditionally, the literature treats one of either distributed shared memory (DSM) or message passing as the interprocess communication mechanism when considering the issue of fault tolerance. This paper describes a preliminary investigation into common mechanisms that can be implemented to support a wide variety of protocols in both shared-memory and message-passing systems. In effect, they can be used in a system that combines both these IPC mechanisms.
- Published
- 2003
21. Consistent Main-Memory Database Federations under Deferred Disk Writes
- Author
- R. Schmidt and Fernando Pedone
- Subjects
Atomicity, Distributed database, consistency, rollback-recovery, Transaction processing, Commit, Consistency (database systems), distributed transactions, Distributed transaction, Stable storage, dependency tracking, Database transaction, main-memory database systems
- Abstract
Current cluster architectures provide the ideal environment to run federations of main-memory database systems (FMMDBs). In FMMDBs, data resides in the main memory of the federation servers. FMMDBs significantly improve performance by avoiding I/O during the execution of read operations. To maximize the performance of update transactions as well, some applications resort to deferred disk writes. This means that update transactions commit before their modifications are written to stable storage, and durability must be ensured outside the database. While deferred disk writes in centralized MMDBs relax only the durability property of transactions, in FMMDBs transaction atomicity may also be violated in case of failures. We address this issue from the perspective of log-based rollback-recovery in distributed systems and provide an efficient solution to the problem. Besides presenting a mechanism to ensure atomicity in FMMDBs, the paper bridges the gap between rollback-recovery in message-passing distributed systems and distributed transaction processing, and shows how results developed in the first context can be exploited in the second.
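The atomicity hazard motivating this paper can be reproduced in a toy model: each server commits in main memory and flushes its redo log lazily, so a crash after only one server has flushed splits a distributed transaction. The sketch below illustrates the problem, not the paper's recovery protocol.

```python
# Toy model of the deferred-disk-write hazard in a two-server federation.

class Server:
    def __init__(self):
        self.data, self.log, self.disk_log = {}, [], []
    def commit(self, tx):                  # commit in main memory only
        self.data.update(tx)
        self.log.append(tx)
    def flush(self):                       # deferred, batched disk write
        self.disk_log.extend(self.log)
        self.log.clear()
    def crash_and_recover(self):
        self.data, self.log = {}, []
        for tx in self.disk_log:           # replay whatever reached the disk
            self.data.update(tx)

a, b = Server(), Server()
a.commit({"x": 1})
b.commit({"y": 1})                         # together: one distributed transaction
a.flush()                                  # only server A flushed before the crash
a.crash_and_recover()
b.crash_and_recover()
print(a.data, b.data)                      # {'x': 1} {} : atomicity is broken
```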