46 results for "Weidlich, Matthias"
Search Results
2. Detecting rumours with latency guarantees using massive streaming data
- Author
-
Nguyen, Thanh Tam, Huynh, Thanh Trung, Yin, Hongzhi, Weidlich, Matthias, Nguyen, Thanh Thi, Mai, Thai Son, and Nguyen, Quoc Viet Hung
- Published
- 2023
3. Fire now, fire later: alarm-based systems for prescriptive process monitoring
- Author
-
Fahrenkrog-Petersen, Stephan A., Tax, Niek, Teinemaa, Irene, Dumas, Marlon, Leoni, Massimiliano de, Maggi, Fabrizio Maria, and Weidlich, Matthias
- Published
- 2022
4. The Collaborative Research Center FONDA
- Author
-
Leser, Ulf, Hilbrich, Marcus, Draxl, Claudia, Eisert, Peter, Grunske, Lars, Hostert, Patrick, Kainmüller, Dagmar, Kao, Odej, Kehr, Birte, Kehrer, Timo, Koch, Christoph, Markl, Volker, Meyerhenke, Henning, Rabl, Tilmann, Reinefeld, Alexander, Reinert, Knut, Ritter, Kerstin, Scheuermann, Björn, Schintke, Florian, Schweikardt, Nicole, and Weidlich, Matthias
- Published
- 2021
5. What spreadsheets are to numbers, process mining is to events
- Author
-
Weidlich, Matthias
- Published
- 2019
6. Privacy-Preserving Process Mining: Differential Privacy for Event Logs
- Author
-
Mannhardt, Felix, Koschmider, Agnes, Baracaldo, Nathalie, Weidlich, Matthias, and Michael, Judith
- Published
- 2019
7. Privacy-preserving Process Mining: Differential Privacy for Event Logs (Extended Abstract)
- Author
-
Mannhardt, Felix, Koschmider, Agnes, Baracaldo, Nathalie, Weidlich, Matthias, and Michael, Judith
- Published
- 2019
8. Answer validation for generic crowdsourcing tasks with minimal efforts
- Author
-
Hung, Nguyen Quoc Viet, Thang, Duong Chi, Tam, Nguyen Thanh, Weidlich, Matthias, Aberer, Karl, Yin, Hongzhi, and Zhou, Xiaofang
- Published
- 2017
9. PRETSA: Event Log Sanitization for Privacy-aware Process Discovery (Extended Abstract)
- Author
-
Fahrenkrog-Petersen, Stephan A., van der Aa, Han, and Weidlich, Matthias
- Published
- 2019
10. Argument discovery via crowdsourcing
- Author
-
Nguyen, Quoc Viet Hung, Duong, Chi Thang, Nguyen, Thanh Tam, Weidlich, Matthias, Aberer, Karl, Yin, Hongzhi, and Zhou, Xiaofang
- Published
- 2017
11. Process Analytics over IoT-based Event Streams with Privacy Guarantees
- Author
-
Koschmider, Agnes, Degeling, Martin, and Weidlich, Matthias
- Published
- 2019
12. Querying process models by behavior inclusion
- Author
-
Kunze, Matthias, Weidlich, Matthias, and Weske, Mathias
- Published
- 2015
13. Styles in business process modeling: an exploration and a model
- Author
-
Pinggera, Jakob, Soffer, Pnina, Fahland, Dirk, Weidlich, Matthias, Zugal, Stefan, Weber, Barbara, Reijers, Hajo A., and Mendling, Jan
- Published
- 2015
14. Connectivity of workflow nets: the foundations of stepwise verification
- Author
-
Polyvyanyy, Artem, Weidlich, Matthias, and Weske, Mathias
- Published
- 2011
15. Editorial
- Author
-
Carbone, Marco, Hildebrandt, Thomas, Parrow, Joachim, and Weidlich, Matthias
- Published
- 2016
16. Reasoning on the Efficiency of Distributed Complex Event Processing.
- Author
-
Akili, Samira, Weidlich, Matthias, Schlingloff, H., and Penczek, W.
- Subjects
- MODULAR design, ELECTRONIC data processing, AXIOMS
- Abstract
Complex event processing (CEP) evaluates queries over streams of event data to detect situations of interest. If the event data are produced by geographically distributed sources, CEP may exploit in-network processing that distributes the evaluation of a query among the nodes of a network. To this end, a query is modularized and individual query operators are assigned to nodes, especially those that act as data sources. Existing solutions for such operator placement, however, are limited in that they assume all query results to be gathered at one designated node, commonly referred to as a sink. Hence, existing techniques postulate a hierarchical structure of the network that generates and processes the event data. This largely neglects the optimisation potential that stems from truly decentralised query evaluation with potentially many sinks. To address this gap, in this paper, we propose Multi-Sink Evaluation (MuSE) graphs as a formal computational model to evaluate common CEP queries in a decentralised manner. We further prove the completeness of query evaluation under this model. Striving for distributed CEP that can scale to large volumes of high-frequency event streams, we show how to reason on the network costs induced by distributed query evaluation and prune inefficient query execution plans. As such, our work lays the foundation for distributed CEP that is both sound and efficient.
- Published
- 2021
17. Context-aware temporal network representation of event logs: Model and methods for process performance analysis.
- Author
-
Senderovich, Arik, Weidlich, Matthias, and Gal, Avigdor
- Subjects
- TIME-varying networks, CONGESTION pricing, AD hoc computer networks, RESEARCH methodology
- Abstract
Analysing performance of business processes is an important vehicle to improve their operation. Specifically, an accurate assessment of sojourn times and remaining times enables bottleneck analysis and resource planning. Recently, methods to create respective performance models from event logs have been proposed. These works have several limitations, though: They either consider control-flow and performance information separately, or rely on an ad-hoc selection of temporal relations between events. In this paper, we introduce the Temporal Network Representation (TNR) of a log. It is based on Allen's interval algebra, comprises the pairwise temporal relations for activity executions, and potentially incorporates the context in which these relations have been observed. We demonstrate the usefulness of the TNR for detecting (unrecorded) delays and for probabilistic mining of variants when modelling the performance of a process. In order to compare different models from the performance perspective, we further develop a framework for measuring performance fitness. Under this framework, TNR-based process discovery is guaranteed to dominate existing techniques in measuring performance characteristics of a process. In addition, we show how contextual information in terms of the congestion levels of the process can be mined in order to further improve capabilities for performance analysis. To illustrate the practical value of the proposed models, we evaluate our approaches with three real-life datasets. Our experiments show that the TNR yields an improvement in performance fitness over state-of-the-art algorithms, while congestion learning is able to accurately reconstruct congestion levels from event data.
- Published
- 2019
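The TNR described above builds on Allen's interval algebra, which distinguishes thirteen possible temporal relations between two intervals. As an illustration only (not the authors' implementation), the following sketch classifies the relation between two activity executions, each given as a hypothetical (start, end) pair:

```python
# Illustrative sketch: classify the Allen interval relation between two
# activity executions a and b, each a (start, end) pair. Exactly one of the
# thirteen relations holds for any pair of intervals.

def allen_relation(a, b):
    """Return the Allen relation of interval a with respect to interval b."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if s1 == e2: return "met-by"
    if s1 == s2 and e1 == e2: return "equals"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s2 < s1 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    # only proper partial overlaps remain at this point
    return "overlaps" if s1 < s2 else "overlapped-by"

# Pairwise relations over the activity executions of one (made-up) case:
case = {"triage": (0, 10), "exam": (10, 30), "lab": (15, 25)}
relations = {
    (x, y): allen_relation(case[x], case[y])
    for x in case for y in case if x != y
}
```

The TNR additionally records the context in which each relation is observed; this sketch covers only the pairwise classification step.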
18. Handling probabilistic integrity constraints in pay-as-you-go reconciliation of data models.
- Author
-
Hung, Nguyen Quoc Viet, Weidlich, Matthias, Tam, Nguyen Thanh, Miklós, Zoltán, Aberer, Karl, Gal, Avigdor, and Stantic, Bela
- Subjects
- ONLINE social networks, DATA modeling, PEER-to-peer architecture (Computer networks), RECONCILIATION, DATA integrity, ELECTRONIC systems, INTEGRITY
- Abstract
Data models capture the structure and characteristic properties of data entities, e.g., in terms of a database schema or an ontology. They are the backbone of diverse applications, reaching from information integration, through peer-to-peer systems and electronic commerce, to social networking. Many of these applications involve models of diverse data sources. Effective utilisation and evolution of data models, therefore, calls for matching techniques that generate correspondences between their elements. Various such matching tools have been developed in the past. Yet, their results are often incomplete or erroneous, and thus need to be reconciled, i.e., validated by an expert. This paper analyses the reconciliation process in the presence of large collections of data models, where the network induced by generated correspondences shall meet consistency expectations in terms of integrity constraints. We specifically focus on how to handle data models that show some internal structure and potentially differ in terms of their assumed level of abstraction. We argue that such a setting calls for a probabilistic model of integrity constraints, for which satisfaction is preferred, but not required. In this work, we present a model for probabilistic constraints that enables reasoning on the correctness of individual correspondences within a network of data models, in order to guide an expert in the validation process. To support pay-as-you-go reconciliation, we also show how to construct a set of high-quality correspondences, even if an expert validates only a subset of all generated correspondences. We demonstrate the efficiency of our techniques for real-world datasets comprising database schemas and ontologies from various application domains.
• A reconciliation process for a network of data models with integrity constraints.
• A wide range of integrity constraints is handled for different data models.
• The computation of the proposed probabilistic model is scalable.
• The proposed expert guidance saves about half of the effort budget.
• The proposed instantiation technique increases quality by up to 20%.
- Published
- 2019
19. Conformance checking and performance improvement in scheduled processes: A queueing-network perspective.
- Author
-
Senderovich, Arik, Weidlich, Matthias, Yedidsion, Liron, Gal, Avigdor, Mandelbaum, Avishai, Kadish, Sarah, and Bunnell, Craig A.
- Subjects
- CONFORMANCE testing, PRODUCTION scheduling, QUEUEING networks, INFORMATION processing, OPERATIONS research, INFERENTIAL statistics
- Abstract
Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
- Published
- 2016
20. Queue mining for delay prediction in multi-class service processes.
- Author
-
Senderovich, Arik, Weidlich, Matthias, Gal, Avigdor, and Mandelbaum, Avishai
- Subjects
- QUEUEING networks, DATA mining, INFORMATION storage & retrieval systems, TELECOMMUNICATION, FINANCIAL services industry
- Abstract
Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Information recorded by systems during the operation of these processes provides an angle for operational process analysis, commonly referred to as process mining. In this work, we establish a queueing perspective in process mining to address the online delay prediction problem, which refers to the time that the execution of an activity for a running instance of a service process is delayed due to queueing effects. We present predictors that treat queues as first-class citizens and either enhance existing regression-based techniques for process mining or are directly grounded in queueing theory. In particular, our predictors target multi-class service processes, in which requests are classified by a type that influences their processing. Further, we introduce queue mining techniques that derive the predictors from event logs recorded by an information system during process execution. Our evaluation based on large real-world datasets, from the telecommunications and financial sectors, shows that our techniques yield accurate online predictions of case delay and drastically improve over predictors neglecting the queueing perspective.
- Published
- 2015
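The abstract describes predictors grounded in queueing theory. As a deliberately simplified illustration, and not one of the paper's estimators, a first-order queue-length predictor can be sketched: with L requests of the same class already waiting, s servers, and mean service time m per request, a rough estimate of the wait is L·m/s. All names and numbers below are made up:

```python
# Hypothetical first-order delay predictor: queue length ahead of the new
# arrival, divided across the available servers, scaled by mean service time.

def predict_delay(queue_length, servers, mean_service_time):
    """Estimated wait for a new arrival behind `queue_length` requests."""
    return queue_length * mean_service_time / servers

# e.g., 12 waiting calls, 4 agents, 5 minutes average handling time:
predict_delay(12, 4, 5.0)  # -> 15.0 minutes
```

Queue mining, as described above, would estimate inputs such as the mean service time and effective number of servers per request class from the event log rather than assume them.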
21. Optimizing Event Pattern Matching Using Business Process Models.
- Author
-
Weidlich, Matthias, Ziekow, Holger, Gal, Avigdor, Mendling, Jan, and Weske, Mathias
- Subjects
- EVENT processing (Computer science), QUERYING (Computer science), INDUSTRIAL efficiency, BUSINESS models, BEHAVIORAL systems analysis, EXPERIMENTAL design
- Abstract
A growing number of enterprises use complex event processing for monitoring and controlling their operations, while business process models are used to document working procedures. In this work, we propose a comprehensive method for complex event processing optimization using business process models. Our proposed method is based on the extraction of behavioral constraints that are used, in turn, to rewrite patterns for event detection, and to select and transform execution plans. We offer a set of rewriting rules that is shown to be complete with respect to the all, seq, and any patterns. The effectiveness of our method is demonstrated in an experimental evaluation with a large number of processes from an insurance company. We illustrate that the proposed optimization leads to significant savings in query processing. By integrating the optimization in state-of-the-art systems for event pattern matching, we demonstrate that these savings materialize in different technical infrastructures and can be combined with existing optimization techniques.
- Published
- 2014
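A seq pattern, one of the pattern types named in the abstract, pairs events in temporal order within a time window. A minimal sketch with hypothetical event types A and B follows; the paper's actual contribution, rewriting such patterns using constraints extracted from process models, is not shown:

```python
from collections import deque

# Hypothetical seq(A, B) detector over an event stream with a time window:
# report a match for every A-event followed by a B-event within `window`
# time units.

def seq_matches(stream, window):
    """stream: iterable of (timestamp, event_type); yields (a_ts, b_ts)."""
    pending_a = deque()          # timestamps of A-events awaiting a B
    for ts, etype in stream:
        if etype == "A":
            pending_a.append(ts)
        elif etype == "B":
            # drop partial matches whose window has expired
            while pending_a and ts - pending_a[0] > window:
                pending_a.popleft()
            for a_ts in pending_a:
                yield (a_ts, ts)

events = [(1, "A"), (3, "A"), (4, "B"), (20, "B")]
list(seq_matches(events, window=10))  # -> [(1, 4), (3, 4)]
```

A process-model-derived constraint such as "A is never followed by B in this process variant" would let an optimizer discard A-events immediately instead of buffering them, which is the kind of saving the evaluation above quantifies.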
22. Behaviour Equivalence and Compatibility of Business Process Models with Complex Correspondences.
- Author
-
Weidlich, Matthias, Dijkman, Remco, and Weske, Mathias
- Subjects
- BUSINESS models, COMPUTER networks, DISTRIBUTED computing, HUMAN behavior, SOFTWARE compatibility, PSYCHOLOGICAL adaptation
- Abstract
Once multiple models of a business process are created for different purposes or to capture different variants, verification of behaviour equivalence or compatibility is needed. Equivalence verification ensures that two business process models specify the same behaviour. Since different process models are likely to differ with respect to their assumed level of abstraction and the actions that they take into account, equivalence notions have to cope with correspondences between sets of actions and actions that exist in one process but not in the other. In this paper, we present notions of equivalence and compatibility that can handle these problems. In essence, we present a notion of equivalence that works on correspondences between sets of actions rather than single actions. We then integrate our equivalence notion with work on behaviour inheritance that copes with actions that exist in one process but not in the other, leading to notions of behaviour compatibility. Compatibility notions verify that two models have the same behaviour with respect to the actions that they have in common. As such, our contribution is a collection of behaviour equivalence and compatibility notions that are applicable in more general settings than existing ones.
- Published
- 2012
23. Perceived consistency between process models
- Author
-
Weidlich, Matthias and Mendling, Jan
- Subjects
- INFORMATION resources management, STAKEHOLDERS, MATHEMATICAL models, BUSINESS models, EMPIRICAL research, INFORMATION resources, INVESTORS
- Abstract
Process-aware information systems typically involve various kinds of process stakeholders. That, in turn, leads to multiple process models that capture a common process from different perspectives and at different levels of abstraction. In order to guarantee a certain degree of uniformity, the consistency of such related process models is evaluated using formal criteria. However, it is unclear how modelling experts assess the consistency between process models, and which kind of notion they perceive to be appropriate. In this paper, we focus on control flow aspects and investigate the adequacy of consistency notions. In particular, we report findings from an online experiment, which allows us to compare to what extent trace equivalence and two notions based on behavioural profiles approximate expert perceptions on consistency. Analysing 69 expert statements from process analysts, we conclude that trace equivalence is not suited to be applied as a consistency notion, whereas the notions based on behavioural profiles approximate the perceived consistency of our subjects significantly. Therefore, our contribution is an empirically founded answer to the correlation of behaviour consistency notions and the consistency perception by experts in the field of business process modelling.
- Published
- 2012
24. Causal Behavioural Profiles - Efficient Computation, Applications, and Evaluation.
- Author
-
Weidlich, Matthias, Polyvyanyy, Artem, Mendling, Jan, and Weske, Mathias
- Subjects
- SOFTWARE engineering, MANIFOLDS (Mathematics), BUSINESS models, FORMAL methods (Computer science), SYSTEMS theory, INFORMATION theory
- Abstract
Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as system specification and a corresponding workflow model used as implementation have to be consistent. Another example would be the analysis to what degree a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. Still, these notions are exponential in computation and yield a Boolean result. In many cases, however, a quantification of behavioural deviation is needed along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. In this article, we introduce techniques for the computation of causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems if unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
- Published
- 2011
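The profiles described above capture order, exclusiveness, and causality between pairs of activities. As a rough illustration only, the order and exclusiveness relations can be read off a set of traces via a "weak order" relation (a weakly precedes b if some trace contains a before b); the article itself computes the profiles structurally on process models, not from logs:

```python
from itertools import combinations

# Illustrative sketch: derive behavioural-profile relations (strict order ->,
# exclusiveness +, interleaving ||) for activity pairs from a set of traces.

def behavioural_profile(traces):
    weak = set()                 # (a, b): a occurs before b in some trace
    activities = set()
    for trace in traces:
        activities.update(trace)
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                weak.add((a, b))
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in weak, (b, a) in weak
        if ab and ba:   profile[(a, b)] = "||"   # interleaving order
        elif ab:        profile[(a, b)] = "->"   # strict order
        elif ba:        profile[(a, b)] = "<-"   # reverse strict order
        else:           profile[(a, b)] = "+"    # exclusiveness
    return profile

traces = [["a", "b", "c"], ["a", "c", "b"], ["a", "d"]]
# behavioural_profile(traces)[("b", "c")] == "||"
# behavioural_profile(traces)[("b", "d")] == "+"
```

The causality relation of causal behavioural profiles (co-occurrence of activities) is omitted here for brevity.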
25. Process compliance analysis based on behavioural profiles
- Author
-
Weidlich, Matthias, Polyvyanyy, Artem, Desai, Nirmit, Mendling, Jan, and Weske, Mathias
- Subjects
- LEGAL compliance, INDUSTRIAL management, PROBLEM solving, QUALITY standards, SERVICE industries, CASE studies
- Abstract
Process compliance measurement is getting increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. In order to judge the compliance of business processing, the degree of behavioural deviation of a case, i.e., an observed execution sequence, is quantified with respect to a process model (referred to as fitness, or recall). Recently, different compliance measures have been proposed. Still, nearly all of them are grounded on state-based techniques and the trace equivalence criterion, in particular. As a consequence, these approaches have to deal with the state explosion problem. In this paper, we argue that a behavioural abstraction may be leveraged to measure the compliance of a process log – a collection of cases. To this end, we utilise causal behavioural profiles that capture the behavioural characteristics of process models and cases, and can be computed efficiently. We propose different compliance measures based on these profiles, discuss the impact of noise in process logs on our measures, and show how diagnostic information on non-compliance is derived. As a validation, we report on findings of applying our approach in a case study with an international service provider.
- Published
- 2011
26. Visually specifying compliance rules and explaining their violations for business processes
- Author
-
Awad, Ahmed, Weidlich, Matthias, and Weske, Mathias
- Subjects
- VISUAL perception, MANUFACTURING processes, LEGAL compliance, GRAPH theory, MATHEMATICAL logic, CONSTRAINT satisfaction, FEEDBACK control systems
- Abstract
A business process is a set of steps designed to be executed in a certain order to achieve a business value. Such processes are often driven by and documented using process models. Nowadays, process models are also applied to drive process execution. Thus, correctness of business process models is a must. Much of the work has been devoted to checking general, domain-independent correctness criteria, such as soundness. However, business processes must also adhere to and show compliance with various regulations and constraints, the so-called compliance requirements. These are domain-dependent requirements. In many situations, verifying compliance on a model level is of great value, since violations can be resolved in an early stage prior to execution. However, this calls for using formal verification techniques, e.g., model checking, that are too complex for business experts to apply. In this paper, we utilize a visual language, BPMN-Q, to express compliance requirements visually in a way similar to that used by business experts to build process models. Still, using a pattern based approach, each BPMN-Q graph has a formal temporal logic expression in computational tree logic (CTL). Moreover, the user is able to express constraints, i.e., compliance rules, regarding control flow and data flow aspects. In order to provide valuable feedback to a user in case of violations, we depend on temporal logic querying approaches as well as BPMN-Q to visually highlight paths in a process model whose execution causes violations.
- Published
- 2011
27. Discovering and Analyzing Contextual Behavioral Patterns From Event Logs.
- Author
-
Acheli, Mehdi, Grigori, Daniela, and Weidlich, Matthias
- Subjects
- POINT processes, INFORMATION storage & retrieval systems
- Abstract
Event logs that are recorded by information systems provide a valuable starting point for the analysis of processes in various domains, reaching from healthcare, through logistics, to e-commerce. Specifically, behavioral patterns discovered from an event log enable operational insights, even in scenarios where process execution is rather unstructured and shows a large degree of variability. While such behavioral patterns capture frequently recurring episodes of a process’ behavior, they are not limited to sequential behavior but include notions of concurrency and exclusive choices. Existing algorithms to discover behavioral patterns are context-agnostic, though. They neglect the context in which patterns are observed, which severely limits the granularity at which behavioral regularities are identified. In this paper, we therefore present an approach to discover contextual behavioral patterns. Contextual patterns may be frequent solely in a certain partition of the event log, which enables fine-granular insights into the aspects that influence the conduct of a process. Moreover, we show how to analyze the discovered contextual behavioral patterns in terms of causal relations between context information and the patterns, as well as correlations between the patterns themselves. A complete analysis methodology that leverages all the tools presented in the paper, supplemented by interpretation guidelines, is also provided. Finally, experiments with real-world event logs demonstrate the effectiveness of our techniques in obtaining fine-granular process insights.
- Published
- 2022
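The core idea above, that a pattern may be frequent only within a certain partition of the event log, can be sketched as follows. The context attribute, activity names, and support computation here are hypothetical and not the paper's algorithm:

```python
from collections import defaultdict

# Hypothetical sketch: partition a log by a context attribute and compute the
# support of a candidate sequential pattern (as a subsequence) per partition.

def support_by_context(log, pattern):
    """log: list of (context, trace); returns support of `pattern` per context."""
    def contains(trace, pattern):
        it = iter(trace)                       # membership test consumes it,
        return all(step in it for step in pattern)  # so order is enforced
    partitions = defaultdict(list)
    for context, trace in log:
        partitions[context].append(trace)
    return {
        ctx: sum(contains(t, pattern) for t in ts) / len(ts)
        for ctx, ts in partitions.items()
    }

log = [
    ("weekend", ["register", "triage", "xray"]),
    ("weekend", ["register", "xray"]),
    ("weekday", ["register", "triage", "discharge"]),
]
support_by_context(log, ["register", "xray"])
# -> {"weekend": 1.0, "weekday": 0.0}
```

A context-agnostic miner would see the pattern's support diluted over the whole log; the per-partition view is what makes it discoverable, which is the effect the approach above exploits (with richer pattern semantics, including concurrency and choice).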
28. Efficient multi-query evaluation for distributed CEP through predicate-based push–pull plans.
- Author
-
Purtzel, Steven, Akili, Samira, and Weidlich, Matthias
- Subjects
- CONSTRUCTION planning, TELECOMMUNICATION systems, COMMUNICATION models
- Abstract
Complex Event Processing (CEP) evaluates queries over event streams. However, once events are generated by nodes in a network, query evaluation requires the transmission of events between the nodes. Commonly, this is realized by sending events immediately upon their generation. Yet, to reduce network communication, query evaluation may incorporate pull-based communication, where events are buffered locally until they are requested by another node. Existing approaches for push–pull communication in distributed CEP, however, are limited in the expressiveness of pull requests and leverage solely the temporal constraints imposed by a query. In this paper, we propose a new evaluation model for distributed CEP, coined predicate-based push–pull (PrePP) plans. It includes pull requests that enable fine-granular filtering of events at their sources based on query predicates, thereby reducing event transmission. We formulate the problem of constructing PrePP plans with minimal transmission cost that also exploit opportunities for sharing of intermediate results among the queries of a workload. However, the construction of optimal PrePP plans turns out to be NP-hard. We therefore propose algorithms to speed up the construction of PrePP plans, while producing near-optimal results. We illustrate the benefits of PrePP plans for distributed CEP in comprehensive experiments with synthetic as well as real-world data. Specifically, PrePP plans can reduce event transmission by up to three orders of magnitude over baseline techniques.
• We propose a model for push–pull communication in distributed complex event processing (CEP) that leverages query predicates in pull requests.
• Given a network of event sources and a query workload, we formulate the problem of constructing a predicate-based push–pull plan that minimizes transmission costs.
• We present strategies to construct efficient evaluation plans under this model.
• Experiments with synthetic and real-world data illustrate the benefits of the presented evaluation model for distributed CEP.
- Published
- 2024
29. Extraction, correlation, and abstraction of event data for process mining.
- Author
-
Diba, Kiarash, Batoulis, Kimon, Weidlich, Matthias, and Weske, Mathias
- Subjects
- PROCESS mining, ELECTRONIC data processing, INFORMATION storage & retrieval systems, DATA mining, DATA logging
- Abstract
Process mining provides a rich set of techniques to discover valuable knowledge of business processes based on data that was recorded in different types of information systems. It enables analysis of end-to-end processes to facilitate process re-engineering and process improvement. Process mining techniques rely on the availability of data in the form of event logs. In order to enable process mining in diverse environments, the recorded data need to be located and transformed to event logs. The journey from raw data to event logs suitable for process mining can be addressed by a variety of methods and techniques, which are the focus of this article. In particular, techniques proposed in the literature to support the creation of event logs from raw data are reviewed and classified. This includes techniques for identification and extraction of the required event data from diverse sources as well as their correlation and abstraction. This article is categorized under: Technologies > Structure Discovery and Clustering; Fundamental Concepts of Data and Knowledge > Data Concepts; Technologies > Data Preprocessing.
- Published
- 2020
30. To aggregate or to eliminate? Optimal model simplification for improved process performance prediction.
- Author
-
Senderovich, Arik, Shleyfman, Alexander, Weidlich, Matthias, Gal, Avigdor, and Mandelbaum, Avishai
- Subjects
- OPTIMAL control theory, PROCESS mining, BUSINESS process management, PERFORMANCE management, KEY performance indicators (Management)
- Abstract
Highlights:
• A technique for performance-driven model reduction of GSPNs is proposed.
• The technique relies on foldings that aggregate or eliminate performance information.
• Foldings preserve model stability and have a bound for the introduced performance estimation error.
• Given a budget for the estimation error, an optimal sequence of foldings can be found.
Operational process models such as generalised stochastic Petri nets (GSPNs) are useful when answering performance questions about business processes (e.g. ‘how long will it take for a case to finish?’). Recently, methods for process mining have been developed to discover and enrich operational models based on a log of recorded executions of processes, which enables evidence-based process analysis. To avoid a bias due to infrequent execution paths, discovery algorithms strive for a balance between over-fitting and under-fitting regarding the originating log. However, state-of-the-art discovery algorithms address this balance solely for the control-flow dimension, neglecting the impact of their design choices in terms of performance measures. In this work, we thus offer a technique for controlled performance-driven model reduction of GSPNs, using structural simplification rules, namely foldings. We propose a set of foldings that aggregate or eliminate performance information. We further prove the soundness of these foldings in terms of stability preservation and provide bounds on the error that they introduce with respect to the original model. Furthermore, we show how to find an optimal sequence of simplification rules, such that their application yields a minimal model under a given error budget for performance estimation. We evaluate the approach with two real-world datasets from the healthcare and telecommunication domains, showing that model simplification indeed enables a controlled reduction of model size, while preserving performance metrics with respect to the original model. Moreover, we show that aggregation dominates elimination when abstracting performance models by preventing under-fitting due to information loss.
- Published
- 2018
- Full Text
- View/download PDF
31. Scalable maximal subgraph mining with backbone-preserving graph convolutions.
- Author
-
Nguyen, Thanh Toan, Huynh, Thanh Trung, Weidlich, Matthias, Tho, Quan Thanh, Yin, Hongzhi, Aberer, Karl, and Nguyen, Quoc Viet Hung
- Subjects
- *
SUBGRAPHS , *ISOMORPHISM (Mathematics) , *SEQUENTIAL pattern mining , *DATABASES , *BIOINFORMATICS , *NP-complete problems , *SPINE - Abstract
Maximal subgraph mining is increasingly important in various domains, including bioinformatics, genomics, and chemistry, as it helps identify common characteristics among a set of graphs and enables their classification into different categories. Existing approaches for identifying maximal subgraphs typically rely on traversing a graph lattice. However, in practice, these approaches are limited to relatively small subgraphs due to the exponential growth of the search space and the NP-completeness of the underlying subgraph isomorphism test. In this work, we propose SCAMA, an approach that addresses these limitations by adopting a divide-and-conquer strategy for efficient mining of maximal subgraphs. Our approach involves initially partitioning a graph database into equivalence classes using bootstrapped backbones, which are tree-shaped frequent subgraphs. We then introduce a learning process based on a novel graph convolutional network (GCN) to extract maximal backbones for each equivalence class. A critical insight of our approach is that by estimating each maximal backbone directly in the embedding space, we can avoid the exponential traversal of the graph lattice. From the extracted maximal backbones, we construct the maximal frequent subgraphs. Furthermore, we outline how SCAMA can be extended to perform top-k largest frequent subgraph mining and how the discovered patterns facilitate graph classification. Our experimental results demonstrate that SCAMA identifies maximal frequent subgraphs almost perfectly, while running approximately 10 times faster than the best baseline technique. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Optimal event log sanitization for privacy-preserving process mining.
- Author
-
Fahrenkrog-Petersen, Stephan A., van der Aa, Han, and Weidlich, Matthias
- Subjects
- *
PROCESS mining , *GREEDY algorithms , *POINT processes , *COMPUTATIONAL complexity , *INFORMATION storage & retrieval systems , *DATA privacy - Abstract
Event logs that originate from information systems enable comprehensive analysis of business processes. These logs serve as the starting point for the discovery of process models or the analysis of conformance of a log with a given specification. However, logs potentially contain personal information about individuals involved in process execution. In this paper, we therefore address the risk of privacy attacks on event logs. Specifically, we rely on group-based privacy guarantees instead of noise insertion in order to enable anonymization without adding new behaviour to the log. To this end, we propose two new algorithms for event log sanitization that provide privacy guarantees in terms of k-anonymity for the behavioural perspective of a process and t-closeness for sensitive information associated with events. The algorithms thereby avoid the disclosure of employee identities, prevent the identification of employee membership in the log, and preclude the characterization of employees based on sensitive attributes. Our algorithms overcome the limitations of an existing, greedy algorithm, providing users with a trade-off between computational complexity and the utility of the sanitized event log for downstream analysis. Our experiments demonstrate that sanitization with our algorithms generates event logs of higher utility compared to the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
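The group-based guarantee at the core of this entry can be illustrated with a minimal sketch. This is not the paper's algorithm (which additionally handles t-closeness and utility trade-offs); it only shows the basic idea of k-anonymity for the behavioural perspective: every released trace variant must be shared by at least k cases. The helper name is hypothetical.

```python
from collections import Counter

def k_anonymize_variants(traces, k):
    """Keep only traces whose variant (activity sequence) occurs at
    least k times, so every released variant is shared by k cases."""
    counts = Counter(tuple(t) for t in traces)
    return [t for t in traces if counts[tuple(t)] >= k]

log = [("a", "b", "c"), ("a", "b", "c"), ("a", "c"), ("a", "b", "c")]
print(k_anonymize_variants(log, k=2))  # the unique variant ("a", "c") is suppressed
```

Suppressing rare variants this way removes behaviour rather than adding noise, which is why such group-based approaches do not introduce new behaviour into the log.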
33. Hiding in the forest: Privacy-preserving process performance indicators.
- Author
-
Kabierski, Martin, Fahrenkrog-Petersen, Stephan A., and Weidlich, Matthias
- Subjects
- *
ORGANIZATIONAL goals , *DATA release , *PERSONALLY identifiable information - Abstract
Event logs recorded during the execution of business processes provide a valuable starting point for operational monitoring, analysis, and improvement. Specifically, measures that quantify any deviation between the recorded operations and organizational goals enable the identification of operational issues. The data to compute such process-specific measures, commonly referred to as process performance indicators (PPIs), may contain personal data of individuals, though, which implies an inevitable risk of privacy intrusion that must be addressed. In this article, we target the privacy-aware computation of process performance indicators. To this end, we adopt tree-based definitions of PPIs according to the well-established PPINOT meta-model. For such a PPI, we design data release mechanisms for the functions in a PPI tree. Using a probabilistic formulation of the expected result of a privatized PPI, we further show how to determine the combination of release mechanisms that inflicts the least loss in utility. Moreover, given a set of PPIs, we provide an algorithmic framework to manage an inherent trade-off: Privatization may strive for maximal utility of each single PPI or for maximal reuse of privatized functions among all PPIs to use a privacy budget most effectively. Results from experiments with synthetic as well as real-world data indicate the general feasibility of privacy-aware PPIs and shed light on the trade-offs once a set of them is considered. • A framework for privatizing PPIs, and differentially private release mechanisms. • A method for automatically selecting the optimal instantiation of the framework. • An extension for forests of PPIs, optimizing privacy budget consumption. • Evaluations on synthetic and real data indicate the feasibility of the approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Efficient and Effective Multi-Modal Queries Through Heterogeneous Network Embedding.
- Author
-
Duong, Chi Thang, Nguyen, Thanh Tam, Yin, Hongzhi, Weidlich, Matthias, Mai, Thai Son, Aberer, Karl, and Nguyen, Quoc Viet Hung
- Subjects
- *
INFORMATION needs , *INFORMATION retrieval , *VECTOR data , *INFORMATION resources - Abstract
The heterogeneity of today’s Web sources requires information retrieval (IR) systems to handle multi-modal queries. Such queries define a user’s information needs by different data modalities, such as keywords, hashtags, user profiles, and other media. Recent IR systems answer such a multi-modal query by considering it as a set of separate uni-modal queries. However, depending on the chosen operationalisation, such an approach is inefficient or ineffective. It either requires multiple passes over the data or leads to inaccuracies since the relations between data modalities are neglected in the relevance assessment. To mitigate these challenges, we present an IR system that has been designed to answer genuine multi-modal queries. It relies on a heterogeneous network embedding, so that features from diverse modalities can be incorporated when representing both a query and the data over which it shall be evaluated. By embedding a query and the data in the same vector space, the relations across modalities are made explicit and exploited for more accurate query evaluation. At the same time, multi-modal queries are answered with a single pass over the data. An experimental evaluation using diverse real-world and synthetic datasets illustrates that our approach returns twice the amount of relevant information compared to baseline techniques, while scaling to large multi-modal databases. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Sampling and approximation techniques for efficient process conformance checking.
- Author
-
Bauer, Martin, van der Aa, Han, and Weidlich, Matthias
- Subjects
- *
SAMPLING (Process) , *MAGNITUDE (Mathematics) , *DATA analysis , *PROCESS mining - Abstract
Conformance checking enables organizations to automatically assess whether their business processes are executed according to their specification. State-of-the-art conformance checking algorithms perform this task by establishing alignments between behaviour recorded by IT systems and a process model capturing desired behaviour. While such alignments clearly highlight conformance issues, a major downside is that these algorithms scale exponentially in the size of both the event data, capturing recorded behaviour, and the process model used as input. At the same time, it is crucial to recognize that event data used for such analyses typically only relates to a specific interval of process execution rather than the entire history, meaning that the employed event data is inherently incomplete. Therefore, we argue that statistical methods allow one to obtain a proper understanding of the overall conformance of a process by considering only a fraction of the available data. In this paper, we present a statistical approach to conformance checking that employs trace sampling and result approximation in order to derive conformance results in an efficient manner. The approach reduces the runtime significantly, while still providing guarantees on the accuracy of the estimated conformance result. We instantiate the general approach for different measures of the overall conformance of an event log and a process model, including fitness as a direct quantification of conformance as well as the distribution of deviations over activities and deviations related to contextual factors, such as the involved resources. Moreover, to increase the robustness of our approach, we elaborate on mechanisms to reveal biases in sampling procedures. Experiments with real-world and synthetic datasets show that our approach speeds up state-of-the-art conformance checking algorithms by up to three orders of magnitude, while largely maintaining the analysis accuracy.
• We propose sampling- and approximation-based techniques for conformance checking. • We cover three types of conformance measures: fitness, deviations, and resources. • Experiments show that runtime efficiency is increased by orders of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
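The statistical guarantee behind this entry can be sketched in a few lines. This is a generic illustration of sampling-based estimation with a Hoeffding bound, not the paper's specific approximation techniques; the function and data layout are hypothetical.

```python
import math
import random

def estimate_fitness(log, is_conforming, epsilon=0.05, delta=0.05, seed=0):
    """Estimate the fraction of conforming traces from a random sample.

    Hoeffding's inequality bounds the sample size n needed so that the
    estimate deviates from the true fraction by more than epsilon with
    probability at most delta.
    """
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    rng = random.Random(seed)
    sample = [log[rng.randrange(len(log))] for _ in range(n)]
    return sum(1 for t in sample if is_conforming(t)) / n

log = [("a", "b")] * 90 + [("b", "a")] * 10   # 90% of traces conform
est = estimate_fitness(log, lambda t: t[0] == "a")
print(round(est, 2))
```

Note that the sample size depends only on the desired precision, not on the log size, which is why such estimates can be orders of magnitude cheaper than exhaustive alignment computation.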
36. Preface to BPM 2015.
- Author
-
Motahari-Nezhad, Hamid Reza, Recker, Jan, and Weidlich, Matthias
- Subjects
- *
BUSINESS process management , *BIG data , *COMPUTER algorithms , *PROBABILITY theory , *ELECTRONIC data processing - Published
- 2017
- Full Text
- View/download PDF
37. Computing Crowd Consensus with Partial Agreement.
- Author
-
Hung, Nguyen Quoc Viet, Viet, Huynh Huu, Tam, Nguyen Thanh, Weidlich, Matthias, Yin, Hongzhi, and Zhou, Xiaofang
- Subjects
- *
CROWDSOURCING , *DISTRIBUTED artificial intelligence , *HUMAN-computer interaction , *HUMAN-machine systems , *INFORMATION technology - Abstract
Crowdsourcing has been widely established as a means to enable human computation at large scale, in particular for tasks that require manual labelling of large sets of data items. Answers obtained from heterogeneous crowd workers are aggregated to obtain a robust result. However, existing methods for answer aggregation are designed for discrete tasks, where answers are given as a single label per item. In this paper, we consider partial-agreement tasks that are common in many applications such as image tagging and document annotation, where items are assigned sets of labels. Common approaches for the aggregation of partial-agreement answers either (i) reduce the problem to several instances of an aggregation problem for discrete tasks or (ii) consider each label independently. Going beyond the state-of-the-art, we propose a novel Bayesian nonparametric model to aggregate the partial-agreement answers in a generic way. This model enables us to compute the consensus of partially-sound and partially-complete worker answers, while taking into account mutual relationships in labels and different answer sets. We also show how this model is instantiated for incremental learning, incorporating new answers from crowd workers as they arrive. An evaluation of our method using real-world datasets reveals that it consistently outperforms the state-of-the-art in terms of precision, recall, and robustness against faulty workers and data sparsity. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Dual center validation of deep learning for automated multi-label segmentation of thoracic anatomy in bedside chest radiographs.
- Author
-
Busch, Felix, Xu, Lina, Sushko, Dmitry, Weidlich, Matthias, Truhn, Daniel, Müller-Franzes, Gustav, Heimer, Maurice M., Niehues, Stefan M., Makowski, Marcus R., Hinsche, Markus, Vahldiek, Janis L., Aerts, Hugo JWL., Adams, Lisa C., and Bressem, Keno K.
- Subjects
- *
CONVOLUTIONAL neural networks , *CHEST (Anatomy) , *CHEST X rays , *DEEP learning , *LUNGS , *TRACHEA , *ARTIFICIAL intelligence - Abstract
• A lightweight convolutional neural network was trained using 2000 bedside CXRs. • Segmentation included lungs, heart, clavicles, trachea, and mediastinum. • The model achieves highly comparable performance to state-of-the-art approaches. • A human-in-the-loop annotation workflow reduced annotation times by 32%. • The implemented workflow allows for more efficient use of the human workforce. Bedside chest radiographs (CXRs) are challenging to interpret but important for monitoring cardiothoracic disease and invasive therapy devices in critical care and emergency medicine. Taking surrounding anatomy into account is likely to improve the diagnostic accuracy of artificial intelligence and bring its performance closer to that of a radiologist. Therefore, we aimed to develop a deep convolutional neural network for efficient automatic anatomy segmentation of bedside CXRs. To improve the efficiency of the segmentation process, we introduced a "human-in-the-loop" segmentation workflow with an active learning approach, looking at five major anatomical structures in the chest (heart, lungs, mediastinum, trachea, and clavicles). This allowed us to decrease the time needed for segmentation by 32% and select the most complex cases to utilize human expert annotators efficiently. After annotation of 2,000 CXRs from different Level 1 medical centers at Charité – University Hospital Berlin, there was no relevant improvement in model performance, and the annotation process was stopped. A 5-layer U-ResNet was trained for 150 epochs using a combined soft Dice similarity coefficient (DSC) and cross-entropy as a loss function. DSC, Jaccard index (JI), Hausdorff distance (HD) in mm, and average symmetric surface distance (ASSD) in mm were used to assess model performance. External validation was performed using an independent external test dataset from Aachen University Hospital (n = 20). 
The final training, validation, and testing dataset consisted of 1900/50/50 segmentation masks for each anatomical structure. Our model achieved a mean DSC/JI/HD/ASSD of 0.93/0.88/32.1/5.8 for the lung, 0.92/0.86/21.65/4.85 for the mediastinum, 0.91/0.84/11.83/1.35 for the clavicles, 0.9/0.85/9.6/2.19 for the trachea, and 0.88/0.8/31.74/8.73 for the heart. Validation using the external dataset showed an overall robust performance of our algorithm. Using an efficient computer-aided segmentation method with active learning, our anatomy-based model achieves comparable performance to state-of-the-art approaches. Instead of only segmenting the non-overlapping portions of the organs, as previous studies did, a closer approximation to actual anatomy is achieved by segmenting along the natural anatomical borders. This novel anatomy approach could be useful for developing pathology models for accurate and quantifiable diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Traveling time prediction in scheduled transportation with journey segments.
- Author
-
Gal, Avigdor, Mandelbaum, Avishai, Schnitzler, François, Senderovich, Arik, and Weidlich, Matthias
- Subjects
- *
QUEUING theory , *MACHINE learning , *ELECTRONIC data processing , *PREDICTION models , *REAL-time computing - Abstract
Urban mobility impacts urban life to a great extent. To enhance urban mobility, much research has been invested in traveling time prediction: given an origin and destination, provide a passenger with an accurate estimation of how long a journey lasts. In this work, we investigate a novel combination of methods from Queueing Theory and Machine Learning in the prediction process. We propose a prediction engine that, given a scheduled bus journey (route) and a ‘source/destination’ pair, provides an estimate for the traveling time, while considering both historical data and real-time streams of information that are transmitted by buses. We propose a model that uses natural segmentation of the data according to bus stops and a set of predictors, some of which use learning while others are learning-free, to compute traveling time. Our empirical evaluation, using bus data that comes from the bus network in the city of Dublin, demonstrates that the snapshot principle, taken from Queueing Theory, works well yet suffers from outliers. To overcome the outliers problem, we use Machine Learning techniques as a regulator that assists in identifying outliers, and propose predictions based on historical data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
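The snapshot principle mentioned in this entry can be sketched in a few lines: predict a journey's duration as the sum of the most recently observed traversal times of its segments. The data layout below is a hypothetical illustration, not the paper's engine.

```python
def snapshot_predict(recent_times, route):
    """Predict journey duration as the sum of the latest observed
    traversal time for each segment on the route (snapshot principle)."""
    return sum(recent_times[segment][-1] for segment in route)

# Most recent observations last; times in minutes.
recent_times = {
    ("stop1", "stop2"): [4.0, 5.5],
    ("stop2", "stop3"): [3.0, 2.5],
}
print(snapshot_predict(recent_times, [("stop1", "stop2"), ("stop2", "stop3")]))  # 8.0
```

A single delayed bus on one segment directly skews this estimate, which is the outlier sensitivity the paper addresses with learning-based regulation.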
40. Semantics-aware mechanisms for control-flow anonymization in process mining.
- Author
-
Fahrenkrog-Petersen, Stephan A., Kabierski, Martin, van der Aa, Han, and Weidlich, Matthias
- Subjects
- *
PROCESS mining , *INFORMATION storage & retrieval systems , *INFORMATION processing , *ELECTRONIC data processing - Abstract
Information systems support the execution of business processes. As part of that, data about process execution is recorded in event logs, which can be used to analyse the control-flow of the respective processes. However, such data may contain personal information on process stakeholders that is protected by privacy regulations. Process analysis based on event logs shall, therefore, employ anonymization techniques. In this paper, we introduce two approaches to anonymize the recorded control-flow of a process. Specifically, we present SaCoFa and SaPa as two techniques to anonymize the result of trace-variant queries over an event log. Unlike existing techniques that achieve differential privacy through randomized noise insertion, our techniques rely on noise insertion mechanisms that incorporate a process' semantics, thereby avoiding easily-recognizable noise. Both techniques take different design choices, though. SaCoFa anonymizes a trace-variant distribution directly, thereby focusing on utility preservation at the expense of potentially changing the number of traces in the result considerably. SaPa, in turn, anonymizes a trace-variant distribution indirectly, through play-out of an anonymized directly-follows distribution. This way, the number of traces in the result is close to the original log, but the drop in utility may become larger due to using only local control-flow information. However, our experiments demonstrate that both approaches strike a better balance of preserving the utility of an event log compared to existing techniques. • We propose methods to anonymize the control-flow perspective of business processes. • The anonymized data is protected by differential privacy. • Our techniques provide better utility for process discovery than the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
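For context, the randomized noise-insertion baseline that SaCoFa and SaPa improve upon can be sketched as a generic Laplace mechanism over trace-variant counts. This is the standard differential-privacy baseline (sensitivity 1 assumed), not the semantics-aware techniques themselves, and the function name is hypothetical.

```python
import math
import random
from collections import Counter

def dp_variant_counts(traces, epsilon, seed=0):
    """Release trace-variant counts with Laplace(1/epsilon) noise added
    to each count, then clamp to valid non-negative integers."""
    rng = random.Random(seed)
    counts = Counter(tuple(t) for t in traces)
    noisy = {}
    for variant, c in counts.items():
        u = rng.random() - 0.5                      # uniform on (-0.5, 0.5)
        sign = 1.0 if u >= 0 else -1.0
        noise = -(1 / epsilon) * sign * math.log(1 - 2 * abs(u))
        noisy[variant] = max(0, round(c + noise))   # clamp to a valid count
    return noisy

log = [("a", "b")] * 50 + [("a", "c")] * 5
print(dp_variant_counts(log, epsilon=1.0))
```

Because this mechanism adds noise blindly, it can emit counts for behaviour that never occurred or is semantically implausible, which is exactly the easily-recognizable noise the semantics-aware techniques avoid.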
41. Deep MinCut: Learning Node Embeddings by Detecting Communities.
- Author
-
Duong, Chi Thang, Nguyen, Thanh Tam, Hoang, Trung-Dung, Yin, Hongzhi, Weidlich, Matthias, and Nguyen, Quoc Viet Hung
- Subjects
- *
DEEP learning , *REPRESENTATIONS of graphs , *LEARNING communities - Abstract
• Our node embeddings are both interpretable and competent for classification tasks. • Our graph representation learning process is scalable. • Our interpretable node embeddings outperform baselines. • Our technique is robust to different experimental settings. • Our embedding reveals the graph's community structure, especially a hierarchical one. We present Deep MinCut (DMC), an unsupervised approach to learn node embeddings for graph-structured data. It derives node representations based on their membership in communities. As such, the embeddings directly provide insights into the graph structure, so that a separate clustering step is no longer needed. DMC learns both node embeddings and communities simultaneously by minimizing the mincut loss, which captures the number of connections between communities. Striving for high scalability, we also propose a training process for DMC based on minibatches. We provide empirical evidence that the communities learned by DMC are meaningful and that the node embeddings are competitive in different node classification benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
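The mincut loss named in this entry counts connections between communities; a minimal non-neural sketch (illustrative only, not the paper's embedding-based training) makes the quantity concrete:

```python
def mincut_loss(edges, community):
    """Number of edges whose endpoints fall into different communities,
    i.e. the quantity that Deep MinCut drives down during training."""
    return sum(1 for u, v in edges if community[u] != community[v])

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
community = {"a": 0, "b": 0, "c": 1, "d": 1}
print(mincut_loss(edges, community))  # edges (b, c) and (a, c) cross -> 2
```

In DMC the community assignment is derived from learned node embeddings, so minimizing this count shapes the embeddings themselves.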
42. Adaptive incentive-based demand response with distributed non-compliance assessment.
- Author
-
Raman, Gururaghav, Zhao, Bo, Peng, Jimmy Chih-Hsien, and Weidlich, Matthias
- Subjects
- *
HOME energy use , *NONCOMPLIANCE - Published
- 2022
- Full Text
- View/download PDF
43. Model-agnostic and diverse explanations for streaming rumour graphs.
- Author
-
Nguyen, Thanh Tam, Phan, Thanh Cong, Nguyen, Minh Hieu, Weidlich, Matthias, Yin, Hongzhi, Jo, Jun, and Nguyen, Quoc Viet Hung
- Subjects
- *
RUMOR , *REPRESENTATIONS of graphs , *EXPLANATION , *SOCIAL media , *SOCIAL networks , *CHARTS, diagrams, etc. - Abstract
The propagation of rumours on social media poses an important threat to societies, so that various techniques for rumour detection have been proposed recently. Yet, existing work focuses on what entities constitute a rumour, but provides little support to understand why the entities have been classified as such. This prevents an effective evaluation of the detected rumours as well as the design of countermeasures. In this work, we argue that explanations for detected rumours may be given in terms of examples of related rumours detected in the past. A diverse set of similar rumours helps users to generalize, i.e., to understand the properties that govern the detection of rumours. Since the spread of rumours in social media is commonly modelled using feature-annotated graphs, we propose a query-by-example approach that, given a rumour graph, extracts the k most similar and diverse subgraphs from past rumours. The challenge is that all of the computations require fast assessment of similarities between graphs. To achieve an efficient and adaptive realization of the approach in a streaming setting, we present a novel graph representation learning technique and report on implementation considerations. Our evaluation experiments show that our approach outperforms baseline techniques in delivering meaningful explanations for various rumour propagation behaviours. • Our approach yields explanations with up to 16x higher accumulated utility. • Our approach is robust to concept drift, with 67.95% explanation accuracy. • Our approach offers indexing performance that is significantly better in both time and space. • Our embedding-based similarity is much faster to compute than graph measures. • Our example-based explanations also support the detection accuracy (>=0.95). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Process discovery with context-aware process trees.
- Author
-
Shraga, Roee, Gal, Avigdor, Schumacher, Dafna, Senderovich, Arik, and Weidlich, Matthias
- Subjects
- *
TREES - Abstract
Discovery plays a key role in data-driven analysis of business processes. The vast majority of contemporary discovery algorithms aims at the identification of control-flow constructs. The increase in data richness, however, enables discovery that incorporates the context of process execution beyond the control-flow perspective. A "control-flow first" approach, where context data serves for refinement and annotation, is limited and fails to detect fundamental changes in the control-flow that depend on context data. In this work, we thus propose a novel approach for combining the control-flow and data perspectives under a single roof by extending inductive process discovery. Our approach provides criteria under which context data, handled through unsupervised learning, take priority over control-flow in guiding process discovery. The resulting model is a process tree, in which some operators carry data semantics instead of control-flow semantics. We show that the proposed approach produces trees that are context consistent, deterministic, and complete, and that they can be made explainable without a major reduction in quality. We evaluate the approach using synthetic and real-world datasets, showing that the resulting models are superior to state-of-the-art discovery methods in terms of measures based on multi-perspective alignments. • Context-aware process trees (CaT) are defined. • An inductive context-aware discovery algorithm (CaDi) is proposed. • CaDi produces trees that are context consistent, deterministic, and complete. • CaDi discovers models of higher quality than the state-of-the-art. • CaDi can be tuned to provide CaTs that are more explainable. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. An iterative approach to synthesize business process templates from compliance rules
- Author
-
Awad, Ahmed, Goré, Rajeev, Hou, Zhe, Thomson, James, and Weidlich, Matthias
- Subjects
- *
ITERATIVE methods (Mathematics) , *PROCESS control systems , *DATA mining , *LINEAR systems , *COMPUTER software execution , *COMPUTER science , *COMPUTER systems , *INFORMATION storage & retrieval systems - Abstract
Abstract: Companies have to adhere to compliance requirements. The compliance analysis of business operations is typically a joint effort of business experts and compliance experts. Those experts need to create a common understanding of business processes to effectively conduct compliance management. In this paper, we present a technique that aims at supporting this process. We argue that process templates generated out of compliance requirements provide a basis for negotiation among business and compliance experts. We introduce a semi-automated and iterative approach to the synthesis of such process templates from compliance requirements expressed in Linear Temporal Logic (LTL). We show how generic constraints related to business process execution are incorporated and present criteria that point at underspecification. Further, we outline how such underspecification may be resolved to iteratively build up a complete specification. For the synthesis, we leverage existing work on process mining and process restructuring. However, our approach is not limited to the control-flow perspective, but also considers direct and indirect data-flow dependencies. Finally, we elaborate on the application of the derived process templates and present an implementation of our approach. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
46. Introduction to the Special Issue on Integrating Process-oriented and Event-based Systems.
- Author
-
Eyers, David, Gal, Avigdor, Jacobsen, Hans-Arno, and Weidlich, Matthias
- Subjects
- *
TRANSPORTATION , *LOGISTICS , *VELOCITY - Published
- 2019
- Full Text
- View/download PDF