84 results for "Bencomo A"
Search Results
2. History-aware explanations: towards enabling human-in-the-loop in self-adaptive systems
- Author
-
Juan Parra-Ullauri, Antonio García-Domínguez, Nelly Bencomo, and Luis Garcia-Paucar
- Abstract
The complexity of real-world problems requires modern software systems to autonomously adapt and modify their behaviour at run time to deal with internal and external challenges and contexts. Consequently, these self-adaptive systems (SAS) can show unexpected and surprising behaviours to users, who may not understand or agree with them. This is exacerbated by the ubiquity and complexity of AI-based systems, which are often considered "black boxes". Users may feel that the decision-making process of SAS is oblivious to their own decision-making criteria and priorities. Inevitably, users may mistrust or even avoid using the system. Furthermore, SAS could benefit from human involvement in satisfying stakeholders' requirements. Accordingly, it is argued that a system should be able to explain its behaviour and how it has reached its current state. A history-aware, human-in-the-loop approach to address these issues is presented in this paper. For this approach, the system should i) offer access to and retrieval of historic data about the past behaviour of the system, ii) track over time the reasons for its decisions to show and explain them to the users, and iii) provide capabilities, called effectors, to empower users by allowing them to steer the decision-making based on the information provided by i) and ii). This paper looks into enabling a human-in-the-loop approach in the decision-making of SAS based on the MAPE-K architecture. We present a feedback layer based on temporal graph databases (TGDB) that has been added to the MAPE-K architecture to provide two-way communication between the human and the SAS. Collaboration, communication and trustworthiness between the human and the SAS are promoted by the provision of history-based explanations extracted from the TGDB, and a set of effectors allows human users to influence the system based on the received information. The encouraging results of an application of the approach to a network management case study, together with a validation by a SAS expert, are shown.
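- Illustrative sketch
The abstract describes a TGDB-backed feedback layer that records decisions, explains them from history, and exposes effectors. The following minimal Python sketch is our own in-memory stand-in (class and method names are hypothetical, not the paper's API): it logs timestamped decisions with their reasons, answers a history query, and lets a user steer future decision-making.

import time

class FeedbackLayer:
    def __init__(self):
        self.history = []          # append-only log of decision records
        self.user_priorities = {}  # NFR priorities set through effectors

    def record_decision(self, action, reasons):
        # Analogue of storing a decision node in the temporal graph.
        self.history.append({"t": time.time(), "action": action, "reasons": reasons})

    def explain(self, since=0.0):
        # History-based explanation: which decisions were taken, and why.
        return [r for r in self.history if r["t"] >= since]

    def set_priority(self, nfr, weight):
        # Effector: the user influences the system's future decisions.
        self.user_priorities[nfr] = weight

layer = FeedbackLayer()
layer.record_decision("increase_bandwidth", {"PacketLoss": 0.2, "Cost": 0.8})
layer.set_priority("Cost", 0.3)  # user pushes back on cost-driven choices
print(layer.explain())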
- Published
- 2022
3. History-aware explanations
- Author
-
Parra-Ullauri, Juan, primary, García-Domínguez, Antonio, additional, Bencomo, Nelly, additional, and Garcia-Paucar, Luis, additional
- Published
- 2022
- Full Text
- View/download PDF
4. A Bayesian Network-based model to understand the role of soft requirements in technology acceptance: the Case of the NHS COVID-19 Test and Trace App in England and Wales
- Author
-
Luis H. Garcia Paucar, Nelly Bencomo, Alistair Sutcliffe, and Pete Sawyer
- Abstract
Soft requirements (such as human values, motivations, and personal attitudes) can strongly influence technology acceptance. As such, we need to understand, model and predict the decisions made by end users regarding the adoption and utilization of software products where soft requirements need to be taken into account. We address this need by using a novel Bayesian network approach that allows the prediction of end users' decisions and ranks the importance of soft requirements in making these decisions. The approach offers insights that help requirements engineers better understand which soft requirements are essential for particular software to be accepted by its target users. We have implemented a Bayesian network to model hidden states and their relationships to the dynamics of technology acceptance. The model has been applied to the healthcare domain using the NHS COVID-19 Test and Trace app (COVID-19 app). Our findings show that soft requirements such as Responsibility and Trust (e.g. Trust in the supplier/brand) are relevant for the COVID-19 app's acceptance. However, the importance of soft requirements is also contextual and time-dependent. For example, Fear of infection was an essential soft requirement, but its relevance decreased over time. The results are reported as part of a two-stage validation of the model.
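- Illustrative sketch
To make the modelling idea concrete, here is a toy two-parent Bayesian network for app acceptance with invented probabilities (the paper's actual structure, variables and data differ): hidden soft-requirement states are marginalised out to predict the acceptance decision.

# All CPT values below are made up for illustration.
P_trust = {True: 0.6, False: 0.4}            # P(Trust in supplier/brand)
P_fear  = {True: 0.5, False: 0.5}            # P(Fear of infection)
P_accept = {(True, True): 0.9, (True, False): 0.7,   # P(accept | trust, fear)
            (False, True): 0.5, (False, False): 0.2}

def p_accept():
    # Enumeration: sum over the hidden soft-requirement states.
    return sum(P_trust[t] * P_fear[f] * P_accept[(t, f)]
               for t in (True, False) for f in (True, False))

print(round(p_accept(), 3))  # -> 0.62 under these invented numbers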
- Published
- 2022
- Full Text
- View/download PDF
5. A Bayesian network-based model to understand the role of soft requirements in technology acceptance
- Author
-
Paucar, Luis H. Garcia, primary, Bencomo, Nelly, additional, Sutcliffe, Alistair, additional, and Sawyer, Pete, additional
- Published
- 2022
- Full Text
- View/download PDF
6. Towards priority-awareness in autonomous intelligent systems
- Author
-
Samin, Huma, Paucar, Luis H. Garcia, Bencomo, Nelly, and Sawyer, Peter
- Abstract
In Autonomous and Intelligent Systems (AIS), the decision-making process can be divided into two parts: (i) the priorities of the requirements are determined at design-time; (ii) design selection follows, where alternatives are compared and the preferred alternatives are chosen autonomously by the AIS. Runtime design selection is a trade-off analysis between non-functional requirements (NFRs) that uses optimisation methods, including decision analysis and utility theory. The aim is to select the design option yielding the highest expected utility. A problem with these techniques is that they use a uni-scalar cumulative utility value to represent a combined priority for all the NFRs. However, this uni-scalar value does not give information about the varying impacts that actions taken under uncertain environmental contexts have on the satisfaction priorities of the individual NFRs. In this paper, we present a novel use of Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs) to support reasoning about separate NFR priorities. We discuss the use of rewards in MR-POMDPs as a way to provide AIS with (a) priority-aware decision-making and (b) maintenance of service-level agreements, by autonomously tuning the NFRs' priorities to new contexts based on data gathered at runtime. We evaluate our approach by applying it to a substantial network case study.
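- Illustrative sketch
The following Python fragment illustrates only the multi-reward idea behind MR-POMDPs, not a full POMDP solver: each action carries a separate expected reward per NFR (numbers invented), so the impact on each NFR stays visible and the priorities can be retuned at runtime before scalarising.

actions = {  # hypothetical per-NFR expected rewards for two design options
    "use_cache": {"MinimiseCost": 0.8, "MaximiseReliability": 0.4},
    "replicate": {"MinimiseCost": 0.3, "MaximiseReliability": 0.9},
}
priorities = {"MinimiseCost": 0.4, "MaximiseReliability": 0.6}  # tunable at runtime

def best_action():
    # Scalarise only at the last step; per-NFR rewards remain inspectable.
    return max(actions, key=lambda a: sum(priorities[n] * r
                                          for n, r in actions[a].items()))

print(best_action())  # -> "replicate" under these priorities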
- Published
- 2021
7. Towards priority-awareness in autonomous intelligent systems
- Author
-
Pete Sawyer, Huma Samin, Luis Hernan Garcia Paucar, and Nelly Bencomo
- Subjects
Value (ethics), Non-functional requirement, Operations research, Process (engineering), Computer science, Utility theory, Intelligent decision support system, Partially observable Markov decision process, 020207 software engineering, 02 engineering and technology, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Expected utility hypothesis, Selection (genetic algorithm)
- Abstract
In Autonomous and Intelligent Systems (AIS), the decision-making process can be divided into two parts: (i) the priorities of the requirements are determined at design-time; (ii) design selection follows, where alternatives are compared and the preferred alternatives are chosen autonomously by the AIS. Runtime design selection is a trade-off analysis between non-functional requirements (NFRs) that uses optimisation methods, including decision analysis and utility theory. The aim is to select the design option yielding the highest expected utility. A problem with these techniques is that they use a uni-scalar cumulative utility value to represent a combined priority for all the NFRs. However, this uni-scalar value does not give information about the varying impacts that actions taken under uncertain environmental contexts have on the satisfaction priorities of the individual NFRs. In this paper, we present a novel use of Multi-Reward Partially Observable Markov Decision Processes (MR-POMDPs) to support reasoning about separate NFR priorities. We discuss the use of rewards in MR-POMDPs as a way to provide AIS with (a) priority-aware decision-making and (b) maintenance of service-level agreements, by autonomously tuning the NFRs' priorities to new contexts based on data gathered at runtime. We evaluate our approach by applying it to a substantial network case study.
- Published
- 2021
- Full Text
- View/download PDF
8. Towards an architecture integrating complex event processing and temporal graphs for service monitoring
- Author
-
Juan Marcelo Parra-Ullauri, Juan Boubeta-Puig, Antonio García-Domínguez, Nelly Bencomo, and Guadalupe Ortiz
- Subjects
Flexibility (engineering), Data stream, Service (systems architecture), Event (computing), Computer science, Quality of service, Distributed computing, Complex event processing, 020207 software engineering, 02 engineering and technology, Software, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Key (cryptography), business
- Abstract
Software is becoming more complex as it needs to deal with an increasing number of aspects in volatile environments. This complexity may cause behaviors that violate the imposed constraints. A goal of runtime service monitoring is to determine whether the service behaves as intended, to potentially allow the correction of the behavior. The infrastructure to allow the detection of suspicious situations may be set up in advance. However, there may also be unexpected situations to look for, as they only become evident during runtime monitoring of the data stream produced by the system. Access to historic data may be key to detecting relevant situations in the monitoring infrastructure. Available technologies used for monitoring offer different trade-offs, e.g. in cost and flexibility to store historic information. For instance, Temporal Graphs (TGs) can store the long-term history of an evolving system for future querying, at the expense of disk space and processing time. In contrast, Complex Event Processing (CEP) can react quickly and efficiently to incoming situations, as long as the appropriate event patterns have been set up in advance. This paper presents an architecture that integrates CEP and TGs for service monitoring through the data stream produced at runtime by a system. The pros and cons of the proposed architecture for extracting and treating the monitored data are analyzed. The approach is applied to the monitoring of Quality of Service (QoS) in a data-management network case study. It is demonstrated how the architecture provides rapid detection of issues, as well as the ability to access historical data about the state of the system, allowing for a comprehensive monitoring solution.
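- Illustrative sketch
A rough Python sketch of the two complementary roles (thresholds and names invented, far simpler than the proposed architecture): a CEP-style sliding window reacts immediately to a predefined pattern, while every event is also appended to a long-term, queryable history standing in for the temporal graph.

from collections import deque

history = []               # long-term store: the full event history (TG role)
window = deque(maxlen=5)   # sliding window of recent events (CEP role)

def on_event(t, latency_ms):
    history.append((t, latency_ms))
    window.append(latency_ms)
    # Predefined pattern: sustained QoS violation within the window.
    if len(window) == window.maxlen and min(window) > 100:
        print(f"alert at t={t}: latency above 100 ms for 5 consecutive events")

for t, lat in enumerate([90, 120, 130, 140, 150, 160]):
    on_event(t, lat)
# Later, a historical analysis over the stored stream:
print(max(history, key=lambda e: e[1]))  # worst latency ever observed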
- Published
- 2021
9. MEDIIK: Design and Manufacturing of an Emergency Ventilator Against COVID-19 Pandemic
- Author
-
Javier Vazquez Armendariz, Hiram Uribe Hernandez, Luis H. Olivas Alanis, Nicolas J. Hendrichs Troeglen, Jan Lammel Lindemann, Agustin Carvajal Rivera, Eduardo Flores Villalba, Eduardo González Mendívil, Miguel Mendoza Machain, Marcos David Moya Bencomo, Arturo Vazquez Almazan, Erick Ramirez Cedillo, Adriana Vargas Martínez, Ciro A. Rodríguez, Rogelio Letechipia Duran, Ricardo Linan Garcia, Cesar Caamal Torres, J. Israel Martinez Lopez, Azael Capetillo, Joaquin Acevedo Mascarua, Victor Segura Ibarra, and Julio Noriega Velasco
- Subjects
Mechanical ventilation, Coronavirus disease 2019 (COVID-19), Computer science, Electromagnetic compatibility, Modular design, Artificial lung, Reliability engineering, law, Ventilation (architecture), medicine, business, Tidal volume, Volume (compression)
- Abstract
Herein we describe the modular design and manufacturing of an emergency ventilator, based on cyclical compression of a resuscitation bag, to face the COVID-19 pandemic. This was done to mitigate the strained supply of these medical devices under challenging scenarios of need and logistics. The design is based on international standards and commissions for medical electrical equipment, particular requirements for basic safety, electromagnetic compatibility, and essential performance of critical care and emergency ventilators. The modular design is capable of providing four ventilation modes: volume/pressure mandatory ventilation and volume/pressure assisted ventilation. After testing with artificial lungs and calibration and validation instruments, it was found that the main ventilation parameters achieved are: maximum tidal volume of 700 mL, maximum pressure of 50 cmH2O, and an inspiration/expiration ratio of up to 1:4 at 30 breaths per minute. The MEDIIK designation is derived from the Mayan word ik', which means wind.
- Published
- 2021
- Full Text
- View/download PDF
10. Towards automated provenance collection for runtime models to record system history
- Author
-
Owen Reynolds, Nelly Bencomo, and Antonio García-Domínguez
- Subjects
050101 languages & linguistics, Provenance, On the fly, Computer science, 05 social sciences, Logging, Volume (computing), Traffic simulation, 02 engineering and technology, Data science, Graph, Multithreading, Accountability, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, 0501 psychology and cognitive sciences
- Abstract
In highly dynamic environments, systems are expected to make decisions on the fly based on observations that are bound to be partial. As such, the reasons for a system's runtime behaviour may be difficult to understand. In these cases, accountability is crucial, and decisions by the system need to be traceable. Logging is essential to support explanations of behaviour, but it poses challenges. Concerns about analysing massive logs have motivated the introduction of structured logging; however, knowing what to log and which details to include is still a challenge. Structured logs still do not necessarily relate events to each other, or indicate time intervals. We argue that logging changes to a runtime model in a provenance graph can mitigate some of these problems. The runtime model keeps only relevant details, therefore reducing the volume of the logs, while the provenance graph records causal connections between the changes and the activities performed by the agents in the system that have introduced them. In this paper, we demonstrate a first version of a reusable infrastructure for the automated construction of such a provenance graph. We apply it to a multithreaded traffic simulation case study, with multiple concurrent agents managing different parts of the simulation. We show how the provenance graphs can support validating the system behaviour, and how a seeded fault is reflected in the provenance graphs.
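- Illustrative sketch
As a toy rendering of the idea (record shape and names are ours, not the paper's infrastructure), each change to the runtime model is logged as a provenance record linking the change to the agent and activity that caused it, so behaviour can be traced backwards:

provenance = []  # causally linked change records

def record_change(agent, activity, attribute, old, new, caused_by=None):
    rec = {"id": len(provenance), "agent": agent, "activity": activity,
           "change": (attribute, old, new), "caused_by": caused_by}
    provenance.append(rec)
    return rec["id"]

c0 = record_change("sensor_agent", "observe", "queue_length", 3, 9)
record_change("planner_agent", "adapt", "green_phase_s", 30, 45, caused_by=c0)

def why(rec_id):
    # Walk the causal links backwards to explain a change.
    while rec_id is not None:
        rec = provenance[rec_id]
        print(rec["agent"], rec["activity"], rec["change"])
        rec_id = rec["caused_by"]

why(1)  # explains the adaptation via the observation that triggered it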
- Published
- 2020
- Full Text
- View/download PDF
11. Temporal Models for History-Aware Explainability
- Author
-
Luis Hernan Garcia-Paucar, Juan Marcelo Parra-Ullauri, Nelly Bencomo, and Antonio García-Domínguez
- Subjects
050101 languages & linguistics, Graph database, Computer science, 05 social sciences, Context (language use), 02 engineering and technology, Data science, Forensic accounting, Temporal database, 0202 electrical engineering, electronic engineering, information engineering, Graph (abstract data type), 020201 artificial intelligence & image processing, 0501 psychology and cognitive sciences, State (computer science), Autonomous system (mathematics), Evolutionary programming
- Abstract
On the one hand, there has been growing interest in the application of AI-based learning and evolutionary programming for self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when using such techniques. In this paper, we argue that a self-adaptive autonomous system (SAS) needs an infrastructure and capabilities to be able to look at its own history to explain and reason about why the system has reached its current state. The infrastructure and capabilities need to be built based on the right conceptual models, in such a way that the system's history can be stored and queried for use in the context of the decision-making algorithms. The explanation capabilities are framed in four incremental levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental capabilities imply that capabilities at Level n should be available for capabilities at Level n + 1. We demonstrate our current reassuring results related to Level 1 and Level 2, using temporal graph-based models. Specifically, we explain how Level 1 supports forensic accounting after the system's execution. We also present how to enable online historical analyses while the self-adaptive system is running, underpinned by the capabilities provided by Level 2. An architecture that allows the recording of temporal data that can be queried to explain behaviour is presented, and the overheads that would be imposed by live analysis are discussed. Future research opportunities are envisioned.
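- Illustrative sketch
A Level 1 style forensic query can be pictured as follows (a toy versioned store of our own, not the temporal graph backend used in the paper): every timestamped version of a property is kept, and the history is asked what held at a past instant.

versions = {"mode": [(0, "normal"), (12, "degraded"), (30, "normal")]}

def value_at(prop, t):
    # Latest recorded version whose timestamp is <= t.
    past = [v for ts, v in versions[prop] if ts <= t]
    return past[-1] if past else None

print(value_at("mode", 20))  # -> "degraded": the system's state at t=20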
- Published
- 2020
12. Automated provenance graphs for models@run.time
- Author
-
Antonio García-Domínguez, Owen Reynolds, and Nelly Bencomo
- Subjects
Black box (phreaking), Provenance, Computer science, 020207 software engineering, 02 engineering and technology, Tracing, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Key (cryptography), State (computer science), Software system, Layer (object-oriented design), Software engineering, business, Abstraction (linguistics)
- Abstract
Software systems are increasingly making decisions autonomously by incorporating AI and machine learning capabilities. These systems are known as self-adaptive and autonomous systems (SAS). Some of these decisions can have a life-changing impact on the people involved and, therefore, need to be appropriately tracked and justified: the system should not be taken as a black box. Knowledge about past events and a record of the history of the decision making are required. However, tracking everything that was going on in the system at the time a decision was made may be unfeasible, due to resource constraints and complexity. In this paper, we propose an approach that combines the abstraction and reasoning support offered by models used at runtime with provenance graphs that capture the key decisions made by a system through its execution. Provenance graphs relate the entities, actors and activities that take place in the system over time, allowing for tracing the reasons why the system reached its current state. We introduce activity scopes, which highlight the high-level activities taking place for each decision, and reduce the cost of instrumenting a system to automatically produce provenance graphs of these decisions. We demonstrate a proof-of-concept implementation of our proposal across two case studies, and present a roadmap towards a reusable provenance layer based on the experiments.
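- Illustrative sketch
The activity-scope idea can be pictured with a context manager (our own rendering; the paper's instrumentation differs): provenance records created inside the scope are automatically grouped under one high-level decision activity.

from contextlib import contextmanager

graph = []       # flat list of provenance records
_scope = [None]  # currently open activity scope, if any

@contextmanager
def activity(name):
    _scope[0] = name
    try:
        yield
    finally:
        _scope[0] = None

def record(entity, value):
    graph.append({"activity": _scope[0], "entity": entity, "value": value})

with activity("reroute_traffic"):
    record("link_load", 0.93)
    record("chosen_route", "B")

print(graph)  # both records carry the "reroute_traffic" scope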
- Published
- 2020
- Full Text
- View/download PDF
13. Next steps in variability management due to autonomous behaviour and runtime learning
- Author
-
Nelly Bencomo
- Subjects
End user, Process (engineering), Computer science, 020207 software engineering, 02 engineering and technology, 020204 information systems, Component-based software engineering, New product development, 0202 electrical engineering, electronic engineering, information engineering, Domain engineering, Reference architecture, Software engineering, business, Adaptation (computer science), Software product line
- Abstract
One of the basic principles in product lines is to delay design decisions related to offered functionality and quality to later phases of the life cycle [25]. Instead of deciding in advance what system to develop, a set of assets and a common reference architecture are specified and implemented during the Domain Engineering process. Later on, during Application Engineering, specific systems are developed to satisfy the requirements, reusing the assets and architecture [16]. Traditionally, it is during Application Engineering that delayed design decisions are resolved. The realization of this delay relies heavily on the use of variability in the development of product lines and systems. However, as systems become more interconnected and diverse, software architects cannot easily foresee the software variants and the interconnections between components. Consequently, a generic a priori model is conceived to specify the system's dynamic behaviour and architecture, and the corresponding design decisions are left to be solved at runtime [13].

Surprisingly, few research initiatives have investigated variability models at runtime [9]. Further, they have been applied only at the level of goals and architecture, which contrasts with the needs claimed by the variability community, i.e., Software Product Lines (SPLC) and Dynamic Software Product Lines (DSPL) [2, 10, 14, 22]. Especially, the vision of DSPL, with its ability to support runtime updates with virtually zero downtime for products of a software product line, denotes the obvious need for variability models to be used at runtime to adapt the corresponding programs. A main challenge in dealing with runtime variability is that it should support a wide range of product customizations under various scenarios that might be unknown until execution time, as new product variants can be identified only at runtime [10, 11]. Contemporary variability models face the challenge of representing runtime variability so as to allow the modification of variation points during the system's execution, and to underpin the automation of the system's reconfiguration [15]. The runtime representation of feature models (i.e. the runtime model of features) is required to automate the decision making [9].

Software automation and adaptation techniques have traditionally required a priori models of the dynamic behaviour of systems [17]. With the uncertainty present in the scenarios involved, the a priori model is difficult to define [20, 23, 26]. Even if foreseen, its maintenance is labour-intensive and, due to architecture decay, it is also prone to become out-of-date. However, the use of models@runtime does not necessarily require defining the system's behaviour model beforehand. Instead, different techniques, such as machine learning or mining software component interactions from system execution traces, can be used to build a model which is in turn used to analyze, plan, and execute adaptations [18], and to synthesize emergent software on the fly [7].

Another well-known problem posed by the uncertainty that characterizes autonomous systems is that different stakeholders (e.g. end users, operators and even developers) may not understand them due to their emergent behaviour. In other words, the running system may surprise its customers and/or developers [4]. The lack of support for explanation in these cases may compromise the trust of stakeholders, who may eventually stop using the system [12, 24].

I speculate that variability models can offer great support for (i) explaining, using traceability, the diversity of the causes and triggers of decisions during execution and their corresponding effects [5], and (ii) better understanding the behaviour of the system and its environment. Further, an extension, and potentially a reframing, of the techniques associated with variability management may be needed to help tame uncertainty and support explanation and understanding of the systems. The use of new techniques such as machine learning exacerbates the current situation. However, at the same time, machine learning techniques can also help and be used, for example, to explore the variability space [1]. What can the community do to face the associated challenges? We need to meaningfully incorporate techniques from areas such as artificial intelligence, machine learning, optimization, planning, decision theory, and bio-inspired computing into our variability management techniques, to provide explanation and management of the diversity of decisions, their causes and the associated effects. My own previous work has progressed [3, 5, 6, 8, 11, 12, 19, 21] to reflect what was discussed above.
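- Illustrative sketch
A toy illustration of a feature model held at runtime (entirely our own; feature names and the constraint are invented): a variation point is re-bound during execution, the change is checked against the model's constraints, and the trigger is kept for later explanation.

features = {"compression": False, "encryption": False}  # current configuration
constraints = [lambda f: not (f["compression"] and f["encryption"])]  # invented rule

def rebind(feature, value, trigger):
    candidate = dict(features, **{feature: value})
    if all(c(candidate) for c in constraints):
        features.update(candidate)
        print(f"re-bound {feature}={value} because: {trigger}")
    else:
        print(f"rejected {feature}={value}: would violate a constraint")

rebind("compression", True, "bandwidth dropped below threshold")  # accepted
rebind("encryption", True, "policy change requested")             # rejected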
- Published
- 2020
- Full Text
- View/download PDF
14. Towards an architecture integrating complex event processing and temporal graphs for service monitoring
- Author
-
Parra-Ullauri, Juan Marcelo, García-Domínguez, Antonio, Boubeta-Puig, Juan, Bencomo, Nelly, and Ortiz, Guadalupe
- Abstract
Software is becoming more complex as it needs to deal with an increasing number of aspects in volatile environments. This complexity may cause behaviors that violate the imposed constraints. A goal of runtime service monitoring is to determine whether the service behaves as intended, to potentially allow the correction of the behavior. The infrastructure to allow the detection of suspicious situations may be set up in advance. However, there may also be unexpected situations to look for, as they only become evident during runtime monitoring of the data stream produced by the system. Access to historic data may be key to detecting relevant situations in the monitoring infrastructure. Available technologies used for monitoring offer different trade-offs, e.g. in cost and flexibility to store historic information. For instance, Temporal Graphs (TGs) can store the long-term history of an evolving system for future querying, at the expense of disk space and processing time. In contrast, Complex Event Processing (CEP) can react quickly and efficiently to incoming situations, as long as the appropriate event patterns have been set up in advance. This paper presents an architecture that integrates CEP and TGs for service monitoring through the data stream produced at runtime by a system. The pros and cons of the proposed architecture for extracting and treating the monitored data are analyzed. The approach is applied to the monitoring of Quality of Service (QoS) in a data-management network case study. It is demonstrated how the architecture provides rapid detection of issues, as well as the ability to access historical data about the state of the system, allowing for a comprehensive monitoring solution.
- Published
- 2021
15. Towards an architecture integrating complex event processing and temporal graphs for service monitoring
- Author
-
Parra-Ullauri, Juan Marcelo, primary, García-Domínguez, Antonio, additional, Boubeta-Puig, Juan, additional, Bencomo, Nelly, additional, and Ortiz, Guadalupe, additional
- Published
- 2021
- Full Text
- View/download PDF
16. Towards priority-awareness in autonomous intelligent systems
- Author
-
Samin, Huma, primary, Paucar, Luis H. Garcia, additional, Bencomo, Nelly, additional, and Sawyer, Peter, additional
- Published
- 2021
- Full Text
- View/download PDF
17. MEDIIK: Design and Manufacturing of an Emergency Ventilator Against COVID-19 Pandemic
- Author
-
Vazquez Armendariz, Javier, primary, Segura Ibarra, Victor, additional, Olivas Alanis, Luis H., additional, Lammel Lindemann, Jan, additional, Martinez Lopez, J. Israel, additional, Ramirez Cedillo, Erick, additional, Uribe Hernandez, Hiram, additional, Linan Garcia, Ricardo, additional, Letechipia Duran, Rogelio, additional, Moya Bencomo, Marcos D., additional, Carvajal Rivera, Agustin, additional, Cortes Capetillo, Azael, additional, Hendrichs Troeglen, Nicolas J., additional, Mendoza Machain, Miguel, additional, Vazquez Almazan, Arturo, additional, Caamal Torres, Cesar, additional, Noriega Velasco, Julio, additional, Acevedo Mascarua, Joaquin, additional, Vargas Martinez, Adriana, additional, Gonzalez Mendivil, Eduardo, additional, Flores Villalba, Eduardo, additional, and Rodriguez, Ciro A., additional
- Published
- 2021
- Full Text
- View/download PDF
18. Temporal Models for History-Aware Explainability
- Author
-
Parra-Ullauri, Juan Marcelo, primary, García-Domínguez, Antonio, additional, García-Paucar, Luis Hernán, additional, and Bencomo, Nelly, additional
- Published
- 2020
- Full Text
- View/download PDF
19. Towards automated provenance collection for runtime models to record system history
- Author
-
Reynolds, Owen, primary, García-Domínguez, Antonio, additional, and Bencomo, Nelly, additional
- Published
- 2020
- Full Text
- View/download PDF
20. Automated provenance graphs for models@run.time
- Author
-
Reynolds, Owen, primary, García-Domínguez, Antonio, additional, and Bencomo, Nelly, additional
- Published
- 2020
- Full Text
- View/download PDF
21. Towards an assessment grid for intelligent modeling assistance
- Author
-
Mussbacher, Gunter, primary, Combemale, Benoit, additional, Abrahão, Silvia, additional, Bencomo, Nelly, additional, Burgueño, Loli, additional, Engels, Gregor, additional, Kienzle, Jörg, additional, Kühn, Thomas, additional, Mosser, Sébastien, additional, Sahraoui, Houari, additional, and Weyssow, Martin, additional
- Published
- 2020
- Full Text
- View/download PDF
22. Next steps in variability management due to autonomous behaviour and runtime learning
- Author
-
Bencomo, Nelly, primary
- Published
- 2020
- Full Text
- View/download PDF
23. ARRoW
- Author
-
Kevin Kam Fung Yuen, Luis Hernan Garcia Paucar, and Nelly Bencomo
- Subjects
Theoretical computer science, Computer science, Process (engineering), Principal (computer security), Analytic hierarchy process, 020207 software engineering, Context (language use), 02 engineering and technology, Bayesian inference, Application domain, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Arrow, Empirical evidence
- Abstract
[Context/Motivation] Decision-making for self-adaptive systems (SAS) requires the runtime trade-off of multiple non-functional requirements (NFRs) and the cost-benefit analysis of the alternative solutions. Usually, the specification of the weights (a.k.a. preferences) associated with the NFRs and decision-making strategies is required. These preferences are traditionally defined at design-time. [Questions/Problems] A big challenge is the need to deal with unsuitable preferences which, based on empirical evidence available at runtime, may no longer agree with previous assumptions. Therefore, new techniques are needed to systematically reassess the current preferences according to empirical evidence collected at runtime. [Principal ideas/results] We present ARRoW (Automatic Runtime Reappraisal of Weights) to support the dynamic update of the preferences/weights associated with the NFRs and decision-making strategies in SAS, while taking into account the current levels of satisficement that NFRs can reach during the system's operation. [Contribution] To develop ARRoW, we have extended the Primitive Cognitive Network Process (P-CNP), a version of the Analytic Hierarchy Process (AHP), to enable the handling and update of weights during runtime. Specifically, in this paper, we show a formalization of the specification of the decision-making of a SAS, in terms of the NFRs, the design decisions and their corresponding weights, as a P-CNP problem. We also report on how the P-CNP has been extended to be used at runtime. We show how the propagation of elements of the P-CNP matrices is performed in such a way that the weights are updated to improve the levels of satisficement of the NFRs, to better match the current environment during runtime. ARRoW leverages the underlying Bayesian learning process, which provides the mechanism to access evidence about the levels of satisficement of the NFRs. The experiments have been applied to a case study in the networking application domain, where the decision-making has been improved.
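- Illustrative sketch
For intuition, the sketch below derives AHP-style weights from a pairwise comparison matrix via the row geometric mean and then applies a crude reappraisal step; the matrix values and the update rule are ours and much simpler than ARRoW's actual P-CNP propagation.

from math import prod

nfrs = ["MinimiseCost", "MaximiseThroughput", "MinimiseLatency"]
pairwise = [[1, 3, 5],       # invented pairwise importance judgements
            [1/3, 1, 2],
            [1/5, 1/2, 1]]

def weights(M):
    # Row geometric mean, a standard approximation of the AHP eigenvector.
    gm = [prod(row) ** (1 / len(row)) for row in M]
    return [g / sum(gm) for g in gm]

w = weights(pairwise)
# Runtime evidence: observed satisficement levels per NFR in [0, 1].
satisficement = {"MinimiseCost": 0.9, "MaximiseThroughput": 0.4, "MinimiseLatency": 0.7}
# Naive reappraisal: shift weight towards poorly satisficed NFRs, renormalise.
adj = [wi * (1.5 - satisficement[n]) for wi, n in zip(w, nfrs)]
w = [a / sum(adj) for a in adj]
print({n: round(x, 3) for n, x in zip(nfrs, w)})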
- Published
- 2019
- Full Text
- View/download PDF
24. Automated provenance graphs for models@run.time
- Author
-
Reynolds, Owen, García-Domínguez, Antonio, and Bencomo, Nelly
- Abstract
Software systems are increasingly making decisions autonomously by incorporating AI and machine learning capabilities. These systems are known as self-adaptive and autonomous systems (SAS). Some of these decisions can have a life-changing impact on the people involved and, therefore, need to be appropriately tracked and justified: the system should not be taken as a black box. Knowledge about past events and a record of the history of the decision making are required. However, tracking everything that was going on in the system at the time a decision was made may be unfeasible, due to resource constraints and complexity. In this paper, we propose an approach that combines the abstraction and reasoning support offered by models used at runtime with provenance graphs that capture the key decisions made by a system through its execution. Provenance graphs relate the entities, actors and activities that take place in the system over time, allowing for tracing the reasons why the system reached its current state. We introduce activity scopes, which highlight the high-level activities taking place for each decision, and reduce the cost of instrumenting a system to automatically produce provenance graphs of these decisions. We demonstrate a proof-of-concept implementation of our proposal across two case studies, and present a roadmap towards a reusable provenance layer based on the experiments.
- Published
- 2020
25. Temporal Models for History-Aware Explainability
- Author
-
Parra-Ullauri, Juan Marcelo, García-Domínguez, Antonio, García-Paucar, Luis Hernán, and Bencomo, Nelly
- Abstract
On the one hand, there has been growing interest in the application of AI-based learning and evolutionary programming for self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when using such techniques. In this paper, we argue that a self-adaptive autonomous system (SAS) needs an infrastructure and capabilities to be able to look at its own history to explain and reason about why the system has reached its current state. The infrastructure and capabilities need to be built based on the right conceptual models, in such a way that the system's history can be stored and queried for use in the context of the decision-making algorithms. The explanation capabilities are framed in four incremental levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental capabilities imply that capabilities at Level n should be available for capabilities at Level n + 1. We demonstrate our current reassuring results related to Level 1 and Level 2, using temporal graph-based models. Specifically, we explain how Level 1 supports forensic accounting after the system's execution. We also present how to enable online historical analyses while the self-adaptive system is running, underpinned by the capabilities provided by Level 2. An architecture that allows the recording of temporal data that can be queried to explain behaviour is presented, and the overheads that would be imposed by live analysis are discussed. Future research opportunities are envisioned.
- Published
- 2020
26. Towards automated provenance collection for runtime models to record system history
- Author
-
Reynolds, Owen, García-Domínguez, Antonio, and Bencomo, Nelly
- Abstract
In highly dynamic environments, systems are expected to make decisions on the fly based on observations that are bound to be partial. As such, the reasons for a system's runtime behaviour may be difficult to understand. In these cases, accountability is crucial, and decisions by the system need to be traceable. Logging is essential to support explanations of behaviour, but it poses challenges. Concerns about analysing massive logs have motivated the introduction of structured logging; however, knowing what to log and which details to include is still a challenge. Structured logs still do not necessarily relate events to each other, or indicate time intervals. We argue that logging changes to a runtime model in a provenance graph can mitigate some of these problems. The runtime model keeps only relevant details, therefore reducing the volume of the logs, while the provenance graph records causal connections between the changes and the activities performed by the agents in the system that have introduced them. In this paper, we demonstrate a first version of a reusable infrastructure for the automated construction of such a provenance graph. We apply it to a multithreaded traffic simulation case study, with multiple concurrent agents managing different parts of the simulation. We show how the provenance graphs can support validating the system behaviour, and how a seeded fault is reflected in the provenance graphs.
- Published
- 2020
27. RE-STORM
- Author
-
Nelly Bencomo and Luis Hernan Garcia Paucar
- Subjects
Requirements management, Bayes estimator, Non-functional requirement, Operations research, Computer science, Process (engineering), Frame (networking), 020207 software engineering, 02 engineering and technology, 0202 electrical engineering, electronic engineering, information engineering, Probability distribution, 020201 artificial intelligence & image processing, Markov decision process, Empirical evidence
- Abstract
Different model-based techniques have been used to model and underpin requirements management and decision-making strategies under uncertainty for self-adaptive systems (SASs). The models specify how the partial or total fulfillment of non-functional requirements (NFRs) drives the decision-making process at runtime. There has been considerable progress in this research area. However, only precarious progress has been made in the use of models at runtime with machine learning to deal with uncertainty and support decision-making based on new evidence learned during execution. New techniques are needed to systematically revise the current model and the satisficement of its NFRs when empirical evidence becomes available from the monitoring infrastructure. In this paper, we frame the decision-making problem and the trade-off specifications of NFRs in terms of Partially Observable Markov Decision Process (POMDP) models. The mathematical probabilistic framework based on the concept of POMDPs serves as a runtime model that can be updated with newly learned evidence to support reasoning about the partial satisficement of NFRs and their trade-offs under new changes in the environment. In doing so, we demonstrate how our novel approach, RE-STORM, underpins reasoning over uncertainty and dynamic changes during the system's execution.
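- Illustrative sketch
The core runtime mechanism can be reduced to a discrete Bayes belief update over an NFR's hidden satisficement state (all probabilities invented, and a real POMDP would add actions and transitions):

states = ("satisficed", "violated")
belief = {"satisficed": 0.7, "violated": 0.3}      # current belief (runtime model)
likelihood = {"satisficed": 0.1, "violated": 0.8}  # P(observation | state)

def update(belief, likelihood):
    # Bayes rule: posterior proportional to prior times likelihood.
    posterior = {s: belief[s] * likelihood[s] for s in belief}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = update(belief, likelihood)  # new evidence observed during execution
print({s: round(p, 3) for s, p in belief.items()})  # belief shifts to "violated"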
- Published
- 2018
- Full Text
- View/download PDF
28. DeSiRE
- Author
-
Nelly Bencomo and Ross Edwards
- Subjects
Non-functional requirement, Computer science, Process (engineering), 020207 software engineering, Context (language use), 02 engineering and technology, Identification (information), Risk analysis (engineering), Ranking, Adaptive system, 0202 electrical engineering, electronic engineering, information engineering, 020201 artificial intelligence & image processing, Adaptation (computer science), Set (psychology)
- Abstract
[Context/Motivation] Self-adaptive systems (SAS) are being deployed in environments of increasing uncertainty, in which they must adapt by reconfiguring themselves in such a way as to continuously fulfil multiple objectives according to changes in the environment. Trade-offs between a system's non-functional requirements (NFRs) need to be made to maximise the system's utility (or equity) with regard to the NFRs, and are key drivers of the adaptation process. Decision-making for multiple-objective scenarios frequently uses utility functions as measures of satisfaction of both individual NFRs and sets of NFRs, usually resulting in a weighted sum of the different objectives. [Questions/Problems] However, while adaptations are performed autonomously, the methods for choosing an adaptation are based on the criteria of human expert(s), who are susceptible to bias, subjectivity and/or a lack of quantitativeness in their judgements. Thus, there is a need for a non-subjective and quantitative approach to reason about NFR satisfaction in multi-objective self-adaptation without relying on human expertise. Furthermore, human biases can also apply to the relationships between two or more NFRs (e.g. how much the satisfaction of one NFR affects the satisfaction of another), resulting in emergent inaccuracies affecting the decision(s) chosen. [Principal ideas/results] This paper presents DeSiRE (Degrees of Satisfaction of NFRs), a purely automated, objective, statistical approach to quantifying the extent to which a requirement is violated or satisfied, and its application to further explore the trade-offs between NFRs in decision making. Experiments using case studies have yielded positive results, showing the identification of a Pareto-optimal set of candidate solutions, in addition to a ranking of these configurations by their satisfaction of each NFR.
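- Illustrative sketch
The Pareto step can be shown in a few lines (satisfaction scores invented): keep every configuration that no other configuration dominates on all NFRs at once.

configs = {  # (performance satisfaction, energy satisfaction), higher is better
    "A": (0.9, 0.2),
    "B": (0.6, 0.7),
    "C": (0.5, 0.6),  # dominated by B on both NFRs
}

def dominates(x, y):
    # x dominates y: at least as good everywhere, strictly better somewhere.
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

pareto = [c for c, s in configs.items()
          if not any(dominates(o, s) for o in configs.values())]
print(pareto)  # -> ['A', 'B']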
- Published
- 2018
- Full Text
- View/download PDF
29. ARRoW
- Author
-
Paucar, Luis H. Garcia, primary, Bencomo, Nelly, additional, and Yuen, Kevin Kam Fung, additional
- Published
- 2019
- Full Text
- View/download PDF
30. ARRoW: Automatic Runtime Reappraisal of Weights for Self-Adaptation
- Author
-
Bencomo, Nelly, Garcia Paucar, Luis Hernan, and Yuen, Kevin Kam Fung
- Abstract
[Context/Motivation] Decision-making for self-adaptive systems (SAS) requires the runtime trade-off of multiple non-functional requirements (NFRs) and the cost-benefit analysis of the alternative solutions. Usually, the specification of the weights (a.k.a. preferences) associated with the NFRs and decision-making strategies is required. These preferences are traditionally defined at design-time. [Questions/Problems] A big challenge is the need to deal with unsuitable preferences which, based on empirical evidence available at runtime, may no longer agree with previous assumptions. Therefore, new techniques are needed to systematically reassess the current preferences according to empirical evidence collected at runtime. [Principal ideas/results] We present ARRoW (Automatic Runtime Reappraisal of Weights) to support the dynamic update of the preferences/weights associated with the NFRs and decision-making strategies in SAS, while taking into account the current levels of satisficement that NFRs can reach during the system's operation. [Contribution] To develop ARRoW, we have extended the Primitive Cognitive Network Process (P-CNP), a version of the Analytic Hierarchy Process (AHP), to enable the handling and update of weights during runtime. Specifically, in this paper, we show a formalization of the specification of the decision-making of a SAS, in terms of the NFRs, the design decisions and their corresponding weights, as a P-CNP problem. We also report on how the P-CNP has been extended to be used at runtime. We show how the propagation of elements of the P-CNP matrices is performed in such a way that the weights are updated to improve the levels of satisficement of the NFRs, to better match the current environment during runtime. ARRoW leverages the underlying Bayesian learning process, which provides the mechanism to access evidence about the levels of satisficement of the NFRs. The experiments have been applied to a case study in the networking application domain, where the decision-making has been improved.
- Published
- 2019
31. DeSiRE
- Author
-
Edwards, Ross, primary and Bencomo, Nelly, additional
- Published
- 2018
- Full Text
- View/download PDF
32. RE-STORM
- Author
-
Paucar, Luis H. Garcia, primary and Bencomo, Nelly, additional
- Published
- 2018
- Full Text
- View/download PDF
33. Two-B or not Two-B?
- Author
-
Anikó Ekárt, Peter R. Lewis, Alina Patelli, Harry Goldingay, and Nelly Bencomo
- Subjects
Simple (abstract algebra), Computer science, Search algorithm, Black box, Software design pattern, Alternation (formal language theory), Noise (video), Representation (mathematics), Metaheuristic, Algorithm
- Abstract
Real-world search problems, characterised by nonlinearity, noise and multidimensionality, are often best solved by hybrid algorithms. Techniques embodying different necessary features are triggered at specific iterations, in response to the current state of the problem space. In the existing literature, this alternation is managed either statically (through pre-programmed policies) or dynamically, at the cost of high coupling with the algorithm's inner representation. We extract two design patterns for hybrid metaheuristic search algorithms, the All-Seeing Eye and the Commentator patterns, which we argue should be replaced by the more flexible and loosely coupled Simple Black Box (Two-B) and Utility-based Black Box (Three-B) patterns that we propose here. We recommend the Two-B pattern for purely fitness-based hybridisations and the Three-B pattern for more generic hybridisations based on search quality evaluation.
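- Illustrative sketch
A minimal rendering of the Three-B (utility-based black box) idea, with toy operators of our own: the hybrid controller sees each technique only through a running quality score, never through its inner representation.

import random
random.seed(1)

def explore(x): return x + random.uniform(-5, 5)  # diversifying black box
def exploit(x): return x - 0.1 * x                # intensifying black box

techniques = {"explore": explore, "exploit": exploit}
scores = {"explore": 0.0, "exploit": 0.0}         # running utility per black box

def fitness(x): return -abs(x)                    # maximise: optimum at x = 0

x = 40.0
for _ in range(30):
    # Mostly pick the highest-utility box, sometimes sample the other.
    name = max(scores, key=scores.get) if random.random() > 0.2 \
        else random.choice(list(techniques))
    y = techniques[name](x)
    scores[name] = 0.9 * scores[name] + 0.1 * (fitness(y) - fitness(x))
    if fitness(y) > fitness(x):
        x = y
print(round(x, 2), {k: round(v, 3) for k, v in scores.items()})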
- Published
- 2015
- Full Text
- View/download PDF
34. A world full of surprises: bayesian theory of surprise to quantify degrees of uncertainty
- Author
-
Nelly Bencomo (Aston University, Birmingham) and Amel Belaggoun (CEA LIST, Université Paris-Saclay)
- Subjects
Artificial intelligence, Specific areas, Computer science, Bayesian probability, 02 engineering and technology, Adaptive systems, Decision theory, Bayesian, Self adaptation, Bayesian theory, Normal behavior, Self-adaptive system, 0202 electrical engineering, electronic engineering, information engineering, [INFO]Computer Science [cs], Divergence (statistics), Event (probability theory), Software engineering, Uncertainty, Bayesian network, 020207 software engineering, Bayesian statistics, Surprise, Bayesian networks, Quantitative analysis (finance), 020201 artificial intelligence & image processing, business, Dynamic decision network
- Abstract
In the specific area of software engineering (SE) for self-adaptive systems (SASs), there is a growing research awareness of the synergy between SE and artificial intelligence (AI). However, just a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for a quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. A surprise measures how observed data affect the models or assumptions of the world during runtime. The key idea is that a "surprising" event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case, the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of the Bayesian theory of surprise for the case of self-adaptive systems using Bayesian dynamic decision networks. (Presented at the 36th International Conference on Software Engineering, ICSE 2014.)
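- Illustrative sketch
The definition can be worked through directly: surprise is the Kullback-Leibler divergence from the prior to the posterior belief (the distributions and the threshold below are invented).

from math import log

prior     = {"normal": 0.9, "abnormal": 0.1}  # belief before the observation
posterior = {"normal": 0.3, "abnormal": 0.7}  # belief after the observation

# KL(posterior || prior): large when the data shift the beliefs strongly.
surprise = sum(posterior[s] * log(posterior[s] / prior[s]) for s in prior)
print(round(surprise, 3))  # ~1.03 nats here
if surprise > 0.5:         # illustrative threshold
    print("surprising event: adapt, or flag an abnormal situation")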
- Published
- 2014
- Full Text
- View/download PDF
35. Summary of the 7th International Workshop on Models@run.time
- Author
-
Gordon Blair, Nelly Bencomo, Sebastian Götz, Brice Morin, and Bernhard Rumpe
- Subjects
Engineering management, Software, Computer science, Leverage (statistics), business
- Abstract
The Models@run.time (MRT) workshop series offers a discussion forum for the rising need to leverage modeling techniques for the software of the future. The main goals are to explore the benefits of models@run.time and to foster collaboration and cross-fertilization between different research communities, for example the model-driven engineering community (e.g. MODELS), the self-adaptive/autonomous systems communities (e.g. SEAMS and ICAC), the control theory community and the artificial intelligence community.
- Published
- 2012
- Full Text
- View/download PDF
36. Dynamic decision-making based on NFR for managing software variability and configuration selection
- Author
-
Almeida, André, Bencomo, Nelly, Batista, Thais, Cavalcante, Everton, and Dantas, Francisco
- Abstract
Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during the requirements specifications. The associated data can be gathered by monitoring the execution of the application and its underlying environment, to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs during runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach has shown that it is able to choose application configurations that fit user NFRs well, based on runtime information. The evaluation also revealed that the proposed infrastructure provided consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
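- Illustrative sketch
The selection step can be pictured as follows (bounds and readings invented): monitored values are mapped to per-NFR satisfaction degrees in [0, 1], and the configuration with the best overall degree is chosen at runtime.

def degree(value, worst, best):
    # Linear satisfaction degree, clamped to [0, 1].
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

monitored = {  # per-configuration runtime measurements
    "low_res_video":  {"latency_ms": 80,  "image_quality": 0.5},
    "high_res_video": {"latency_ms": 240, "image_quality": 0.9},
}

def score(m):
    return (degree(m["latency_ms"], worst=300, best=50)      # lower is better
            + degree(m["image_quality"], worst=0.0, best=1.0)) / 2

best = max(monitored, key=lambda c: score(monitored[c]))
print(best, round(score(monitored[best]), 2))  # -> low_res_video 0.69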
- Published
- 2015
37. Two-B or not Two-B?
- Author
-
Patelli, Alina, primary, Bencomo, Nelly, additional, Ekárt, Anikó, additional, Goldingay, Harry, additional, and Lewis, Peter, additional
- Published
- 2015
- Full Text
- View/download PDF
38. Requirements reflection
- Author
-
Nelly Bencomo, Jon Whittle, Emmanuel Letier, Anthony Finkelstein, and Pete Sawyer
- Subjects
Requirements management, Requirement, Business requirements, Non-functional requirement, Requirements traceability, Computer science, Runtime verification, Software requirements specification, System requirements specification, System requirements, Formal specification, Systems engineering, Non-functional testing, Software system, Software engineering, business, Software architecture, Requirements analysis
- Abstract
Computational reflection is a well-established technique that gives a program the ability to dynamically observe and possibly modify its behaviour. To date, however, reflection is mainly applied either to the software architecture or to its implementation. We know of no approach that fully supports requirements reflection, that is, making requirements available as runtime objects. Although there is a body of literature on requirements monitoring, such work typically generates runtime artefacts from requirements, and so the requirements themselves are not directly accessible at runtime. In this paper, we define requirements reflection and a set of research challenges. Requirements reflection is important because software systems of the future will be self-managing and will need to adapt continuously to changing environmental conditions. We argue that requirements reflection can support such self-adaptive systems by making requirements first-class runtime entities, thus endowing software systems with the ability to reason about, understand, explain and modify requirements at runtime.
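- Illustrative sketch
A toy picture of requirements as first-class runtime entities (the object shape is ours, not a proposed API): the running system can inspect a requirement, evaluate it against its current state, and modify it.

class Requirement:
    def __init__(self, rid, text, check):
        self.rid, self.text, self.check = rid, text, check

    def satisfied(self, system_state):
        return self.check(system_state)

r1 = Requirement("R1", "Response time stays under 200 ms",
                 lambda s: s["response_ms"] < 200)

state = {"response_ms": 350}
if not r1.satisfied(state):
    # Reflective step: the system reasons about, then relaxes, R1 at runtime.
    r1.text = "Response time stays under 400 ms (relaxed under high load)"
    r1.check = lambda s: s["response_ms"] < 400
print(r1.rid, r1.satisfied(state), "-", r1.text)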
- Published
- 2010
- Full Text
- View/download PDF
39. Requirements reflection: requirements as runtime entities
- Author
-
Bencomo, Nelly, Whittle, Jon, Sawyer, Peter, Finkelstein, Anthony, and Letier, Emmanuel
- Abstract
Computational reflection is a well-established technique that gives a program the ability to dynamically observe and possibly modify its behaviour. To date, however, reflection is mainly applied either to the software architecture or to its implementation. We know of no approach that fully supports requirements reflection, that is, making requirements available as runtime objects. Although there is a body of literature on requirements monitoring, such work typically generates runtime artefacts from requirements, and so the requirements themselves are not directly accessible at runtime. In this paper, we define requirements reflection and a set of research challenges. Requirements reflection is important because software systems of the future will be self-managing and will need to adapt continuously to changing environmental conditions. We argue that requirements reflection can support such self-adaptive systems by making requirements first-class runtime entities, thus endowing software systems with the ability to reason about, understand, explain and modify requirements at runtime.
- Published
- 2010
40. Dynamic decision-making based on NFR for managing software variability and configuration selection
- Author
-
Almeida, André, primary, Bencomo, Nelly, additional, Batista, Thais, additional, Cavalcante, Everton, additional, and Dantas, Francisco, additional
- Published
- 2015
- Full Text
- View/download PDF
41. Exploiting extreme heterogeneity in a flood warning scenario using the Gridkit middleware
- Author
-
Danny Hughes, Paul Grace, Gordon S. Blair, Geoff Coulson, Nelly Bencomo, and Barry Porter
- Subjects
Flood warning, Key distribution in wireless sensor networks, Flood myth, Computer science, Middleware (distributed applications), Environmental monitoring, Control reconfiguration, Computer security, Wireless sensor network
- Abstract
This demonstration showcases the Gridkit middleware in a flood monitoring and warning scenario. Gridkit provides the services required to support wireless-sensor-network-based environmental monitoring, and also hosts lightweight hydraulic models which are used to provide timely alerts warning local residents of imminent flood events. In this scenario, Gridkit abstracts over a number of low-level sensing, computational and networking technologies. This paper and the associated demonstration show that, using distributed dynamic reconfiguration, these heterogeneous sensor network technologies may be safely adapted to cope with changing environmental conditions while providing timely flood warnings, resulting in significant power savings and performance improvements.
- Published
- 2008
- Full Text
- View/download PDF
42. Models, reflective mechanisms and family-based systems to support dynamic configuration
- Author
-
Nelly Bencomo, Paul Grace, and Gordon S. Blair
- Subjects
Domain-specific language, Reflection (computer programming), Grid computing, Computer science, Middleware, Component (UML), Distributed computing, Message oriented middleware, Adaptation (computer science), Extensibility
- Abstract
Middleware platforms must satisfy an increasingly broad and variable set of requirements arising from the needs of both applications and underlying systems deployed in dynamically changing environments, such as environmental monitoring and disaster management. To meet these requirements, middleware platforms must offer a high degree of configurability at deployment time and at runtime. At Lancaster we use reflection, components and component frameworks, and middleware families as the basis of our approach to developing dynamically configurable middleware platforms. In our approach, components and component frameworks provide structure, while reflection provides support for dynamic configuration and extensibility for run-time evolution and adaptation. This approach, however, has made the development and operation of middleware platforms even more complex: middleware developers deal with a large number of variability decisions when planning (re)configurations and adaptations. This paper examines how Model-Driven Engineering (MDE), Domain-Specific Languages (DSLs) and System Family Engineering can be used to improve the development of middleware families by systematically generating middleware configurations from high-level descriptions. We present Genie, a DSL-based prototype development tool that supports the specification, validation and generation of artefacts for component-based reflective middleware. In particular, this paper describes how the Genie toolkit improves the development of the Gridkit middleware through the modelling and automated generation of middleware policies, removing the complexity of handling large numbers of runtime adaptation policies.
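- Illustrative sketch
A reconfiguration policy of the kind such a toolkit might generate can be written in an event-condition-action style (this policy is invented, not Genie output):

policies = [
    {"event": "battery_low",
     "condition": lambda ctx: ctx["battery_pct"] < 20,
     "action": "switch_to_low_power_routing"},
]

def on_event(event, ctx):
    for p in policies:
        if p["event"] == event and p["condition"](ctx):
            # In a real platform this would drive the reflective API.
            print("reconfigure:", p["action"])

on_event("battery_low", {"battery_pct": 12})  # -> reconfigure: switch_to_low_power_routing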
- Published
- 2006
- Full Text
- View/download PDF
43. A world full of surprises: bayesian theory of surprise to quantify degrees of uncertainty
- Author
-
Bencomo, Nelly, primary and Belaggoun, Amel, additional
- Published
- 2014
- Full Text
- View/download PDF
44. Tracing requirements for adaptive systems using claims
- Author
-
Welsh, Kristopher, Bencomo, Nelly, and Sawyer, Peter
- Abstract
The complexity of the environments faced by dynamically adaptive systems (DAS) means that the RE process will often be iterative, with analysts revisiting the system specifications based on new environmental understanding, a product of experiences with experimental deployments, or even after final deployments. An ability to trace backwards to an identified environmental assumption, and to trace forwards to find the areas of a DAS's specification that are affected by changes in environmental understanding, aids in supporting this necessarily iterative RE process. This paper demonstrates how claims can be used as markers for areas of uncertainty in a DAS specification. The paper demonstrates backward tracing using claims to identify faulty environmental understanding, and forward tracing to allow the generation of new behaviour in the form of policy adaptations and models for transitioning the running system.
- Published
- 2011
45. Requirements reflection: requirements as runtime entities
- Author
-
Bencomo, Nelly, Whittle, Jon, Sawyer, Peter, Finkelstein, Anthony, and Letier, Emmanuel
- Abstract
Computational reflection is a well-established technique that gives a program the ability to dynamically observe and possibly modify its behavior. To date, however, reflection is mainly applied either to the software architecture or to its implementation. We know of no approach that fully supports requirements reflection, that is, making requirements available as runtime objects. Although there is a body of literature on requirements monitoring, such work typically generates runtime artifacts from requirements, and so the requirements themselves are not directly accessible at runtime. In this paper, we define the notion of requirements reflection and set out a research agenda. Requirements reflection is important because software systems of the future will be self-managing and will need to adapt continuously to changing environmental conditions. We argue that requirements reflection can support such self-adaptive systems by making requirements first-class runtime entities, thus endowing software systems with the ability to reason about, understand, explain and modify requirements at runtime.
- Published
- 2010
46. Genie: supporting the model driven development of reflective, component-based adaptive systems
- Author
-
Bencomo, Nelly, Grace, P., Flores-Cortes, C., Hughes, Daniel, and Blair, Gordon S.
- Abstract
Engineering adaptive software is an increasingly complex task. Here, we demonstrate Genie, a tool that supports the modelling, generation, and operation of highly reconfigurable, component-based systems. We showcase how Genie is used in two case studies: i) the development and operation of an adaptive flood warning system, and ii) a service discovery application. In this context, adaptation is enabled by the Gridkit reflective middleware platform.
- Published
- 2008
47. Engineering complex adaptations in highly heterogeneous distributed systems
- Author
-
Grace, P., Blair, Gordon S., Flores-Cortes, Carlos, and Bencomo, Nelly
- Abstract
Distributed systems now encounter extreme heterogeneity in the form of diverse devices, network types, etc., and also need to adapt dynamically to changing environmental conditions. Self-adaptive middleware is ideally situated to address these challenges. However, developing such software is a complex task. In this paper, we present the Gridkit self* approach to the engineering of reflective middleware; this embraces state-of-the-art software engineering practices and flexible dynamic adaptation mechanisms to better support system developers. Domain-specific frameworks are modeled and developed to enhance configurability and reconfigurability. We evaluate this approach using case studies in the domains of service discovery and network overlays. These demonstrate the benefits of the approach in terms of aiding and simplifying the process of creating self-configuring and self-adaptive software.
- Published
- 2008
48. Summary of the 7th International Workshop on Models@run.time
- Author
-
Bencomo, Nelly, primary, Blair, Gordon, additional, Götz, Sebastian, additional, Morin, Brice, additional, and Rumpe, Bernhard, additional
- Published
- 2012
- Full Text
- View/download PDF
49. Satisfying requirements for pervasive service compositions
- Author
-
Cavallaro, Luca, primary, Sawyer, Pete, additional, Sykes, Daniel, additional, Bencomo, Nelly, additional, and Issarny, Valérie, additional
- Published
- 2012
- Full Text
- View/download PDF
50. Tracing requirements for adaptive systems using claims
- Author
-
Welsh, Kristopher, primary, Bencomo, Nelly, additional, and Sawyer, Pete, additional
- Published
- 2011
- Full Text
- View/download PDF