90 results for "Paulo Borba"
Search Results
2. Privacy and security constraints for code contributions
- Author
-
Rodrigo Andrade and Paulo Borba
- Subjects
Computer science, Collaborative software development, Software engineering, Software
- Published
- 2020
- Full Text
- View/download PDF
3. Textual merge based on language-specific syntactic separators
- Author
-
Guilherme Cavalcanti, Paulo Borba, and Jônatas Clementino
- Subjects
Computer science, Syntactic structure, Line (text file), Merge (version control), Natural language processing
- Abstract
In practice, developers mostly use purely textual, line-based merge tools. Such tools, however, often report false conflicts. Researchers have therefore proposed AST-based tools that explore language syntactic structure to reduce false conflicts. Nevertheless, these approaches might negatively impact merge performance, and they demand the creation of a new tool for each language. To obtain the benefits of AST-based tools without their drawbacks, this paper proposes and analyzes a purely textual, separator-based merge tool that considers programming language syntactic separators, instead of just lines, when comparing and merging changes. The obtained results show that the separator-based textual approach might reduce the number of false conflicts when compared to the line-based approach. The new solution makes room for future studies and hybrid merge tools.
- Published
- 2021
- Full Text
- View/download PDF
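The separator-based idea described above can be sketched in a few lines: instead of diffing whole lines, compare token sequences split at syntactic separators, so that independent edits that happen to share a line need not be flagged as conflicting. This is an illustrative sketch, not the paper's tool; the separator set and the naive position-wise comparison are simplifying assumptions.

```python
import re

# Hypothetical separator set for a C-like language; splitting keeps the
# separators themselves as tokens.
SEPARATORS = r"([;{}(),])"

def separator_tokens(source: str) -> list[str]:
    # Drop empty/whitespace-only fragments produced by the split.
    return [t for t in re.split(SEPARATORS, source) if t.strip()]

# Two edits to the same line, touching different separator-delimited regions:
base  = "int a = 1; int b = 2;"
left  = "int a = 10; int b = 2;"   # edits the first statement
right = "int a = 1; int b = 20;"   # edits the second statement

# A line-based tool sees one changed line on both sides (a conflict);
# separator-based comparison sees disjoint changed token positions.
changed_left  = {i for i, (b, l) in enumerate(
    zip(separator_tokens(base), separator_tokens(left))) if b != l}
changed_right = {i for i, (b, r) in enumerate(
    zip(separator_tokens(base), separator_tokens(right))) if b != r}

print(changed_left & changed_right)  # empty set: no overlap, mergeable
```

A real tool would of course run a full 3-way merge over the token sequences rather than this position-wise comparison, but the granularity shift is the point.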
4. Using acceptance tests to predict files changed by programming tasks
- Author
-
Thaís Rocha, João Pedro Santos, and Paulo Borba
- Subjects
Computer science, Context (computing), Code coverage, Software quality, Test (assessment), Task (computing), Hardware and Architecture, Acceptance testing, Software engineering, Software, Information Systems
- Abstract
In a collaborative development context, conflicting code changes might compromise software quality and developers' productivity. To reduce conflicts, one could avoid the parallel execution of potentially conflicting tasks. Although promising, this strategy is challenging because it relies on predicting the file changes required to complete a task. As predicting such file changes is hard, we investigate its feasibility for BDD (Behaviour-Driven Development) projects, which write automated acceptance tests before implementing features. We develop a tool that, for a given task, statically analyzes Cucumber tests and infers test-based interfaces (files that could be executed by the tests), approximating the files that would be changed by the task. To assess the accuracy of this approximation, we measure precision and recall of test-based interfaces of 513 tasks from 18 Rails projects on GitHub. We also compare such interfaces with randomly defined interfaces, interfaces obtained by textual similarity of test specifications with past tasks, and interfaces computed by executing tests. Our results give evidence that, in the specific context of BDD, Cucumber tests might help to predict files changed by tasks. We find that the better the test coverage, the better the predictive power. A hybrid approach for computing test-based interfaces is promising.
- Published
- 2019
- Full Text
- View/download PDF
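The accuracy measures used in the study above are standard set-based precision and recall over predicted versus actually changed files. A minimal sketch (the file names below are hypothetical, not taken from the studied projects):

```python
def precision_recall(predicted: set[str], changed: set[str]) -> tuple[float, float]:
    # Precision: fraction of predicted files that were actually changed.
    # Recall: fraction of actually changed files that were predicted.
    hits = predicted & changed
    precision = len(hits) / len(predicted) if predicted else 0.0
    recall = len(hits) / len(changed) if changed else 0.0
    return precision, recall

# Hypothetical task: files reached by its acceptance tests (the test-based
# interface) versus files the developer actually changed.
predicted = {"app/models/user.rb", "app/controllers/users_controller.rb",
             "app/views/users/show.html.erb"}
changed   = {"app/models/user.rb", "app/controllers/users_controller.rb",
             "db/migrate/20190101_add_email.rb"}

p, r = precision_recall(predicted, changed)  # 2 of 3 predicted, 2 of 3 changed
```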
5. Semi-Automated Test-Case Propagation in Fork Ecosystems
- Author
-
Paulo Borba, Thorsten Berger, and Mukelabai Mukelabai
- Subjects
Test case, Process management, Market segmentation, Computer science, Quality (business), Reuse, Software quality, Fork (software development), Test (assessment)
- Abstract
Forking provides a flexible and low-cost strategy for developers to adapt an existing project to new requirements, for instance, when addressing different market segments, hardware constraints, or runtime environments. Small ecosystems of forked projects are then formed, with each project in the ecosystem maintained by a separate team or organization. The software quality of projects in fork ecosystems varies with the available resources as well as team experience and expertise, especially when the forked projects are maintained independently by teams that are unaware of the evolution of others' forks. Consequently, the quality of forked projects could be improved by reusing test cases as well as code, thereby leveraging community expertise and experience, and commonalities between the projects. We propose a novel technique for recommending and propagating test cases across forked projects. We motivate our idea with a pre-study we conducted to investigate the extent to which test cases are shared or can potentially be reused in a fork ecosystem. We also present the theoretical and practical implications underpinning the proposed idea, together with a research agenda.
- Published
- 2021
- Full Text
- View/download PDF
6. Detecting Semantic Conflicts via Automated Behavior Change Detection
- Author
-
Leuson Mario Pedro da Silva, Wardah Mahmood, Joao Moisakis, Thorsten Berger, and Paulo Borba
- Subjects
Teamwork, Unit testing, Computer science, Behavior change, Static analysis, Collaborative software development, Data science, Software, Software deployment, Program behavior
- Abstract
Branching and merging are common practices in collaborative software development. They increase developer productivity by fostering teamwork, allowing developers to independently contribute to a software project. Despite such benefits, branching and merging come at a cost: the need to merge software and to resolve merge conflicts, which often occur in practice. While modern merge techniques, such as 3-way or structured merge, can resolve many such conflicts automatically, they fail when the conflict arises not at the syntactic, but at the semantic level. Detecting such conflicts requires understanding the behavior of the software, which is beyond the capabilities of most existing merge tools. As such, semantic conflicts can only be identified and fixed with significant effort and knowledge of the changes to be merged. While semantic merge tools have been proposed, they are usually heavyweight, based on static analysis, and need explicit specifications of program behavior. In this work, we take a different route and explore the automated creation of unit tests as partial specifications to detect unwanted behavior changes (conflicts) when merging software. We systematically explore the detection of semantic conflicts through unit-test generation. Relying on a ground-truth dataset of 38 software merge scenarios, which we extracted from GitHub, we manually analyzed them and investigated whether semantic conflicts exist. Next, we apply test-generation tools to study their detection rates. We propose improvements (code transformations) and study their effectiveness, and we qualitatively analyze the detection results and propose future improvements. For example, we analyze the generated test suites for false-negative cases to understand why a conflict was not detected.
Our results demonstrate the feasibility of using test-case generation to detect semantic conflicts: the method is versatile, requires only limited deployment effort in practice, and does not need explicit behavior specifications.
- Published
- 2020
- Full Text
- View/download PDF
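The core detection idea, generating a test against one parent version and running it against the merge result, can be illustrated with a toy example. Everything below (the functions and the "generated" test) is invented for illustration; the paper relies on automated test-generation tools rather than hand-written checks.

```python
# A unit test generated against one parent version acts as a partial
# specification; if the merged program fails it, the merge changed observable
# behavior (a candidate semantic conflict) even though no textual conflict
# was reported.

def parent_left(x: int) -> int:
    # Left parent's behavior: double the input.
    return 2 * x

def merged(x: int) -> int:
    # Merge result: the right parent's change altered the left's behavior.
    return 2 * x + 1

def generated_test(fn) -> bool:
    # Stand-in for an automatically generated regression test that captured
    # parent_left's output on a sampled input.
    return fn(3) == 6

semantic_conflict = not generated_test(merged)
print(semantic_conflict)  # True: the merge broke the left parent's behavior
```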
7. Detecting Overly Strong Preconditions in Refactoring Engines
- Author
-
Márcio Ribeiro, Leopoldo Teixeira, Paulo Borba, Rohit Gheyi, Gustavo Soares, and Melina Mongiovi
- Subjects
Java, Computer science, Programming language, Usability, Test case, Transformation (function), Software bug, Code refactoring, Implementations, Software, Eclipse
- Abstract
Refactoring engines may have overly strong preconditions preventing developers from applying useful transformations. We find that 32 percent of the Eclipse and JRRT test suites are concerned with detecting overly strong preconditions. In general, developers manually write test cases, which is costly and error-prone. Our previous technique detects overly strong preconditions using differential testing; however, it needs at least two refactoring engines. In this work, we propose a technique to detect overly strong preconditions in refactoring engines without needing reference implementations. We automatically generate programs and attempt to refactor them. For each rejected transformation, we attempt to apply it again after disabling the preconditions that led the refactoring engine to reject the transformation. If the engine then applies a behavior-preserving transformation, we consider the disabled preconditions overly strong. We evaluate 10 refactorings of Eclipse and JRRT by generating 154,040 programs. We find 15 overly strong preconditions in Eclipse and 15 in JRRT. Our technique detects 11 bugs that our previous technique cannot detect, while missing 5 bugs. We also evaluate the technique by replacing the programs generated by JDolly with the input programs of the Eclipse and JRRT test suites. In this setting, our technique detects 14 overly strong preconditions in Eclipse and 4 in JRRT.
- Published
- 2018
- Full Text
- View/download PDF
8. Understanding semi-structured merge conflict characteristics in open-source Java projects
- Author
-
Paulo Borba, Guilherme Cavalcanti, and Paola Accioly
- Subjects
Copying, Java, Computer science, Collaborative software development, Data science, Syntax, Open source, Empirical research, Merge (version control), Software
- Abstract
Empirical studies show that merge conflicts frequently occur, impairing developers' productivity, since merging conflicting contributions might be a demanding and tedious task. However, the structure of changes that lead to conflicts has not been studied yet. Understanding the underlying structure of conflicts, and the involved syntactic language elements, might shed light on how to better avoid merge conflicts. To this end, in this paper we derive a catalog of conflict patterns expressed in terms of the structure of code changes that lead to merge conflicts. We focus on conflicts reported by a semistructured merge tool that exploits knowledge about the underlying syntax of the artifacts. This way, we avoid analyzing a large number of spurious conflicts often reported by typical line-based merge tools. To assess the occurrence of such patterns in different systems, we conduct an empirical study reproducing 70,047 merges from 123 GitHub Java projects. Our results show that most semistructured merge conflicts in our sample happen because developers independently edit the same or consecutive lines of the same method. However, the probability of creating a merge conflict is approximately the same when editing methods, class fields, and modifier lists. Furthermore, we notice that most conflicting merge scenarios, and most merge conflicts, involve more than two developers, and that copying and pasting pieces of code, or even entire files, across different repositories is a common practice and cause of conflicts. Finally, we discuss how our results reveal the need for new research studies and suggest potential improvements to tools supporting collaborative software development.
- Published
- 2017
- Full Text
- View/download PDF
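The dominant pattern reported above, independent edits to the same or consecutive lines of a method, suggests a simple conflict predicate. The sketch below is our own illustration of that pattern, not the catalog's formal definition:

```python
def edits_conflict(left_lines: set[int], right_lines: set[int]) -> bool:
    # Flag a conflict when the two contributions touch the same or
    # consecutive lines (of the same method, in the paper's setting).
    expanded = {n + d for n in left_lines for d in (-1, 0, 1)}
    return bool(expanded & right_lines)

# Edits to consecutive lines of one method conflict; distant edits do not.
print(edits_conflict({10, 11}, {12}))  # True: line 12 is adjacent to 11
print(edits_conflict({10}, {40}))      # False: edits are far apart
```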
9. Evaluating and improving semistructured merge
- Author
-
Paulo Borba, Paola Accioly, and Guilherme Cavalcanti
- Subjects
Correctness, Computer science, False positives and false negatives, Software merging, Open source, Syntactic structure, Safety, Risk, Reliability and Quality, Merge (version control), Software
- Abstract
While unstructured merge tools rely only on textual analysis to detect and resolve conflicts, semistructured merge tools go further by partially exploiting the syntactic structure and semantics of the involved artifacts. Previous studies compare these merge approaches with respect to the number of reported conflicts, showing, for most projects and merge situations, a reduction in favor of semistructured merge. However, these studies do not investigate whether this reduction actually leads to integration effort reduction (productivity) without negative impact on the correctness of the merging process (quality). To analyze that, and to better understand how merge tools could be improved, in this paper we reproduce more than 30,000 merges from 50 open source projects, identifying conflicts incorrectly reported by one approach but not by the other (false positives), and conflicts correctly reported by one approach but missed by the other (false negatives). Our results and complementary analysis indicate that, in the studied sample, the number of false positives is significantly reduced when using semistructured merge. We also find evidence that its false positives are easier to analyze and resolve than those reported by unstructured merge. However, we find no evidence that semistructured merge leads to fewer false negatives, and we argue that they are harder to detect and resolve than unstructured merge false negatives. Driven by these findings, we implement an improved semistructured merge tool that further combines both approaches to reduce the false positives and false negatives of semistructured merge. We find evidence that the improved tool, when compared to unstructured merge in our sample, reduces the number of reported conflicts by half, has no additional false positives, has at least 8% fewer false negatives, and is not prohibitively slower.
- Published
- 2017
- Full Text
- View/download PDF
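The false-positive/false-negative comparison described above reduces to set operations between the conflicts a tool reports and a ground truth of conflicts that genuinely require manual resolution. An illustrative sketch with made-up merge-scenario identifiers:

```python
def classify(reported: set[str], actual: set[str]) -> dict[str, set[str]]:
    # reported: conflicts a merge tool flags across a set of merge scenarios;
    # actual: conflicts that truly require manual resolution (ground truth).
    return {
        "true_positives":  reported & actual,
        "false_positives": reported - actual,  # spurious conflicts
        "false_negatives": actual - reported,  # missed conflicts
    }

# Made-up scenario identifiers for illustration:
ground_truth   = {"s1", "s4"}
unstructured   = classify({"s1", "s2", "s3"}, ground_truth)  # noisy tool
semistructured = classify({"s1"}, ground_truth)              # quieter tool

# The quieter tool trades false positives (s2, s3) for a false negative (s4).
```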
10. An idiom to represent data types in Alloy
- Author
-
Rohit Gheyi, Márcio Ribeiro, Augusto Sampaio, and Paulo Borba
- Subjects
Interpretation (logic), Computer science, Programming language, Context (language use), Type (model theory), Data type, Alloy Analyzer, Unified Modeling Language, Axiom, Language construct, Software, Information Systems
- Abstract
Context: It is common to consider Alloy signatures or UML classes as data types that have a canonical fixed interpretation: the elements of the type correspond to terms recursively generated by the type constructors. However, these language constructs resemble data types but, strictly speaking, they are not. Objective: In this article, we propose an idiom to specify data types in Alloy. Method: We compare our approach to others in the context of checking data refinement using the Alloy Analyzer tool. Results: Some previous studies do not include the generation axiom and may perform unsound analysis. Other studies recommend optimizations to overcome a limitation in the Alloy Analyzer tool. Conclusion: The problem is not related to the tool but to the way data types must be represented in Alloy. This study shows the importance of using automated analyses to test translations between different language constructs.
- Published
- 2017
- Full Text
- View/download PDF
11. Safe Evolution of Product Lines Using Configuration Knowledge Laws
- Author
-
Paulo Borba, Leopoldo Teixeira, and Rohit Gheyi
- Subjects
Algebraic laws, Set (abstract data type), Automated theorem proving, Computer science, Formal semantics, Product line, Software product line
- Abstract
When evolving a software product line, it is often important to do so in a safe way, ensuring that the resulting product line remains well-formed and that the behavior of existing products is not affected. To ensure this, one usually has to analyze the different artifacts that constitute a product line, such as feature models, configuration knowledge, and assets. Manually analyzing these artifacts can be time-consuming and error-prone, since a product line might consist of thousands of products. Existing work shows that a non-negligible number of changes performed in commits deal only with the configuration knowledge, that is, the mapping between features and assets. In this paper, we therefore propose a set of algebraic laws, corresponding to bi-directional transformations for configuration knowledge models, that we can use to justify safe evolution of product lines when only the configuration knowledge model changes. Using a theorem prover, we proved all laws sound with respect to a formal semantics. We also present a case study where we use these laws to justify safe evolution scenarios of a non-trivial industrial software product line.
- Published
- 2020
- Full Text
- View/download PDF
12. Semistructured Merge in JavaScript Systems
- Author
-
Sergio Soares, Alberto Trindade Tavares, Paulo Borba, and Guilherme Cavalcanti
- Subjects
Correctness, Computer science, Programming language, JavaScript, Scripting language, Merge algorithm, Merge (version control)
- Abstract
Industry widely uses unstructured merge tools that rely on textual analysis to detect and resolve conflicts between code contributions. Semistructured merge tools go further by partially exploring the syntactic structure of code artifacts, and, as a consequence, obtaining significant merge accuracy gains for Java-like languages. To understand whether semistructured merge and the observed gains generalize to other kinds of languages, we implement two semistructured merge tools for JavaScript, and compare them to an unstructured tool. We find that current semistructured merge algorithms and frameworks are not directly applicable for scripting languages like JavaScript. By adapting the algorithms, and studying 10,345 merge scenarios from 50 JavaScript projects on GitHub, we find evidence that our JavaScript tools report fewer spurious conflicts than unstructured merge, without compromising the correctness of the merging process. The gains, however, are much smaller than the ones observed for Java-like languages, suggesting that semistructured merge advantages might be limited for languages that allow both commutative and non-commutative declarations at the same syntactic level.
- Published
- 2019
- Full Text
- View/download PDF
13. The Impact of Structure on Software Merging: Semistructured Versus Structured Merge
- Author
-
Georg Seibt, Paulo Borba, Guilherme Cavalcanti, and Sven Apel
- Subjects
Software merging, Information retrieval, Computer science, Merge (version control)
- Abstract
Merge conflicts often occur when developers concurrently change the same code artifacts. While state-of-practice unstructured merge tools (e.g., Git merge) try to automatically resolve merge conflicts based on textual similarity, semistructured and structured merge tools go further by exploiting the syntactic structure and semantics of the artifacts involved. Although there is evidence that semistructured merge has significant advantages over unstructured merge, and that structured merge reports significantly fewer conflicts than unstructured merge, it is unknown how semistructured merge compares with structured merge. To help developers decide which kind of tool to use, we compare semistructured and structured merge in an empirical study by reproducing more than 40,000 merge scenarios from more than 500 projects. In particular, we assess how often the two merge strategies report different results: we identify conflicts incorrectly reported by one but not by the other (false positives), and conflicts correctly reported by one but missed by the other (false negatives). Our results show that semistructured and structured merge differ in 24% of the scenarios with conflicts. Semistructured merge reports more false positives, whereas structured merge has more false negatives. Finally, we find that adapting a semistructured merge tool to resolve a particular kind of conflict makes semistructured and structured merge even closer.
- Published
- 2019
- Full Text
- View/download PDF
14. Improving the prediction of files changed by programming tasks
- Author
-
Thaís Rocha, João Pedro Santos, and Paulo Borba
- Subjects
Development environment, Recall, Computer science, Acceptance testing, Precision and recall, Merge (version control), Software quality
- Abstract
Integration conflicts often damage software quality and developers' productivity in a collaborative development environment. To reduce merge conflicts, we could avoid asking developers to execute potentially conflicting tasks in parallel, as long as we can predict the files to be changed by each task. As manually predicting that is hard, the TAITI tool tries to do so in the context of BDD (Behaviour-Driven Development) projects, by statically analyzing the automated acceptance tests that validate each task. TAITI computes the set of files that might be executed by the tests of a task (a so-called test-based task interface), approximating the files that developers will change when performing the task. Although TAITI performs better than a random task interface, there is room for accuracy improvements. In this paper, we extend the interfaces computed by TAITI by including the dependencies of the application files reached by the task tests. To understand the potential benefits of our extension, we evaluate the precision and recall of 60 task interfaces from 8 Rails GitHub projects. The results bring evidence that the extended interface improves recall while slightly compromising precision.
- Published
- 2019
- Full Text
- View/download PDF
15. Assessing fine-grained feature dependencies
- Author
-
Márcio Ribeiro, Baldoino Fonseca, Rohit Gheyi, Flávio Medeiros, Paulo Borba, and Iran Rodrigues
- Subjects
Parsing, Source code, Computer science, Programming language, Context (language use), Task (project management), Preprocessor, Function (engineering), Software, Information Systems
- Abstract
Context: Maintaining software families is not a trivial task. Developers commonly introduce bugs when they do not consider existing dependencies among features. When feature implementations share program elements, such as variables and functions, inadvertently using these elements may result in bugs. In this context, previous work focuses only on the occurrence of intraprocedural dependencies, that is, when features share program elements within a function. We still lack studies investigating dependencies that transcend the boundaries of a function, even though these cases might cause bugs as well. Objective: This work assesses to what extent feature dependencies exist in actual software families, answering research questions regarding the occurrence of intraprocedural, global, and interprocedural dependencies and their characteristics. Method: We perform an empirical study covering 40 software families of different domains and sizes. We use a variability-aware parser to analyze the families' source code while retaining all variability information. Results: Intraprocedural and interprocedural feature dependencies are common in the families we analyze: more than half of the functions with preprocessor directives have intraprocedural dependencies, while over a quarter of all functions have interprocedural dependencies. The median depth of interprocedural dependencies is 9. Conclusion: Given that these dependencies are rather common, there is a need for tools and techniques to raise developers' awareness, in order to minimize or avoid problems when maintaining code in the presence of such dependencies. Problems regarding interprocedural dependencies with high depths might be harder to detect and fix.
- Published
- 2016
- Full Text
- View/download PDF
16. Understanding predictive factors for merge conflicts
- Author
-
Klissiomara L. Dias, Paulo Borba, and Marcos Barreto
- Subjects
Computer science, Modular design, Python (programming language), Data science, Predictive power, Project management, Merge (version control), Software, Predictive modelling, Information Systems
- Abstract
Context: Merge conflicts often occur when developers change the same code artifacts. Such conflicts might be frequent in practice, and resolving them is costly and error-prone. Objective: To minimize these problems by reducing merge conflicts, it is important to better understand how conflict occurrence is affected by technical and organizational factors. Method: With that aim, we investigate seven factors related to the modularity, size, and timing of developers' contributions. To do so, we reproduce and analyze 73,504 merge scenarios in GitHub repositories of Ruby and Python MVC projects. Results: We find evidence that the likelihood of merge conflict occurrence significantly increases when the contributions to be merged are not modular, in the sense that they involve files from the same MVC slice (related model, view, and controller files). We also find that bigger contributions involving more developers, commits, and changed files are more likely associated with merge conflicts. Regarding the timing factors, we observe that contributions developed over longer periods of time are more likely associated with conflicts. No evaluated factor shows predictive power concerning both the number of merge conflicts and the number of files with conflicts. Conclusion: Our results could be used to derive recommendations for development teams and merge conflict prediction models. Project management and assistive tools could benefit from these models.
- Published
- 2020
- Full Text
- View/download PDF
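A factor analysis like the one above can be approximated, at its simplest, by comparing conflict rates across groups of merge scenarios that do or do not exhibit a factor (e.g., contributions touching the same MVC slice). The scenario data below is fabricated for illustration:

```python
from collections import defaultdict

def conflict_rate_by_factor(scenarios):
    # scenarios: (factor_value, had_conflict) pairs, e.g. whether the two
    # merged contributions touched files of the same MVC slice.
    counts = defaultdict(lambda: [0, 0])  # factor -> [conflicts, total]
    for factor, had_conflict in scenarios:
        counts[factor][0] += int(had_conflict)
        counts[factor][1] += 1
    return {factor: conflicts / total
            for factor, (conflicts, total) in counts.items()}

# Fabricated merge scenarios for illustration:
scenarios = [
    ("same_slice", True), ("same_slice", True), ("same_slice", False),
    ("different_slices", False), ("different_slices", True),
    ("different_slices", False), ("different_slices", False),
]
rates = conflict_rate_by_factor(scenarios)
# In this toy data, same-slice contributions conflict more often.
```

A real study would of course pair such rates with significance tests and regression models before drawing conclusions about predictive power.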
17. Understanding Semi-structured merge conflict characteristics in open-source Java projects (journal-first abstract)
- Author
-
Paulo Borba, Paola Accioly, and Guilherme Cavalcanti
- Subjects
Open source, Java, Computer science, Data science, Merge (version control)
- Abstract
In a collaborative development environment, tasks are commonly assigned to developers working independently of each other. As a result, when trying to integrate these contributions, one might have to deal with conflicting changes. Such conflicts might be detected when merging contributions (merge conflicts), when building the system (build conflicts), or when running tests (semantic conflicts). Previous studies show that such conflicts occur frequently and impair developers' productivity, as understanding and solving them is a demanding and tedious task that might introduce defects. However, despite the existing evidence in the literature, the structure of changes that lead to conflicts has not been studied yet. Understanding the underlying structure of conflicts, and the involved syntactic language elements, might shed light on how to better avoid them. For example, awareness tools that inform users about ongoing parallel changes, such as Syde and Palantir, could benefit from knowing the most common conflict patterns to become more efficient. With that aim, in this paper we focus on understanding the underlying structure of merge conflicts.
- Published
- 2018
- Full Text
- View/download PDF
18. Analyzing conflict predictors in open-source Java projects
- Author
-
Paola Accioly, Guilherme Cavalcanti, Paulo Borba, and Leuson Mario Pedro da Silva
- Subjects
Recall, Java, Computer science, Open source software, Data science, Empirical research, Precision and recall, Empirical evidence, Merge (version control)
- Abstract
In collaborative development environments, integration conflicts occur frequently. To alleviate this problem, different awareness tools have been proposed to alert developers about potential conflicts before they become too complex. However, there is not much empirical evidence supporting the strategies used by these tools. Learning about what types of changes most likely lead to conflicts might help to derive more appropriate requirements for early conflict detection, and suggest improvements to existing conflict detection tools. To bring such evidence, in this paper we analyze the effectiveness of two types of code changes as conflict predictors, namely editions to the same method, and editions to directly dependent methods. We conduct an empirical study analyzing part of the development history of 45 Java projects from GitHub and Travis CI, including 5,647 merge scenarios, to compute the precision and recall of the aforementioned conflict predictors. Our results indicate that the predictors combined have a precision of 57.99% and a recall of 82.67%. Moreover, we conduct a manual analysis which provides insights about strategies that could further increase the precision and the recall.
- Published
- 2018
- Full Text
- View/download PDF
19. Empirical assessment of two approaches for specifying software product line use case scenarios
- Author
-
Paulo Borba, Cristiano Ferraz, Paola Accioly, and Rodrigo Bonifácio
- Subjects
Source code, Requirements engineering, Computer science, Cohesion (computer science), Use case, Software product line, Modeling and Simulation, Software
- Abstract
Modularity benefits, including the independent maintenance and comprehension of individual modules, have been widely advocated. However, empirical assessments of those benefits have mostly focused on source code, so the relevance of modularity to earlier artifacts (such as requirements and design models) is still not so clear. In this paper, we use a multimethod technique, including designed experiments, to empirically evaluate the benefits of modularity in the context of two approaches for specifying product line use case scenarios: PLUSS and MSVCM. The first uses an annotative approach for specifying variability, whereas the second relies on aspect-oriented constructs for separating common and variant scenario specifications. After evaluating these approaches through the specifications of several systems, we find that MSVCM reduces feature scattering and improves scenario cohesion. These results suggest that evolving a product line specification using MSVCM requires only localized changes. On the other hand, the results of six experiments reveal that MSVCM requires more time to derive the product line specifications and, in contrast with the modularity results, reduces the time to evolve a product line specification only when the subjects have been well trained and are used to the task of evolving product line specifications.
- Published
- 2015
- Full Text
- View/download PDF
20. Coevolution of variability models and related software artifacts
- Author
-
Andrzej Wąsowski, Paulo Borba, Nicolas Dintzner, Jianmei Guo, Leopoldo Teixeira, Leonardo Passos, Sven Apel, and Krzysztof Czarnecki
- Subjects
Source code ,Computer science ,business.industry ,media_common.quotation_subject ,020207 software engineering ,Linux kernel ,02 engineering and technology ,Artifact (software development) ,Variation (game tree) ,computer.software_genre ,Machine learning ,Personalization ,Kernel (image processing) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Data mining ,Software system ,Artificial intelligence ,business ,computer ,Software ,Coevolution ,media_common - Abstract
Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e.g., system options) and the constraints they impose. The explicit representation of features allows them to be referenced in different variation points across different artifacts, enabling the latter to vary according to specific feature selections. In such settings, the evolution of variability models interplays with the evolution of related artifacts, requiring the two to evolve together, or coevolve. Interestingly, little is known about how such coevolution occurs in real-world systems, as existing research has focused mostly on variability evolution as it happens in variability models only. Furthermore, existing techniques supporting variability evolution are usually validated with randomly generated variability models or evolution scenarios that do not stem from practice. As the community lacks a deep understanding of how variability evolution occurs in real-world systems and how it relates to the evolution of different kinds of software artifacts, it is not surprising that industry reports existing tools and solutions ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source code. From the analysis of the patterns, we report findings concerning evolution principles found in the kernel, and we reveal deficiencies in existing tools and theory when handling changes captured by our patterns.
- Published
- 2015
- Full Text
- View/download PDF
21. Should We Replace Our Merge Tools?
- Author
-
Paola Accioly, Guilherme Cavalcanti, and Paulo Borba
- Subjects
Correctness ,Information retrieval ,Theoretical computer science ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,Merge (version control) - Abstract
While unstructured merge tools try to automatically resolve merge conflicts via textual similarity, semistructured merge tools try to go further by partially exploiting the syntactic structure and semantics of the involved artefacts. Previous studies compare these merge approaches with respect to the number of reported conflicts, showing, for most projects and merge situations, a reduction in favor of semistructured merge. However, these studies do not investigate whether this reduction actually leads to integration effort reduction (Productivity) without negative impact on the correctness of the merging process (Quality). To analyze this, and to better understand how these tools could be improved, we propose empirical studies to identify spurious conflicts reported by one approach but not by the other, and interference reported as conflict by one approach but missed by the other.
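The contrast between the two approaches can be illustrated with a minimal sketch. The representations below (whole-text merge vs. a method-to-body map) are hypothetical simplifications, not how real merge tools work internally: they show why a line-based view can report a conflict that a syntax-aware view resolves cleanly.

```python
# Minimal sketch contrasting text-level and method-level three-way merge.
# Representations are illustrative simplifications of real merge tools.

def unstructured_merge(base, left, right):
    """Whole-text three-way rule: conflict if both sides changed the text."""
    if left == right:
        return left, False
    if left == base:
        return right, False
    if right == base:
        return left, False
    return None, True  # both sides changed differently: conflict

def semistructured_merge(base, left, right):
    """Method-granularity merge: apply the three-way rule per method."""
    merged, conflict = {}, False
    for m in set(base) | set(left) | set(right):
        body, c = unstructured_merge(base.get(m), left.get(m), right.get(m))
        merged[m] = body
        conflict = conflict or c
    return merged, conflict

# Two developers edit *different* methods of the same class.
base  = {"area": "return 0;",     "name": "return null;"}
left  = {"area": "return w * h;", "name": "return null;"}
right = {"area": "return 0;",     "name": "return id;"}

# Seen as one text blob, both sides changed it: a spurious conflict.
blob = lambda d: "\n".join(d.values())
_, conflict_text = unstructured_merge(blob(base), blob(left), blob(right))

# Seen per method, the edits do not overlap and merge cleanly.
merged, conflict_ast = semistructured_merge(base, left, right)
print(conflict_text, conflict_ast)  # True False
```

The studies proposed above ask, beyond such conflict counts, whether the syntax-aware resolution is actually correct (it may silently miss interference) and whether the reduction translates into less integration effort.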
- Published
- 2017
- Full Text
- View/download PDF
22. Making refactoring safer through impact analysis
- Author
-
Leopoldo Teixeira, Gustavo Soares, Paulo Borba, Melina Mongiovi, and Rohit Gheyi
- Subjects
Correctness ,Transformation (function) ,Code refactoring ,Computer science ,Programming language ,Semantics (computer science) ,SAFER ,Test suite ,Change impact analysis ,computer.software_genre ,computer ,Software ,Generator (mathematics) - Abstract
Currently, most developers have to apply manual steps and use test suites to improve confidence that transformations applied to object-oriented (OO) and aspect-oriented (AO) programs are correct. However, manual reasoning is not simple, due to the nontrivial semantics of OO and AO languages. Moreover, most refactoring implementations contain a number of bugs, since it is difficult to establish all conditions required for a transformation to be behavior preserving. In this article, we propose a tool (SafeRefactorImpact) that analyzes a transformation and generates tests only for the methods impacted by it, as identified by our change impact analyzer (Safira). We compare SafeRefactorImpact with our previous tool (SafeRefactor) with respect to correctness, performance, number of methods passed to the automatic test suite generator, change coverage, and number of relevant tests generated, across 45 transformations. SafeRefactorImpact identifies behavioral changes undetected by SafeRefactor. Moreover, it reduces the number of methods passed to the test suite generator. Finally, SafeRefactorImpact achieves better change coverage in larger subjects and generates more relevant tests than SafeRefactor.
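The impact analysis that narrows down which methods need tests can be approximated by reverse reachability over a call graph: a method is impacted if it was changed or transitively calls a changed method. The call graph and method names below are hypothetical; Safira itself works on real program artifacts.

```python
# Sketch of change impact analysis: a method is impacted by a transformation
# if it is a changed method or (transitively) calls one. The call graph and
# names are hypothetical, illustrating the idea behind tools like Safira.

def impacted_methods(call_graph, changed):
    """call_graph maps caller -> set of callees; returns all impacted methods."""
    # Invert the graph so we can walk from changed methods to their callers.
    callers = {}
    for caller, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(caller)
    impacted, stack = set(changed), list(changed)
    while stack:
        m = stack.pop()
        for caller in callers.get(m, ()):
            if caller not in impacted:
                impacted.add(caller)
                stack.append(caller)
    return impacted

call_graph = {
    "main": {"service"},
    "service": {"util"},
    "report": {"format"},
}
# Only methods reaching the changed one are handed to the test generator.
print(sorted(impacted_methods(call_graph, {"util"})))
# ['main', 'service', 'util']
```

This is why the tool passes fewer methods to the test suite generator: "report" and "format" are provably unaffected by a change to "util" and need no regenerated tests.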
- Published
- 2014
- Full Text
- View/download PDF
23. A design rule language for aspect-oriented programming
- Author
-
Márcio Ribeiro, Carlos Eduardo Pontual, Fernando Castor, Rodrigo Bonifácio, Alberto Costa Neto, and Paulo Borba
- Subjects
COLA (software architecture) ,Class (computer programming) ,Modularity (networks) ,Computer science ,Programming language ,Process (engineering) ,Aspect-oriented programming ,AspectJ ,Specification language ,computer.software_genre ,Hardware and Architecture ,Compiler ,computer ,Software ,Information Systems ,computer.programming_language - Abstract
HighlightsWe present a design rule specification language for aspect-oriented systems.We explore its benefits to supporting the modular development of classes and aspects.We discuss how our language improves crosscutting modularity without breaking class modularity.We present a Compiler for LSD and AspectJ (COLA), a tool to automate design rules checking.We evaluate it using a real case study and compare it with other approaches. Aspect-oriented programming is known as a technique for modularizing crosscutting concerns. However, constructs aimed to support crosscutting modularity might actually break class modularity. As a consequence, class developers face changeability, parallel development and comprehensibility problems, because they must be aware of aspects whenever they develop or maintain a class. At the same time, aspects are vulnerable to changes in classes, since there is no contract specifying the points of interaction amongst these elements. These problems can be mitigated by using adequate design rules between classes and aspects. We present a design rule specification language and explore its benefits since the initial phases of the development process, specially with the aim of supporting modular development of classes and aspects. We discuss how our language improves crosscutting modularity without breaking class modularity. We evaluate it using a real case study and compare it with other approaches.
- Published
- 2013
- Full Text
- View/download PDF
24. SPL LIFT
- Author
-
Paulo Borba, Társis Tolêdo, Mira Mezini, Eric Bodden, Claus Brabrand, and Márcio Ribeiro
- Subjects
Class (computer programming) ,Java ,Computer science ,business.industry ,Programming language ,Reuse ,computer.software_genre ,Base (topology) ,Computer Graphics and Computer-Aided Design ,Software ,Product (mathematics) ,Code (cryptography) ,Software product line ,business ,computer ,computer.programming_language - Abstract
A software product line (SPL) encodes a potentially large variety of software products as variants of some common code base. Up until now, re-using traditional static analyses for SPLs was virtually intractable, as it required programmers to generate and analyze all products individually. In this work, however, we show how an important class of existing inter-procedural static analyses can be transparently lifted to SPLs. Without requiring programmers to change a single line of code, our approach SPLLIFT automatically converts any analysis formulated for traditional programs within the popular IFDS framework for inter-procedural, finite, distributive, subset problems to an SPL-aware analysis formulated in the IDE framework, a well-known extension to IFDS. Using a full implementation based on Heros, Soot, CIDE and JavaBDD, we show that with SPLLIFT one can reuse IFDS-based analyses without changing a single line of code. Through experiments using three static analyses applied to four Java-based product lines, we were able to show that our approach produces correct results and outperforms the traditional approach by several orders of magnitude.
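The key idea of lifting, analyzing all products at once by tagging each dataflow fact with the configurations under which it holds, can be sketched as follows. The explicit configuration sets (standing in for the BDDs real tools use), the tiny taint analysis, and the program encoding are illustrative assumptions, not SPLLIFT's implementation.

```python
from itertools import product

# Sketch of "lifting" a dataflow analysis to a product line: each fact
# carries the set of configurations under which it holds, so one pass
# covers all products. Explicit config sets stand in for the BDDs used
# by real tools; the taint analysis and program are hypothetical.

FEATURES = ["LOG", "NET"]
ALL_CONFIGS = {frozenset(f for f, on in zip(FEATURES, bits) if on)
               for bits in product([0, 1], repeat=len(FEATURES))}

# Program: list of (feature_guard, statement). guard=None means always present.
# Statements: ("taint", var) or ("copy", dst, src).
program = [
    (None,  ("taint", "x")),
    ("LOG", ("copy", "y", "x")),
    ("NET", ("copy", "z", "y")),
]

def lifted_taint(program):
    """Map each variable to the configurations under which it is tainted."""
    tainted = {}  # var -> set of configs
    for guard, stmt in program:
        configs = {c for c in ALL_CONFIGS if guard is None or guard in c}
        if stmt[0] == "taint":
            tainted.setdefault(stmt[1], set()).update(configs)
        else:  # copy dst, src: dst tainted where src is tainted AND stmt present
            _, dst, src = stmt
            tainted.setdefault(dst, set()).update(
                tainted.get(src, set()) & configs)
    return tainted

result = lifted_taint(program)
# x is tainted in every product; z only where both LOG and NET are enabled.
print(len(result["x"]), result["z"])
```

One such pass replaces 2^n per-product analyses, which is where the orders-of-magnitude speedup reported above comes from.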
- Published
- 2013
- Full Text
- View/download PDF
25. Safe composition of configuration knowledge-based software product lines
- Author
-
Paulo Borba, Leopoldo Teixeira, and Rohit Gheyi
- Subjects
Semantics (computer science) ,Programming language ,Computer science ,business.industry ,Scale (chemistry) ,Feature extraction ,Propositional calculus ,computer.software_genre ,Feature model ,Alloy Analyzer ,Software ,Feature (computer vision) ,Software_SOFTWAREENGINEERING ,Hardware and Architecture ,Formal specification ,Product (mathematics) ,Code (cryptography) ,Use case ,Software product line ,Software engineering ,business ,computer ,Information Systems - Abstract
Mistakes made when implementing or specifying the models of a Software Product Line (SPL) can result in ill-formed products - the safe composition problem. This problem can hinder productivity and may be hard to detect, since SPLs can have thousands of products. In this article, we propose a language-independent approach for verifying safe composition of SPLs with dedicated Configuration Knowledge models. We translate the feature model and Configuration Knowledge into propositional logic and use the Alloy Analyzer to perform the verification. To provide evidence for the generality of our approach, we instantiate it in different compositional settings, dealing with different kinds of assets such as use case scenarios and Eclipse RCP components. We analyze both the code and the requirements of a larger scale SPL, finding problems that affect thousands of products in minutes. Moreover, our evaluation suggests that the analysis time grows linearly with the number of products in the analyzed SPLs.
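The safe composition check, that every product allowed by the feature model maps, via the configuration knowledge, to a well-formed set of assets, can be sketched by brute-force enumeration. The feature model, configuration knowledge, asset names, and dependency relation below are all illustrative; the approach described above instead encodes the check in propositional logic for the Alloy Analyzer, which scales far beyond enumeration.

```python
from itertools import product as cartesian

# Brute-force sketch of safe composition: for every valid feature selection,
# check that the assets selected by the configuration knowledge (CK) have all
# their dependencies satisfied. All names here are hypothetical.

FEATURES = ["BASE", "GUI", "THEMES"]

def valid(config):
    """Feature model: BASE is mandatory; THEMES requires GUI."""
    return "BASE" in config and ("THEMES" not in config or "GUI" in config)

# CK: feature -> assets to include when the feature is selected.
CK = {"BASE": {"Core.java"}, "GUI": {"Window.java"}, "THEMES": {"Theme.java"}}

# Asset-level dependencies: Theme.java references a class in Window.java.
DEPENDS = {"Theme.java": {"Window.java"}, "Window.java": {"Core.java"}}

def ill_formed_products():
    """Return valid configurations whose product misses a required asset."""
    bad = []
    for bits in cartesian([0, 1], repeat=len(FEATURES)):
        config = {f for f, on in zip(FEATURES, bits) if on}
        if not valid(config):
            continue
        assets = set().union(*(CK[f] for f in config)) if config else set()
        missing = {d for a in assets for d in DEPENDS.get(a, ())} - assets
        if missing:
            bad.append(config)
    return bad

print(ill_formed_products())  # [] -> every valid product is well formed
```

Dropping the "THEMES requires GUI" constraint from `valid` would admit a product containing Theme.java but not Window.java, exactly the kind of ill-formed product the analysis is meant to catch before it ships.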
- Published
- 2013
- Full Text
- View/download PDF
26. A theory of software product line refinement
- Author
-
Rohit Gheyi, Leopoldo Teixeira, and Paulo Borba
- Subjects
Soundness ,General Computer Science ,Basis (linear algebra) ,Refactoring ,Computer science ,Programming language ,Software evolution ,Software product lines ,Refinement ,computer.software_genre ,Feature model ,Theoretical Computer Science ,Transformation (function) ,Code refactoring ,Product (mathematics) ,Software product line ,computer ,Computer Science(all) - Abstract
To safely evolve a software product line, it is important to have a notion of product line refinement that assures behavior preservation of the original product line products. So in this article we present a language independent theory of product line refinement, establishing refinement properties that justify stepwise and compositional product line evolution. Moreover, we instantiate our theory with the formalization of specific languages for typical product lines artifacts, and then introduce and prove soundness of a number of associated product line refinement transformation templates. These templates can be used to reason about specific product lines and as a basis to derive comprehensive product line refinement catalogues.
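The core refinement notion, that every product of the original line is matched, behavior-wise, by some product of the evolved line, can be sketched as a simple set check. Reducing a product to an opaque "behavior" value is an illustrative simplification of the language-independent theory.

```python
# Sketch of product line refinement: the evolved line refines the original
# when every original product has a behaviorally equivalent product in the
# evolved line. Products are reduced to opaque behavior values here, an
# illustrative simplification of the language-independent theory.

def refines(original, evolved):
    """Each original product's behavior is preserved by some evolved product."""
    return all(any(b1 == b2 for b2 in evolved) for b1 in original)

# Behaviors of the products each product line can generate.
pl1 = {"basic", "basic+logging"}
pl2 = {"basic", "basic+logging", "basic+net"}  # adds a product: refinement
pl3 = {"basic+net"}                            # drops products: not a refinement

print(refines(pl1, pl2), refines(pl1, pl3))  # True False
```

Note the relation is deliberately one-directional: safe evolution may add products, but must not break or remove existing ones, which is what justifies stepwise, compositional evolution.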
- Published
- 2012
- Full Text
- View/download PDF
27. Modularity analysis of use case implementations
- Author
-
Paulo Borba and Fernanda d'Amorim
- Subjects
Object-oriented programming ,Theoretical computer science ,business.industry ,Computer science ,Aspect-oriented programming ,Multitier architecture ,Separation of concerns ,Empirical process (process control model) ,Cohesion (computer science) ,Software metric ,Hardware and Architecture ,Modular programming ,Information system ,Concurrent computing ,Use case ,Software engineering ,business ,Implementation ,Software ,Information Systems - Abstract
A component-based decomposition can result in implementations having use case code tangled with other concerns and scattered across components. Modularity mechanisms such as aspects, mixins, and virtual classes have been proposed to address this kind of problem. One can use such mechanisms to group together code related to a single use case. This paper quantitatively analyzes the impact of this kind of use case modularization. We apply one specific technique, aspect-oriented programming, to modularize the use case implementations of two information systems that conform to the layered architecture pattern. We extract traditional and contemporary metrics - including cohesion, coupling, and separation of concerns - to analyze modularity in terms of quality attributes such as changeability, support for independent development, and pluggability. Our findings indicate that the results of a given modularity analysis depend on factors beyond the chosen system, metrics, and the applied modularity technique.
- Published
- 2012
- Full Text
- View/download PDF
28. A Static Semantics for Alloy and its Impact in Refactorings
- Author
-
Paulo Borba, Tiago Massoni, and Rohit Gheyi
- Subjects
Theoretical computer science ,General Computer Science ,Modeling language ,Computer science ,business.industry ,Programming language ,Formal semantics (linguistics) ,Software development ,computer.software_genre ,Semantics ,Operational semantics ,object models ,Theoretical Computer Science ,Automated theorem proving ,theorem proving ,Code refactoring ,type system ,Semantics of logic ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Prototype Verification System ,business ,Failure semantics ,computer ,refactoring ,Computer Science(all) - Abstract
Refactorings are usually proposed in an ad hoc way because it is difficult to prove that they are sound with respect to a formal semantics, so the absence of type errors or semantic changes is not guaranteed. Consequently, developers using refactoring tools must rely on compilation and tests to ensure type correctness and semantics preservation, respectively, which may not be satisfactory for critical software development. In this paper, we formalize a static semantics for Alloy, a formal object-oriented modeling language, and encode it in the Prototype Verification System (PVS). The formalization of the static semantics can be useful for specifying and proving that transformations in general (not only refactorings) do not introduce type errors, as we show here.
- Published
- 2007
- Full Text
- View/download PDF
29. Assessing Semistructured Merge in Version Control Systems: A Replicated Experiment
- Author
-
Paulo Borba, Paola Accioly, and Guilherme Cavalcanti
- Subjects
Correctness ,Information retrieval ,Grammar ,Java ,Computer science ,business.industry ,media_common.quotation_subject ,Replicate ,computer.software_genre ,Conflict reduction ,Software ,Control system ,Data mining ,business ,computer ,Merge (version control) ,media_common ,computer.programming_language - Abstract
Context: To reduce the integration effort arising from conflicting changes in collaborative software development tasks, unstructured merge tools try to automatically resolve part of the conflicts via textual similarity, whereas structured and semistructured merge tools try to go further by exploiting the syntactic structure of the involved artifacts. Objective: In this study, aiming at increasing the existing body of evidence and assessing results for systems developed under an alternative version control paradigm, we replicate an experiment conducted by Apel et al. to compare the unstructured and semistructured approaches with respect to the conflicts reported by each. Method: We used both semistructured and unstructured merge in a sample 2.5 times bigger than the original study regarding the number of projects and 18 times bigger regarding the number of merge scenarios, and we compared the occurrence of conflicts. Results: Similar to the original study, we observed that semistructured merge reduces the number of conflicts in 55% of the scenarios of the new sample. However, the observed average conflict reduction of 62% in these scenarios is far superior to what had been observed before. We also bring new evidence that the use of semistructured merge can reduce the occurrence of conflicting merge scenarios by half. Conclusions: Our findings reinforce the benefits of exploiting the syntactic structure of the artifacts involved in code integration. Besides, the reductions observed in the number and size of conflicts suggest that the use of semistructured merge, when compared to the unstructured approach, might decrease integration effort without compromising correctness.
- Published
- 2015
- Full Text
- View/download PDF
30. Improving Performance and Maintainability of Object Cloning with Lazy Clones: An Empirical Evaluation
- Author
-
Paulo Borba, Helio Fugimoto, Bruno Cartaxo, and Sergio Soares
- Subjects
Source code ,Computer science ,Programming language ,Serialization ,Design pattern ,media_common.quotation_subject ,Maintainability ,Dynamic priority scheduling ,Static analysis ,computer.software_genre ,Graph (abstract data type) ,computer ,Implementation ,media_common - Abstract
Object cloning is demanded by the prototype design pattern, the copy-on-write strategy, some graph transformations, and many other scenarios. We have been developing a static analysis tool that clones objects frequently. In that context, issues related to performance, memory usage, and code maintainability might arise. Traditional deep cloning with dynamic allocation, reflection, and serialization has not fulfilled those requirements. Thus, we developed novel implementations of lazy cloning with dynamic proxies and aspect-oriented programming (AOP). We defined benchmarks based on real workloads to quantitatively assess the benefits of each implementation. AOP was chosen since it better harmonizes performance, memory usage, and code maintainability: it was 88% faster than serialization, consumed 9 times less memory than reflection, and required 25 times fewer modifications of source code than dynamic allocation. In summary, we believe that these results can be extrapolated to broader contexts, helping developers make evidence-based decisions when object cloning is needed.
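The lazy cloning idea, delegate reads to the original object and pay the deep-copy cost only on the first write, can be sketched with a small proxy. This is an illustrative sketch of the copy-on-write principle, not the paper's Java dynamic-proxy or AOP implementation.

```python
import copy

# Sketch of a lazy (copy-on-write) clone: reads are delegated to the
# original object; a deep copy is made only on the first write. Illustrates
# the idea behind the proxy/AOP implementations, not the paper's code.

class LazyClone:
    def __init__(self, target):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_copied", False)

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself.
        return getattr(object.__getattribute__(self, "_target"), name)

    def __setattr__(self, name, value):
        if not object.__getattribute__(self, "_copied"):
            # Pay the deep-copy cost only when the clone is first mutated.
            object.__setattr__(self, "_target",
                               copy.deepcopy(object.__getattribute__(self, "_target")))
            object.__setattr__(self, "_copied", True)
        setattr(object.__getattribute__(self, "_target"), name, value)

class Node:
    def __init__(self, label):
        self.label = label

original = Node("a")
clone = LazyClone(original)
print(clone.label)                  # read delegated, no copy yet
clone.label = "b"                   # first write triggers the deep copy
print(original.label, clone.label)  # a b -> original untouched
```

Clones that are only read never incur a copy at all, which is the source of the memory and speed gains the abstract reports for the proxy-based variants.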
- Published
- 2015
- Full Text
- View/download PDF
31. A product line of theories for reasoning about safe evolution of product lines
- Author
-
Rohit Gheyi, Paulo Borba, Leopoldo Teixeira, and Vander Alves
- Subjects
Soundness ,Theoretical computer science ,Template ,Programming language ,Computer science ,Product (mathematics) ,Product line ,Prototype Verification System ,Reuse ,computer.software_genre ,ENCODE ,computer ,Abstraction layer - Abstract
A product line refinement theory formalizes safe evolution in terms of a refinement notion, which does not rely on particular languages for the elements that constitute a product line. Based on this theory, we can derive refinement templates to support safe evolution scenarios. To do so, we need to provide formalizations for particular languages, to specify and prove the templates. Without a systematic approach, this leads to many similar templates and thus repetitive verification tasks. We investigate and explore similarities between these concrete languages, which ultimately results in a product line of theories, where different languages correspond to features, and products correspond to theory instantiations. This also leads to specifying refinement templates at a higher abstraction level, which, in the long run, reduces the specification and proof effort, and also provides the benefits of reusing such templates for additional languages plugged into the theory. We use the Prototype Verification System to encode and prove soundness of the theories and their instantiations. Moreover, we also use the refinement theory to reason about safe evolution of the proposed product line of theories.
- Published
- 2015
- Full Text
- View/download PDF
32. AspectH: Uma Extensão Orientada a Aspectos de Haskell
- Author
-
Paulo Borba, Carlos A. R. Andrade, and André L. Santos
- Subjects
Functional programming ,Generic programming ,General Computer Science ,Functional logic programming ,Computer science ,Programming language ,Higher-order programming ,Software_PROGRAMMINGTECHNIQUES ,computer.software_genre ,Assignment ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Haskell ,Software_PROGRAMMINGLANGUAGES ,computer ,Protocol (object-oriented programming) ,Declarative programming ,computer.programming_language - Abstract
This paper presents an extension of the Haskell programming language with the objective of improving the modularization of functional programs. This extension, AspectH, extends Haskell with aspect-oriented concepts: it implements Aspect-Oriented Programming (AOP) through pointcuts and advice, as in AspectJ, and was designed to be used in Haskell programs that use monads. Keywords: aspect-oriented programming, functional programming, Haskell, monads.
- Published
- 2004
- Full Text
- View/download PDF
33. Algebraic reasoning for object-oriented programming
- Author
-
Márcio Cornélio, Augusto Sampaio, Paulo Borba, and Ana Cavalcanti
- Subjects
Object-oriented programming ,Theoretical computer science ,Java ,business.industry ,Computer science ,Programming language ,Semantics (computer science) ,Computer programming ,Type (model theory) ,computer.software_genre ,Predicate transformer semantics ,Inheritance (object-oriented programming) ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,A-normal form ,business ,computer ,Software ,computer.programming_language - Abstract
We present algebraic laws for a language similar to a subset of sequential Java that includes inheritance, recursive classes, dynamic binding, access control, type tests and casts, assignment, but no sharing. These laws are proved sound with respect to a weakest precondition semantics. We also show that they are complete in the sense that they are sufficient to reduce an arbitrary program to a normal form substantially close to an imperative program; the remaining object-oriented constructs could be further eliminated if our language had recursive records. This suggests that our laws are expressive enough to formally derive behaviour preserving program transformations; we illustrate that through the derivation of provably-correct refactorings.
- Published
- 2004
- Full Text
- View/download PDF
34. Refactoring Alloy Specifications
- Author
-
Paulo Borba and Rohit Gheyi
- Subjects
General Computer Science ,Java ,Refactoring ,Programming language ,Modeling language ,Computer science ,business.industry ,Software development ,Computer Science::Software Engineering ,computer.software_genre ,Formal Methods ,Axiomatic semantics ,Model Transformations ,Model Checking ,Theoretical Computer Science ,Code refactoring ,Software_SOFTWAREENGINEERING ,Formal specification ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Computer Science::Programming Languages ,business ,computer ,computer.programming_language ,Computer Science(all) - Abstract
This paper proposes modeling laws for Alloy, a formal object-oriented modeling language. These laws are important not only to define the axiomatic semantics of Alloy but also to guide and formalize popular software development practices. In particular, they can be used to formally refactor specifications. As an example, we formally refactor a specification of Java types.
- Published
- 2004
- Full Text
- View/download PDF
35. Implementing distribution and persistence aspects with AspectJ
- Author
-
Eduardo Laureano, Paulo Borba, and Sergio Soares
- Subjects
Persistence (psychology) ,Standardization ,Java ,Programming language ,Computer science ,business.industry ,Separation of concerns ,Aspect-oriented programming ,Distribution (economics) ,AspectJ ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Information system ,The Internet ,Software engineering ,business ,Implementation ,computer ,Software ,computer.programming_language - Abstract
This paper reports our experience using AspectJ, a general-purpose aspect-oriented extension to Java, to implement distribution and persistence aspects in a web-based information system. This system was originally implemented in Java and restructured with AspectJ. Our main contribution is to show that AspectJ is useful for implementing several persistence and distribution concerns in the application considered, and other similar applications. We have also identified a few drawbacks in the language and suggest some minor modifications that could significantly improve similar implementations. Despite the drawbacks, we argue that the AspectJ implementation is superior to the pure Java implementation. Some of the aspects implemented in our experiment are abstract and constitute a simple aspect framework. The other aspects are application specific but we suggest that different implementations might follow the same aspect pattern. The framework and the pattern allow us to propose architecture-specific guidelines that provide practical advice for both restructuring and implementing certain kinds of persistent and distributed applications with AspectJ.
- Published
- 2002
- Full Text
- View/download PDF
36. Evaluating scenario-based SPL requirements approaches: the case for modularity, stability and expressiveness
- Author
-
Mauricio Alférez, Paola Accioly, Uirá Kulesza, João Araújo, Rodrigo Bonifácio, Paulo Borba, Ana Moreira, and Leopoldo Teixeira
- Subjects
Computer science ,media_common.quotation_subject ,Stability (learning theory) ,Software requirements specification ,02 engineering and technology ,Reuse ,Notation ,[SPI]Engineering Sciences [physics] ,Software ,requirements specification ,020204 information systems ,software product lines ,0202 electrical engineering, electronic engineering, information engineering ,Quality (business) ,[INFO]Computer Science [cs] ,use scenarios ,media_common ,Modularity (networks) ,business.industry ,Software development ,020207 software engineering ,[STAT]Statistics [stat] ,Systems engineering ,business ,Software engineering ,Information Systems ,variability modeling - Abstract
Software product lines (SPL) provide support for productivity gains through systematic reuse. Among the various quality attributes supporting these goals, modularity, stability, and expressiveness of feature specifications, their composition, and configuration knowledge emerge as strategic values in modern software development paradigms. This paper presents a metric-based evaluation aimed at assessing how well the chosen qualities are supported by scenario-based SPL requirements approaches. The selected approaches differ in type of notation (textual or graphical), style of variability support (annotation or composition based), and specification expressiveness. They are compared using the developed metrics on a set of releases from an exemplar case study. Our major findings indicate that composition-based approaches have greater potential to support modularity and stability, and that quantification mechanisms simplify and increase the expressiveness of configuration knowledge and composition specifications.
- Published
- 2014
- Full Text
- View/download PDF
37. Comparison with Other Approaches
- Author
-
Márcio Ribeiro, Paulo Borba, and Claus Brabrand
- Subjects
Modularity (networks) ,Dataflow ,business.industry ,Computer science ,Separation of concerns ,Maintainability ,Preprocessor ,Software engineering ,business - Abstract
In this chapter we discuss previous work on topics such as interfaces for non-annotative approaches (like aspect-oriented programming), separation of concerns and modularity, analyses of preprocessor-based systems, and dataflow analysis. We also compare these works to our approach, pointing out the differences between them.
- Published
- 2014
- Full Text
- View/download PDF
38. Software Families, Software Product Lines, and Dataflow Analyses
- Author
-
Márcio Ribeiro, Claus Brabrand, and Paulo Borba
- Subjects
Programming language ,business.industry ,Mechanism (biology) ,Computer science ,Dataflow ,Separation of concerns ,computer.software_genre ,Modularity ,Software ,Conditional compilation ,Feature (machine learning) ,Product (category theory) ,Software engineering ,business ,computer - Abstract
In this chapter we review essential concepts we explore in this work. First, we review software families and software product lines, since the problem we address here is critical in these contexts. We present the basic concepts and then move towards conditional compilation with preprocessors, a mechanism widely used to implement features in industrial practice. Despite its widespread use, conditional compilation has several drawbacks. We then present the Virtual Separation of Concerns (VSoC) approach, which can minimize some of these drawbacks. In this work, we intend to address the lack of feature modularity. Thus, we need to capture dependencies between features and inform developers about them. To do so, we rely on dataflow analyses, the last topic we review in this chapter.
- Published
- 2014
- Full Text
- View/download PDF
39. A System for Translating Executable VDM Specifications into Lazy ML
- Author
-
Silvio Romero de Lemos Meira and Paulo Borba
- Subjects
Rapid prototyping ,Functional programming ,Computer science ,Programming language ,Specification language ,Compiler ,Executable ,computer.file_format ,Vienna Development Method ,computer.software_genre ,Formal methods ,computer ,Software - Published
- 1997
- Full Text
- View/download PDF
40. AspectJ-Based Idioms for Flexible Feature Binding
- Author
-
Paulo Borba, Márcio Ribeiro, Henrique Rebêlo, and Rodrigo Andrade
- Subjects
Programming language ,business.industry ,Computer science ,Aspect-oriented programming ,Software development ,AspectJ ,Software_PROGRAMMINGTECHNIQUES ,computer.software_genre ,Software metric ,Feature model ,Software framework ,Software_SOFTWAREENGINEERING ,Software construction ,Instrumentation (computer programming) ,Software_PROGRAMMINGLANGUAGES ,business ,computer ,computer.programming_language - Abstract
In Software Product Lines (SPL), we can bind reusable features to compose a product at different times, typically statically or dynamically. The former allows customizability without any runtime overhead. The latter allows feature activation or deactivation while the application is running, at the cost of performance and memory consumption. To implement features, we might use aspect-oriented programming (AOP), in which aspects enable a clear separation between base code and variable code. In this context, recent work provides AspectJ-based idioms to implement flexible feature binding. However, we identified design deficiencies in these idioms. To solve them, we incrementally create three new AspectJ-based idioms. To evaluate our new idioms, we quantitatively analyze them with respect to code cloning, scattering, tangling, and size by means of software metrics. Besides that, we qualitatively discuss our new idioms in terms of code reusability, changeability, and instrumentation overhead.
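To illustrate the static-versus-dynamic binding trade-off the abstract describes (this is a minimal, language-neutral sketch in Python, not the authors' AspectJ idioms; all names are hypothetical):

```python
# Hypothetical sketch: static vs. dynamic feature binding for a toy
# "verbose greeting" feature.

def build_greeter_static(verbose_enabled):
    """Static binding: the feature decision is made once, when the
    product is composed; the returned function carries no runtime check."""
    if verbose_enabled:
        return lambda name: f"Hello, {name}! (verbose mode)"
    return lambda name: f"Hello, {name}!"

class DynamicGreeter:
    """Dynamic binding: the feature can be toggled while the application
    runs, at the cost of a feature check on every call."""
    def __init__(self):
        self.verbose = False

    def greet(self, name):
        if self.verbose:  # runtime feature check (the dynamic-binding overhead)
            return f"Hello, {name}! (verbose mode)"
        return f"Hello, {name}!"

# Static product: feature fixed at composition time.
static_greeter = build_greeter_static(verbose_enabled=False)

# Dynamic product: feature activated at runtime.
dynamic = DynamicGreeter()
dynamic.verbose = True
```

The sketch shows why static binding avoids runtime overhead (the check disappears after composition) while dynamic binding pays for flexibility with a per-call check and extra state.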
- Published
- 2013
- Full Text
- View/download PDF
41. Improving Modular Reasoning on Preprocessor-Based Systems
- Author
-
Jean Melo and Paulo Borba
- Subjects
Interface (Java) ,Computer science ,business.industry ,Separation of concerns ,Software maintenance ,computer.software_genre ,Feature model ,Feature (computer vision) ,Modular programming ,Preprocessor ,Data mining ,Software product line ,Software engineering ,business ,computer - Abstract
Preprocessors are often used to implement the variability of a Software Product Line (SPL). Despite their widespread use, they have several drawbacks, such as code pollution, lack of separation of concerns, and error proneness. Virtual Separation of Concerns (VSoC) has been used to address some of these preprocessor problems by allowing developers to hide feature code not relevant to the current maintenance task. However, different features often share the same variables and methods, so VSoC does not modularize features, since developers do not know anything about hidden features. Thus, the maintenance of one feature might break another. Emergent Interfaces (EI) capture dependencies between a feature maintenance point and parts of other features' implementations, but they do not provide an overall feature interface considering all parts in an integrated way. Thus, we still have the feature modularization problem. To address that, we propose Emergent Feature Interfaces (EFI), which complement EI by treating a feature as a module in order to improve modular reasoning on preprocessor-based systems. EFI capture dependencies among entire features, with the potential of improving productivity. Our proposal, implemented in an open-source tool called Emergo, is evaluated with preprocessor-based systems. The results of our study suggest the feasibility and usefulness of the proposed approach.
- Published
- 2013
- Full Text
- View/download PDF
42. Modular aspect-oriented design rule enforcement with XPIDRs
- Author
-
Paulo Borba, Ricardo Lima, Henrique Rebêlo, Márcio Ribeiro, and Gary T. Leavens
- Subjects
Class (computer programming) ,business.industry ,Programming language ,Computer science ,Aspect-oriented programming ,AspectJ ,Modular design ,computer.software_genre ,Modularity ,Control flow ,Simple (abstract algebra) ,business ,computer ,Advice (complexity) ,computer.programming_language - Abstract
Aspect-oriented programming (AOP) is a popular technique for modularizing crosscutting concerns. However, constructs aimed at supporting crosscutting modularity may break class modularity. For example, understanding a method call may require a whole-program analysis to determine what advice applies and what that advice does. Moreover, in AspectJ, advice is coupled to the parts of the program advised, the base code, so the meaning of advice may change when the base code changes. Such coupling also hinders parallel development of base code and aspects. We propose some simple modifications to the design of crosscut programming interfaces (XPIs) to include expressive design rule specifications. We call our form of XPIs crosscutting programming interfaces with design rules (XPIDRs). The XPIDR-based approach, by design, supports modular runtime checking and parallel development by decoupling aspects from base code. We also show how XPIDRs allow the specification of interesting control flow effects, such as when advice does (or does not) proceed. We have implemented XPIDRs as a simple contract extension to AspectJ. Since XPIDRs do not require any new AspectJ constructs, they can be easily adopted by the AspectJ community.
- Published
- 2013
- Full Text
- View/download PDF
43. A Model-Driven Approach to Specifying and Monitoring Controlled Experiments in Software Engineering
- Author
-
Gustavo Sizílio, Uirá Kulesza, Paola Accioly, Eduardo Aranha, Marília Aranha Freire, Paulo Borba, and Edmilson Campos Neto
- Subjects
Process specification ,Workflow ,Digital subscriber line ,business.industry ,Computer science ,Change request ,Software construction ,Exploratory research ,Test suite ,Software product line ,Software engineering ,business - Abstract
This paper presents a process-oriented, model-driven approach that supports the conduction of controlled experiments in software engineering. The approach consists of: (i) a domain-specific language (DSL) for process specification and statistical design of controlled experiments; (ii) model-driven transformations that generate workflow models specific to each experiment participant and in accordance with the experiment's statistical design; and (iii) a workflow execution environment that allows monitoring of participant activities in the experiment, besides gathering participant feedback. The paper also presents the results of an exploratory study that analyzes the feasibility of the approach and the expressiveness of the DSLs in modeling a non-trivial software engineering experiment.
- Published
- 2013
- Full Text
- View/download PDF
44. Making Software Product Line Evolution Safer
- Author
-
Rohit Gheyi, Gustavo Soares, Felype Ferreira, and Paulo Borba
- Subjects
business.industry ,Computer science ,Context (language use) ,Software maintenance ,computer.software_genre ,Software ,Code refactoring ,Software_SOFTWAREENGINEERING ,Product (mathematics) ,SAFER ,Systems engineering ,Software engineering ,business ,Software product line ,computer ,Formal verification - Abstract
Developers evolve software product lines (SPLs) manually or using typical program refactoring tools. However, when evolving a product line to introduce new features or to improve its design, it is important to make sure that the behavior of existing products is not affected. Typical program refactorings cannot guarantee that, because the SPL context goes beyond code and other kinds of core assets, involving additional artifacts such as feature models and configuration knowledge. Besides that, in an SPL we typically have to deal with a set of possibly alternative assets that do not constitute a well-formed program. As a result, manual changes and existing program refactoring tools may introduce behavioral changes or invalidate existing product configurations. To avoid that, we propose approaches and implement tools for making product line evolution safer; these tools check whether SPL transformations are refinements in the sense that they preserve the behavior of the original SPL products. They implement different and practical approximations of a formal definition of SPL refinement. We evaluate the approaches in concrete SPL evolution scenarios where existing products' behavior must be preserved, and our tools found that some transformations introduced behavioral changes. Moreover, we evaluate defective refinements, and the toolset detects the behavioral changes.
- Published
- 2012
- Full Text
- View/download PDF
45. Recommending Mechanisms for Modularizing Mobile Software Variabilities
- Author
-
Márcio Ribeiro, Pedro Matos, and Paulo Borba
- Subjects
Software ,Computer science ,business.industry ,Real-time computing ,Software engineering ,business - Published
- 2012
- Full Text
- View/download PDF
46. Emergo
- Author
-
Márcio Ribeiro, Paulo Borba, Társis Tolêdo, Claus Brabrand, and Johnni Winther
- Subjects
Computer science ,business.industry ,Dataflow ,Programming language ,Separation of concerns ,Maintainability ,computer.software_genre ,Feature model ,Task (project management) ,Software ,Feature (computer vision) ,Preprocessor ,Software engineering ,business ,computer - Abstract
When maintaining a feature in preprocessor-based Software Product Lines (SPLs), developers are susceptible to introducing problems into other features. This is possible because features often share elements (like variables and methods) with the maintained one. This scenario might be even worse when hiding features by using techniques like Virtual Separation of Concerns (VSoC), since developers cannot see the feature dependencies and, consequently, become unaware of them. Emergent Interfaces were proposed to minimize this problem by capturing feature dependencies and then providing information about other features that can be impacted during a maintenance task. In this paper, we present Emergo, a tool capable of computing emergent interfaces between the feature we are maintaining and the others. Emergo relies on feature-sensitive dataflow analyses, in the sense that it takes features and the SPL feature model into consideration when computing the interfaces.
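The idea behind emergent interfaces can be sketched as follows (a hypothetical, flow-insensitive toy in Python, not Emergo's actual dataflow implementation; the feature names and code model are invented for illustration):

```python
# Hypothetical sketch: given the feature being maintained, report which
# other features depend on variables it writes, through shared elements.

# Toy code model: (feature, kind, var) statements; "BASE" is mandatory code.
CODE = [
    ("LOGGING", "write", "buffer"),
    ("BASE",    "write", "count"),
    ("NETWORK", "read",  "buffer"),
    ("NETWORK", "read",  "count"),
]

def emergent_interface(code, maintained):
    """Collect, per variable written by the maintained feature, the set
    of other features that read it (flow-insensitive toy analysis)."""
    written = {var for feat, kind, var in code
               if feat == maintained and kind == "write"}
    deps = {}
    for feat, kind, var in code:
        if kind == "read" and var in written and feat != maintained:
            deps.setdefault(var, set()).add(feat)
    return deps

# Maintaining LOGGING: the interface warns that NETWORK reads 'buffer',
# so changing how LOGGING fills it may break NETWORK.
```

A real tool like Emergo would compute these dependencies with proper feature-sensitive dataflow analyses over the preprocessor-annotated AST; the sketch only conveys what an emergent interface reports to the developer.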
- Published
- 2012
- Full Text
- View/download PDF
47. Intraprocedural dataflow analysis for software product lines
- Author
-
Társis Tolêdo, Márcio Ribeiro, Claus Brabrand, and Paulo Borba
- Subjects
Software ,Software_SOFTWAREENGINEERING ,Computer science ,business.industry ,Programming language ,Brute force ,Dataflow ,Conditional compilation ,Product (mathematics) ,business ,computer.software_genre ,computer - Abstract
Software product lines (SPLs) are commonly developed using annotative approaches such as conditional compilation that come with an inherent risk of constructing erroneous products. For this reason, it is essential to be able to analyze SPLs. However, as dataflow analysis techniques are not able to deal with SPLs, developers must generate and analyze all valid methods individually, which is expensive for non-trivial SPLs. In this paper, we demonstrate how to take any standard intraprocedural dataflow analysis and automatically turn it into a feature-sensitive dataflow analysis in three different ways. All are capable of analyzing all valid methods of an SPL without having to generate all of them explicitly. We have implemented all analyses as extensions of SOOT's intraprocedural dataflow analysis framework and experimentally evaluated their performance and memory characteristics on four qualitatively different SPLs. The results indicate that the feature-sensitive analyses are on average 5.6 times faster than the brute force approach on our SPLs, and that they have different time and space tradeoffs.
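A minimal sketch of the brute-force baseline this paper improves on (hypothetical Python, not the authors' SOOT-based analyses; the statement encoding and the toy check are assumptions for illustration):

```python
from itertools import product as cartesian

# Hypothetical sketch: to check "use of an undefined variable" in an
# #ifdef-annotated method, the brute-force approach generates every
# valid product variant and analyzes each one separately.

# A statement is (guard, kind, var): the guard is the set of features
# that must be enabled for the statement to be present in a product.
STMTS = [
    (frozenset({"A"}), "def", "x"),   # #ifdef A: x = ...
    (frozenset(),      "use", "x"),   # unconditional: print(x)
]
FEATURES = ["A"]

def undefined_uses(stmts):
    """Toy flow check: flag a 'use' not preceded by a 'def' of the var."""
    defined, errors = set(), []
    for _, kind, var in stmts:
        if kind == "def":
            defined.add(var)
        elif var not in defined:
            errors.append(var)
    return errors

def brute_force(stmts, features):
    """Analyze every configuration separately (2^n variants)."""
    results = {}
    for bits in cartesian([False, True], repeat=len(features)):
        config = {f for f, on in zip(features, bits) if on}
        variant = [s for s in stmts if s[0] <= config]  # keep enabled stmts
        results[frozenset(config)] = undefined_uses(variant)
    return results
```

With feature A disabled, `x` is used but never defined, which only this configuration exhibits; a feature-sensitive analysis, as the paper shows, finds the same per-configuration results in a single pass over the annotated code instead of enumerating all 2^n variants.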
- Published
- 2012
- Full Text
- View/download PDF
48. On the impact of feature dependencies when maintaining preprocessor-based software product lines
- Author
-
Társis Tolêdo, Márcio Ribeiro, Paulo Borba, Felipe Queiroz, Claus Brabrand, and Sergio Soares
- Subjects
business.industry ,Computer science ,Programming language ,Separation of concerns ,Context (language use) ,computer.software_genre ,Feature model ,Software ,Feature (computer vision) ,Preprocessor ,Software engineering ,business ,Software product line ,Programmer ,computer - Abstract
During Software Product Line (SPL) maintenance tasks, Virtual Separation of Concerns (VSoC) allows the programmer to focus on one feature and hide the others. However, since features depend on each other through variables and control flow, feature modularization is compromised: the maintenance of one feature may break another. In this context, emergent interfaces can capture dependencies between the feature we are maintaining and the others, making developers aware of those dependencies. To better understand the impact of code-level feature dependencies during SPL maintenance, we have investigated the following two questions: how often do methods with preprocessor directives contain feature dependencies? How do feature dependencies impact maintenance effort when using VSoC and emergent interfaces? Answering the former is important for assessing how often we may face feature dependency problems. Answering the latter is important to better understand to what extent emergent interfaces complement VSoC during maintenance tasks. To answer them, we analyze 43 SPLs of different domains, sizes, and languages. The data we collect from them complement previous work on preprocessor usage. They reveal that the feature dependencies we consider in this paper are reasonably common in practice, and that emergent interfaces can reduce maintenance effort during the SPL maintenance tasks we regard here.
- Published
- 2011
- Full Text
- View/download PDF
49. Investigating the safe evolution of software product lines
- Author
-
Demóstenes Sena, Paulo Borba, Leopoldo Teixeira, Vander Alves, Uirá Kulesza, and Laís Neves
- Subjects
Product design specification ,Software ,Product design ,business.industry ,Computer science ,Process (engineering) ,New product development ,Product (category theory) ,business ,Software engineering ,Product engineering - Abstract
The adoption of a product line strategy can bring significant productivity and time-to-market improvements. However, evolving a product line is risky because it might impact many products and their users. So when evolving a product line to introduce new features or to improve its design, it is important to make sure that the behavior of existing products is not affected. In fact, to preserve the behavior of existing products one usually has to analyze different artifacts, like feature models, configuration knowledge, and the product line core assets. To better understand this process, in this paper we identify and analyze concrete product line evolution scenarios and, based on the results of this study, we describe a number of safe evolution templates that developers can use when working with product lines. For each template, we show examples of its use in existing product lines. We evaluate the templates by also analyzing the evolution history of two different product lines, demonstrating that they can express the corresponding modifications and thus help to avoid the mistakes that we identified during our analysis.
- Published
- 2011
- Full Text
- View/download PDF
50. A RUP-Based Software Process Supporting Progressive Implementation
- Author
-
Paulo Borba, Augusto Sampaio, and Tiago Massoni
- Subjects
Software development process ,Debugging ,business.industry ,Computer science ,media_common.quotation_subject ,Control (management) ,Systems engineering ,Software development ,Information system ,business ,Software engineering ,media_common ,Rational Unified Process - Abstract
This chapter introduces an extension of the Rational Unified Process (RUP) with a method that supports the progressive, and separate, implementation of three different aspects: persistence, distribution, and concurrency control. This complements RUP with a specific implementation method, called Progressive Implementation Method (Pim), and helps to tame the complexity of applications that are persistent, distributed, and concurrent. By gradually and separately implementing, testing, and validating such applications, we obtain two major benefits: the impact caused by requirements changes during development is reduced, and testing and debugging are simplified. In addition, the authors hope to contribute to solving the lack of a specific implementation method in RUP.
- Published
- 2011
- Full Text
- View/download PDF