186 results for "Paulo Borba"
Search Results
2. Leveraging Structure in Software Merge: An Empirical Study
- Author
-
Guilherme Cavalcanti, Sven Apel, Florian Heck, Georg Seibt, and Paulo Borba
- Subjects
Structure (mathematical logic), Software, Empirical research, business.industry, Computer science, Software engineering, business, Merge (linguistics)
- Published
- 2022
- Full Text
- View/download PDF
3. Georg-Friedrich Von Martens (1756-1821) e a consolidação histórica do direito internacional clássico
- Author
-
Paulo Borba Casella
- Subjects
General Medicine
- Abstract
Von Martens's work marks the close of the 'classical' phase of international law. In the evolution of the discipline, many still seem to regard the 'classical' model as the only one possible for international law. That experience, circumscribed in time, historically prepares and sustains the later evolution of international law in the following phase, that of international law in the era of the European concert.
- Published
- 2021
- Full Text
- View/download PDF
4. Using acceptance tests to predict merge conflict risk
- Author
-
Thaís Rocha and Paulo Borba
- Subjects
Software
- Published
- 2023
- Full Text
- View/download PDF
5. The Private Life of Merge Conflicts
- Author
-
Marcela Cunha, Paola Accioly, and Paulo Borba
- Published
- 2022
- Full Text
- View/download PDF
6. Semantic conflict detection with overriding assignment analysis
- Author
-
Matheus Barbosa, Paulo Borba, Rodrigo Bonifacio, and Galileu Santos
- Published
- 2022
- Full Text
- View/download PDF
7. Emer de Vattel (1714-1767) e o direito internacional clássico
- Author
-
Paulo Borba Casella
- Subjects
Moment (mathematics), Work (electrical), Political science, Universality (philosophy), Subject (philosophy), General Medicine, International law, Measure (mathematics), Law and economics
- Abstract
Vattel's principal work (1758) holds a prominent place in the 'classical' phase of international law, a phase that deeply marks the history of the discipline's evolution. So much so that many still seem to think of, and see, only this 'classical' model as the only one possible for international law. Yet this experience, besides being circumscribed in time as a historical moment, prepares and sustains the later evolution of international law. In the following phase a paradox takes shape: international law loses its vocation for universality in the exact measure and proportion in which it gains in conceptual and institutional development. That paradox still seems to mark international law, up to our post-modern times.
- Published
- 2020
- Full Text
- View/download PDF
8. Privacy and security constraints for code contributions
- Author
-
Rodrigo Andrade and Paulo Borba
- Subjects
Computer science, business.industry, Code (cryptography), Collaborative software development, Software engineering, business, Software
- Published
- 2020
- Full Text
- View/download PDF
9. Build conflicts in the wild
- Author
-
Léuson Da Silva, Paulo Borba, and Arthur Pires
- Subjects
Software
- Published
- 2022
- Full Text
- View/download PDF
10. Textual merge based on language-specific syntactic separators
- Author
-
Guilherme Cavalcanti, Paulo Borba, and Jônatas Clementino
- Subjects
Future studies ,business.industry ,Computer science ,Syntactic structure ,Artificial intelligence ,Line (text file) ,business ,computer.software_genre ,computer ,Merge (version control) ,On Language ,Natural language processing - Abstract
In practice, developers mostly use purely textual, line-based merge tools. Such tools, however, often report false conflicts. Researchers have therefore proposed AST-based tools that explore language syntactic structure to reduce false conflicts. Nevertheless, these approaches might negatively impact merge performance, and demand the creation of a new tool for each language. To approximate the behavior of AST-based tools without their drawbacks, this paper proposes and analyzes a purely textual, separator-based merge tool that considers programming-language syntactic separators, instead of just lines, when comparing and merging changes. The obtained results show that the separator-based textual approach might reduce the number of false conflicts when compared to the line-based approach. The new solution makes room for future studies and hybrid merge tools.
- Published
- 2021
- Full Text
- View/download PDF
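To make the separator-based idea in the entry above concrete, here is a small constructed scenario (illustrative only, not taken from the paper; the class and plugin names are hypothetical). Both developers edit the same line, so a line-based merge reports a conflict, while a comparison that splits the line at syntactic separators such as parentheses and commas can integrate both additions automatically.

```java
import java.util.Arrays;
import java.util.List;

// Base : List<String> PLUGINS = Arrays.asList("core");
// Left : List<String> PLUGINS = Arrays.asList("core", "ui");
// Right: List<String> PLUGINS = Arrays.asList("core", "net");
//
// Line-based merge: both branches changed the same line -> conflict reported.
// Separator-based merge: the line is split into tokens at '(', ',' and ')';
// the two additions touch different tokens, so they can be combined as below.
class PluginConfig {
    static final List<String> PLUGINS = Arrays.asList("core", "ui", "net");
}
```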
11. BRICS in polar regions: Brazil’s interests and prospects
- Author
-
Maria Lagutina, Paulo Borba Casella, and Arthur Roberto Capella Giannattasio
- Subjects
Power (social and political) ,Negotiation ,Economy ,Structural balance ,Dominance (economics) ,media_common.quotation_subject ,Political science ,Agency (sociology) ,Public power ,Context (language use) ,Emerging markets ,media_common - Abstract
The current international legal regulation of the Arctic and Antarctica was organized during the second half of the XX century to establish an international public power over the two regions, the Arctic Council (AC) and the Antarctic Treaty System (ATS), which is characterized by Euro-American dominance. However, the rise of emerging countries at the beginning of the XXI century suggests a progressive redefinition of the structural balance of international power in favor of states not traditionally perceived as European and Western. This article examines the role of Brazil within the AC and the ATS to address various polar issues, even institutional ones. As a responsible country in the area of cooperation in science and technology in the oceans and polar regions in BRICS, Brazil appeals to its rich experience in Antarctica and declares its interest in joining the Arctic cooperation. For Brazil, participation in polar cooperation is a way to increase its role in global affairs and BRICS as a negotiating platform. It is seen in this context as a promising tool to achieve this goal. This article highlights new paths in the research agenda concerning interests and prospects of Brazilian agency in the polar regions.
- Published
- 2020
- Full Text
- View/download PDF
12. Importância da proteção internacional dos direitos fundamentais – reflexões pelos 70 anos da Declaração Universal
- Author
-
Paulo Borba Casella
- Subjects
Genetics, Animal Science and Zoology
- Published
- 2019
- Full Text
- View/download PDF
13. Negociação e conflito no direito internacional: cinco mil anos de registro da história
- Author
-
Paulo Borba Casella
- Subjects
General Medicine
- Abstract
'Negotiation' and 'conflict' are the essential terms for summing up, in a few words, the direction of the evolution of international law. Over its long, diverse, and complex trajectory in human history, international law stretches back more than five thousand years, as far as written historical records have been preserved. Not by chance, international law is present and necessary in channelling divergent positions toward a settlement that is, if not amicable, at least peaceful, in every possible sphere and manifestation of 'negotiation', always and everywhere. International law likewise has a relevant role to play when these 'peaceful' mechanisms, whether legal or diplomatic in nature, cannot cope with the issues at hand ("jus ad bellum") and it may become necessary to resort to coercive means of confronting disagreements, or even to reach the extreme of armed conflict ("jus in bello"), which current international law restricts to cases of "self-defence", as regulated in Article 51 of the UN Charter. As in other branches of knowledge and human activity, concepts must be made precise in international law as well: their content must be clearly delimited so that it can be properly understood and applied. Thucydides (c. 465 - c. 395 BC) had already warned, in the History of the Peloponnesian War, that "the ordinary meaning of words, in relation to deeds, changes according to the whims of men."
- Published
- 2019
- Full Text
- View/download PDF
14. Using acceptance tests to predict files changed by programming tasks
- Author
-
Thaís Rocha, João Pedro Santos, and Paulo Borba
- Subjects
Computer science ,business.industry ,05 social sciences ,Context (computing) ,Code coverage ,020207 software engineering ,02 engineering and technology ,Software quality ,Test (assessment) ,Task (computing) ,Hardware and Architecture ,Acceptance testing ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Software engineering ,business ,050203 business & management ,Software ,Information Systems - Abstract
In a collaborative development context, conflicting code changes might compromise software quality and developers productivity. To reduce conflicts, one could avoid the parallel execution of potentially conflicting tasks. Although hopeful, this strategy is challenging because it relies on the prediction of the required file changes to complete a task. As predicting such file changes is hard, we investigate its feasibility for BDD (Behaviour-Driven Development) projects, which write automated acceptance tests before implementing features. We develop a tool that, for a given task, statically analyzes Cucumber tests and infers test-based interfaces (files that could be executed by the tests), approximating files that would be changed by the task. To assess the accuracy of this approximation, we measure precision and recall of test-based interfaces of 513 tasks from 18 Rails projects on GitHub. We also compare such interfaces with randomly defined interfaces, interfaces obtained by textual similarity of test specifications with past tasks, and interfaces computed by executing tests. Our results give evidence that, in the specific context of BDD, Cucumber tests might help to predict files changed by tasks. We find that the better the test coverage, the better the predictive power. A hybrid approach for computing test-based interfaces is promising.
- Published
- 2019
- Full Text
- View/download PDF
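As a rough illustration of how such test-based task interfaces could be used (a minimal sketch with hypothetical file names, not the tool described above): once each task has a predicted set of files, two tasks whose sets overlap are risky to assign in parallel. In the study above the predicted sets come from statically analyzing each task's Cucumber tests; here they are simply hard-coded.

```java
import java.util.HashSet;
import java.util.Set;

public class TaskInterfaceOverlap {

    // Returns the files that both test-based task interfaces may touch.
    static Set<String> overlap(Set<String> taskA, Set<String> taskB) {
        Set<String> shared = new HashSet<>(taskA);
        shared.retainAll(taskB);
        return shared;
    }

    public static void main(String[] args) {
        // Hypothetical interfaces inferred from each task's acceptance tests.
        Set<String> checkoutTask = Set.of("app/models/order.rb",
                                          "app/controllers/orders_controller.rb");
        Set<String> couponTask   = Set.of("app/models/order.rb",
                                          "app/models/coupon.rb");

        // A non-empty overlap suggests the two tasks should not run in parallel.
        System.out.println("Potentially conflicting files: "
                + overlap(checkoutTask, couponTask));
    }
}
```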
15. Semi-Automated Test-Case Propagation in Fork Ecosystems
- Author
-
Paulo Borba, Thorsten Berger, and Mukelabai Mukelabai
- Subjects
Underpinning ,Test case ,Process management ,Market segmentation ,Computer science ,media_common.quotation_subject ,Quality (business) ,Reuse ,Software quality ,Fork (software development) ,media_common ,Test (assessment) - Abstract
Forking provides a flexible and low-cost strategy for developers to adapt an existing project to new requirements, for instance, when addressing different market segments, hardware constraints, or runtime environments. Then, small ecosystems of forked projects are formed, with each project in the ecosystem maintained by a separate team or organization. The software quality of projects in fork ecosystems varies with the resources available as well as team experience, and expertise, especially when the forked projects are maintained independently by teams that are unaware of the evolution of other's forks. Consequently, the quality of forked projects could be improved by reusing test cases as well as code, thereby leveraging community expertise and experience, and commonalities between the projects. We propose a novel technique for recommending and propagating test cases across forked projects. We motivate our idea with a pre-study we conducted to investigate the extent to which test cases are shared or can potentially be reused in a fork ecosystem. We also present the theoretical and practical implications underpinning the proposed idea, together with a research agenda.
- Published
- 2021
- Full Text
- View/download PDF
16. Detecting Semantic Conflicts via Automated Behavior Change Detection
- Author
-
Leuson Mario Pedro da Silva, Wardah Mahmood, Joao Moisakis, Thorsten Berger, and Paulo Borba
- Subjects
Teamwork ,Unit testing ,Computer science ,business.industry ,media_common.quotation_subject ,Behavior change ,020207 software engineering ,02 engineering and technology ,Static analysis ,Collaborative software development ,Data science ,Software ,Software deployment ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Program behavior ,business ,media_common - Abstract
Branching and merging are common practices in collaborative software development. They increase developer productivity by fostering teamwork, allowing developers to independently contribute to a software project. Despite such benefits, branching and merging come at a cost: the need to merge software and to resolve merge conflicts, which often occur in practice. While modern merge techniques, such as 3-way or structured merge, can resolve many such conflicts automatically, they fail when the conflict arises not at the syntactic, but the semantic level. Detecting such conflicts requires understanding the behavior of the software, which is beyond the capabilities of most existing merge tools. As such, semantic conflicts can only be identified and fixed with significant effort and knowledge of the changes to be merged. While semantic merge tools have been proposed, they are usually heavyweight, based on static analysis, and need explicit specifications of program behavior. In this work, we take a different route and explore the automated creation of unit tests as partial specifications to detect unwanted behavior changes (conflicts) when merging software. We systematically explore the detection of semantic conflicts through unit-test generation. Relying on a ground-truth dataset of 38 software merge scenarios, which we extracted from GitHub, we manually analyzed them and investigated whether semantic conflicts exist. Next, we apply test-generation tools to study their detection rates. We propose improvements (code transformations) and study their effectiveness, and we qualitatively analyze the detection results and propose future improvements. For example, we analyze the generated test suites for false-negative cases to understand why the conflict was not detected. Our results evidence the feasibility of using test-case generation to detect semantic conflicts as a versatile method that requires only limited deployment effort in practice and does not need explicit behavior specifications.
- Published
- 2020
- Full Text
- View/download PDF
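The following constructed example (not from the paper's dataset; class and method names are hypothetical) shows the kind of interference the approach above targets: the two edits touch different methods, so textual and even structured merge succeed, yet the merged behavior differs from what one parent established, and a generated unit test can expose the change.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Order {
    int subtotal;

    Order(int subtotal) { this.subtotal = subtotal; }

    // Base: flat shipping fee of 10.
    // Left edit: free shipping for large orders.
    int shipping() { return subtotal > 100 ? 0 : 10; }

    // Base: subtotal + shipping.
    // Right edit: add 5% tax on top of the old total.
    int total() { return (subtotal + shipping()) * 105 / 100; }
}

class OrderTest {
    // A test like one a generator could derive on Left's version, where
    // total(150) == 150. On the merged version the result is 157, so the
    // test fails, signalling a behavior change the integrator should inspect.
    @Test
    void largeOrderHasNoShippingCost() {
        assertEquals(150, new Order(150).total());
    }
}
```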
17. Missão do Direito Internacional no mundo pós-moderno -- reflexão pelos 190 anos do Direito Internacional nas Arcadas
- Author
-
Paulo Borba Casella
- Subjects
General Medicine - Abstract
Since its beginnings in March 1828, international law has been taught without interruption at the Faculdade de Direito de São Paulo, as it was then called; more than ever, international law has a role to play in the training of legal professionals and in the pursuit of Brazil's competitive insertion in the world.
- Published
- 2018
- Full Text
- View/download PDF
18. The Amerindians and International Law
- Author
-
Paulo Borba Casella
- Subjects
Parliament ,General assembly ,Constitution ,media_common.quotation_subject ,Political science ,Law ,Perspective (graphical) ,Subject (philosophy) ,General Medicine ,International law ,Colonial period ,media_common - Abstract
The paper strives to bring a historic perspective about the International Law treatment on Amerindians, from the «discovery», going through the renowned authors of the American colonial period, such as Francisco de Vitoria and Bartolomé de las Casas, to the present condition of the Amerindians in Brazil, within the Constitution (1988) and with the multilateral international efforts to regulate the subject, via the UN´s General Assembly or via the European Parliament.
- Published
- 2018
- Full Text
- View/download PDF
19. VÖLKERRECHT AUS BRASILIANISCHER SICHT (ODER MIT BRASILIANISCHEM AKZENT) – ZWISCHEN UNIVERSALISMUS UND REGIONALISMUS
- Author
-
Paulo Borba Casella
- Abstract
The article is a brief report of a lecture delivered on 4 June 2012 at the Humboldt-Universität zu Berlin.
- Published
- 2018
- Full Text
- View/download PDF
20. Detecting Overly Strong Preconditions in Refactoring Engines
- Author
-
Márcio Ribeiro, Leopoldo Teixeira, Paulo Borba, Rohit Gheyi, Gustavo Soares, and Melina Mongiovi
- Subjects
Java ,Computer science ,business.industry ,Programming language ,020207 software engineering ,Usability ,02 engineering and technology ,computer.software_genre ,Test case ,Transformation (function) ,Software bug ,Code refactoring ,Software_SOFTWAREENGINEERING ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,business ,computer ,Implementation ,Software ,computer.programming_language ,Eclipse - Abstract
Refactoring engines may have overly strong preconditions preventing developers from applying useful transformations. We find that 32 percent of the Eclipse and JRRT test suites are concerned with detecting overly strong preconditions. In general, developers manually write test cases, which is costly and error prone. Our previous technique detects overly strong preconditions using differential testing. However, it needs at least two refactoring engines. In this work, we propose a technique to detect overly strong preconditions in refactoring engines without needing reference implementations. We automatically generate programs and attempt to refactor them. For each rejected transformation, we attempt to apply it again after disabling the preconditions that lead the refactoring engine to reject the transformation. If it applies a behavior preserving transformation, we consider the disabled preconditions overly strong. We evaluate 10 refactorings of Eclipse and JRRT by generating 154,040 programs. We find 15 overly strong preconditions in Eclipse and 15 in JRRT. Our technique detects 11 bugs that our previous technique cannot detect while missing 5 bugs. We evaluate the technique by replacing the programs generated by JDolly with the input programs of Eclipse and JRRT test suites. Our technique detects 14 overly strong preconditions in Eclipse and 4 in JRRT.
- Published
- 2018
- Full Text
- View/download PDF
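A schematic rendering of the detection loop described above (the interfaces and helper names here are hypothetical, introduced only to make the steps explicit; the actual evaluation drives JDolly-generated programs and the test-suite programs through the Eclipse and JRRT engines):

```java
import java.util.Set;

// Hypothetical abstractions over a refactoring engine and a behavior oracle.
interface RefactoringEngine {
    // Attempts the refactoring; returns the preconditions that caused rejection,
    // or an empty set if the transformation was applied.
    Set<String> apply(String program, String refactoring);

    // Re-attempts the refactoring with the given preconditions disabled.
    String applyIgnoring(String program, String refactoring, Set<String> disabled);
}

interface BehaviorOracle {
    // E.g., automatically generated tests in the style of SafeRefactor.
    boolean sameBehavior(String original, String transformed);
}

class OverlyStrongPreconditionDetector {
    // Returns the preconditions reported as overly strong for one input program.
    static Set<String> check(String program, String refactoring,
                             RefactoringEngine engine, BehaviorOracle oracle) {
        Set<String> rejecting = engine.apply(program, refactoring);
        if (rejecting.isEmpty()) {
            return Set.of();   // transformation was applied; nothing to report
        }
        String transformed = engine.applyIgnoring(program, refactoring, rejecting);
        // If the forced transformation still preserves behavior, the disabled
        // preconditions rejected a legal transformation: they are overly strong.
        return oracle.sameBehavior(program, transformed) ? rejecting : Set.of();
    }
}
```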
21. Understanding semi-structured merge conflict characteristics in open-source Java projects
- Author
-
Paulo Borba, Guilherme Cavalcanti, and Paola Accioly
- Subjects
Copying ,Java ,Exploit ,Computer science ,020207 software engineering ,02 engineering and technology ,Collaborative software development ,Data science ,Syntax ,Open source ,Empirical research ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Merge (version control) ,computer ,Software ,computer.programming_language - Abstract
Empirical studies show that merge conflicts frequently occur, impairing developers' productivity, since merging conflicting contributions might be a demanding and tedious task. However, the structure of changes that lead to conflicts has not been studied yet. Understanding the underlying structure of conflicts, and the involved syntactic language elements might shed light on how to better avoid merge conflicts. To this end, in this paper we derive a catalog of conflict patterns expressed in terms of the structure of code changes that lead to merge conflicts. We focus on conflicts reported by a semistructured merge tool that exploits knowledge about the underlying syntax of the artifacts. This way, we avoid analyzing a large number of spurious conflicts often reported by typical line based merge tools. To assess the occurrence of such patterns in different systems, we conduct an empirical study reproducing 70,047 merges from 123 GitHub Java projects. Our results show that most semistructured merge conflicts in our sample happen because developers independently edit the same or consecutive lines of the same method. However, the probability of creating a merge conflict is approximately the same when editing methods, class fields, and modifier lists. Furthermore, we noticed that most part of conflicting merge scenarios, and merge conflicts, involve more than two developers. Also, that copying and pasting pieces of code, or even entire files, across different repositories is a common practice and cause of conflicts. Finally, we discuss how our results reveal the need for new research studies and suggest potential improvements to tools supporting collaborative software development.
- Published
- 2017
- Full Text
- View/download PDF
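An illustrative instance (hypothetical, not drawn from the studied projects) of the most frequent pattern reported above, in which both developers edit the same region of the same method; a semistructured merge tool still reports a conflict here, whereas the same two edits applied to different methods would be merged automatically.

```java
class Greeter {
    // Base:
    //     String greet(String name) { return "Hello, " + name; }
    //
    // Left edit  (same line): return "Hello, " + name + "!";
    // Right edit (same line): return "Hi, " + name;
    //
    // Both branches edit the same line of the same method body, so even a
    // semistructured merge, which freely reorders methods and fields, must
    // report a conflict and let a developer pick or combine the two texts:
    String greet(String name) {
        // <<<<<<< left
        // return "Hello, " + name + "!";
        // =======
        // return "Hi, " + name;
        // >>>>>>> right
        return "Hello, " + name + "!"; // one possible manual resolution
    }
}
```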
22. Evaluating and improving semistructured merge
- Author
-
Paulo Borba, Paola Accioly, and Guilherme Cavalcanti
- Subjects
Correctness ,Computer science ,False positives and false negatives ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Software merging ,Open source ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,False positive paradox ,Syntactic structure ,Data mining ,Safety, Risk, Reliability and Quality ,computer ,Merge (version control) ,Software - Abstract
While unstructured merge tools rely only on textual analysis to detect and resolve conflicts, semistructured merge tools go further by partially exploiting the syntactic structure and semantics of the involved artifacts. Previous studies compare these merge approaches with respect to the number of reported conflicts, showing, for most projects and merge situations, reduction in favor of semistructured merge. However, these studies do not investigate whether this reduction actually leads to integration effort reduction (productivity) without negative impact on the correctness of the merging process (quality). To analyze that, and better understand how merge tools could be improved, in this paper we reproduce more than 30,000 merges from 50 open source projects, identifying conflicts incorrectly reported by one approach but not by the other (false positives), and conflicts correctly reported by one approach but missed by the other (false negatives). Our results and complementary analysis indicate that, in the studied sample, the number of false positives is significantly reduced when using semistructured merge. We also find evidence that its false positives are easier to analyze and resolve than those reported by unstructured merge. However, we find no evidence that semistructured merge leads to fewer false negatives, and we argue that they are harder to detect and resolve than unstructured merge false negatives. Driven by these findings, we implement an improved semistructured merge tool that further combines both approaches to reduce the false positives and false negatives of semistructured merge. We find evidence that the improved tool, when compared to unstructured merge in our sample, reduces the number of reported conflicts by half, has no additional false positives, has at least 8% fewer false negatives, and is not prohibitively slower.
- Published
- 2017
- Full Text
- View/download PDF
23. Safe Evolution of Product Lines Using Configuration Knowledge Laws
- Author
-
Paulo Borba, Leopoldo Teixeira, and Rohit Gheyi
- Subjects
Algebraic laws ,Set (abstract data type) ,Automated theorem proving ,Computer science ,Formal semantics (linguistics) ,Product (mathematics) ,Law ,Product line ,Feature (machine learning) ,Software product line - Abstract
When evolving a software product line, it is often important to ensure that we do it in a safe way, ensuring that the resulting product line remains well-formed and that the behavior of existing products is not affected. To ensure this, one usually has to analyze the different artifacts that constitute a product line, like feature models, configuration knowledge and assets. Manually analyzing these artifacts can be time-consuming and error prone, since a product line might consist of thousands of products. Existing works show that a non-negligible number of changes performed in commits deal only with the configuration knowledge, that is, the mapping between features and assets. This way, in this paper, we propose a set of algebraic laws, which correspond to bi-directional transformations for configuration knowledge models, that we can use to justify safe evolution of product lines, when only the configuration knowledge model changes. Using a theorem prover, we proved all laws sound with respect to a formal semantics. We also present a case study, where we use these laws to justify safe evolution scenarios of a non trivial industrial software product line.
- Published
- 2020
- Full Text
- View/download PDF
24. Semistructured Merge in JavaScript Systems
- Author
-
Sergio Soares, Alberto Trindade Tavares, Paulo Borba, and Guilherme Cavalcanti
- Subjects
Correctness ,Computer science ,Programming language ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,JavaScript ,Scripting language ,Merge algorithm ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,0202 electrical engineering, electronic engineering, information engineering ,Merge (version control) ,computer ,computer.programming_language - Abstract
Industry widely uses unstructured merge tools that rely on textual analysis to detect and resolve conflicts between code contributions. Semistructured merge tools go further by partially exploring the syntactic structure of code artifacts, and, as a consequence, obtaining significant merge accuracy gains for Java-like languages. To understand whether semistructured merge and the observed gains generalize to other kinds of languages, we implement two semistructured merge tools for JavaScript, and compare them to an unstructured tool. We find that current semistructured merge algorithms and frameworks are not directly applicable for scripting languages like JavaScript. By adapting the algorithms, and studying 10,345 merge scenarios from 50 JavaScript projects on GitHub, we find evidence that our JavaScript tools report fewer spurious conflicts than unstructured merge, without compromising the correctness of the merging process. The gains, however, are much smaller than the ones observed for Java-like languages, suggesting that semistructured merge advantages might be limited for languages that allow both commutative and non-commutative declarations at the same syntactic level.
- Published
- 2019
- Full Text
- View/download PDF
25. The Impact of Structure on Software Merging: Semistructured Versus Structured Merge
- Author
-
Georg Seibt, Paulo Borba, Guilherme Cavalcanti, and Sven Apel
- Subjects
Software merging ,Information retrieval ,Computer science ,Data_FILES ,0202 electrical engineering, electronic engineering, information engineering ,False positive paradox ,020207 software engineering ,02 engineering and technology ,Merge (version control) - Abstract
Merge conflicts often occur when developers concurrently change the same code artifacts. While state of practice unstructured merge tools (e.g. Git merge) try to automatically resolve merge conflicts based on textual similarity, semistructured and structured merge tools try to go further by exploiting the syntactic structure and semantics of the artifacts involved. Although there is evidence that semistructured merge has significant advantages over unstructured merge, and that structured merge reports significantly fewer conflicts than unstructured merge, it is unknown how semistructured merge compares with structured merge. To help developers decide which kind of tool to use, we compare semistructured and structured merge in an empirical study by reproducing more than 40,000 merge scenarios from more than 500 projects. In particular, we assess how often the two merge strategies report different results: we identify conflicts incorrectly reported by one but not by the other (false positives), and conflicts correctly reported by one but missed by the other (false negatives). Our results show that semistructured and structured merge differ in 24% of the scenarios with conflicts. Semistructured merge reports more false positives, whereas structured merge has more false negatives. Finally, we found that adapting a semistructured merge tool to resolve a particular kind of conflict makes semistructured and structured merge even closer.
- Published
- 2019
- Full Text
- View/download PDF
26. Improving the prediction of files changed by programming tasks
- Author
-
Thaís Rocha, João Pedro Santos, and Paulo Borba
- Subjects
Development environment ,Recall ,business.industry ,Acceptance testing ,Computer science ,Software engineering ,business ,Precision and recall ,Merge (version control) ,Software quality - Abstract
Integration conflicts often damage software quality and developers' productivity in a collaborative development environment. For reducing merge conflicts, we could avoid asking developers to execute potentially conflicting tasks in parallel, as long as we can predict the files to be changed by each task. As manually predicting that is hard, the TAITI tool tries to do that in the context of BDD (Behaviour-Driven Development) projects, by statically analysing the automated acceptance tests that validate each task. TAITI computes the set of files that might be executed by the tests of a task (a so called test-based task interface), approximating the files that developers will change when performing the task. Although TAITI performs better than a random task interface, there is space for accuracy improvements. In this paper, we extend the interfaces computed by TAITI by including the dependences of the application files reached by the task tests. To understand the potential benefits of our extension, we evaluate precision and recall of 60 task interfaces from 8 Rails GitHub projects. The results bring evidence that the extended interface improves recall by slightly compromising precision.
- Published
- 2019
- Full Text
- View/download PDF
27. An idiom to represent data types in Alloy
- Author
-
Rohit Gheyi, Márcio Ribeiro, Augusto Sampaio, and Paulo Borba
- Subjects
Interpretation (logic) ,Computer science ,Programming language ,020207 software engineering ,Context (language use) ,0102 computer and information sciences ,02 engineering and technology ,Type (model theory) ,computer.software_genre ,01 natural sciences ,Data type ,Computer Science Applications ,Alloy Analyzer ,Unified Modeling Language ,010201 computation theory & mathematics ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,0202 electrical engineering, electronic engineering, information engineering ,computer ,Software ,Axiom ,Language construct ,Information Systems ,computer.programming_language - Abstract
Context: It is common to consider Alloy signatures or UML classes as data types that have a canonical fixed interpretation: the elements of the type correspond to terms recursively generated by the type constructors. However, these language constructs resemble data types but, strictly, they are not. Objective: In this article, we propose an idiom to specify data types in Alloy. Method: We compare our approach to others in the context of checking data refinement using the Alloy Analyzer tool. Results: Some previous studies do not include the generation axiom and may perform unsound analysis. Other studies recommend some optimizations to overcome a limitation in the Alloy Analyzer tool. Conclusion: The problem is not related to the tool but the way data types must be represented in Alloy. This study shows the importance of using automated analyses to test translation between different language constructs.
- Published
- 2017
- Full Text
- View/download PDF
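To illustrate the role of the generation axiom mentioned in the abstract above (a schematic, first-order rendering using natural numbers as an assumed example, not the paper's Alloy encoding):

```latex
% Constructors: Zero and Succ. The last axiom is the generation ("no junk")
% condition; if it is omitted, models may contain elements not built by any
% constructor, and a data-refinement check over such models can be unsound.
\begin{align*}
&\forall x,\, y.\; \mathit{Succ}(x) = \mathit{Succ}(y) \;\rightarrow\; x = y   &&\text{(injectivity)}\\
&\forall x.\; \mathit{Succ}(x) \neq \mathit{Zero}                              &&\text{(no confusion)}\\
&\forall n.\; n = \mathit{Zero} \;\lor\; \exists m.\; n = \mathit{Succ}(m)     &&\text{(generation)}
\end{align*}
```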
28. Assessing fine-grained feature dependencies
- Author
-
Márcio Ribeiro, Baldoino Fonseca, Rohit Gheyi, Flávio Medeiros, Paulo Borba, and Iran Rodrigues
- Subjects
Parsing ,Source code ,Computer science ,Programming language ,media_common.quotation_subject ,020207 software engineering ,Context (language use) ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Computer Science Applications ,Task (project management) ,Dependency theory (database theory) ,010104 statistics & probability ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Preprocessor ,0101 mathematics ,Function (engineering) ,computer ,Software ,Information Systems ,media_common - Abstract
Context: Maintaining software families is not a trivial task. Developers commonly introduce bugs when they do not consider existing dependencies among features. When such implementations share program elements, such as variables and functions, inadvertently using these elements may result in bugs. In this context, previous work focuses only on the occurrence of intraprocedural dependencies, that is, when features share program elements within a function. But at the same time, we still lack studies investigating dependencies that transcend the boundaries of a function, since these cases might cause bugs as well.Objective: This work assesses to what extent feature dependencies exist in actual software families, answering research questions regarding the occurrence of intraprocedural, global, and interprocedural dependencies and their characteristics.Method: We perform an empirical study covering 40 software families of different domains and sizes. We use a variability-aware parser to analyze families source code while retaining all variability information.Results: Intraprocedural and interprocedural feature dependencies are common in the families we analyze: more than half of functions with preprocessor directives have intraprocedural dependencies, while over a quarter of all functions have interprocedural dependencies. The median depth of interprocedural dependencies is 9.Conclusion: Given these dependencies are rather common, there is a need for tools and techniques to raise developers awareness in order to minimize or avoid problems when maintaining code in the presence of such dependencies. Problems regarding interprocedural dependencies with high depths might be harder to detect and fix.
- Published
- 2016
- Full Text
- View/download PDF
29. Understanding predictive factors for merge conflicts
- Author
-
Klissiomara L. Dias, Paulo Borba, and Marcos Barreto
- Subjects
Computer science ,business.industry ,020207 software engineering ,02 engineering and technology ,Modular design ,Python (programming language) ,Data science ,Computer Science Applications ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Predictive power ,Project management ,business ,Merge (version control) ,computer ,Software ,Predictive modelling ,Information Systems ,computer.programming_language - Abstract
Context: Merge conflicts often occur when developers change the same code artifacts. Such conflicts might be frequent in practice, and resolving them might be costly and is an error-prone activity. Objective: To minimize these problems by reducing merge conflicts, it is important to better understand how conflict occurrence is affected by technical and organizational factors. Method: With that aim, we investigate seven factors related to modularity, size, and timing of developers contributions. To do so, we reproduce and analyze 73504 merge scenarios in GitHub repositories of Ruby and Python MVC projects. Results: We find evidence that the likelihood of merge conflict occurrence significantly increases when contributions to be merged are not modular in the sense that they involve files from the same MVC slice (related model, view, and controller files). We also find bigger contributions involving more developers, commits, and changed files are more likely associated with merge conflicts. Regarding the timing factors, we observe contributions developed over longer periods of time are more likely associated with conflicts. No evaluated factor shows predictive power concerning both the number of merge conflicts and the number of files with conflicts. Conclusion: Our results could be used to derive recommendations for development teams and merge conflict prediction models. Project management and assistive tools could benefit from these models.
- Published
- 2020
- Full Text
- View/download PDF
30. Understanding Semi-structured merge conflict characteristics in open-source Java projects (journal-first abstract)
- Author
-
Paulo Borba, Paola Accioly, and Guilherme Cavalcanti
- Subjects
Open source ,Java ,Computer science ,Empirical process (process control model) ,Data science ,Merge (version control) ,computer ,computer.programming_language - Abstract
In a collaborative development environment, tasks are commonly assigned to developers working independent from each other. As a result, when trying to integrate these contributions, one might have to deal with conflicting changes. Such conflicts might be detected when merging contributions (merge conflicts), when building the system (build conflicts), or when running tests (semantic conflicts). Regarding such conflicts, previous studies show that they occur frequently, and impair developers’ productivity, as understanding and solving them is a demanding and tedious task that might introduce defects. However, despite the existing evidence in the literature, the structure of changes that lead to conflicts has not been studied yet. Understanding the underlying structure of conflicts, and the involved syntactic language elements, might shed light on how to better avoid them. For example, awareness tools that inform users about ongoing parallel changes such as Syde and Palantir can benefit from knowing the most common conflict patterns to become more efficient. With that aim, in this paper we focus on understanding the underlying structure of merge conflicts.
- Published
- 2018
- Full Text
- View/download PDF
31. Analyzing conflict predictors in open-source Java projects
- Author
-
Paola Accioly, Guilherme Cavalcanti, Paulo Borba, and Leuson Mario Pedro da Silva
- Subjects
Recall ,Java ,Computer science ,020207 software engineering ,02 engineering and technology ,Open source software ,Data science ,Empirical research ,Open source ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Precision and recall ,Empirical evidence ,Merge (version control) ,computer ,computer.programming_language - Abstract
In collaborative development environments integration conflicts occur frequently. To alleviate this problem, different awareness tools have been proposed to alert developers about potential conflicts before they become too complex. However, there is not much empirical evidence supporting the strategies used by these tools. Learning about what types of changes most likely lead to conflicts might help to derive more appropriate requirements for early conflict detection, and suggest improvements to existing conflict detection tools. To bring such evidence, in this paper we analyze the effectiveness of two types of code changes as conflict predictors. Namely, editions to the same method, and editions to directly dependent methods. We conduct an empirical study analyzing part of the development history of 45 Java projects from GitHub and Travis CI, including 5,647 merge scenarios, to compute the precision and recall for the conflict predictors aforementioned. Our results indicate that the predictors combined have a precision of 57.99% and a recall of 82.67%. Moreover, we conduct a manual analysis which provides insights about strategies that could further increase the precision and the recall.
- Published
- 2018
- Full Text
- View/download PDF
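For reference, the figures above follow the standard definitions, where a true positive (TP) is a merge scenario flagged by a predictor that indeed resulted in a conflict, a false positive (FP) a flagged scenario with no conflict, and a false negative (FN) a conflicting scenario that was not flagged:

```latex
\[
\mathrm{precision} = \frac{TP}{TP + FP} \approx 57.99\%, \qquad
\mathrm{recall} = \frac{TP}{TP + FN} \approx 82.67\%
\]
```

That is, roughly 58% of the scenarios flagged by the combined predictors actually conflicted, while about 83% of the conflicting scenarios were flagged.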
32. Grupo de estudos sobre a proteção internacional de minorias da Faculdade de Direito da Universidade de São Paulo – Gepim/USP
- Author
-
Felipe Nicolau Pimentel Alamino and Paulo Borba Casella
- Subjects
General Medicine
- Published
- 2019
- Full Text
- View/download PDF
33. Safe evolution templates for software product lines
- Author
-
Leopoldo Teixeira, L. Neves, Paulo Borba, L. Turnes, U. Kulesza, D. Sena, and Vander Alves
- Subjects
Engineering ,business.industry ,media_common.quotation_subject ,Work (physics) ,Product engineering ,Lead (geology) ,Template ,Software ,Hardware and Architecture ,Systems engineering ,Quality (business) ,Product (category theory) ,Software engineering ,business ,Set (psychology) ,Information Systems ,media_common - Abstract
Highlights: We extend our investigation of compositional product lines with more subjects. We also investigate annotative product lines, and propose templates for this context. We contribute to the body of evidence on safe evolution of product lines. We bring additional evidence of the expressiveness of the proposed templates. Software product lines enable generating related software products from reusable assets. Adopting a product line strategy can bring significant quality and productivity improvements. However, evolving a product line can be risky, since it might impact many products. When introducing new features or improving its design, it is important to make sure that the behavior of existing products is not affected. To ensure that, one usually has to analyze different types of artifacts, an activity that can lead to errors. To address this issue, in this work we discover and analyze concrete evolution scenarios from five different product lines. We discover a total of 13 safe evolution templates, which are generic transformations that developers can apply when evolving compositional and annotative product lines, with the goal of preserving the behavior of existing products. We also evaluate the templates by analyzing the evolution history of these product lines. In this evaluation, we observe that the templates can address the modifications that developers performed in the analyzed scenarios, which corroborates the expressiveness of our template set. We also observe that the templates could have helped to avoid the errors that we identified during our analysis.
- Published
- 2015
- Full Text
- View/download PDF
34. Empirical assessment of two approaches for specifying software product line use case scenarios
- Author
-
Paulo Borba, Cristiano Ferraz, Paola Accioly, and Rodrigo Bonifácio
- Subjects
Modularity (networks) ,Source code ,Requirements engineering ,Computer science ,business.industry ,media_common.quotation_subject ,020207 software engineering ,Cohesion (computer science) ,Context (language use) ,02 engineering and technology ,Reliability engineering ,020204 information systems ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,Use case ,Relevance (information retrieval) ,Software product line ,Software engineering ,business ,Software ,media_common - Abstract
Modularity benefits, including the independent maintenance and comprehension of individual modules, have been widely advocated. However, empirical assessments to investigate those benefits have mostly focused on source code, and thus, the relevance of modularity to earlier artifacts is still not so clear (such as requirements and design models). In this paper, we use a multimethod technique, including designed experiments, to empirically evaluate the benefits of modularity in the context of two approaches for specifying product line use case scenarios: PLUSS and MSVCM. The first uses an annotative approach for specifying variability, whereas the second relies on aspect-oriented constructs for separating common and variant scenario specifications. After evaluating these approaches through the specifications of several systems, we find out that MSVCM reduces feature scattering and improves scenario cohesion. These results suggest that evolving a product line specification using MSVCM requires only localized changes. On the other hand, the results of six experiments reveal that MSVCM requires more time to derive the product line specifications and, contrasting with the modularity results, reduces the time to evolve a product line specification only when the subjects have been well trained and are used to the task of evolving product line specifications.
- Published
- 2015
- Full Text
- View/download PDF
35. Coevolution of variability models and related software artifacts
- Author
-
Andrzej Wąsowski, Paulo Borba, Nicolas Dintzner, Jianmei Guo, Leopoldo Teixeira, Leonardo Passos, Sven Apel, and Krzysztof Czarnecki
- Subjects
Source code ,Computer science ,business.industry ,media_common.quotation_subject ,020207 software engineering ,Linux kernel ,02 engineering and technology ,Artifact (software development) ,Variation (game tree) ,computer.software_genre ,Machine learning ,Personalization ,Kernel (image processing) ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Data mining ,Software system ,Artificial intelligence ,business ,computer ,Software ,Coevolution ,media_common - Abstract
Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e.g., systems options) and the constraints they impose. The explicit representation of features allows them to be referenced in different variation points across different artifacts, enabling the latter to vary according to specific feature selections. In such settings, the evolution of variability models interplays with the evolution of related artifacts, requiring the two to evolve together, or coevolve. Interestingly, little is known about how such coevolution occurs in real-world systems, as existing research has focused mostly on variability evolution as it happens in variability models only. Furthermore, existing techniques supporting variability evolution are usually validated with randomly-generated variability models or evolution scenarios that do not stem from practice. As the community lacks a deep understanding of how variability evolution occurs in real-world systems and how it relates to the evolution of different kinds of software artifacts, it is not surprising that industry reports existing tools and solutions ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source code. From the analysis of the patterns, we report on findings concerning evolution principles found in the kernel, and we reveal deficiencies in existing tools and theory when handling changes captured by our patterns.
- Published
- 2015
- Full Text
- View/download PDF
36. Products go Green
- Author
-
Marco Couto, Paulo Borba, João Paulo Fernandes, João Saraiva, Jácome Cunha, and Rui Pereira
- Subjects
Measure (data warehouse) ,Engineering ,business.industry ,020206 networking & telecommunications ,020207 software engineering ,Static program analysis ,02 engineering and technology ,Energy consumption ,Standard deviation ,Reliability engineering ,Software ,Worst-case execution time ,0202 electrical engineering, electronic engineering, information engineering ,business ,Energy (signal processing) ,Efficient energy use - Abstract
The optimization of software to be (more) energy efficient is becoming a major concern for the software industry. Although several techniques have been presented to measure energy consumption for software, none has addressed software product lines (SPLs). Thus, to measure energy consumption of an SPL, the products must be generated and measured individually, which is too costly. In this paper, we present a technique and a prototype tool to statically estimate the worst-case energy consumption for an SPL. The goal is to provide developers with techniques and tools to reason about the energy consumption of all products in an SPL, without having to produce, run and measure the energy in all of them. Our technique combines static program analysis techniques and worst-case execution time prediction with energy consumption analysis. This technique analyzes all products in a feature-sensitive manner, that is, a feature used in several products is analyzed only once, while the energy consumption is estimated once per product. We implemented our technique in a tool called Serapis. We did a preliminary evaluation using a product line for image processing implemented in C. Our experiments considered 7 products from that line, and our initial results show that the tool was able to estimate the worst-case energy consumption with a mean error percentage of 9.4% and standard deviation of 6.2% when compared with the energy measured when running the products.
- Published
- 2017
- Full Text
- View/download PDF
37. Should We Replace Our Merge Tools?
- Author
-
Paola Accioly, Guilherme Cavalcanti, and Paulo Borba
- Subjects
Correctness ,Information retrieval ,Theoretical computer science ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,Merge (version control) - Abstract
While unstructured merge tools try to automatically resolve merge conflicts via textual similarity, semistructured merge tools try to go further by partially exploiting the syntactic structure and semantics of the involved artefacts. Previous studies compare these merge approaches with respect to the number of reported conflicts, showing, for most projects and merge situations, a reduction in favor of semistructured merge. However, these studies do not investigate whether this reduction actually leads to integration effort reduction (Productivity) without negative impact on the correctness of the merging process (Quality). To analyze this, and to better understand how these tools could be improved, we propose empirical studies to identify spurious conflicts reported by one approach but not by the other, and interference reported as conflict by one approach but missed by the other.
- Published
- 2017
- Full Text
- View/download PDF
38. Making refactoring safer through impact analysis
- Author
-
Leopoldo Teixeira, Gustavo Soares, Paulo Borba, Melina Mongiovi, and Rohit Gheyi
- Subjects
Correctness ,Transformation (function) ,Code refactoring ,Computer science ,Programming language ,Semantics (computer science) ,SAFER ,Test suite ,Change impact analysis ,computer.software_genre ,computer ,Software ,Generator (mathematics) - Abstract
Currently most developers have to apply manual steps and use test suites to improve confidence that transformations applied to object-oriented (OO) and aspect-oriented (AO) programs are correct. However, it is not simple to do manual reasoning, due to the nontrivial semantics of OO and AO languages. Moreover, most refactoring implementations contain a number of bugs since it is difficult to establish all conditions required for a transformation to be behavior preserving. In this article, we propose a tool (SafeRefactorImpact) that analyzes the transformation and generates tests only for the methods impacted by a transformation identified by our change impact analyzer (Safira). We compare SafeRefactorImpact with our previous tool (SafeRefactor) with respect to correctness, performance, number of methods passed to the automatic test suite generator, change coverage, and number of relevant tests generated in 45 transformations. SafeRefactorImpact identifies behavioral changes undetected by SafeRefactor. Moreover, it reduces the number of methods passed to the test suite generator. Finally, SafeRefactorImpact has a better change coverage in larger subjects, and generates more relevant tests than SafeRefactor.
- Published
- 2014
- Full Text
- View/download PDF
39. Abdominal Manifestations of Lymphoma: Spectrum of Imaging Features
- Author
-
Paulo Borba-Filho, Marcella Farias, Adonis Manzella, and Giuseppe D'Ippolito
- Subjects
Gastrointestinal tract ,Pathology ,medicine.medical_specialty ,Genitourinary system ,business.industry ,Review Article ,medicine.disease ,Lymphoma ,Extranodal Disease ,Abdominal wall ,medicine.anatomical_structure ,immune system diseases ,Biliary tract ,hemic and lymphatic diseases ,medicine ,Abdomen ,Pancreas ,business - Abstract
Non-Hodgkin and Hodgkin lymphomas frequently involve many structures in the abdomen and pelvis. Extranodal disease is more common with Non-Hodgkin’s lymphoma than with Hodgkin's lymphoma. Though it may be part of a systemic lymphoma, single onset of nodal lymphoma is not rare. Extranodal lymphoma has been described in virtually every organ and tissue. In decreasing order of frequency, the spleen, liver, gastrointestinal tract, pancreas, abdominal wall, genitourinary tract, adrenal, peritoneal cavity, and biliary tract are involved. The purpose of this review is to discuss and illustrate the spectrum of appearances of nodal and extranodal lymphomas, including AIDS-related lymphomas, in the abdominopelvic region using a multimodality approach, especially cross-sectional imaging techniques. The most common radiologic patterns of involvement are illustrated. Familiarity with the imaging manifestations that are diagnostically specific for lymphoma is important because imaging plays an important role in the noninvasive management of disease.
- Published
- 2013
- Full Text
- View/download PDF
40. A design rule language for aspect-oriented programming
- Author
-
Márcio Ribeiro, Carlos Eduardo Pontual, Fernando Castor, Rodrigo Bonifácio, Alberto Costa Neto, and Paulo Borba
- Subjects
COLA (software architecture) ,Class (computer programming) ,Modularity (networks) ,Computer science ,Programming language ,Process (engineering) ,Aspect-oriented programming ,AspectJ ,Specification language ,computer.software_genre ,Hardware and Architecture ,Compiler ,computer ,Software ,Information Systems ,computer.programming_language - Abstract
Highlights: We present a design rule specification language for aspect-oriented systems. We explore its benefits to supporting the modular development of classes and aspects. We discuss how our language improves crosscutting modularity without breaking class modularity. We present a Compiler for LSD and AspectJ (COLA), a tool to automate design rules checking. We evaluate it using a real case study and compare it with other approaches. Aspect-oriented programming is known as a technique for modularizing crosscutting concerns. However, constructs aimed to support crosscutting modularity might actually break class modularity. As a consequence, class developers face changeability, parallel development and comprehensibility problems, because they must be aware of aspects whenever they develop or maintain a class. At the same time, aspects are vulnerable to changes in classes, since there is no contract specifying the points of interaction amongst these elements. These problems can be mitigated by using adequate design rules between classes and aspects. We present a design rule specification language and explore its benefits from the initial phases of the development process, especially with the aim of supporting modular development of classes and aspects. We discuss how our language improves crosscutting modularity without breaking class modularity. We evaluate it using a real case study and compare it with other approaches.
- Published
- 2013
- Full Text
- View/download PDF
41. SPL LIFT
- Author
-
Paulo Borba, Társis Tolêdo, Mira Mezini, Eric Bodden, Claus Brabrand, and Márcio Ribeiro
- Subjects
Class (computer programming) ,Java ,Computer science ,business.industry ,Programming language ,Reuse ,computer.software_genre ,Base (topology) ,Computer Graphics and Computer-Aided Design ,Software ,Product (mathematics) ,Code (cryptography) ,Software product line ,business ,computer ,computer.programming_language - Abstract
A software product line (SPL) encodes a potentially large variety of software products as variants of some common code base. Up until now, re-using traditional static analyses for SPLs was virtually intractable, as it required programmers to generate and analyze all products individually. In this work, however, we show how an important class of existing inter-procedural static analyses can be transparently lifted to SPLs. Without requiring programmers to change a single line of code, our approach SPLLIFT automatically converts any analysis formulated for traditional programs within the popular IFDS framework for inter-procedural, finite, distributive, subset problems to an SPL-aware analysis formulated in the IDE framework, a well-known extension to IFDS. Using a full implementation based on Heros, Soot, CIDE and JavaBDD, we show that with SPLLIFT one can reuse IFDS-based analyses without changing a single line of code. Through experiments using three static analyses applied to four Java-based product lines, we were able to show that our approach produces correct results and outperforms the traditional approach by several orders of magnitude.
- Published
- 2013
- Full Text
- View/download PDF
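A minimal, purely illustrative sketch of the lifting idea summarized in the entry above: each dataflow fact is paired with a feature constraint recording in which products it holds. The class and interface names below are assumptions made for illustration; SPLLIFT itself builds on Heros, Soot, CIDE, and JavaBDD rather than this toy code.

```java
import java.util.Objects;

// A dataflow fact lifted to the product-line setting: the base IFDS fact plus a
// constraint over features describing the products in which the fact holds.
final class LiftedFact<D> {
    final D fact;
    final FeatureConstraint condition;

    LiftedFact(D fact, FeatureConstraint condition) {
        this.fact = fact;
        this.condition = condition;
    }

    // Flowing through code guarded by a feature conjoins that feature's constraint;
    // facts whose condition becomes unsatisfiable can be discarded.
    LiftedFact<D> underFeature(FeatureConstraint guard) {
        return new LiftedFact<>(fact, condition.and(guard));
    }

    // Reaching the same fact along different feature-conditional paths merges the
    // constraints by disjunction (an IDE-style meet over the value domain).
    LiftedFact<D> mergeWith(LiftedFact<D> other) {
        assert Objects.equals(fact, other.fact);
        return new LiftedFact<>(fact, condition.or(other.condition));
    }
}

// Stand-in for a BDD-backed feature constraint such as JavaBDD would provide.
interface FeatureConstraint {
    FeatureConstraint and(FeatureConstraint other);
    FeatureConstraint or(FeatureConstraint other);
    boolean isSatisfiable();
}
```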
42. Safe composition of configuration knowledge-based software product lines
- Author
-
Paulo Borba, Leopoldo Teixeira, and Rohit Gheyi
- Subjects
Semantics (computer science) ,Programming language ,Computer science ,business.industry ,Scale (chemistry) ,Feature extraction ,Propositional calculus ,computer.software_genre ,Feature model ,Alloy Analyzer ,Software ,Feature (computer vision) ,Software_SOFTWAREENGINEERING ,Hardware and Architecture ,Formal specification ,Product (mathematics) ,Code (cryptography) ,Use case ,Software product line ,Software engineering ,business ,computer ,Information Systems - Abstract
Mistakes made when implementing or specifying the models of a Software Product Line (SPL) can result in ill-formed products - the safe composition problem. Such a problem can hinder productivity and might be hard to detect, since SPLs can have thousands of products. In this article, we propose a language-independent approach for verifying safe composition of SPLs with dedicated Configuration Knowledge models. We translate the feature model and Configuration Knowledge into propositional logic and use the Alloy Analyzer to perform the verification. To provide evidence for the generality of our approach, we instantiate it in different compositional settings. We deal with different kinds of assets, such as use case scenarios and Eclipse RCP components. We analyze both the code and the requirements of a larger-scale SPL, finding problems that affect thousands of products in minutes. Moreover, our evaluation suggests that the analysis time grows linearly with respect to the number of products in the analyzed SPLs.
- Published
- 2013
- Full Text
- View/download PDF
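The propositional check sketched in the entry above can be summarized, in a simplified and assumed form (not necessarily the paper's exact encoding), as a validity check over the feature model (FM), the Configuration Knowledge (CK), and the dependencies among assets:

```latex
% Safe composition as a validity check: for every feature selection allowed by FM,
% the assets that CK includes must have the assets they depend on included as well.
\[
  \mathit{FM} \wedge \mathit{CK} \;\Rightarrow\;
  \bigwedge_{(a,\,b)\,\in\,\mathit{depends}} \bigl( e_a \Rightarrow e_b \bigr)
\]
% Here $e_a$ denotes the presence condition that CK associates with asset $a$, and
% $(a,b) \in \mathit{depends}$ means asset $a$ refers to declarations in asset $b$.
```

Because the formula ranges over feature selections rather than products, a solver such as the Alloy Analyzer can discharge it without enumerating the thousands of products mentioned in the abstract.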
43. The crosscutting impact of the AOSD Brazilian research community
- Author
-
Fabiano Cutigi Ferrari, Uirá Kulesza, Paulo Cesar Masiero, Cecília M. F. Rubira, Thais Batista, Cláudio Sant'Anna, Fábio Fagundes Silveira, Eduardo Figueiredo, Marco Valente, Jaelson Castro, Rodrigo Bonifácio, Fernanda M. R. Alencar, Paulo F. Pires, Eduardo Kessler Piveta, Vander Alves, Valter Vieira de Camargo, Lyrene Fernandes da Silva, Sergio Soares, Carla Silva, Ricardo Argenton Ramos, Rosana Teresinha Vaccare Braga, Otávio Augusto Lazzarini Lemos, Fernando Castor, Nabor C. Mendonça, Paulo Borba, Flavia C. Delicato, Arndt von Staa, Roberta Coelho, Rosangela Aparecida Dellosso Penteado, Julio Cesar Sampaio do Prado Leite, Christina Chavez, Nelio Cacho, and Carlos José Pereira de Lucena
- Subjects
Engineering ,Data collection ,Process management ,Management science ,business.industry ,Separation of concerns ,Software development ,International community ,Timeline ,Aspect-oriented software development ,SISTEMAS DE INFORMAÇÃO ,Variety (cybernetics) ,Software development process ,Hardware and Architecture ,business ,Software ,Information Systems - Abstract
Background: Aspect-Oriented Software Development (AOSD) is a paradigm that promotes advanced separation of concerns and modularity throughout the software development lifecycle, with a distinctive emphasis on modular structures that cut across traditional abstraction boundaries. In the last 15 years, research on AOSD has grown around the world. The AOSD-BR research community (AOSD-BR stands for AOSD in Brazil) emerged in the last decade and has provided contributions on a variety of topics. However, despite some evidence in terms of the number and quality of its outcomes, there is no organized characterization of the AOSD-BR community that positions it against the international AOSD research community and the Software Engineering research community in Brazil. Aims: In this paper, our main goal is to characterize the AOSD-BR community with respect to the research developed in the last decade, confronting it with the international AOSD community and the Brazilian Software Engineering community. Method: Data collection, validation, and analysis were performed in collaboration with several researchers of the AOSD-BR community. The characterization is presented from three different perspectives: (i) a historical timeline of events and main milestones achieved by the community; (ii) an overview of the research developed by the community, in terms of key challenges, open issues, and related work; and (iii) an analysis of the impact of the AOSD-BR community's outcomes in terms of well-known indicators, such as the number of papers and the number of citations. Results: Our analysis showed that the AOSD-BR community has impacted both the international AOSD research community and the Software Engineering research community in Brazil.
- Published
- 2013
- Full Text
- View/download PDF
44. Safe Evolution of Software Product Lines: Feature Extraction Scenarios
- Author
-
Leopoldo Teixeira, Paulo Borba, and Fernando Benbassat
- Subjects
Engineering ,Java ,business.industry ,media_common.quotation_subject ,Feature extraction ,Context (language use) ,Product engineering ,Reliability engineering ,Software ,Template ,Product (mathematics) ,Quality (business) ,Software engineering ,business ,computer ,media_common ,computer.programming_language - Abstract
Software Product Lines can improve productivity and product quality, but product line maintenance is not simple, since a single change can impact several products. In many situations, it is desirable to provide some assurance that we can safely change an SPL, in the sense that the behaviour of existing products is preserved. Developers can rely on previously proposed safe evolution notions, realized as transformation templates, to ensure safe evolution. However, the existing templates had only been applied in scenarios where a product line expands, and had not been evaluated in the context of extracting features from existing code. Therefore, we conducted a study using an industrial system developed in Java with 400 KLOC. This study revealed the need for new templates to address feature extraction scenarios, as well as the need to improve the existing templates' notation to support more expressive mappings between features and code assets. As a result of this study, we successfully extracted a product line from this existing system using the proposed templates, and also found evidence that the new templates can help to prevent defects during product line evolution.
- Published
- 2016
- Full Text
- View/download PDF
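To make the feature extraction scenario above concrete, here is a hypothetical before/after sketch in Java; it is only in the spirit of the paper's transformation templates, and every class and feature name is invented for illustration.

```java
// Before: the optional "audit" feature is tangled with mandatory payment code.
class PaymentServiceBefore {
    void pay(long orderId) {
        charge(orderId);
        AuditLog.record(orderId);  // feature-specific call mixed into core code
    }
    void charge(long orderId) { /* mandatory behavior */ }
}

// After: the feature-specific call sits behind a variation point. Products with
// the AUDIT feature bind AuditLog-backed behavior; the others bind a no-op, so
// the behavior of existing products is preserved.
class PaymentServiceAfter {
    private final AuditHook audit;  // chosen by the product configuration

    PaymentServiceAfter(AuditHook audit) { this.audit = audit; }

    void pay(long orderId) {
        charge(orderId);
        audit.record(orderId);
    }
    void charge(long orderId) { /* mandatory behavior */ }
}

interface AuditHook { void record(long orderId); }

class AuditLog {
    static void record(long orderId) { /* audit-feature behavior */ }
}
```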
45. A theory of software product line refinement
- Author
-
Rohit Gheyi, Leopoldo Teixeira, and Paulo Borba
- Subjects
Soundness ,General Computer Science ,Basis (linear algebra) ,Refactoring ,Computer science ,Programming language ,Software evolution ,Software product lines ,Refinement ,computer.software_genre ,Feature model ,Theoretical Computer Science ,Transformation (function) ,Code refactoring ,Product (mathematics) ,Software product line ,computer ,Computer Science(all) - Abstract
To safely evolve a software product line, it is important to have a notion of product line refinement that assures behavior preservation of the original product line's products. In this article, we therefore present a language-independent theory of product line refinement, establishing refinement properties that justify stepwise and compositional product line evolution. Moreover, we instantiate our theory with the formalization of specific languages for typical product line artifacts, and then introduce and prove soundness of a number of associated product line refinement transformation templates. These templates can be used to reason about specific product lines and as a basis to derive comprehensive product line refinement catalogues.
- Published
- 2012
- Full Text
- View/download PDF
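The refinement notion the entry above relies on can be paraphrased roughly as follows (a simplified rendering, not the article's exact formal definition):

```latex
% A product line PL is refined by PL' when every product of PL has a
% behavior-preserving counterpart among the products of PL'.
\[
  \mathit{PL} \sqsubseteq \mathit{PL}' \;\iff\;
  \forall\, p \in \mathit{products}(\mathit{PL}).\;
  \exists\, p' \in \mathit{products}(\mathit{PL}').\; p \sqsubseteq p'
\]
% Reflexivity and transitivity of this relation are what justify stepwise
% evolution; compositionality properties justify evolving artifacts separately.
```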
46. Brain Magnetic Resonance Imaging Findings in Young Patients with Hepatosplenic Schistosomiasis Mansoni without Overt Symptoms
- Author
-
Adonis Manzella, Carlos Teixeira Brandt, Keyla Fontes de Oliveira, and Paulo Borba-Filho
- Subjects
Adult ,Male ,Pathology ,medicine.medical_specialty ,Adolescent ,Schistosomiasis ,Basal Ganglia ,Young Adult ,Virology ,Basal ganglia ,medicine ,Humans ,Prospective Studies ,Young adult ,Child ,Prospective cohort study ,Splenic Diseases ,medicine.diagnostic_test ,business.industry ,Hepatosplenic schistosomiasis ,Brain ,Magnetic resonance imaging ,Articles ,medicine.disease ,Magnetic Resonance Imaging ,Schistosomiasis mansoni ,Hyperintensity ,Infectious Diseases ,Female ,Parasitology ,Splenic disease ,business ,Brain Stem - Abstract
The purpose of this study was to describe the brain magnetic resonance imaging (MRI) findings in young patients with hepatosplenic schistosomiasis mansoni without overt neurologic manifestations. This study included 34 young persons (age range = 9–25 years) with hepatosplenic schistosomiasis mansoni who had been previously treated. Patients were scanned on a 1.5-T system with a protocol that included multiplanar pre-contrast and post-contrast sequences, and reports were completed by two radiologists after a consensus review. Twenty (58.8%) patients had MRI signal changes that were believed to be related to schistosomiasis mansoni. Twelve of the 20 patients had small focal hyperintensities on T2WI in the cerebral white matter, and eight patients had symmetric hyperintense basal ganglia on T1WI. There was a high frequency of brain MRI signal abnormalities in this series. Although not specific, these findings may be related to schistosomiasis.
- Published
- 2012
- Full Text
- View/download PDF
47. Modularity analysis of use case implementations
- Author
-
Paulo Borba and Fernanda d'Amorim
- Subjects
Object-oriented programming ,Theoretical computer science ,business.industry ,Computer science ,Aspect-oriented programming ,Multitier architecture ,Separation of concerns ,Empirical process (process control model) ,Cohesion (computer science) ,Software metric ,Hardware and Architecture ,Modular programming ,Information system ,Concurrent computing ,Use case ,Software engineering ,business ,Implementation ,Software ,Information Systems - Abstract
A component-based decomposition can result in implementations having use case code tangled with other concerns and scattered across components. Modularity mechanisms such as aspects, mixins, and virtual classes have been proposed to address this kind of problem. One can use such mechanisms to group together code related to a single use case. This paper quantitatively analyzes the impact of this kind of use case modularization. We apply one specific technique, aspect-oriented programming, to modularize the use case implementations of two information systems that conform to the layered architecture pattern. We extract traditional and contemporary metrics - including cohesion, coupling, and separation of concerns - to analyze modularity in terms of quality attributes such as changeability, support for independent development, and pluggability. Our findings indicate that the results of a given modularity analysis depend on other factors beyond the chosen system, metrics, and the applied modularity technique.
- Published
- 2012
- Full Text
- View/download PDF
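As a purely hypothetical illustration of the scattering problem analyzed in the entry above, the fragments of a single use case can end up spread across the layers of a layered architecture; the class names below are invented.

```java
// Code of the "cancel order" use case scattered across architectural layers.
class OrderFacade {                 // service/presentation layer
    void cancelOrder(long id) { new OrderBusiness().cancel(id); }
}

class OrderBusiness {               // business layer: more cancel-order logic
    void cancel(long id) { new OrderRepository().markCancelled(id); }
}

class OrderRepository {             // data layer: yet more cancel-order code
    void markCancelled(long id) { /* update persistent state */ }
}
// An aspect-oriented modularization would group these fragments into a single
// module per use case; the study measures how such a regrouping affects
// cohesion, coupling, and separation-of-concerns metrics.
```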
48. Imaging of Gossypibomas: Self-Assessment Module
- Author
-
Fabiana Farias, Adonis Manzella, Paulo Borba Filho, Eolo Albuquerque, and João Kaercher
- Subjects
Diagnostic Imaging ,Surgical Sponges ,Self-assessment ,medicine.medical_specialty ,business.industry ,Gossypibomas ,MEDLINE ,Contrast Media ,General Medicine ,Foreign Bodies ,Textilomas ,Postoperative Complications ,Predictive Value of Tests ,Risk Factors ,Predictive value of tests ,Medical imaging ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Medical physics ,business - Abstract
The educational objectives of this self-assessment module are for the participants to exercise, self-assess, and improve their understanding of the most important features of gossypibomas and the role of imaging in the diagnosis of these masses.
- Published
- 2009
- Full Text
- View/download PDF
49. Imaging of Gossypibomas: Pictorial Review
- Author
-
Eolo Albuquerque, Adonis Manzella, Paulo Borba Filho, Fabiana Farias, and João Kaercher
- Subjects
Diagnostic Imaging ,Surgical Sponges ,medicine.medical_specialty ,Surgical complication ,business.industry ,Gossypibomas ,Gossypiboma ,General Medicine ,Foreign Bodies ,medicine.disease ,Left behind ,Surgery ,Diagnosis, Differential ,Postoperative Complications ,medicine.anatomical_structure ,medicine ,Humans ,Abdomen ,Radiology, Nuclear Medicine and imaging ,Radiology ,Foreign body ,business - Abstract
Objective: Textiloma and gossypiboma are terms used to describe a mass of cotton matrix that is left behind in a body cavity during an operation. This is an uncommon surgical complication. Gossypibomas are most frequently discovered in the abdomen. Such foreign bodies can often mimic tumors or abscesses clinically or radiologically; however, they are rarely reported because of the medicolegal implications. The manifestations and complications of gossypibomas are so variable that diagnosis is difficult and patient morbidity is significant. Conclusion: This article discusses the clinical manifestations, pathophysiologic aspects, and most important complications related to gossypibomas; presents the classic imaging features of gossypibomas using a multitechnique approach; and shows some of the typical and atypical sites of gossypibomas.
- Published
- 2009
- Full Text
- View/download PDF
50. A Static Semantics for Alloy and its Impact in Refactorings
- Author
-
Paulo Borba, Tiago Massoni, and Rohit Gheyi
- Subjects
Theoretical computer science ,General Computer Science ,Modeling language ,Computer science ,business.industry ,Programming language ,Formal semantics (linguistics) ,Software development ,computer.software_genre ,Semantics ,Operational semantics ,object models ,Theoretical Computer Science ,Automated theorem proving ,theorem proving ,Code refactoring ,type system ,Semantics of logic ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Prototype Verification System ,business ,Failure semantics ,computer ,refactoring ,Computer Science(all) - Abstract
Refactorings are usually proposed in an ad hoc way because it is difficult to prove that they are sound with respect to a formal semantics; as a consequence, neither the absence of type errors nor the preservation of semantics is guaranteed. Developers using refactoring tools must therefore rely on compilation and tests to ensure type-correctness and semantics preservation, respectively, which may not be satisfactory for critical software development. In this paper, we formalize a static semantics for Alloy, which is a formal object-oriented modeling language, and encode it in the Prototype Verification System (PVS). This formalization of the static semantics can be useful for specifying and proving that transformations in general (not only refactorings) do not introduce type errors, as we show here.
- Published
- 2007
- Full Text
- View/download PDF
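As a simplified illustration of the kind of judgment such a static semantics assigns (an assumed form, not the paper's exact PVS encoding), a typing rule for Alloy's relational join might require the intermediate types to agree:

```latex
% Join of binary relations: the target type of e1 must match the source type of e2.
\[
  \frac{\Gamma \vdash e_1 : A \to B \qquad \Gamma \vdash e_2 : B \to C}
       {\Gamma \vdash e_1.e_2 : A \to C}
\]
% Showing that a refactoring preserves such judgments is what rules out the
% introduction of type errors in the transformed models.
```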