67 results
Search Results
2. Balanced knowledge distribution among software development teams—Observations from open‐ and closed‐source software development.
- Author: Shafiq, Saad; Mayr‐Dorn, Christoph; Mashkoor, Atif; Egyed, Alexander
- Subjects: COMPUTER software development; SOFTWARE engineering; COMPUTER software developers; COMPUTER software; AWARENESS
- Abstract
Summary: In software development, developer turnover is among the primary reasons for project failures, leading to a great void of knowledge and strain for newcomers. Unfortunately, no established methods exist to measure how problem domain knowledge is distributed among developers. Awareness of how this knowledge evolves and is owned by key developers in a project helps stakeholders reduce risks caused by turnover. To this end, this paper introduces a novel, realistic representation of problem domain knowledge distribution: the ConceptRealm. To construct the ConceptRealm, we employ a latent Dirichlet allocation model to represent textual features obtained from 300 K issues and 1.3 M comments from 518 open‐source projects. We analyze whether newly emerged issues and developers share similar concepts, and how aligned individual developers' concepts are with the team over time. We also investigate the impact of leaving developers on the frequency of concepts. Finally, we evaluate the soundness of our approach on a closed‐source software project, allowing validation of the results from a practical standpoint. We find that the ConceptRealm can represent the problem domain knowledge within a project and can be used to predict the alignment of developers with issues. We also observe that projects exhibit many keepers independent of project maturity and that abruptly leaving keepers correlates with a decline of their core concepts, as the remaining developers cannot quickly familiarize themselves with those concepts. [ABSTRACT FROM AUTHOR]
- Published: 2024
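As a toy illustration of the developer–issue alignment idea summarized in the abstract above (the developer names, three-concept vectors, and function names here are hypothetical, not taken from the paper), one might compare LDA-style topic distributions with cosine similarity:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def alignment(developer_topics, issue_topics):
    """Score how well each developer's concept profile matches an issue's."""
    return {dev: cosine(vec, issue_topics) for dev, vec in developer_topics.items()}

# Hypothetical per-developer topic distributions (e.g. inferred by a topic
# model over the text of issues/comments each developer touched).
devs = {
    "alice": [0.7, 0.2, 0.1],   # mostly concept 0
    "bob":   [0.1, 0.1, 0.8],   # mostly concept 2
}
issue = [0.6, 0.3, 0.1]         # a new issue dominated by concept 0

scores = alignment(devs, issue)
best = max(scores, key=scores.get)
print(best)  # the developer whose concepts best match the issue -> alice
```

A sketch of this kind only shows the matching step; the paper's actual contribution, the ConceptRealm, covers how the topic vectors are constructed and tracked over time.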
3. Towards effective feature selection in estimating software effort using machine learning.
- Author: Jadhav, Akshay; Kumar Shandilya, Shishir
- Subjects: FEATURE selection; COMPUTER software industry; COMPUTER software development; COMPUTER software; RANDOM forest algorithms; MACHINE learning
- Abstract
Software effort estimation is a vital process in the software industry for successfully administering the 5Ds of the software development life cycle (SDLC). The 5Ds stand for demand, development, direction, deployment, and designated cost of the software. Software development effort estimation (SDEE) is an effort prediction mechanism for calculating the effort needed to develop a software product, in order to minimize challenges in the software field. Academics and practitioners are striving to identify which machine learning estimation technique yields more accurate results based on evaluation metrics, datasets, and other pertinent aspects. Feature selection techniques impact accuracy by selecting the main, relevant features in a dataset and eliminating the redundant, irrelevant ones. To achieve accurate estimations, the paper utilizes feature selection algorithms along with various machine learning techniques to predict the desired effort; model performance is measured in terms of prediction accuracy, R2 value, relative error, and mean absolute error. The China and Maxwell datasets are trained on the relevant features obtained by applying feature selection algorithms, and estimation techniques are applied to predict the effort. The performance is compared with the regression models and feature selection techniques utilized by many authors previously. On both datasets, the proposed methodology performs best when feature selection is combined with the estimation models, outperforming all regression models applied alone. From the results, it is perceptible that random forest performs well with the feature selection techniques, obtaining the highest prediction accuracy of 99.33% on the China dataset and 89.47% on the Maxwell dataset. [ABSTRACT FROM AUTHOR]
- Published: 2024
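The filter-style feature selection step described in the abstract above can be sketched as follows; the toy project data and function names are illustrative assumptions, not the paper's datasets or method (the paper combines several selection algorithms with learners such as random forest):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(rows, effort, k):
    """Rank feature columns by |correlation| with effort; keep the top k indices."""
    n_features = len(rows[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in rows]
        scores.append((abs(pearson(col, effort)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Toy data: three candidate features per project, one measured effort value.
rows = [
    [10, 5, 1],
    [20, 3, 9],
    [30, 8, 2],
    [40, 1, 7],
]
effort = [11, 19, 32, 41]  # tracks feature 0 almost perfectly
print(select_features(rows, effort, 2))  # -> [0, 2]
```

Only the selected columns would then be fed to the downstream estimator, which is what lets the redundant feature (index 1 here) drop out before training.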
4. Grammar engineering for multiple front‐ends for Python.
- Author: Malloy, Brian A.; Power, James F.
- Subjects: PYTHON programming language; PARSING (Computer grammar); PROGRAMMING languages; SOFTWARE engineering; COMPUTER software
- Abstract
Summary: In this paper, we describe our experience in grammar engineering to construct multiple parsers and front ends for the Python language. We present a metrics‐based study of the evolution of the Python grammars through the multiple versions of the language in an effort to distinguish and measure grammar evolution and to provide a basis of comparison with related research in grammar engineering. To conduct this research, we have built a toolkit, pygrat, which builds on tools developed in other research. We use pygrat to build a system that automates much of the process needed to translate the Python grammars from EBNF to a formalism acceptable to the bison parser generator. We exploit the suite of Python test cases, used by the Python developers, to validate our parser generation. Finally, we describe our use of the menhir parser generator to facilitate the parser and front‐end construction, eliminating some of the transformations and providing practical support for grammar modularisation. [ABSTRACT FROM AUTHOR]
- Published: 2019
5. Software component identification and selection: A research review.
- Author: Gholamshahi, Shabnam; Hasheminejad, Seyed Mohammad Hossein
- Subjects: SOFTWARE engineering; COMPUTER software; HUMAN-computer interaction; COMPUTER software development; ARTIFICIAL intelligence; TECHNOLOGICAL innovations
- Abstract
Summary: Nowadays, with the development of software reuse, software developers are paying more attention to component‐related technologies, which have been mostly applied in the development of large‐scale complex applications to enhance the productivity of software development and accelerate time to market. Component‐based software development is well acknowledged as a methodology, which establishes the reusability of software and reduces the development cost effectively. Two crucial problems in component‐based software development are component identification and component selection. The main purpose of this paper is to provide a reference point for future research by categorizing and classifying different component identification and component selection methods and emphasizing their respective strengths and weaknesses. We hope that it can help researchers find the current status of this issue and serve as a basis for future activities. [ABSTRACT FROM AUTHOR]
- Published: 2019
6. A novel multi‐view ordinal classification approach for software bug prediction.
- Subjects: COMPUTER software quality control; CLASSIFICATION algorithms; COMPUTER software; FORECASTING; COMPUTER software testing; PREDICTION models
- Abstract
Software bug prediction aims to enhance software quality and testing efficiency by constructing predictive classification models using code properties, enabling the prompt detection of fault‐prone modules. Several machine learning‐based software bug prediction studies exist in the literature, but they mainly focus on single‐view data and disregard the natural ordering relation among the class labels. These studies therefore lose each view's intrinsic structure and the inherent order of the labels, both of which positively affect prediction performance. To overcome this drawback, this study focuses on integrating ordering information with a multi‐view learning strategy. This paper proposes a novel approach, multi‐view ordinal classification (MVOC), which learns from different views (complexity, coupling, cohesion, inheritance and scale) of the software dataset separately and predicts software bugs taking the inherent order of class labels (non‐buggy, less buggy and more buggy) into consideration. To demonstrate its prediction performance, the MVOC approach was executed on 40 different real‐world software datasets using six different classification algorithms as base learners. In the experiments, the MVOC approach was compared with traditional classifiers and their multi‐view implementations in terms of precision, recall, f‐measure and accuracy rate metrics. The results indicate that the MVOC approach presents better prediction performance on average than the multi‐view‐based and traditional classifiers. It is also observed that the MVOC.RF model achieved the highest classification performance, with an average accuracy rate of 85.65%. [ABSTRACT FROM AUTHOR]
- Published: 2022
7. Five recommendations for software evolvability.
- Author: Rajlich, Václav
- Subjects: COMPUTER software; COMPUTER software developers; SOFTWARE engineering; SOCIAL evolution; TECHNOLOGICAL innovations
- Abstract
Abstract: Evolvability of software lies at the intersection of 3 factors: evolving system properties, human factors present in the developer team, and evolution demands. The paper presents 5 recommendations that enhance software evolvability: defined processes of software change, distinction between the evolving and stabilized parts of the code, analyzable code segments, significant concept encapsulation, and avoidance of wrapping. [ABSTRACT FROM AUTHOR]
- Published: 2018
8. Taxonomy for software teamwork measurement.
- Author: Robillard, Pierre N.; Lavallée, Mathieu; Ton‐That, Yvan; Chiocchio, François
- Subjects: COMPUTER software development; SOFTWARE engineering; INDUSTRIAL psychology; TEAMS in the workplace; COMPUTER software
- Abstract
ABSTRACT Despite the fact that software is mostly a team endeavor, the software engineering (SE) literature has not tapped into organizational psychology's conceptual and empirical writings on teams. This paper presents a model of team dynamics adapted to the specificities of SE project teams. The taxonomy is composed of nine episodes that are likely to be found in any software team process. Each episode is described in terms of the input-process-output cycle and illustrated with examples. The measurability of the episodes is validated on a capstone student project carried out with an industrial partner. The team activities are recorded by each developer, throughout the project's duration, in the form of work tokens. These work tokens are then associated with episodes by two independent coders. The results show that all the episodes of the proposed taxonomy are measurable, and very few (less than 5% in this field study) remain ambiguous. Most of the ambiguities arise from short episodes that alternate during team process activities. This paper's contribution to software team process research is to synthesize the team literature and draw up a theoretically driven taxonomy of team dynamics specific to SE teams and to provide initial evidence of measurability of the taxonomy. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published: 2014
9. International conference on enabling technologies: Infrastructure for collaborative enterprises (WETICE).
- Author: Drira, Khalil; Reddy, Sumitra
- Subjects: CONFERENCES & conventions; ELECTRONIC data processing; SOFTWARE engineering; GLOBALIZATION; COMPUTER software
- Published: 2015
10. Post golden jubilee year of the software journal: New research trends and strengthening advisory editorial team.
- Author: Srirama, Satish Narayana; Buyya, Rajkumar
- Subjects: WIRELESS sensor networks; SOFTWARE engineering; COMPUTER software
- Published: 2022
11. An empirical study of the effect of file editing patterns on software quality.
- Author: Zhang, Feng; Khomh, Foutse; Zou, Ying; Hassan, Ahmed E.
- Subjects: COMPUTER files; COMPUTER software; SOFTWARE engineering; SOURCE code; COMPUTER programming
- Abstract
Developers might follow different file editing patterns when handling change requests. Existing research has warned the community about the potential negative impacts of some file editing patterns on software quality. However, very few studies have provided quantitative evidence to support these claims. In this paper, we propose four metrics to identify four file editing patterns: concurrent editing pattern, parallel editing pattern, extended editing pattern, and interrupted editing pattern. Our empirical study on three open source projects shows that 90% (i.e. 1935 out of 2140) of files exhibit at least one file editing pattern. More specifically, (1) files that are edited concurrently by many developers are 1.8 times more likely to experience future bugs than files that are not concurrently edited; (2) files edited in parallel with too many other files by the same developer are 2.9 times more likely to exhibit future bugs than files individually edited; (3) files edited over an extended period of time are 1.9 times more likely to experience future bugs than other files; and (4) files edited with long interruptions have 2.0 times more future bugs than other files. We also observe that the likelihood of future bugs in files experiencing all four file editing patterns is 3.9 times higher than in files that are never involved in any of the four patterns. We further investigate factors impacting the occurrence of these file editing patterns along three dimensions: the ownership of files, the type of change requests in which the files were involved, and the initial code quality of the files. Results show that a file with a major owner is 0.6 times less likely to exhibit the concurrent editing pattern than files without major owners. Files with bad code quality (e.g. high McCabe's complexity, high coupling between objects, and lack of cohesion) are more likely to experience the four editing patterns. By ensuring clear ownership and improving code quality, the negative impact of the four patterns could be reduced. Overall, our findings could be used by software development teams to warn developers about risky file editing patterns. [ABSTRACT FROM AUTHOR]
- Published: 2014
12. LDMBL: An architecture for reducing code duplication in heavyweight binary instrumentations.
- Author: Momeni, Behnam; Kharrazi, Mehdi
- Subjects: SOFTWARE engineering; COMPUTER software; COMPUTER software security; COMPUTER software development; SOFTWARE architecture
- Abstract
Summary: Emergence of instrumentation frameworks has vastly contributed to the software engineering practices. As the instrumentation use cases become more complex, complexity of instrumenting programs also increases, leading to a higher risk of software defects, increased development time, and decreased maintainability. In security applications such as symbolic execution and taint analysis, which need to instrument a large number of instruction types, this complexity is prominent. This paper presents an architecture based on the Pin binary instrumentation framework to abstract the low‐level OS and hardware‐dependent implementation details, facilitate code reuse in heavyweight instrumentation use cases, and improve instrumenting program development time. Instructions of x86 and x86‐64 hardware architectures are formally categorized using the Z language based on the Pin framework API. This categorization is used to automate the instrumentation phase on the basis of a configuration list. Furthermore, instrumentation context data such as register data are modeled in an object‐oriented scheme. This makes it possible to focus the instrumenting program development time on writing the essential analysis logics while access to low‐level OS and hardware dependencies are streamlined. The proposed architecture is evaluated by instrumenting 135 instruction types in a concrete symbolic execution engine, resulting in a reduction of the instrumenting program size by 59.7%. Furthermore, performance overhead measure against the SPEC CINT2006 programs is limited to 8.7%. [ABSTRACT FROM AUTHOR]
- Published: 2018
13. Improving quality of software product line by analysing inconsistencies in feature models using an ontological rule‐based approach.
- Author: Bhushan, Megha; Goel, Shivani; Kumar, Ajay
- Subjects: SOFTWARE productivity; SOFTWARE engineering; COMPUTER software development; INCONSISTENCY (Logic); COMPUTER software
- Abstract
Abstract: In software product line engineering, feature models (FMs) represent the variability and commonality of a family of software products. The development of FMs may introduce inaccurate feature relationships. These relationships may cause various types of defects, such as inconsistencies, which deteriorate the quality of software products. Several researchers have worked on the identification of defects due to inconsistency in FMs, but only a few of them have explained their causes. In this paper, an FM is transformed into a predicate‐based feature model ontology using Prolog. Further, first‐order logic is employed to define rules that identify defects due to inconsistency, explain their causes, and suggest corrections. The proposed approach is explained using an FM available in the Software Product Line Online Tools repository. It is validated using 26 FMs of discrete sizes up to 5,543 features, generated using the FeatureIDE tool, and real‐world FMs. Results indicate that the proposed methodology is effective, accurate, and scalable, and improves the quality of the software product line. [ABSTRACT FROM AUTHOR]
- Published: 2018
14. Possibility of cost reduction by mutant clustering according to the clustering scope.
- Author: Yu, Misun; Ma, Yu‐Seung
- Subjects: MUTATION testing of computer software; COMPUTER software testing; SOFTWARE engineering; DEBUGGING; COMPUTER software
- Abstract
Summary: Mutation testing offers developers a good way to improve the quality of a test set. However, the high cost of executing a large number of mutants remains an issue. This paper examines the possibility of reducing the cost of statement‐level mutant clustering by comparing the number of mutant executions with those of expression‐level and block‐level mutant clustering. The goal is to investigate to what extent the clustering scope should be extended. The experimental results using nine real‐world programs show that statement‐level clustering can reduce the mutant executions required by expression‐level clustering by 10.51% on average. Block‐level clustering exhibits an unexpected result: the number of mutant executions with block‐level clustering is only 1.06% less than that with statement‐level clustering. That is, statement‐level clustering is more cost‐effective than block‐level clustering when their clustering overheads are considered. A compound expression plays a major role in providing a cost‐reduction effect in statement‐level clustering. With a compound expression, the number of candidate mutants to be clustered in a statement scope increases, and state change can be comprehensively examined, thereby increasing the possibility of cost reduction. Considering the additional state‐saving cost incurred by widening the comparison scope, statement‐level mutant clustering is the most cost‐effective of the three clustering levels. © 2018 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published: 2019
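The clustering-scope idea in the abstract above can be sketched with a toy grouping step; the mutant tuples, scope labels, and counts here are hypothetical illustrations, not the paper's data (the paper additionally executes one representative per cluster and measures overheads):

```python
from collections import defaultdict

# Hypothetical mutants: (mutant_id, statement_id, expression_id).
mutants = [
    (1, "s1", "e1"), (2, "s1", "e1"), (3, "s1", "e2"),
    (4, "s2", "e3"), (5, "s2", "e3"), (6, "s2", "e4"),
]

def cluster(mutants, scope):
    """Group mutants sharing the same scope key; one representative per
    cluster is executed instead of every mutant, reducing test runs."""
    key = {"statement": 1, "expression": 2}[scope]
    groups = defaultdict(list)
    for m in mutants:
        groups[m[key]].append(m[0])
    return dict(groups)

by_expr = cluster(mutants, "expression")  # 4 clusters -> 4 executions
by_stmt = cluster(mutants, "statement")   # 2 clusters -> 2 executions
print(len(by_expr), len(by_stmt))  # -> 4 2
```

Widening the scope from expression to statement merges more mutants per cluster and so cuts executions, which is the trade-off the paper quantifies against clustering overhead.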
15. Probabilistic reasoning in diagnosing causes of program failures.
- Author: Xu, Junjie; Chen, Rong; Du, Zhenjun
- Subjects: FAULT location (Engineering); SOFTWARE failures; SOFTWARE engineering; PROBABILISTIC inference; COMPUTER software
- Abstract
Fault localization is sensitive to program runs, and the pattern of fault propagation and manifestation in real software is extremely complex and uncertain. To accommodate this complexity and uncertainty, this paper presents a novel probabilistic graph model, the probabilistic cause-effect graph (PCEG), which is built upon dynamic dependencies generated by running the faulty program against failed test cases and performs probabilistic inference with coverage information from the whole test suite. PCEG extends the traditional probabilistic graph in both structural and inferential terms and differs from earlier probabilistic approaches to software diagnosis by introducing two forms of evidence (i.e. apparent faults and real faults). The proposed probabilistic reasoning algorithm works on the PCEG converted from a dynamic program dependency graph and diagnoses causes with both top-down and bottom-up inference. The experimental results show improvements in diagnostic effectiveness and accuracy in both single-fault and multiple-fault contexts, even when a program yields similar program runs through loop statements. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published: 2016
16. Co-located and distributed natural-language requirements specification: traditional versus reuse-based techniques.
- Author: Carrillo de Gea, Juan M.; Nicolás, Joaquín; Fernández Alemán, José L.; Toval, Ambrosio; Ouhbi, Sofia; Idri, Ali
- Subjects: REQUIREMENTS engineering; TECHNICAL specifications; SOFTWARE engineering; COMPUTER software development; COMPUTER software
- Abstract
Requirements Engineering (RE) includes processes intended to elicit, analyse, specify and validate systems and software requirements throughout the software life cycle. Mastering the principles of RE is key to achieving the goals of better, cheaper and quicker systems and software development projects. It is also important to be prepared to work with remote teammates, as distributed and global projects are becoming more common. This paper presents an experiment with a total of 31 students from two universities in Spain and Morocco who were assigned to either a co-located or a distributed team. Both traditional and reuse-based requirements specification techniques were applied by the participants to produce requirements documents. Their outcomes were then analysed, and the approaches were compared from the point of view of their effect on a set of performance-based and perception-based variables in co-located and distributed settings. We found significant differences only in productivity (Z = −2.320, p = 0.020) and difficulty (Z = −2.124, p = 0.034) between the scores attained for the non-reuse and reuse conditions, both in the co-located modality. Our findings show that, in general, the participants attained similar results for requirements specification when using the two strategies in both distributed and non-distributed environments. [ABSTRACT FROM AUTHOR]
- Published: 2016
17. Development of reconfigurable distributed embedded systems with a model-driven approach.
- Author: Krichen, Fatma; Hamid, Brahim; Zalila, Bechir; Jmaiel, Mohamed; Coulette, Bernard
- Subjects: ADAPTIVE computing systems; MODEL-driven software architecture; REAL-time computing; COMPUTER software; SOFTWARE engineering; MIDDLEWARE
- Abstract
In this paper, we propose a model-driven approach for building reconfigurable distributed real-time embedded (DRE) systems. The constant growth in the complexity and required autonomy of embedded software systems makes dynamic reconfiguration highly important, and new challenges arise in applying it at both the model level and the runtime support level. Consequently, developing reconfigurable DRE systems according to traditional processes is not feasible; new methods are required to build and supply reconfigurable embedded software architectures. In this context, we propose a model-driven engineering based approach for designing reconfigurable DRE systems with execution framework support. This approach leads the designer to specify the system step by step, refining one model into another until the target model is reached. This target model is tied to a specific platform, enabling the generation of most of the system implementation. We also develop a new middleware that supports reconfigurable DRE systems. Copyright © 2013 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published: 2015
18. Using blog‐like documents to investigate software practice: Benefits, challenges, and research directions.
- Author: Rainer, Austen; Williams, Ashley
- Subjects: SOFTWARE engineering; GREY literature; VALUE engineering; COMPUTER software
- Abstract
Background: An emerging body of research is using grey literature to investigate software practice. One frequently occurring type of grey literature is the blog post. Whilst there are prospective benefits to using grey literature and blog posts to investigate software practice, there are also concerns about the quality of such material. Objectives: To identify and describe the benefits and challenges of using blog‐like content to investigate software practice, and to scope directions for further research. Methods: We conduct a review of previous research, mainly within software engineering, to identify benefits, challenges, and directions, and use that review to complement our experiences of using blog posts in research. Results and Conclusion: We identify and organise the benefits and challenges of using blog‐like documents in software engineering research. We develop a definition of the type of blog‐like document that should be of (more) value to software engineering researchers. We identify and scope several directions in which to progress research into and with blog‐like documents. We discuss similarities and differences in secondary and primary studies that use blog‐like documents, and between the use of blog‐like documents and the use of already established research methods, e.g., interview and survey. [ABSTRACT FROM AUTHOR]
- Published: 2019
19. Editorial: Machine learning, software process, and global software engineering.
- Author: Steinmacher, Igor; Clarke, Paul; Tuzun, Eray; Britto, Ricardo
- Subjects: SOFTWARE engineers; MACHINE learning; COVID-19 pandemic; SOFTWARE engineering; COMPUTER software; SYSTEMS software
- Abstract
On June 26–28, 2020, the International Conference on Software and Systems Processes (ICSSP 2020) and the International Conference on Global Software Engineering (ICGSE 2020) were held in virtual settings during the first year of the COVID pandemic. Several submissions to the joint event have been selected for inclusion in this special issue, focusing on impactful and timely contributions to machine learning (ML). At present, many in our field are enthusiastic about the potential of ML, yet some risks should not be casually overlooked or summarily dismissed. Each ML implementation is subtly different from any other implementation, and the risk profile varies greatly based on the approach adopted and the implementation context. The ICSSP/ICGSE 2020 Program Committees have encouraged submissions that explore the risks and benefits associated with ML so that the important discussion regarding ML efficacy and advocacy can be further elaborated. Four contributions have been included in this special issue. [ABSTRACT FROM AUTHOR]
- Published: 2023
20. A generalized duration forecasting model of test-and-fix cycles.
- Author: Houston, Dan
- Subjects: COMPUTER software development; COMPUTER software; SOFTWARE engineering; DYNAMIC models; MATHEMATICAL models of time-varying systems
- Abstract
ABSTRACT Rework is inherent in product development, but estimation of the duration and required resources for rework cycles is inhibited by task size variation, process capacity variation, and the process characteristics of dynamism and concurrency. This difficulty is especially evident in testing of software-intensive systems. Dynamic modeling of test-and-fix cycles has addressed this problem by providing good forecasting of test phase durations and by providing good results in supporting resource allocation and process improvement decisions. A generalized model of test-and-fix phases has been produced from an analysis of commonality and variation in six project models. The generalized model can be configured to support test phase planning of duration and resources. This paper extends the work reported in ICSSP 2012 by reporting results of experiments on managing test-and-fix projects through work-in-process limits and through test case prioritization. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published: 2014
21. Effort estimation model for software development projects based on use case reuse.
- Author: Rak, Katija; Car, Željka; Lovrek, Ignac
- Subjects: COMPUTER software development; ESTIMATION theory; CONTROL theory (Engineering); COMPUTER software; SOFTWARE engineering
- Abstract
This paper describes a new effort estimation model based on use case reuse, called the use case reusability (UCR) model, intended for projects that reuse artifacts previously developed in past projects with a similar scope. Analysis of the widely used effort estimation techniques for software development projects shows that these techniques were primarily intended for the development of new software solutions. The baseline for the new effort estimation model is the use case points model. The UCR model introduces a new classification of use cases based on their reusability, and it includes only those technical and environmental factors that, according to effort estimation experts, have a significant impact on effort for the target projects. This paper also presents a study which validates the usage of the UCR model. The study is conducted within industry and academic environments using industry project teams and postgraduate students as subjects. The analysis results show that the UCR model can be applied in different project environments and that, according to the observed mean magnitude of relative error, it produced very promising effort estimates. [ABSTRACT FROM AUTHOR]
- Published: 2019
22. A model‐based solution for process modeling in practice environments: PLM4BS.
- Author: García‐García, Julián Alberto; García‐Borgoñón, Laura; Escalona, María José; Mejías, Manuel
- Subjects: INTERNATIONAL economic relations; COST; QUALITY; CORPORATE profits; COMPUTER software
- Abstract
Today's world economic situation is ruled by issues such as reducing cost, improving quality, maximizing profit, and improving and optimizing processes at organizations. In this context, business process management (BPM) can be an essential strategy, but it is not usually consolidated at software organizations because the properties of software processes make applying BPM to the software lifecycle complex. Consequently, software organizations often focus on Software Process Modeling (SPM), and each involved role performs process execution and orchestration independently and manually. This makes software process maintenance, monitoring, and measurement difficult tasks. This paper proposes a model‐based approach for SPM that takes into account concepts related to process execution, orchestration, and monitoring. It is framed into a model‐driven engineering‐based and tool‐based framework: Process Lifecycle Management for Business Software (PLM4BS). We present an SPM metamodel and its concrete syntax (through Unified Modeling Language profiles) that lays the foundation for extending PLM4BS. Its underlying metamodel allows managing processes automatically. Furthermore, PLM4BS improves current state‐of‐the‐art proposals in 6 dimensions: expressiveness, understandability, granularity, measurability, orchestrability, and business variables and rules. PLM4BS has also been evaluated in a multiple‐case study, in which the 6 mentioned dimensions were validated. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Suitability analysis of software reliability models for its applicability on NPP systems.
- Author
-
Kumar, Pramod, Singh, Lalit Kumar, and Kumar, Chiranjeev
- Subjects
RELIABILITY in engineering ,NUCLEAR power plants ,COMPUTER software ,SOFTWARE engineering ,SOFTWARE reliability - Abstract
The advent of software components in the safety‐critical systems of nuclear power plants has introduced new challenges for software professionals to provide increased software reliability. Several approaches have been proposed for ensuring software reliability in different phases of the software development life cycle. The present article is a novel attempt at providing an exhaustive survey of software reliability models and their applicability to safety‐critical systems of nuclear power plants. Our systematic review shows that none of the existing proposals for ensuring software reliability are applicable to such systems. The issues and challenges are discussed, and a software reliability model for safety‐critical systems of nuclear power plants is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. Special issue on software engineering for Connected Health: Challenges and research roadmap.
- Author
-
Carroll, Noel, Kuziemsky, Craig, and Richardson, Ita
- Subjects
HEALTH information technology ,MEDICAL innovations ,SOFTWARE engineering ,MEDICAL care ,COMPUTER software - Abstract
Abstract: Over the past decade, increasing expectations and pressures have been placed on health care providers to deliver more efficient, quality, and safe health care services. This shifts the balance between supply and demand of health care services and brings new challenges for health care professionals' ability to deliver safe, quality care in a timely manner. There are significant opportunities to exploit information and communications technology to transform how health care services are provided. Connected Health is one such transformation of health care management and health care practice. However, the field of Connected Health is still in its infancy. This Special Issue on Software Engineering for Connected Health begins to address this and presents 5 quality contributions that demonstrate how software engineering research plays an important role in Connected Health research. These contributions also identify the limitations of existing theories and develop new or revised theories of Software Engineering for Connected Health. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. USQA: A User Story Quality Analyzer prototype for supporting software engineering students.
- Author
-
Jiménez, Samantha, Alanis, Arnulfo, Beltrán, Claudio, Juárez‐Ramírez, Reyes, Ramírez‐Noriega, Alan, and Tona, Claudia
- Subjects
ENGINEERING students ,SOFTWARE engineers ,SOFTWARE engineering ,NATURAL language processing ,COMPUTER science students ,REQUIREMENTS engineering ,COMPUTER software ,CONSTRUCTION project management - Abstract
The Standish Group reports that 83.9% of IT projects fail, and one of the top factors in failed projects is incomplete requirements or user stories. Therefore, it is essential to teach undergraduate students in computer science degree programs how to create complete user stories. Computer science programs include subjects or topics involving the collection and writing of requirements or user stories, such as Requirements Engineering, Software Engineering, Project Management, or Quality Software Assurance. For that reason, we designed a web application called User Story Quality Analyzer (USQA) that uses Natural Language Processing modules to detect errors regarding usefulness, completeness, and polysemes in user story creation. The tool was evaluated from three perspectives: (1) a reliability test, in which 35 user stories developed by experts were run through the app to prove the prototype's reliability; (2) a usability and utility analysis, in which 48 students interacted with the tool and responded to a Satisfaction Usability Scale and an open‐ended question; the students reported a high usability score; (3) an error classification, in which we gathered 159 user stories processed by the system and classified the students' common errors considering incompleteness and polysemes. After the evaluations, we concluded that USQA can evaluate user stories like an expert, which could help professors, teachers, and instructors in their courses by providing feedback to students as they write user stories. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Analysis of cluster center initialization of 2FA‐kprototypes analogy‐based software effort estimation.
- Author
-
Amazal, Fatima Azzahra, Idri, Ali, and Abran, Alain
- Subjects
FUZZY clustering technique ,CATEGORIES (Mathematics) ,SOFTWARE engineering ,COMPUTER software ,FUZZY algorithms - Abstract
Analogy‐based estimation is one of the most widely used techniques for effort prediction in software engineering. However, existing analogy‐based techniques suffer from an inability to correctly handle nonquantitative data. To deal with this limitation, a new technique called 2FA‐kprototypes was proposed and evaluated. 2FA‐kprototypes is based on the use of the fuzzy k‐prototypes clustering technique. Although fuzzy k‐prototypes algorithms are well known for their efficiency in clustering numerical and categorical data, they are sensitive to the selection of initial cluster centers. In this paper, the impact of cluster center initialization on improving the prediction accuracy of 2FA‐kprototypes was analyzed and discussed using two cluster initialization techniques: centrality‐based initialization and density‐based initialization. The performance of 2FA‐kprototypes using these two initialization techniques was evaluated and compared with that of 2FA‐kprototypes using random initialization over four datasets: ISBSG, COCOMO81, USP05‐FT, and USP05‐RQ. The results showed an improvement in the performance of 2FA‐kprototypes in terms of estimation accuracy when the all‐in method is used. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
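The fuzzy k-prototypes technique the abstract above builds on combines a numeric and a categorical dissimilarity. A minimal (crisp, non-fuzzy) sketch of that mixed distance and the nearest-prototype assignment follows; the function names and the `gamma` default are illustrative assumptions, not the paper's notation.

```python
# Sketch of the mixed dissimilarity underlying k-prototypes clustering:
# squared Euclidean distance on numeric attributes plus a simple
# mismatch count on categorical ones, weighted by gamma.
def kprototypes_distance(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, proto_num))
    categorical = sum(1 for a, b in zip(x_cat, proto_cat) if a != b)
    return numeric + gamma * categorical

def assign(point_num, point_cat, prototypes, gamma=1.0):
    """Return the index of the nearest prototype;
    prototypes = [(numeric_tuple, categorical_tuple), ...]."""
    return min(range(len(prototypes)),
               key=lambda i: kprototypes_distance(point_num, point_cat,
                                                  *prototypes[i], gamma=gamma))
```

The initialization question studied in the paper amounts to choosing the starting `prototypes` list (randomly, by centrality, or by density) before iterating assignment and prototype updates.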
27. Release conventions of open‐source software: An exploratory study.
- Author
-
Chakroborti, Debasish, Nath, Sristy Sumana, Schneider, Kevin A., and Roy, Chanchal K.
- Subjects
SOFTWARE engineering ,COMPUTER software development ,COMPUTER software ,SOFTWARE engineers ,COMPUTER software industry ,PROJECT management software - Abstract
Software engineering (SE) methodologies are widely used in both academia and industry to manage the software development life cycle. A number of studies of SE methodologies involve interviewing stakeholders to explore real‐world practice. Although these interview‐based studies provide a user's perspective on an organization's practice, they do not describe a concrete summary of releases on open‐source social coding platforms. In particular, no existing studies have investigated how releases evolve on open‐source coding platforms, which would assist release planners to a large extent. This study explores software development patterns followed in open‐source projects to see overall management's reflection on software release decisions, rather than concentrating on a particular methodology. Our experiments on 51 software origins (with 1777k revisions and 12k releases) from the Software Heritage Graph Dataset (SWHGD) and their GitHub project boards (with 23k cards) reveal that reasonably active project management with phase simplicity can release software versions more frequently and can follow the small‐release conventions of Extreme Programming. Additionally, the study reveals that a combination of development and management activities can be applied to predict the possible number of software releases in a month (p < 0.05). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. TuneR: a framework for tuning software engineering tools with hands-on instructions in R.
- Author
-
Borg, Markus
- Subjects
SOFTWARE engineering ,ENGINEERING students ,ENGINEERING services ,COMPUTER software ,EMPIRICAL research - Abstract
Numerous tools automating various aspects of software engineering have been developed, and many of these tools are highly configurable through parameters. Understanding the parameters of advanced tools often requires deep understanding of complex algorithms. Unfortunately, suboptimal parameter settings limit the performance of tools and hinder industrial adaptation, yet few studies address the challenge of tuning software engineering tools. We present TuneR, an experiment framework that supports finding feasible parameter settings using empirical methods. The framework is accompanied by practical guidelines on how to use R to analyze the experimental outcome. As a proof of concept, we apply TuneR to tune ImpRec, a recommendation system for change impact analysis in a software system that has evolved for more than two decades. Compared with the output from the default setting, we report a 20.9% improvement in the response variable reflecting recommendation accuracy. Moreover, TuneR reveals insights into the interaction among parameters, as well as nonlinear effects. TuneR is easy to use; thus, the framework has the potential to support tuning of software engineering tools in both academia and industry. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. MuMonDE: A framework for evaluating model clone detectors using model mutation analysis.
- Author
-
Stephan, Matthew and Cordy, James R.
- Subjects
DETECTORS ,MUTATION testing of computer software ,COMPUTER software testing ,SOFTWARE engineering ,COMPUTER software - Abstract
Summary: Model‐driven engineering is an increasingly prevalent approach in software engineering where models are the primary artifacts throughout a project's life cycle. A growing form of analysis and quality assurance in these projects is model clone detection, which identifies similar model elements. As model clone detection research and tools emerge, methods must be established to assess model clone detectors and techniques. In this paper, we describe the MuMonDE framework, which researchers and practitioners can use to evaluate model clone detectors using mutation analysis on the models each detector is geared towards. MuMonDE applies mutation testing in a novel way by randomly mutating model elements within existing projects to emulate various types of clones that can exist within that domain. It consists of 2 main phases. The mutation phase involves determining the mutation targets, selecting the appropriate mutation operations, and injecting mutants. The second phase, evaluation, involves detecting model clones, preprocessing clone reports, analyzing those reports to calculate recall and precision, and visualizing the data. We introduce MuMonDE by describing each phase in detail. We present our experiences and examples in successfully developing a MuMonDE implementation capable of evaluating Simulink model clone detectors. We validate MuMonDE by demonstrating its ability to answer evaluation questions and provide insights based on the data it generates. With this research using mutation analysis, our goal is to improve model clone detection and its analytical capabilities, thus improving model‐driven engineering as a whole. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Framework for empirical examination and modeling structural dependencies among inhibitors that impact SPI implementation initiatives in software SMEs.
- Author
-
Sharma, Pooja and Sangal, Amrit Lal
- Subjects
COMPUTER software ,SOFTWARE engineering ,STATISTICAL correlation ,ANTINEOPLASTIC agents - Abstract
Context: For more than a decade, software process improvement (SPI) has received the attention of the software engineering community. Objective: The objective of the paper is to empirically examine and develop a framework to model structural dependencies among inhibitors that impact SPI initiatives in software SMEs. Methods: The authors used a mixed‐method approach (empirical analysis and interpretive structural modeling, ISM) to evaluate, model, and analyze the association between SPI inhibitors in software SMEs. Results: The results from the empirical analysis and ISM show that lack of management commitment, lack of resources, and lack of communication and information sharing are the key SPI inhibitors. The association of various inhibitors with SPI implementation is found to be statistically significant, with effect size 0.42 < φ < 0.75 (P < 0.05). Also, Spearman's coefficient of correlation for rankings of SPI inhibitors is found to be moderate to high, ie, 0.601 (P = 0.01 < 0.05) for SLR and interviews; 0.794 (P = 0.00) for SLR and empirical analysis; and 0.711 (P = 0.002 < 0.01) for interviews and empirical analysis. Conclusions: Analysis of SPI inhibitors through the proposed triangulation approach and the development of a software process improvement inhibitors model highlight the importance of structural dependencies among the inhibitors. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
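The interpretive structural modeling (ISM) step the abstract above relies on starts from a binary "inhibitor i influences inhibitor j" matrix, computes its transitive closure (the reachability matrix), and partitions inhibitors into levels. A minimal sketch of those two steps, under the standard ISM formulation (the function names are illustrative, not the paper's):

```python
def reachability(adj):
    """Transitive closure (Warshall's algorithm) of a 0/1 adjacency
    matrix, including self-reachability, as used to build the ISM
    reachability matrix."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return [[int(v) for v in row] for row in r]

def top_level(r):
    """Elements whose reachability set equals its intersection with
    their antecedent set sit at the top ISM level (they influence
    nothing outside what also influences them)."""
    n = len(r)
    result = []
    for i in range(n):
        reach = {j for j in range(n) if r[i][j]}
        ante = {j for j in range(n) if r[j][i]}
        if reach == reach & ante:
            result.append(i)
    return result
```

Iteratively removing each identified level yields the layered inhibitor hierarchy that ISM studies report.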
31. MLPNN‐RF: Software fault prediction based on robust weight based optimization and Jacobian adaptive neural network.
- Author
-
Thirukonda Krishnamoorthy Sivakumar Babu, Rathish Babu, Sivasubramanian, Suresh, and Natarajan, Sankarram
- Subjects
SOFTWARE engineering ,ERROR rates ,JACOBIAN matrices ,COMPUTER software ,SOFTWARE engineers - Abstract
Summary: Software fault prediction (SFP) is a vital objective in software engineering. It might permit effective resource allocation and enhance informed decisions about release quality, making it a critical concern for software professionals and the tech industry alike. The study intends to perform efficient error rate estimation using the proposed hybrid robust weight based optimization and Jacobian adaptive neural network (RWO‐JANN). It also aims to classify software faults efficiently using a multi‐layer perceptron neural network‐random forest (MLPNN‐RF). Several processes are involved in accomplishing SFP. First, the dataset is taken as input, and data preprocessing is performed. Subsequently, weights are initialized using the proposed RWO‐JANN through a series of steps in which the position and weight parameters are updated. The error rate is then estimated, and the weights are updated on the basis of the learning rate and a Jacobian matrix calculation. Lastly, the decay rate is analyzed. If the error rate exceeds the threshold value, the process repeats from weight initialization; otherwise, the testing process is performed, and the classified output for SFP is obtained by the proposed MLPNN‐RF. The proposed system is comparatively analyzed against existing methods in terms of accuracy, precision, recall, F1 score, sensitivity, specificity, and error rate. The analytical results show that the proposed system outperforms existing techniques, achieving an accuracy of 99.01%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. inDev: A software to generate an MVC architecture based on the ER model.
- Author
-
Ramírez‐Noriega, Alan, Martínez‐Ramírez, Yobani, Jiménez, Samantha, Soto‐Vega, Jesús, and Figueroa‐Pérez, J. Francisco
- Subjects
COMPUTER software development ,SOFTWARE engineering ,SOFTWARE development tools ,EXPERIMENTAL groups ,SOFTWARE engineers ,COMPUTER software ,SOFTWARE architecture - Abstract
The model‐view‐controller (MVC) design pattern is employed as a software architecture. This pattern aims to separate code into three elements, maintaining layers with defined functions. The MVC pattern is used to structure and organize code in software development; therefore, it is an important topic in the teaching of software engineering. However, understanding and implementing this design pattern is not easy for students. This investigation therefore proposes a Computer Aided Software Engineering tool called inDev, which is capable of generating an application based on an Entity Relationship (ER) diagram, producing the model, the controller, and the view. The student can interact with the system by visualizing how changes to the ER diagram input are reflected in the generated MVC architecture. To test the scope of the project as a teaching strategy, an experiment was designed with a control group and an experimental group. The experimental group, which used inDev, showed better learning results than the control group, which did not. The inDev tool proved to be a useful educational tool for teaching a topic like the MVC design pattern. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Evolving software forges: An experience report from Apache Allura.
- Author
-
Tamburri, Damian A. and Palomba, Fabio
- Subjects
SOFTWARE engineering ,SOCIAL learning ,SOFTWARE engineers ,INDUSTRIAL surveys ,SOFTWARE refactoring ,COMPUTER software - Abstract
The open‐source phenomenon has reached unimaginable proportions, to the point that it is virtually impossible to find large applications that do not rely on open source as well. However, such proportions may turn into a risk if the organizational and socio‐technical aspects (e.g., the contribution and release schemes) behind open‐source communities are not explicitly supported by open‐source forges by design. In an effort to make such aspects explicit and supported by design in open‐source forges, we conducted empirical software engineering as follows: (a) through online industrial surveying, we elicited organizational and social aspects relevant in open‐source communities; (b) through action research, we extended Allura, a widely known open‐source support system and top‐level Apache project; (c) through ethnography, we studied the Allura community and, learning from its social and organizational structure, (d) elicited a metrics framework that supports more explicit organizational and socio‐technical design principles for open‐source communities. This article is an experience report on these results and the lessons we learned in obtaining them. We found that the extensions provided to Apache Allura formed the basis for community awareness by design, providing valuable and usable community characteristics. Ultimately, however, the extensions we provided to Apache Allura were deactivated by its core developers because of performance overheads. Our results and lessons learned allow us to provide recommendations for designing forges like GitHub. Architecting a forge is a participatory process that requires active engagement, underscoring the need for mechanisms enabling it. At the same time, we conclude that more active support for governance is required to avoid the failure of the forge. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Readiness model for DevOps implementation in software organizations.
- Author
-
Rafi, Saima, Yu, Wu, Akbar, Muhammad Azeem, Mahmood, Sajjad, Alsanad, Ahmed, and Gumaei, Abdu
- Subjects
COMPUTER software quality control ,SOFTWARE engineering ,PREPAREDNESS ,COMPUTER software ,SOFTWARE engineers ,TIME management - Abstract
DevOps is a new software engineering paradigm adopted by various software organizations to develop quality software on time and within budget. The implementation of DevOps practices is critical, and there are no guidelines to assess and improve DevOps activities in software organizations. Hence, there is a need for a readiness model for DevOps (RMDevOps) to assist practitioners in implementing DevOps practices in software firms. To achieve the study objective, we conducted a systematic literature review (SLR) to identify the critical challenges and associated best practices of DevOps. A total of 18 challenges and 73 best practices were identified from 69 primary studies. The identified challenges and best practices were further evaluated through a survey of industry practitioners. RMDevOps was developed on the basis of other well‐established models in the software engineering domain, for example, the software process improvement readiness model (SPIRM) and the software outsourcing vendor readiness model (SOVRM). Finally, case studies were conducted with three different organizations to validate the developed model. The results show that RMDevOps is effective in assessing and improving DevOps practices in software organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. A comprehensive model for code readability.
- Author
-
Scalabrino, Simone, Linares‐Vásquez, Mario, Oliveto, Rocco, and Poshyvanyk, Denys
- Subjects
- *
READABILITY formulas , *CONTENT analysis , *COMPUTER software , *SOFTWARE engineering , *COMPUTER software development - Abstract
Abstract: Unreadable code can compromise program comprehension and cause the introduction of bugs. Code consists mostly of natural language text, both in identifiers and comments, and it is a particular form of text. Nevertheless, the models proposed to estimate code readability take into account only structural aspects and visual nuances of source code, such as line length and alignment of characters. In this paper, we extend our previous work, in which we use textual features to improve code readability models. We introduce 2 new textual features, and we reassess the readability prediction power of readability models on more than 600 code snippets manually evaluated, in terms of readability, by 5K+ people. We also replicate a study by Buse and Weimer on the correlation between readability and FindBugs warnings, evaluating different models on 20 software systems, for a total of 3M lines of code. The results demonstrate that (1) textual features complement other features and (2) a model containing all the features achieves significantly higher accuracy than all the other state‐of‐the‐art models. Also, the readability estimation resulting from the more accurate combined model is able to predict FindBugs warnings more accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. Reusing process patterns in software process models modification.
- Author
-
Hachemi, Asma and Ahmed‐Nacer, Mohamed
- Subjects
COMPUTER software ,SOFTWARE engineering ,PRODUCTION (Economic theory) ,PRODUCT quality ,ALGORITHMS - Abstract
Abstract: Process patterns offer proven solutions for reuse in process modeling. This reuse can take many forms; we are particularly interested in the form where patterns are merged with existing process models to enrich or correct them, to satisfy certain constraints, or to increase their efficiency. Through the present work, we aim to propose an approach that performs the merging automatically. The difficulty is that conflicts may arise between the pattern being reused and the model being modified, resulting in an incoherent process model. We start by studying the possible conflicts (especially those of first order) that can be encountered when merging, and we then propose an algorithm ensuring automatic reuse and managing these conflicts. We highlight the benefits of our proposal through a comparison. The difficulty of reusing patterns to modify existing models has given rise to very few works in the literature. Our approach aims to offer a solution that ensures conflict management. Automating this form of reuse will enhance pattern exploitation within the software community and open many perspectives based on process model merging. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. Model refactoring by example: A multi‐objective search based software engineering approach.
- Author
-
Ghannem, Adnane, Kessentini, Marouane, Hamdi, Mohammad Salah, and El Boussaidi, Ghizlane
- Subjects
COMPUTER systems ,COMPUTER software ,SOFTWARE engineering ,COMPUTER programming ,SOFTWARE architecture - Abstract
Abstract: Declarative rules are frequently used in model refactoring to detect refactoring opportunities and apply the appropriate ones. However, a large number of rules is required to obtain a complete specification of refactoring opportunities. Companies usually have accumulated examples of refactorings from past maintenance experience. Based on these observations, we treat model refactoring as a multi‐objective problem, suggesting refactoring sequences that aim to maximize both structural and textual similarity between a given model (the model to be refactored) and a set of poorly designed models in the base of examples (models that have undergone some refactorings) and to minimize the structural similarity between the given model and a set of well‐designed models in the base of examples (models that do not need any refactoring). To this end, we use the Non‐dominated Sorting Genetic Algorithm (NSGA‐II) to find a set of representative Pareto‐optimal solutions that present the best trade‐off between structural and textual similarity of models. The validation results, based on 8 real‐world models taken from open‐source projects, confirm the effectiveness of our approach, yielding refactoring recommendations with an average correctness of over 80%. In addition, our approach outperforms 5 state‐of‐the‐art refactoring approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Issue Information.
- Subjects
SOFTWARE engineering ,COMPUTER software - Abstract
No abstract is available for this article. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. In two minds: how reflections influence software design thinking.
- Author
-
Razavian, Maryam, Tang, Antony, Capilla, Rafael, and Lago, Patricia
- Subjects
COMPUTER software ,PROBLEM solving ,SOFTWARE engineering ,ENGINEERING ,INDUSTRIAL design - Abstract
We theorize a two-mind model of design thinking. Mind 1 is about logical design reasoning, and Mind 2 is about reflection on our reasoning and judgments. The problem-solving ability of Mind 1 has often been emphasized in software engineering. The reflective Mind 2, however, has not received much attention. In this study, we want to find out whether Mind 2, or reflection, can improve design discourse, a prerequisite of design quality. We conducted multiple case studies with 12 student groups, divided into test groups and control groups. We provided external reflections to the test groups; no reflections were given to the control groups. We analyzed the quality of the design discourse in both groups. We found that reflection (Mind 2) improves the quality of design discourse (Mind 1) under certain preconditions. The results highlight the significance of reflection as a means to improve the quality of design discourse. We conclude that software designers need both Mind 1 and Mind 2 to obtain a higher-quality design discourse, as a foundation for a good design. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
40. Model-based guidelines for user-centric satellite control software development.
- Author
-
Dori, Dov and Thipphayathetthana, Somwang
- Subjects
COMPUTER software development ,COMPUTER software ,ARTIFICIAL satellites ,CONCEPTUAL models ,SOFTWARE engineering - Abstract
Three persistent common problems in satellite ground control software are obsolescence, lack of desired features and flexibility, and endless software bug fixing. The obsolescence problem occurs when computer and ground equipment hardware become obsolete, usually only one-third of the way into the satellite mission lifetime. The software needs to be updated to accommodate changes on the hardware side, requiring significant work from satellite operators to test, verify, and validate these software updates. To help solve these problems, we have proposed an object-process methodology model and guidelines for developing satellite ground control software. The system makes use of a database-driven application and concepts of object-process orientation and modularity. In the proposed framework, instead of coding each software function separately, common base functions are coded, and combining them in various ways provides the different required functions. The formation and combination of these base functions are governed by the main code, definitions, and database parameters. These design principles ensure that the new software framework provides satellite operators with the flexibility to create new features and enables software developers to find bugs more quickly and fix them more effectively. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
41. On the effectiveness of weighted moving windows: Experiment on linear regression based software effort estimation.
- Author
-
Amasaki, S. and Lokan, C.
- Subjects
COMPUTER software ,ESTIMATION theory ,SOFTWARE engineering ,REGRESSION analysis ,MULTIVARIATE analysis - Abstract
In the construction of an effort estimation model, it seems effective to use a window of training data so that the model is trained with only recent projects. Considering the chronological order of projects within the window, and weighting projects according to their order within the window, may also affect estimation accuracy. In this study, we examined the effects of weighted moving windows on effort estimation accuracy. We compared weighted and non-weighted moving windows under the same experimental settings. We confirmed that weighting methods significantly improved estimation accuracy in larger windows, although the methods also significantly worsened accuracy in smaller windows. This result contributes to understanding the properties of moving windows. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
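The abstract above does not specify the weighting functions studied, but the core idea (fit a regression on a window of the most recent projects, weighted by chronological order) can be sketched as follows. Linearly increasing weights are a hypothetical choice here, not necessarily the paper's:

```python
def weighted_linear_fit(xs, ys, ws):
    """Closed-form weighted least squares for y = a + b*x."""
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys)) / \
        sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return ybar - b * xbar, b

def estimate_effort(history, new_size, window=5):
    """history: list of (finish_date, size, effort) tuples, oldest first.
    Only the most recent `window` projects are used, and more recent
    projects within the window receive larger weights."""
    recent = sorted(history)[-window:]
    ws = [i + 1 for i in range(len(recent))]   # weight grows with recency
    xs = [size for _, size, _ in recent]
    ys = [effort for _, _, effort in recent]
    a, b = weighted_linear_fit(xs, ys, ws)
    return a + b * new_size
```

A non-weighted moving window is the special case where all weights are 1, which is how the two conditions of the experiment can be compared under the same settings.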
42. An integrated implementation framework for compile-time metaprogramming.
- Author
-
Lilis, Yannis and Savidis, Anthony
- Subjects
COMPUTER programming ,PROGRAMMING languages ,SOURCE code ,SOFTWARE engineering ,COMPUTER software - Abstract
Compile-time metaprograms are programs executed during the compilation of a source file, usually aimed at updating its source code. Even though metaprograms are essentially programs, they are typically treated as exceptional cases, without sharing common practices and development tools. To this end, we identify a set of primary requirements related to language implementation, metaprogramming features, software engineering support, and programming environments, and we elaborate on addressing these requirements in the implementation of a metaprogramming language. In particular, we introduce the notion of integrated compile-time metaprograms as coherent programs assembled from specific metacode fragments present in the source code. We show the expressiveness of this programming model and illustrate its advantages through various metaprogram scenarios. Additionally, we present an integrated tool chain supporting full-scale build features and compile-time metaprogram debugging. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
43. Directed test suite augmentation: an empirical investigation.
- Author
-
Xu, Zhihong, Kim, Yunho, Kim, Moonzoo, Cohen, Myra B., and Rothermel, Gregg
- Subjects
REGRESSION testing (Computer science) ,GENETIC algorithms ,COMPUTER software ,DYNAMIC testing ,SOFTWARE engineering - Abstract
Test suite augmentation techniques are used in regression testing to identify code elements in a modified program that are not adequately tested and to generate test cases to cover those elements. A defining feature of test suite augmentation techniques is the potential for reusing existing regression test suites. Our preliminary work suggests that several factors influence the efficiency and effectiveness of augmentation techniques that perform such reuse. These include the order in which target code elements are considered while generating test cases, the manner in which existing regression test cases and newly generated test cases are used, and the algorithm used to generate test cases. In this work, we present the results of two empirical studies examining these factors, considering two test case generation algorithms (concolic and genetic). The results of our studies show that the primary factor affecting augmentation using these approaches is the test case generation algorithm utilized; this affects both cost and effectiveness. The manner in which existing and newly generated test cases are utilized also has a substantial effect on efficiency and in some cases a substantial effect on effectiveness. The order in which target code elements are considered turns out to have relatively few effects when using concolic test case generation but in some cases influences the efficiency of genetic test case generation. The results of our first study, on four relatively small programs using a large number of test suites, are supported by our second study of a much larger program available in multiple versions. Together, the studies reveal a potential opportunity for creating a more cost-effective hybrid augmentation approach leveraging both concolic and genetic test case generation techniques, while appropriately utilizing our understanding of the factors that affect them. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
44. Reducing execution profiles: techniques and benefits.
- Author
-
Farjo, Joan, Assi, Rawad Abou, and Masri, Wes
- Subjects
COMPUTER software execution ,DATA mining ,GENETIC algorithms ,COMPUTER software ,SOFTWARE engineering - Abstract
The interest in leveraging data mining and statistical techniques to enable dynamic program analysis has increased tremendously in recent years. Researchers have presented numerous techniques that mine and analyze execution profiles to assist software testing and other reliability-enhancing approaches. Previous empirical studies have shown that the effectiveness of such techniques is likely to be impacted by the type of profiled program elements. This work further studies the impact of the characteristics of execution profiles by focusing on their size, noting that a typical profile comprises a large number of program elements, on the order of thousands or more. Specifically, the authors devised six reduction techniques and comparatively evaluated them by measuring the following: (1) reduction rate; (2) information loss; and (3) impact on two applications of dynamic program analysis, namely, cluster-based test suite minimization (App-I) and profile-based online failure and intrusion detection (App-II). The results were promising: (a) the average reduction rate ranged from 92% to 98%; (b) three techniques were lossless and three were slightly lossy; (c) reducing execution profiles had a major positive impact on the effectiveness and efficiency of App-I; and (d) reduction had a positive impact on the efficiency of App-II, but a minor negative impact on its effectiveness. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
45. Automated metamorphic testing of variability analysis tools.
- Author
-
Segura, Sergio, Durán, Amador, Sánchez, Ana B., Berre, Daniel Le, Lonca, Emmanuel, and Ruiz‐Cortés, Antonio
- Subjects
COMPUTER software ,RELIABILITY in engineering ,TEST reliability ,MULTIVARIABLE testing ,SOFTWARE engineering - Abstract
Variability determines the capability of software applications to be configured and customized. A common need during the development of variability-intensive systems is the automated analysis of their underlying variability models, for example, detecting contradictory configuration options. The analysis operations performed on variability models are often very complex, which hinders the testing of the corresponding analysis tools and makes it difficult, often infeasible, to determine the correctness of their outputs; this is the well-known oracle problem in software testing. In this article, we present a generic approach for the automated detection of faults in variability analysis tools that overcomes the oracle problem. Our work enables the generation of random variability models together with the exact set of valid configurations represented by these models. These test data are generated from scratch using stepwise transformations, ensuring that certain constraints (a.k.a. metamorphic relations) hold at each step. To show the feasibility and generalizability of our approach, it has been used to automatically test several analysis tools in three variability domains: feature models, common upgradeability description format documents, and Boolean formulas. Among other results, we detected 19 real bugs in 7 out of the 15 tools under test. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
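The key idea in the abstract above is to build a test model step by step while tracking its exact set of valid configurations, so no external oracle is needed. A minimal sketch of this for the Boolean-formula domain (the transformations and clause encoding below are simplified assumptions, not the paper's actual generator) might be:

```python
from itertools import combinations

def brute_force_models(variables, cnf):
    """Stand-in 'tool under test': enumerate satisfying assignments of a CNF.
    A literal is ('x', True) for x, or ('x', False) for NOT x; a model is the
    frozenset of variables assigned True."""
    models = set()
    for r in range(len(variables) + 1):
        for true_vars in combinations(variables, r):
            assign = set(true_vars)
            if all(any((v in assign) == pos for v, pos in clause) for clause in cnf):
                models.add(frozenset(assign))
    return models

class ModelTracker:
    """Build a CNF by stepwise transformations while tracking its exact model
    set, so the expected output is known by construction (no oracle needed)."""
    def __init__(self, first_var):
        self.variables = [first_var]
        self.cnf = [[(first_var, True)]]
        self.models = {frozenset({first_var})}

    def add_forced_var(self, v):
        # Conjoin a unit clause (v): every existing model is extended with v=True.
        self.variables.append(v)
        self.cnf.append([(v, True)])
        self.models = {m | {v} for m in self.models}

    def add_free_var(self, v):
        # Conjoin the tautology (v OR NOT v): every existing model splits in two.
        self.variables.append(v)
        self.cnf.append([(v, True), (v, False)])
        self.models = {m | s for m in self.models for s in ({v}, frozenset())}
```

A fault is detected whenever a tool's output for the generated formula disagrees with the tracked model set; each transformation is a metamorphic relation stating exactly how the model set must change.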
46. Horizontal traceability for just-in-time requirements: the case for open source feature requests.
- Author
-
Heck, Petra and Zaidman, Andy
- Subjects
JUST-in-time systems ,SOFTWARE engineering ,OPEN source software ,VECTOR spaces ,COMPUTER software - Abstract
Agile projects typically employ just-in-time requirements engineering and record their requirements (so-called feature requests) in an issue tracker. In open source projects, we observed large networks of feature requests that are linked to each other. Both when trying to understand the current state of the system and when deciding how a new feature request should be implemented, it is important to know and understand all of these (tightly) related feature requests. However, we still lack tool support to visualize and navigate these networks of feature requests. A first step in this direction is to see whether we can identify additional links that are not made explicit in the feature requests by measuring text-based similarity with a vector space model (VSM) using term frequency-inverse document frequency (TF-IDF) as the weighting factor. We show that a high text-based similarity score is a good indication of related feature requests. This means that with a TF-IDF VSM, we can establish horizontal traceability links, thereby providing new insights for users or developers exploring the feature request space. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
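The TF-IDF vector space model named in the abstract above is a standard technique; a minimal self-contained sketch in Python follows. The tokenization (lowercased whitespace split) and the idf form `log(N/df)` are assumptions for illustration; the paper's exact preprocessing is not given in the abstract:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: {id: text}. Returns {id: {term: tf-idf weight}} using raw term
    frequency and idf = log(N / df); terms occurring in every document get
    weight 0."""
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    n = len(docs)
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))          # document frequency: one count per doc
    return {
        d: {t: tf * math.log(n / df[t]) for t, tf in Counter(toks).items()}
        for d, toks in tokenized.items()
    }

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Pairs of feature requests whose cosine similarity exceeds some threshold would then be proposed as candidate horizontal traceability links.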
47. Extending value stream mapping through waste definition beyond customer perspective.
- Author
-
Khurum, Mahvish, Petersen, Kai, and Gorschek, Tony
- Subjects
VALUE stream mapping ,LEAN management ,COMPUTER software ,NEW product development ,SOFTWARE engineering - Abstract
Value stream mapping (VSM) is one of several Lean practices that has recently attracted interest in the software engineering community. In other contexts (such as military, health, and production), VSM has achieved considerable improvements in processes and products. The goal is to capitalize on these benefits in the software-intensive product development context. The primary contribution is an extension of the definition of waste to fit the software-intensive product development context. Since, traditionally in VSM, everything that is not considered valuable is waste, we do this practically by looking at value beyond the customer perspective and using the software value map. An evaluation has been conducted through an industrial case study. First, the instantiation and the motivations for selecting certain strategies are provided. Second, the outcome of the VSM is described in detail. The instantiation of VSM via workshops was considered good, as the workshops allowed active interaction and discussion among stakeholder groups that are distant from each other in their regular work. With respect to waste and improvement identification, the participants were able to identify similar improvement suggestions. In a retrospective, the value stream approach was perceived positively by the practitioners with respect to both process and outcome. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
48. Supporting collaborative development using process models: a tooled integration-focused approach.
- Author
-
Kedji, Komlan Akpédjé, Lbath, Redouane, Coulette, Bernard, Nassar, Mahmoud, Baresse, Laurent, and Racaru, Florin
- Subjects
SOFTWARE engineering ,COMPUTER software development ,SWITCHING costs ,COMPUTER software ,SOFTWARE architecture ,CLIENT/SERVER computing - Abstract
Collaboration in software engineering projects is usually intensive and requires adequate support from well-integrated tools. However, process-centered software engineering environments (PSEEs) have traditionally been designed to exploit integration facilities in other tools while offering little to no such facilities themselves. This is in line with the vision of the PSEE as the central orchestrator of project support tools. We argue that this view has hindered the widespread adoption of process-based collaboration support tools by incurring excessive adoption and switching costs. We propose a new process-based collaboration support architecture, backed by a process metamodel, that can easily be integrated with existing tools. The proposed architecture revolves around the central concepts of 'deep links' and 'hooks'. Our approach is validated by analyzing a collection of open-source projects, and integration utilities based on the implemented process model server have been developed. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
49. Software process simulation-at a crossroads?
- Author
-
Zhang, He, Raffo, David, Birkhöltzer, Thomas, Houston, Dan, Madachy, Raymond, Münch, Jürgen, and Sutton, Stanley M
- Subjects
COMPUTER simulation ,COMPUTER software ,SOFTWARE engineering ,CAPABILITY maturity model ,COMPUTER software development - Abstract
Software process simulation (SPS) has been evolving over the past two decades after being introduced to the software engineering community in the 1980s. At that time, the SPS technology attracted a great deal of interest from both academics and practitioners in the software process community, even to the extent of being one of the recommended techniques for achieving multiple Key Process Areas at Level 4 of the Capability Maturity Model Integration. However, in recent years, the growth of SPS seems to have slowed, along with the number of reported applications in industry. This article summarizes the special panel held during ICSSP 2012, whose goals were to assess whether this technology remains applicable to today's software engineering projects and challenges and to point out the most beneficial opportunities for future research and industry application. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
50. Opportunities for software reuse in an uncertain world: From past to emerging trends.
- Author
-
Capilla, Rafael, Gallina, Barbara, Cetina, Carlos, and Favaro, John
- Subjects
SOFTWARE engineering ,COMPUTER software reusability ,COMPUTER software ,COMPUTER software development ,PRODUCTION engineering - Abstract
Much has been investigated about software reuse since the software crisis. The development of software reuse methods, implementation techniques, and cost models has generated a significant amount of research over the years. Nevertheless, the increasing adoption of reuse techniques, many of them subsumed under higher-level software engineering processes, and of advanced programming techniques that ease the reuse of software assets has somewhat obscured new research trends on the practice of reuse in recent years and contributed to the disappearance of several reuse conferences. Also, new forms of reuse, such as open data and feature models, have brought new opportunities for reuse beyond traditional software components. From past to present, we summarize in this research the recent history of software reuse, and we report new research areas and forms of reuse according to current needs in industry and application domains, as well as promising research trends for the upcoming years. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF