255 results
Search Results
2. Incorporating DRAGON12-light trainer board in an introductory microprocessors course.
- Author
-
Alawneh, Shadi
- Subjects
MICROPROCESSORS, MICROCONTROLLERS, SERVOMECHANISMS, SYSTEMS design, COMPUTER software development, SOFTWARE architecture, PROGRAMMING languages - Abstract
The HCS12 microcontroller and DRAGON12-Light Trainer boards are extensively utilized in microprocessor system design education. This paper details the rationale, approach, and outcomes from implementing the DRAGON12-Light Trainer board in teaching an upper-level undergraduate microprocessors course at Oakland University. The course's primary goal is for students to acquire the skills necessary to design both hardware and software for microprocessor-based systems, with applications across various industries. The paper assesses the effectiveness of employing the Motorola HCS12 microcontroller for its real-world relevance and integrates the Freescale CodeWarrior IDE v5.1 environment for software development. This research uniquely contributes by measuring the improvement in students' practical skills, specifically in hardware and software design using assembly and C programming, through hands-on lab assignments. It reports on the development of students' abilities to engage in microprocessor-based system design and critically evaluates the applied aspects of pedagogy by incorporating the DRAGON12-Light Trainer in lab exercises, such as those involving 'Measuring Human Reaction Time' and 'Servo motor interfacing.' Quantitative results, derived from student surveys and assessments, indicate significant improvements in students' programming competencies. The paper provides statistical results showcasing the increase in students' self-rated confidence levels in assembly and C language programming before and after course completion. Additionally, qualitative insights are discussed, reflecting students' experiences and the perceived applicability of the skills they acquired. These results underscore the pedagogical value of integrating practical training devices like the DRAGON12-Light Trainer board into microprocessors curricula. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Unit Test Generation Using Large Language Models: A Systematic Literature Review.
- Author
-
Zapkus, Dovydas Marius and Slotkienė, Asta
- Subjects
LANGUAGE models, COMPUTER software development, ROBUST statistics, AUTOMATION, TASK performance - Abstract
Unit testing is a fundamental aspect of software development, ensuring the correctness and robustness of code implementations. Traditionally, unit tests are manually crafted by developers based on their understanding of the code and its requirements. However, this process can be time-consuming, error-prone, and may overlook certain edge cases. In recent years, there has been growing interest in leveraging large language models (LLMs) to automate the generation of unit tests. LLMs such as GPT (Generative Pre-trained Transformer), CodeT5, StarCoder, and LLaMA have demonstrated remarkable capabilities in natural language understanding and code generation tasks. By using LLMs, researchers aim to develop techniques that automatically generate unit tests from code snippets or specifications, thus optimizing the software testing process. This paper presents a literature review of articles that use LLMs for unit test generation tasks. It also discusses the history of the most commonly used large language models and their parameters, including the first time they were used for code generation tasks. The results of this study survey the large language models applied to code and unit test generation tasks and their increasing popularity in the code generation domain, indicating great promise for the future of unit test generation using LLMs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Efficient Compiler Design for a Geometric Shape Domain-Specific Language: Emphasizing Abstraction and Optimization Techniques.
- Author
-
Gupta, Priya, ManiKiran, Terala, Purushotham, Mailapalli, Suriya, L. Jeya, Venkata, Rasamsetty Naga, and Nanda, Sambhudutta
- Subjects
COMPILERS (Computer programs), GEOMETRIC shapes, MATHEMATICAL optimization, COMPUTER software development - Abstract
This research paper presents a novel approach to the design and optimization of a compiler for a domain-specific language (DSL) focused on geometric shape creation and manipulation. The primary objective is to develop a compiler capable of generating efficient machine code while offering users a high level of abstraction. The paper begins with an overview of DSLs and compilers, emphasizing their importance in software development. Next, it outlines the specific requirements of the geometric shape DSL and proposes a compiler design that addresses them. This approach considers the DSL's unique features, such as shape creation and manipulation, and aims to generate high-quality machine code. The paper also discusses optimization techniques to enhance the generated code's quality and performance, including loop unrolling and instruction scheduling. These optimizations are particularly suited to the DSL, which focuses on geometric shape creation and manipulation, and are integral to achieving efficient machine code generation. In conclusion, the paper emphasizes the novelty of this approach to DSL compiler design and anticipates exciting results from testing the compiler developed for the geometric shape DSL. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
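The abstract above describes the shape DSL only at a high level, and the paper's actual language is not publicly specified. As a purely hypothetical sketch of the kind of abstraction such a DSL provides, the following Python snippet interprets (rather than compiles) two invented shape commands; the command names and semantics are assumptions for illustration only:

```python
import math

# Hypothetical mini-DSL: each line is "<shape> <args...>", e.g. "circle 2.0".
# A real compiler in the paper's spirit would lower these commands to
# optimized machine code; this sketch merely evaluates each one to an area.
def run(program: str) -> list[float]:
    areas = []
    for line in program.strip().splitlines():
        op, *args = line.split()
        vals = [float(a) for a in args]
        if op == "circle":          # circle <radius>
            areas.append(math.pi * vals[0] ** 2)
        elif op == "rect":          # rect <width> <height>
            areas.append(vals[0] * vals[1])
        else:
            raise ValueError(f"unknown shape: {op}")
    return areas

areas = run("circle 1.0\nrect 3.0 4.0")
```

Optimizations such as loop unrolling and instruction scheduling, which the abstract highlights, would apply when repeated shape operations are compiled to a target machine rather than interpreted as above.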
5. Code Analyzer.
- Author
-
Ahire, Purvesh, Kulkarni, Milind (Prof.), Avachat, Manas, Bhople, Rutuja, and Singh, Annany Dev
- Subjects
JAVA programming language, COMPUTER software development, SOURCE code, MAINTAINABILITY (Engineering), MODULAR design, HEALTH websites - Abstract
This research paper explores the role of a code analyzer that relies on tokenization. Code analysis is essential in software development to ensure code quality and maintainability. In the context of the Java programming language, this paper provides a creative approach to code analysis utilizing tokenization. Our code analyzer allows for an accurate assessment of code structure, dependencies, and patterns by tokenizing source code into understandable units. The suggested analyzer, which uses token-based methodologies, aids in the correct identification of possible defects and performance bottlenecks. It provides vital information for developers to improve code readability, modularization, and adherence to best practices. The experimental findings show that the tokenization-based technique is effective and efficient in increasing overall code quality and maintainability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
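The tokenization step this entry describes can be sketched minimally in Python. This is not the authors' analyzer (which targets Java); it is a hypothetical stand-in that uses the standard-library `tokenize` module to split source into units and profile identifier usage, the kind of signal a token-based analyzer builds on:

```python
import io
import tokenize
from collections import Counter

def token_profile(source: str) -> Counter:
    """Tokenize Python source and count occurrences of each name token."""
    counts = Counter()
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:  # identifiers and keywords
            counts[tok.string] += 1
    return counts

sample = "def add(a, b):\n    return a + b\n"
profile = token_profile(sample)  # e.g. 'a' and 'b' each appear twice
```

A real analyzer would feed such token streams into pattern matching over dependencies and structure; the counting here is only the simplest possible use of the tokens.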
6. The Potential of AI-Driven Assistants in Scaled Agile Software Development.
- Author
-
Saklamaeva, Vasilka and Pavlič, Luka
- Subjects
AGILE software development, SOFTWARE engineering, ARTIFICIAL intelligence, COMPUTER software development - Abstract
Scaled agile development approaches are now used widely in modern software engineering, allowing businesses to improve teamwork, productivity, and product quality. The incorporation of artificial intelligence (AI) into scaled agile development methods (SADMs) has emerged as a potential strategy in response to the ongoing demand for simplified procedures and the increasing complexity of software projects. This paper explores the intersection of AI-driven assistants within the context of the scaled agile framework (SAFe) for large-scale software development, as it stands out as the most widely adopted framework. Our paper pursues three principal objectives: (1) an evaluation of the challenges and impediments encountered by organizations during the implementation of SADMs, (2) an assessment of the potential advantages stemming from the incorporation of AI in large-scale contexts, and (3) the compilation of aspects of SADMs that AI-driven assistants enhance. Through a comprehensive systematic literature review, we identified and described 18 distinct challenges that organizations confront. In the course of our research, we pinpointed seven benefits and five challenges associated with the implementation of AI in SADMs. These findings were systematically categorized based on their occurrence either within the development phase or the phases encompassing planning and control. Furthermore, we compiled a list of 15 different AI-driven assistants and tools, subjecting them to a more detailed examination, and employing them to address the challenges we uncovered during our research. One of the key takeaways from this paper is the exceptional versatility and effectiveness of AI-driven assistants, demonstrating their capability to tackle a broader spectrum of problems. In conclusion, this paper not only sheds light on the transformative potential of AI, but also provides invaluable insights for organizations aiming to enhance their agility and management capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Agile Project Management in the Age of Digital Transformation: Exploring Emerging Trends.
- Author
-
Gorski, Hortensia, Gligorea, Ilie, Brudan, Adrian, and Oancea, Romana
- Subjects
PROJECT management, DIGITAL transformation, DIGITAL technology, COMPUTER software development, ARTIFICIAL intelligence - Abstract
In the context of today's dynamic environment, agility and speed are two essential characteristics of project management in the software development industry, as well as in many other industries. In order to meet the complex and continuous challenges of the digital age, the principles, techniques, and methods of Agile Project Management and Scrum are expected to become more widespread, especially in software development, replacing or augmenting traditional ones. This paper aims to identify trends in project management related to digital transformation and the diffusion of Industry 4.0 technologies. A bibliometric analysis was carried out by searching the WOS database; the resulting documents were exported and processed in VOSviewer. The research revealed that, in the context of digital transformation, information technology supports the agile approach, agile transformation, and agile project management. Furthermore, emerging technologies specific to Industry 4.0, especially artificial intelligence and big data, can contribute significantly to all project phases. These emerging technologies can improve data processing and analysis, project forecasting, and risk prediction, and can support decision-making, thus contributing to the success of the project. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Differentiated Security Requirements: An Exploration of Microservice Placement Algorithms in Internet of Vehicles.
- Author
-
Zhang, Xing, Liang, Jun, Lu, Yuxi, Zhang, Peiying, and Bi, Yanxian
- Subjects
REINFORCEMENT learning, TECHNOLOGICAL innovations, ALGORITHMS, INTERNET, COMPUTER software development, INTERNET of things - Abstract
In recent years, microservices, an emerging technology in software development, have been favored by developers for their lightweight and low-coupling features and have been rapidly applied to the Internet of Things (IoT), the Internet of Vehicles (IoV), and similar domains. Microservices deployed in each unit of the IoV use wireless links to transmit data, which exposes a larger attack surface, and it is precisely because of these features that the secure and efficient placement of microservices poses a serious challenge. Improving the security of all nodes in an IoV can significantly increase the service provider's operational costs and can create security resource redundancy issues. As the application of reinforcement learning matures, it enables faster convergence of algorithms through agent design and performs well in large-scale data environments. Inspired by this, this paper first abstractly models the placement network and placement behavior and sets security constraints. The environment information is fully extracted, and an asynchronous reinforcement-learning-based algorithm is designed to improve the effect of microservice placement and reduce security redundancy while ensuring the security requirements of microservices. The experimental results show that the proposed algorithm performs well in terms of the fit between the security index and user requirements and the request acceptance rate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Developing a Tool for Modeling and Simulation of Discrete Systems using Iterative Approach.
- Author
-
Davidrajuh, Reggie
- Subjects
DISCRETE systems, PETRI nets, ITERATIVE learning control, SIMULATION methods & models, COMPUTER software development, INDUSTRIAL applications - Abstract
General-purpose Petri Net Simulator (GPenSIM) is a new tool for modelling and simulating discrete systems. GPenSIM was developed using the Iterative Approach, which divides the software development life cycle into smaller tasks; in each iteration, more functionality is added. GPenSIM was created both for modelling industrial applications and for academic purposes (teaching and research). Hence, in each iteration, some industrial, real-life, large discrete systems are modelled, and the experience gained from these projects is incorporated into the tool's development. This paper follows the development of GPenSIM through three major iterations and explains the importance of, and the lessons learned during, these iterations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Exploring the Connection between the TDD Practice and Test Smells—A Systematic Literature Review †.
- Author
-
Marabesi, Matheus, García-Holgado, Alicia, and García-Peñalvo, Francisco José
- Subjects
OLFACTOMETRY, SMELL, COMPUTER software development, ORGANIZATIONAL structure - Abstract
Test-driven development (TDD) is an agile practice of writing test code before production code, following three stages: red, green, and refactor. In the red stage, the test code is written; in the green stage, the minimum code necessary to make the test pass is implemented, and in the refactor stage, improvements are made to the code. This practice is widespread across the industry, and various studies have been conducted to understand its benefits and impacts on the software development process. Despite its popularity, TDD studies often focus on the technical aspects of the practice, such as the external/internal quality of the code, productivity, test smells, and code comprehension, rather than the context in which it is practiced. In this paper, we present a systematic literature review using Scopus, Web of Science, and Google Scholar that focuses on the TDD practice and the influences that lead to the introduction of test smells/anti-patterns in the test code. The findings suggest that organizational structure influences the testing strategy. Additionally, there is a tendency to use test smells and TDD anti-patterns interchangeably, and test smells negatively impact code comprehension. Furthermore, TDD styles and the relationship between TDD practice and the generation of test smells are frequently overlooked in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
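The red-green-refactor cycle summarized in this entry can be made concrete with a generic Python example (unrelated to the paper's corpus): the test below would be written first and fail (red), the minimal `slugify` implementation makes it pass (green), and a later refactor would restructure the code while the test pins its behavior.

```python
import unittest

# Green stage: the minimal implementation that satisfies the test.
# (In the red stage, TestSlugify is written first and fails because
# slugify does not yet exist.)
def slugify(title: str) -> str:
    # A refactor stage might later extract the normalization logic;
    # behavior stays fixed by the test below.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_slug(self):
        self.assertEqual(slugify("Hello TDD World"), "hello-tdd-world")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
)
```

Test smells of the kind the review studies (e.g. assertion roulette or conditional logic inside tests) would appear as deviations from the single focused assertion shown here.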
11. SOFTWARE EFFORT ESTIMATION USING MACHINE LEARNING ALGORITHMS.
- Author
-
LAVINGIA, KRUTI, PATEL, RAJ, PATEL, VIVEK, and LAVINGIA, AMI
- Subjects
MACHINE learning, SOFTWARE engineering, COMPUTER software development, SCHEDULING software, COMPUTER software, ESTIMATION theory - Abstract
Effort estimation is a crucial aspect of software development, as it helps project managers plan, control, and schedule the development of software systems. This research study compares various machine learning techniques for estimating effort in software development, focusing on the most widely used and recent methods. The paper begins by highlighting the significance of effort estimation and its associated difficulties. It then presents a comprehensive overview of the different categories of effort estimation techniques, including algorithmic, model-based, and expert-based methods. The study concludes by comparing methods on a given software development dataset: the Random Forest Regression algorithm performs well relative to the other regression algorithms tested, including Support Vector, Linear, and Decision Tree Regression. Additionally, the research identifies areas for future investigation in software effort estimation, including the need for more accurate and reliable methods and for addressing the inherent complexity and uncertainty in software development projects. This paper provides a comprehensive examination of the current state of the art in software effort estimation, serving as a resource for researchers in the field of software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
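The learn-from-history idea behind the ML estimators this entry compares can be illustrated with a deliberately simple stand-in. The sketch below is not the paper's Random Forest model or dataset; it is a hypothetical nearest-neighbour (analogy-based) estimator on made-up project data:

```python
# Hypothetical sketch of ML-style effort estimation: predict person-hours
# for a new project as the average effort of its k nearest historical
# projects by feature distance. The paper compares richer models (Random
# Forest, SVR, etc.); this stand-in only shows the learn-from-history idea.
def estimate_effort(history, features, k=2):
    # history: list of (feature_vector, effort); features: the new project.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda h: dist(h[0], features))[:k]
    return sum(effort for _, effort in nearest) / k

# Invented historical projects: (size_kloc, team_size) -> effort in hours.
past = [((10, 3), 400.0), ((20, 5), 900.0), ((5, 2), 180.0), ((15, 4), 650.0)]
estimate = estimate_effort(past, (12, 3))  # averages the two closest projects
```

A Random Forest replaces the averaging of neighbours with an ensemble of decision trees, but both families share the premise that past projects predict future effort.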
12. Spatial Data Fusion Model Design and Research for an Underground Pipeline in Urban Environment Scene Modeling.
- Author
-
Shen, Tao, Zhang, Huabin, Huo, Liang, and Sun, Di
- Subjects
UNDERGROUND pipelines, GEOGRAPHIC information systems, MULTISENSOR data fusion, UNDERGROUND construction, COMPUTER software development - Abstract
In the rapid development of urban construction, underground pipelines play a crucial role. However, current underground pipelines are poorly associated with the relevant management departments, and there are deficiencies in data completeness, accuracy, and information content. Managing and sharing information resources is relatively difficult, turning the constructed 3D underground pipeline geographic information systems into 'information silos'. This results in redundant construction and resource wastage of underground utilities. The complex distribution characteristics of underground utilities make rapid batch modeling and post-model maintenance challenging. Therefore, researching a 3D spatial data fusion model for urban underground utilities becomes particularly important. Given the above problems, this paper proposes a spatial data fusion model for underground pipeline scene modeling. It elaborates on the geometric, semantic, and temporal characteristics of underground pipelines, encapsulating these features. With underground pipeline objects as the core and pipeline characteristics as the foundation, a spatial data fusion model integrating multiple characteristics of underground pipelines is constructed. Through software development, the data model designed in this paper facilitates the rapid construction of underground pipeline scenes. This further enhances the consistency and integrity of underground pipeline data, enabling shared resources and comprehensive day-to-day supervision of facility operations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. The Use of AI in Software Engineering: A Synthetic Knowledge Synthesis of the Recent Research Literature.
- Author
-
Kokol, Peter
- Subjects
SOFTWARE engineering, NATURAL language processing, ARTIFICIAL intelligence, ENGINEERING management, COMPUTER software developers, COMPUTER software development - Abstract
Artificial intelligence (AI) has witnessed an exponential increase in use across various applications. Recently, the academic community started to research and inject new AI-based approaches to provide solutions to traditional software-engineering problems. However, a comprehensive and holistic understanding of the current status is still lacking. To close this gap, synthetic knowledge synthesis was used to derive the research landscape of the contemporary research literature on the use of AI in software engineering. The synthesis resulted in 15 research categories and 5 themes—namely, natural language processing in software engineering, use of artificial intelligence in the management of the software development life cycle, use of machine learning in fault/defect prediction and effort estimation, employment of deep learning in intelligent software engineering and code management, and mining software repositories to improve software quality. The most productive country was China (n = 2042), followed by the United States (n = 1193), India (n = 934), Germany (n = 445), and Canada (n = 381). A high percentage of papers (47.4%) were funded, showing the strong interest in this research topic. The convergence of AI and software engineering can significantly reduce the required resources, improve quality, enhance the user experience, and improve the well-being of software developers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Integrated Project Development through Combined Theory and Practices of Core Courses focusing on Software Development Skills: Integrated Learning Framework.
- Author
-
Desai, Padmashree and Hiremath, P. G. Sunitha
- Subjects
COMPUTER software testing, COMPUTER software development, REPORT writing, INTERDISCIPLINARY education, COMPUTER science students, EDUCATION policy - Abstract
The National Education Policy promotes moving from conventional content-heavy, memorization-based learning practices towards holistic, integrated learning. It imparts a creative and multidisciplinary curriculum that focuses equally on curriculum and assessment. All educational establishments assess students using written examinations, quizzes, seminars, term paper writing, and course projects. A semester typically includes 4-5 courses, and students must earn credits for these courses by scoring a good Cumulative Grade Point Average (CGPA). As course projects offer in-depth, holistic, and lifelong learning of a course, many courses include a course project as one of their activities. If all courses were to include course projects as mandatory pedagogy, it would be difficult for students to acquire in-depth knowledge and the required skills while also dealing with stress. So we propose an integrated learning framework that applies the theory and practices of two core courses, Software Engineering and Web Technologies, to develop a web application. This integrated learning focuses on developing software development and software testing skills in computer science for undergraduate students pursuing a bachelor of engineering degree. The framework alleviated the pressure on students during placement and created job opportunities in software development. The framework consists of three important phases. The first phase includes the identification of a problem as a customer need, writing requirements, and analyzing them. Students apply modular design principles and break down the codebase into distinct modules; this technique enhances code organization, reusability, and maintainability. The second phase focuses on developing the front end by harnessing the power of Angular, a leading web framework, to craft a sleek and interactive user interface.
The backend is built using Node.js, which serves as the foundation, enabling the software system to cater to high-performance server environments. These modules communicate seamlessly through well-defined APIs, facilitating the integration of the various components within the application and ultimately delivering a seamless and responsive user experience. An industry expert conducted a workshop on Angular and React to provide hands-on experience. The third phase focused on software testing, using appropriate testing tools such as Selenium, JMeter, TestComplete, and Appium to test the web application. A software testing workshop was conducted for students by industry experts to expose them to designing test cases, test plans, and testing strategies; hands-on experience with the testing tools was provided during the workshop. Faculty reviews and rubrics-based assessment are conducted on each phase. Approximately sixty teams created web-based applications for real-world scenarios. Positive feedback on the framework indicated that more than 87% of the students agreed that they could apply software engineering principles and practices, such as requirements management, modular design, and testing, in web applications. Also, more than 85% of students acquired skills from code to web design mastery by developing web applications in Angular with Node.js backend implementation. The framework helped to improve teamwork, presentation, and communication skills, and confidence in software development improved to a great extent. The design and implementation of the framework met the stated outcomes of the courses. The students' academic performance improved by 10% compared to the previous year, when students were not involved in integrated project development. [ABSTRACT FROM AUTHOR]
[Paper submitted for review on Sept 10, 2023; accepted on Nov 15, 2023. Corresponding author: Padmashree Desai, K.L.E. Technological University, Karnataka, India. Address: Hubblii-580031 (e-mail: padmashri@kletech.ac.in). Copyright © YYYY JEET.]
- Published
- 2024
- Full Text
- View/download PDF
15. The Impact of Large Language Models on Programming Education and Student Learning Outcomes.
- Author
-
Jošt, Gregor, Taneski, Viktor, and Karakatič, Sašo
- Subjects
LANGUAGE models, EDUCATIONAL outcomes, PROGRAMMING languages, CHATGPT, COMPUTER software development, DEBUGGING - Abstract
Recent advancements in Large Language Models (LLMs) like ChatGPT and Copilot have led to their integration into various educational domains, including software development education. Regular use of LLMs in the learning process is still not well researched; this paper intends to fill that gap. It explores the nuanced impact of informal LLM usage on undergraduate students' learning outcomes in software development education, focusing on React applications. We carefully designed an experiment involving thirty-two participants over ten weeks, in which we examined unrestricted but not specifically encouraged LLM use and its correlation with student performance. Our results reveal a significant negative correlation between increased reliance on LLMs for critical-thinking-intensive tasks, such as code generation and debugging, and final grades. Furthermore, a downward trend in final grades is observed with increased average LLM use across all tasks. However, the correlation between the use of LLMs for seeking additional explanations and final grades was not as strong, indicating that LLMs may serve better as a supplementary learning tool. These findings highlight the importance of balancing LLM integration with the cultivation of independent problem-solving skills in programming education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
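The correlation analysis this entry reports can be illustrated with a small worked example. The numbers below are invented, not the study's data; the snippet merely shows how a Pearson coefficient captures a negative relationship between LLM reliance and grades:

```python
# Invented data: per-student LLM-use score vs. final grade (0-10 scale).
llm_use = [1, 2, 3, 4, 5]
grades = [9, 8, 7, 5, 4]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(llm_use, grades)  # strongly negative on this invented sample
```

A coefficient near -1, as here, is the shape of result the study describes for critical-thinking-intensive tasks; significance testing on real data would additionally require a p-value.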
16. FAIR-USE4OS: Guidelines for creating impactful open-source software.
- Author
-
Sonabend, Raphael, Gruson, Hugo, Wolansky, Leo, Kiragga, Agnes, and Katz, Daniel S.
- Subjects
SOFTWARE architecture, DESIGN software, COMPUTER software, OPEN source software, COMPUTER software development, RESEARCH personnel - Abstract
This paper extends the FAIR (Findable, Accessible, Interoperable, Reusable) guidelines to provide criteria for assessing whether software conforms to best practices in open source. By adding "USE" (User-Centered, Sustainable, Equitable), software development can adhere to open-source best practice by incorporating user input early on, ensuring front-end designs are accessible to all possible stakeholders, and planning long-term sustainability alongside software design. The FAIR-USE4OS guidelines will allow funders and researchers to more effectively evaluate and plan open-source software projects. There is good evidence of funders increasingly mandating that all funded research software be open source; however, even under the FAIR guidelines, this could simply mean software released on public repositories with a Zenodo DOI. By creating FAIR-USE software, best practice can be demonstrated from the very beginning of the design process, and the software has the greatest chance of success by being impactful. Author summary: This research builds on the FAIR principles to ensure research software adheres to open-source development best practice, which includes community engagement and early planning for long-term sustainability. By creating guidelines ("FAIR-USE4OS") that can be followed, funders and researchers are in a stronger position to evaluate and create research software with the maximal chance of success. This research is important because open-source software that is not "FAIR-USE" has a lower probability of long-term impact. These guidelines will help benefit and impact society once they are widely accepted by researchers and funders, which could happen within a relatively short time period, given good evidence that funders are actively including and updating open-source policies that directly impact how research is conducted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. RE Methods for Virtual Reality Software Product Development: A Mapping Study.
- Author
-
Karre, Sai Anirudh, Reddy, Y. Raghu, and Mittal, Raghav
- Subjects
VIRTUAL reality software, COMPUTER software development, NEW product development, VIRTUAL reality, REQUIREMENTS engineering, PRODUCT attributes - Abstract
Software practitioners use various methods in Requirements Engineering (RE) to elicit, analyze, and specify the requirements of enterprise products. These methods impact the final product's characteristics and influence product delivery, and their ad hoc usage can lead to inconsistency and ambiguity in the product. With the notable rise in enterprise products, games, and so forth across various domains, Virtual Reality (VR) has become an essential technology for the future, and the methods adopted in RE for developing VR products require detailed study. This article presents a mapping study on RE methods prescribed and used for developing VR applications, covering requirements elicitation, requirements analysis, and requirements specification. Our study provides insights into the use of such methods in the VR community and suggests specific RE methods for various fields of interest. We also discuss future directions in RE for VR products. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Containerization in Edge Intelligence: A Review.
- Author
-
Urblik, Lubomir, Kajati, Erik, Papcun, Peter, and Zolotová, Iveta
- Subjects
CONTAINERIZATION, ARTIFICIAL intelligence, COMPUTER software development, CLOUD computing - Abstract
The onset of cloud computing brought with it the adoption of containerization—a lightweight form of virtualization that provides an easy way of developing and deploying solutions across multiple environments and platforms. This paper describes the current use of containers and complementary technologies in software development and the benefits they bring. Certain applications run into obstacles when deployed on the cloud due to the latency it introduces or the amount of data that needs to be processed; these issues are addressed by edge intelligence. This paper describes edge intelligence, the deployment of artificial intelligence close to the data source, and the opportunities it brings, along with some examples of practical applications. We also discuss some of the challenges in the development and deployment of edge intelligence solutions and the possible benefits of applying containerization in edge intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Expediting the design and development of secure cloud-based mobile apps.
- Author
-
Chimuco, Francisco T., Sequeiros, João B. F., Simões, Tiago M. C., Freire, Mário M., and Inácio, Pedro R. M.
- Subjects
- *
MOBILE apps , *DATA privacy , *DATA security , *SOFTWARE engineering , *APPLICATION software , *COMPUTER software development - Abstract
The adoption and popularity of mobile devices by end-users is partially driven by the increasing development and availability of mobile applications that can aid in solving different problems and provide access to services in a wide range of domains or categories, namely healthcare, education, e-commerce, or entertainment. While these applications use and benefit from a wide panoply of technologies from the Internet of Things, fog, and cloud computing, data security and privacy are typically not fully taken into account before the creation of many mobile applications or during the software development phases. This paper presents an in-depth approach to modeling attacks on the specific cloud and mobile ecosystem, given its importance in the process of secure application development. Moreover, aiming at bridging the knowledge gap between developers and security experts, this paper presents an alpha version of the security by design for cloud and mobile ecosystem (secD4CloudMobile) framework. secD4CloudMobile is a set of tools that covers cloud and mobile security requirement elicitation (CMSRE), cloud and mobile security best practices guidelines (CMSBPG), cloud mobile attack modeling elicitation (CMAME), and cloud mobile security test specification and tools (CM2ST). The purpose of the framework is to provide cloud and mobile application developers with useful, readily applicable information and guidelines, striving to bring security engineering and software engineering closer together in a more accessible and automated manner, with the aim of incorporating security by construction. Finally, the paper presents some preliminary results and discussion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Evolution of secure development lifecycles and maturity models in the context of hosted solutions.
- Author
-
Lange, Felix and Kunz, Immanuel
- Subjects
- *
COMPUTER software development , *SPRINTING , *PRIVACY , *RETIREMENT , *COMPUTER software - Abstract
Organizations creating software commonly utilize software development lifecycles (SDLCs) to structure development activities. Secure development lifecycles (SDLs) integrate into SDLCs, adding security or compliance activities. They are widely used and have been published by industry leaders and in the literature. These SDLs, however, were mostly designed before or while cloud services and other hosted solutions became popular. Such offerings widen the provider's responsibilities, as providers not only deliver software but operate and decommission it as well. SDLs, however, do not always account for this change. Security maturity models (SMMs) help to assess SDLs and identify improvements by introducing a baseline to compare against. Several of these models were created after the advent of hosted solutions and are more recent than commonly referenced SDLs. Recent SMMs and SDLs may therefore support hosted solutions better than older proposals do. This paper compares a set of current and historic SDLs and SMMs in order to review their support for hosted solutions, including how that support has changed over time. Security, privacy, and support for small or agile organizations are considered, as all are relevant to hosted solutions. The SDLs analyzed include Microsoft's SDL, McGraw's Touchpoints, Cisco's SDL, and Stackpole and Oksendahl's SDL2. The SMMs reviewed are OWASP's Software Assurance Maturity Model 2 and DevSecOps Maturity Model. To assess the support for hosted solutions, the security and privacy activities foreseen in each SDLC phase are compared, before organizational compatibility, activity relevance, and efficiency are assessed. The paper further demonstrates how organizations may select and adjust a suitable proposal. The analyzed proposals are found not to sufficiently support hosted solutions: important SDLC phases, such as solution retirement, are not always sufficiently supported. Agile practices, such as working in sprints, and small organizations are often insufficiently considered as well. Efficiency is found to vary based on the application context. A clear improvement trend from before the proliferation of hosted solutions cannot be identified. Future work is therefore found to be required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Effort-Aware Fault-Proneness Prediction Using Non-API-Based Package-Modularization Metrics.
- Author
-
Shaikh, Mohsin, Tunio, Irfan, Khan, Jawad, and Jung, Younhyun
- Subjects
COMPUTER software development ,SOURCE code ,LEGACY systems ,SOFTWARE architecture ,SYSTEMS software - Abstract
Source code complexity of legacy object-oriented (OO) software has a trickle-down effect on the key activities of software development and maintenance. Package-based OO design is widely believed to be an effective modularization approach. Recently, theories and methodologies have been proposed to assess complementary aspects of legacy OO systems through package-modularization metrics. These metrics address non-API-based object-oriented principles such as encapsulation, commonality of goal, changeability, maintainability, and analyzability. Despite their ability to characterize package organization, their application to cost-effective fault-proneness prediction is yet to be determined. In this paper, we present a theoretical illustration and an empirical perspective on non-API-based package-modularization metrics for effort-aware fault-proneness prediction. First, we employ correlation analysis to evaluate the relationship between faults and package-level metrics. Second, we use multivariate logistic regression with effort-aware performance indicators (ranking and classification) to investigate the practical application of the proposed metrics. Our experimental analysis of open-source Java software systems provides statistical evidence of fault-proneness prediction and relatively better explanatory power than traditional metrics. Consequently, these results guide developers toward reliable and modular package-based software design. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
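The effort-aware ranking idea in the abstract above can be illustrated with a minimal sketch (the data and the `effort_aware_recall` helper are invented for illustration, not the paper's actual procedure): modules are ranked by predicted fault density, and we measure how many actual faults are covered when inspecting only a fixed budget of code.

```python
# Hedged sketch of effort-aware fault ranking: rank modules by predicted
# faults per line of code, then count the fraction of actual faults found
# when reviewing only the top 20% of total LOC. All data is illustrative.
def effort_aware_recall(modules, effort_budget=0.20):
    """modules: list of (predicted_prob, loc, actual_faults)."""
    total_loc = sum(m[1] for m in modules)
    total_faults = sum(m[2] for m in modules)
    # Density ranking: predicted probability per LOC, descending.
    ranked = sorted(modules, key=lambda m: m[0] / m[1], reverse=True)
    spent, found = 0, 0
    for prob, loc, faults in ranked:
        if spent + loc > effort_budget * total_loc:
            break
        spent += loc
        found += faults
    return found / total_faults

packages = [  # (predicted fault probability, LOC, actual fault count)
    (0.9, 100, 5), (0.8, 400, 3), (0.3, 200, 1), (0.1, 1300, 1),
]
print(round(effort_aware_recall(packages), 2))  # → 0.5
```

Density-based ranking is what makes the evaluation "effort-aware": a small, highly fault-prone package is inspected before a large one with the same predicted probability.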
22. Using system dynamics to support a functional exercise for pandemic preparedness and response.
- Author
-
Green, Caroline, Beishuizen, Berend, Stein, Mart, Rovers, Chantal P., Tostmann, Alma, de Jong, Daan L. K., Houareau, Claudia, Perseke, Knut, Spieker, Clara, Grote, Ulrike, Csornai, Patrick, Tighe, Carlos, Hayes, Conor, Andrade, Jair, Connolly, Máire A., and Duggan, Jim
- Subjects
PANDEMIC preparedness ,SYSTEM dynamics ,COMPUTER software development ,PUBLIC health - Abstract
In pandemic preparedness and response, a Functional Exercise (FX) is used to simulate a situation as close to a real‐life event as possible without the deployment of resources. Participants are drawn from public health emergency operations centres and work through a scenario script to test possible responses to a novel pathogen outbreak. This paper summarises the role of system dynamics modelling in the design and implementation of a functional exercise involving the Dutch and German national public health institutes in March 2023. The findings confirm the value of the system dynamics method in integrating disease and hospital models, and also highlight how well the method aligns with modern software development processes. The paper concludes with a discussion of what worked well and presents areas for future enhancements of management flight simulators to support functional exercises. © 2024 The Author(s). System Dynamics Review published by John Wiley & Sons Ltd on behalf of System Dynamics Society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
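The integrated disease-and-hospital modelling mentioned above can be sketched as a minimal stock-and-flow simulation (all parameters are illustrative, not those of the actual exercise): an SIR epidemic model stepped with Euler integration, feeding a hospital-occupancy stock.

```python
# Toy system dynamics sketch: SIR stocks plus a hospital stock, advanced
# with explicit Euler steps. Parameters below are purely illustrative.
def simulate(days=120, dt=1.0, beta=0.3, gamma=0.1,
             hosp_frac=0.05, hosp_stay=10.0, n=1_000_000):
    s, i, r, h = n - 1.0, 1.0, 0.0, 0.0
    peak_hospital = 0.0
    for _ in range(int(days / dt)):
        infections = beta * s * i / n        # flow: S -> I
        recoveries = gamma * i               # flow: I -> R
        admissions = hosp_frac * recoveries  # a share of resolving cases
        discharges = h / hosp_stay           # flow out of hospital stock
        s -= infections * dt
        i += (infections - recoveries) * dt
        r += recoveries * dt
        h += (admissions - discharges) * dt
        peak_hospital = max(peak_hospital, h)
    return peak_hospital

print(simulate())
```

Coupling the epidemic flows to a capacity stock in this way is what lets a flight simulator show decision-makers the downstream effect of, say, delayed interventions on hospital load.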
23. Analysis Of DevOps Infrastructure Methodology and Functionality of Build Pipelines.
- Author
-
Rangineni, Sandeep and Bhardwaj, Arvind Kumar
- Subjects
EDUCATION ethics ,COMPUTER software development ,INFRASTRUCTURE (Economics) ,SYSTEMS software ,REQUIREMENTS engineering ,SCALABILITY - Abstract
The DevOps pipeline for infrastructure is a critical component of modern software development and operations practices. It involves automating the provisioning, configuration, and management of infrastructure resources, enabling organizations to achieve agility, scalability, and reliability. This paper presents an analysis of the DevOps pipeline for infrastructure, conducted through comprehensive research, evaluation of industry best practices, and examination of case studies. The DevOps methodology would collapse without the use of a DevOps pipeline; the phrase often refers to the methods, procedures, and automation frameworks that go into the creation of software artifacts. Jenkins, an open-source Java program, is the best-known DevOps pipeline tool and is often credited as the catalyst for the whole DevOps movement. Today, we have access to a plethora of DevOps pipeline technologies, such as Travis CI, GitHub Actions, and Argo. To keep up with the demand for new and improved software systems, today's development organizations must overcome a number of obstacles. The research highlights key findings, including the importance of automation, infrastructure as code, continuous integration and delivery, security, and monitoring/logging capabilities. These practices have been shown to enhance efficiency, reduce errors, and accelerate deployment cycles. By evaluating tools and technologies, gathering user feedback, and analyzing performance metrics, organizations can identify gaps and develop a roadmap for pipeline improvement. To maintain academic integrity, this analysis adheres to proper citation and referencing practices; paraphrasing, summarizing research findings, and adding personal analysis and interpretations ensure the originality and authenticity of the analysis, and plagiarism detection tools are used to confirm the absence of unintentional similarities with existing content. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
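The gating behaviour of build pipelines discussed above can be sketched in a few lines (a toy stage runner, not Jenkins or GitHub Actions syntax): stages run in order, and the pipeline halts at the first failure.

```python
# Toy sketch of a CI pipeline: an ordered list of stages, where a failed
# stage stops everything downstream. The stages below are stand-ins.
def run_pipeline(stages):
    """stages: list of (name, callable) pairs; returns stage results."""
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "ok"))
        except Exception as exc:  # a failed stage halts the pipeline
            results.append((name, f"failed: {exc}"))
            break
    return results

def build(): pass            # e.g. compile sources
def test(): pass             # e.g. run unit tests
def deploy(): raise RuntimeError("no credentials")  # simulated failure

print(run_pipeline([("build", build), ("test", test), ("deploy", deploy)]))
# → [('build', 'ok'), ('test', 'ok'), ('deploy', 'failed: no credentials')]
```

Real pipeline tools add parallelism, artifacts, and triggers on top of this, but the stop-on-failure gate is the core contract they all share.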
24. Puzzle Pattern, a Systematic Approach to Multiple Behavioral Inheritance Implementation in Object-Oriented Programming.
- Author
-
Fallucchi, Francesca and Gozzi, Manuel
- Subjects
OBJECT-oriented programming ,COMPUTER software development ,APPLICATION software ,SOFTWARE architecture ,SOFTWARE engineering - Abstract
Featured Application: This software design pattern can be used in OOP to promote conceptual clarity, reduce coupling, and facilitate system scalability. Object-oriented programming (OOP) has long been a dominant paradigm in software development, but it is not without its challenges. One major issue is tight coupling between objects, which can hinder flexibility and make it difficult to modify or extend code. Additionally, the complexity of managing inheritance hierarchies can lead to rigid and fragile designs, making it hard to maintain and evolve the software over time. This paper introduces a software development pattern that offers a renewed approach to writing code in object-oriented (OO) environments. Addressing some of the limitations of the traditional approach, the Puzzle Pattern focuses on extreme modularity, favoring code written exclusively as stateless building blocks (e.g., Java interfaces, which support concrete method definitions starting from version 8). Concrete classes are subsequently assembled through the implementation of those interfaces, reducing coupling and introducing a new level of flexibility and adaptability in software construction. The pattern offers significant benefits in software development, promoting extreme modularity through interface-based coding, enhancing adaptability via multiple inheritance, and upholding the SOLID principles, though it may pose challenges such as complexity and a learning curve for teams. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
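The Puzzle Pattern targets Java 8+ interfaces with default methods; as a rough analogue, the same idea of stateless behavioral building blocks assembled into a stateful concrete class can be sketched with Python mixins (the class names here are illustrative, not from the paper):

```python
# Python analogue of the stateless-building-block idea: behavior lives
# only in mixins that carry no instance state, and a concrete class is
# "assembled" from them plus its own state.
class Greeter:
    def greet(self):          # behavior only; relies on self.name()
        return f"Hello, {self.name()}"

class Shouter:
    def shout(self):          # composes behavior from another block
        return self.greet().upper()

class Person(Greeter, Shouter):
    def __init__(self, name):
        self._name = name     # state is confined to the concrete class
    def name(self):
        return self._name

print(Person("Ada").shout())  # → HELLO, ADA
```

Because the mixins hold no state, they can be freely recombined across concrete classes, which is the coupling reduction the pattern aims for.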
25. Zynerator: Bridging Model-Driven Architecture and Microservices for Enhanced Software Development.
- Author
-
Zouani, Younes and Lachgar, Mohamed
- Subjects
COMPUTER software development ,MODERN architecture ,ARCHITECTURAL style ,MAINTAINABILITY (Engineering) ,SCALABILITY ,SOFTWARE architecture - Abstract
Model-driven architecture (MDA) has demonstrated significant potential in automating code generation, yet its application often falls short in addressing the complexities of modern architectural styles, notably microservices. Microservice architecture, characterized by its decomposition of applications into small, independently deployable services, presents unique challenges and opportunities that traditional MDA approaches struggle to accommodate. In this paper, Zynerator, a novel framework that bridges the gap between model-driven architecture and microservice development, is presented. By integrating semantic decorators into the platform-independent model (PIM), Zynerator empowers end-users to express intricate functional and non-functional requirements, laying the foundation for the generation of contextually appropriate code. Moreover, Zynerator goes beyond traditional MDA capabilities by offering a solution for microservice architecture integration, enabling the generation of service gateways, service discovery mechanisms, and other essential components inherent to microservice ecosystems. This integration not only streamlines the development process but also ensures the scalability, resilience, and maintainability of microservice-based applications. Through Zynerator, a flexible and comprehensive solution is presented that leverages the strengths of MDA while addressing the evolving needs of modern software architecture, particularly in the realm of microservice development. Empirical results showed that Zynerator improves the alignment of generated code with functional requirements by 55%, reduces the communication and deployment overhead of microservice adoption by 30%, and increases system scalability by supporting up to 10,000 concurrent users without performance degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
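The PIM-to-code generation idea can be sketched as a toy (this is not Zynerator's actual template engine; the model fields and the "auditable" decorator below are invented for illustration): a platform-independent model is described as data, and a generator renders it into a class skeleton, with decorators adding cross-cutting code.

```python
# Toy MDA-style generator: a PIM described as plain data is rendered
# into source text. "auditable" plays the role of a semantic decorator.
PIM = {"entity": "Order", "fields": ["id", "total"],
       "decorators": ["auditable"]}  # all illustrative

def generate(model):
    lines = [f"class {model['entity']}:"]
    args = ", ".join(model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for f in model["fields"]:
        lines.append(f"        self.{f} = {f}")
    if "auditable" in model["decorators"]:
        lines.append("        self.audit_log = []  # from decorator")
    return "\n".join(lines)

print(generate(PIM))
```

In a real MDA toolchain the same model would be rendered by multiple platform-specific templates (e.g. one per microservice component), which is where the framework's gateway and discovery generation would slot in.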
26. Factors influencing sustainability aspects in crowdsourced software development: A systematic literature review.
- Author
-
Haider, Waqas, Ilyas, Muhammad, Khalid, Shah, and Ali, Sikandar
- Subjects
- *
COMPUTER software development , *SOFTWARE engineering , *COMPUTER software industry , *SUSTAINABILITY , *SUSTAINABLE engineering - Abstract
Crowdsourced software development has become increasingly popular in recent years in the software industry. Crowdsourcing is an open‐call technique for outsourcing tasks to a broad and undefined crowd. It provides numerous advantages, including reduced costs, fast project completion, talent identification, diversity of solutions, top quality, and access to problem‐solving creativity. Despite the benefits gained from crowdsourcing, there are numerous issues, such as a lack of experienced workers, lack of confidentiality, copyright issues, and software sustainability. There is also little focus on the long‐term sustainability of software development because of new ideas emerging in crowdsourced software development. Furthermore, the literature highlights the lack of guidelines for sustainable software crowdsourcing as one of the limitations of current software standards. This study aims to identify the factors that influence sustainability aspects in crowdsourced software development. We conducted a systematic literature review to identify these factors. In this paper, we present the findings of the systematic literature review in the form of a list of 11 factors extracted from a sample of 45 finally selected papers. Six of these factors are ranked as critical: "Lack of coding standard in documentation," "Use of popular programming tools," "Crowd lack of knowledge and awareness about sustainability," "Energy‐efficient coding," "Lack of awareness about sustainable software engineering practices," and "Lack of coordination/communication between client and crowd." [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Advances in automated support for requirements engineering: a systematic literature review.
- Author
-
Umar, Muhammad Aminu and Lano, Kevin
- Subjects
- *
REQUIREMENTS engineering , *TECHNICAL literature , *NATURAL language processing , *COMPUTER software development , *UNIFIED modeling language , *SYSTEMS software - Abstract
Requirements Engineering (RE) has undergone several transitions over the years, from traditional methods to agile approaches emphasising increased automation. In many software development projects, requirements are expressed in natural language and embedded within large volumes of text documents, while RE activities aim to define the software system's functionalities and constraints. Executing these tasks manually is time-consuming and prone to errors. Numerous research efforts, documented in published works, have proposed tools and technologies for automating RE activities to address this challenge. This review examines empirical evidence on automated RE and analyses its impact on the RE sub-domain and software development. To achieve our goal, we conducted a Systematic Literature Review (SLR) following established guidelines, aiming to identify, aggregate, and analyse papers on automated RE published between 1996 and 2022. We recorded the output of each support tool, the RE phase covered, the level of automation, the development approach, and the evaluation approach. We identified 85 papers that discuss automated RE from various perspectives and methodologies. The results of this review demonstrate the significance of automated RE for the software development community, with the potential to shorten development cycles and reduce associated costs. The support tools primarily assist in generating UML models (44.7%) and in other activities such as detecting omitted steps, consistency checking, and requirement validation. The analysis phase of RE is the most widely automated, with 49.53% of automated tools developed for this purpose. Natural language processing technologies, particularly POS taggers and parsers, are widely employed in developing these support tools. Controlled experimental methods are the most frequently used (48.2%) for evaluating automated RE tools, while user studies are the least employed evaluation method (8.2%). This paper contributes to the existing body of knowledge by providing an updated overview of the research literature, enabling a better understanding of trends and state-of-the-art practices in automated RE for researchers and practitioners, and it paves the way for future research directions in automated requirements engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
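One common NLP-for-RE step of the kind surveyed above, extracting candidate requirements by matching modal verbs, can be sketched as follows (a deliberate simplification; the tools in the review typically use POS tagging and parsing rather than regular expressions):

```python
# Toy requirement extractor: split free text into sentences, then keep
# those containing the modal verbs "shall" or "must".
import re

def extract_requirements(text):
    sentences = re.split(r"(?<=[.!?])\s+", text)
    pattern = re.compile(r"\b(shall|must)\b", re.IGNORECASE)
    return [s.strip() for s in sentences if pattern.search(s)]

doc = ("The system shall log every login attempt. "
       "Performance matters. Operators must be able to export reports.")
print(extract_requirements(doc))
```

A real pipeline would then classify each candidate (functional vs. non-functional), normalise it, and feed it into model generation, i.e. the UML-generation step the review reports as the most common tool output.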
28. A Hybrid Artificial Bee Colony and Artificial Fish Swarm Algorithms for Software Cost Estimation.
- Author
-
Sharif, Hawar Othman, Ghareb, Mazen Ismaeel, and Mohamedyusf, Hoshmen Murad
- Subjects
COMPUTER software development ,BEES algorithm ,COST estimates ,MATHEMATICAL models ,MACHINE learning - Abstract
Software cost estimation (SCE), estimating the cost and time required for software development, plays a highly significant role in managing software projects. A reasonably accurate SCE is necessary for a software project to be successful, as it allows effective control of development time and cost. In the past few decades, various models have been presented to evaluate software projects, including mathematical models and machine learning algorithms. In this paper, a new model based on a hybrid of the artificial fish swarm algorithm (AFSA) and the artificial bee colony (ABC) algorithm is presented for SCE. The initial population of AFSA, which includes the values of the effort factors, is generated using the ABC algorithm. The ABC algorithm is used to address weaknesses of AFSA, such as limited population diversity and getting stuck in local optima, and it refines the best solutions using onlooker and scout bees. The combined method was evaluated on eight different data sets against eight criteria, including the mean magnitude of relative error (MMRE) and PRED(0.25). According to the results, the proposed method produces lower error than current SCE methods, with lower error values on the NASA60, NASA63, and NASA93 datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Requirements for User Experience Management - A Tertiary Study.
- Author
-
Hinderks, Andreas, Domínguez Mayo, Francisco José, José Escalona, María, and Thomaschewski, Jörg
- Subjects
USER experience ,COMPUTER software development ,LITERATURE reviews - Abstract
Today's users expect to be able to interact with the products they own without much effort and also want to be excited about them. The development of a positive user experience must therefore be managed. We understand management in general as a combination of a goal, a strategy, and resources. Applied to UX, user experience management consists of a UX goal, a UX strategy, and UX resources. We conducted a tertiary study and examined the current state of the literature regarding possible requirements, aiming to determine what requirements can be derived from literature reviews focused on UX and agile development. In total, we identified and analysed 16 studies. After analysing the studies in detail, we identified 13 requirements for UX management. The most frequently mentioned requirements were prototypes and UX/usability evaluation. Communication between UX professionals and developers was identified as a major improvement in the software development process. In summary, we were able to identify requirements for UX management across People/Social, Technology/Artifacts, and Process/Practice. However, we could not identify requirements for UX management that enable the development and achievement of a UX goal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Sustaining Digital Assets Through Mobile Estate Planning.
- Author
-
Katuk, Norliza, Muniandy, Asvinitha, Wahab, Norazlina Abd, and Ahmad, Ijaz
- Subjects
ESTATE planning ,DIGITAL asset management ,ASSETS (Accounting) ,ONLINE banking ,MOBILE apps ,COMPUTER software development - Abstract
Copyright of An-Najah University Journal for Research, B: Humanities is the property of An-Najah National University and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
31. SURE: Structure for Unambiguous Requirement Expression in Natural Language.
- Author
-
Parrales-Bravo, Franklin, Caicedo-Quiroz, Rosangela, Barzola-Monteses, Julio, Vasquez-Cevallos, Leonel, Galarza-Soledispa, María Isabel, and Reyes-Wagnio, Manuel
- Subjects
NATURAL languages ,COMPUTER software developers ,REQUIREMENTS engineering ,COMPUTER software development ,ENGINEERING education - Abstract
This study presents three structures for clearly expressing functional requirements (FRs) and quantitative non-functional requirements (qt-NFRs). Expressing requirements with these structures helps stakeholders and software developers understand them. The first structure is the SURE format, composed of three main sections: a title, a short definition, and a detailed description. The second is a template that facilitates the definition of the title and description of unambiguous FRs; based on applying CRUD operations to a given entity, it is called the "CRUDE" structure. The third structure serves as a template for clearly defining the title and description of qt-NFRs; based on applying system properties to computer events or actions, it is called the "PROSE" structure. Here, it is important to specify the metric values desired or expected by the stakeholder. To measure how much the definition of FRs and qt-NFRs improved when the proposed structures were used, 46 requirement specification documents, written as homework by students of the "Requirement Engineering" course offered at the University of Guayaquil between 2020 and 2022, were evaluated by five experts, each with more than 10 years of experience in software development for Ecuadorian companies. The findings showed that students reduced the percentage of ambiguous FRs and qt-NFRs from over 80% to about 10%. In conclusion, the findings demonstrate how crucial the three proposed structures are to helping students develop the ability to clearly express requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
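A minimal check for the CRUDE idea, a requirement title built from a CRUD operation applied to an entity, might look as follows (the exact template wording is an assumption for illustration, not the paper's specification):

```python
# Toy validator: a CRUDE-style title starts with a CRUD verb followed
# by the entity it operates on. The rule is an illustrative assumption.
CRUD_VERBS = {"create", "read", "update", "delete"}

def is_crude_title(title):
    words = title.lower().split()
    return len(words) >= 2 and words[0] in CRUD_VERBS

print(is_crude_title("Create customer account"),   # → True
      is_crude_title("The system is fast"))        # → False
```

Even this trivial rule shows how a fixed structure makes ambiguity mechanically detectable, which is the property the experts' evaluation measured.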
32. Enhancing Code Readability through Automated Consistent Formatting.
- Author
-
Kanoutas, Thomas, Karanikiotis, Thomas, and Symeonidis, Andreas L.
- Subjects
PROGRAMMING languages ,SYSTEMS design ,COMPUTER software development ,SOURCE code ,SOFTWARE maintenance - Abstract
Code readability is critical to software development and has a significant impact on maintenance and collaboration in evolving technology landscapes. With the increasing complexity of projects and the diversity of developers' coding styles, the need for automated tools to improve code readability has become more apparent. This paper presents an innovative automated system designed to improve code readability by modeling and enforcing consistent formatting standards. The approach uses techniques such as Long Short-Term Memory (LSTM) networks and N-gram models, allowing the system to adapt to different coding styles and preferences. The system works autonomously by analyzing code styling within a project, identifying deviations from established standards and providing actionable recommendations for consistent styling. To validate our approach, several evaluations were performed on a large dataset of Java files. The results demonstrate the system's effectiveness in detecting and correcting formatting errors, identifying a formatting error within the first five predictions more than 90% of the time, while providing the correct fix nearly 96% of the time, regardless of formatting convention or programming language. By offering a solution tailored to the specific needs of different teams, our system represents a significant advance in automated code formatting and readability improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
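The N-gram side of the approach above can be illustrated with a simplified sketch (the paper also uses LSTM networks; this toy merely flags token bigrams never seen in the project's existing style, which is the core of consistency-based deviation detection):

```python
# Toy style model: count token bigrams across a project's lines, then
# flag bigrams in a new line that the project has never used.
from collections import Counter

def bigrams(line):
    toks = line.split()
    return list(zip(toks, toks[1:]))

def train(lines):
    model = Counter()
    for line in lines:
        model.update(bigrams(line))
    return model

def flag_deviations(model, line):
    return [bg for bg in bigrams(line) if model[bg] == 0]

corpus = ["if ( x ) {", "if ( y ) {", "while ( x ) {"]
model = train(corpus)
print(flag_deviations(model, "if (x) {"))  # spacing differs from corpus
```

A production system would score probabilities rather than use a seen/unseen cutoff and would suggest the most likely consistent rewrite, which is the "actionable recommendation" step the abstract describes.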
33. FROM .NET CORE TO .NET 8: A COMPREHENSIVE ANALYSIS OF PERFORMANCE, FEATURES, AND MIGRATION PATHWAYS.
- Author
-
Cvijić, Branimir and Ranilović, Pero
- Subjects
INFORMATION technology personnel ,COMPUTER software development ,BENCHMARKING (Management) - Abstract
This analysis embarks on a comprehensive exploration of the .NET ecosystem’s evolution, with a spotlight on the transition from .NET Core to the unified .NET platform, culminating in the release of .NET 8. It meticulously examines the performance enhancements, feature evolutions, and migration strategies that underscore this transition, providing a lens through which the future trajectory of .NET, including the anticipation of .NET 9, can be discerned. By offering a deep dive into the comparative performance metrics and the introduction of novel features across versions, this paper caters to IT professionals, students, and technology aficionados seeking to grasp the full extent of .NET’s capabilities and its strategic direction. The findings aim to not only delineate the technical advancements but also to contextualize the platform’s ongoing innovation within the broader software development ecosystem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Leveraging pre-trained language models for code generation.
- Author
-
Soliman, Ahmed, Shaheen, Samir, and Hadhoud, Mayada
- Subjects
LANGUAGE models ,NATURAL language processing ,CAUSAL models ,COMPUTER software development ,COMPUTER software developers - Abstract
Code assistance refers to the utilization of various tools, techniques, and models to help developers in the process of software development. As coding tasks become increasingly complex, code assistants play a pivotal role in enhancing developer productivity, reducing errors, and facilitating a more efficient coding workflow. This assistance can manifest in various forms, including code autocompletion, error detection and correction, code generation, documentation support, and context-aware suggestions. Language models have emerged as integral components of code assistance, offering developers the capability to receive intelligent suggestions, generate code snippets, and enhance overall coding proficiency. In this paper, we propose new hybrid models for code generation by combining the pre-trained language models BERT, RoBERTa, ELECTRA, and LUKE with the Marian causal language model; these models were selected for their strong performance on various natural language processing tasks. We evaluate the models on two datasets, CoNaLa and DJANGO, and compare them to existing state-of-the-art models. We aim to investigate the potential of pre-trained transformer language models to revolutionize code generation, offering improved precision and efficiency in navigating complex coding scenarios; additionally, we conduct error analysis and refine the generated code. Our results show that these models, when combined with the Marian decoder, significantly improve code generation accuracy and efficiency. Notably, the RoBERTa-Marian model achieved a maximum BLEU score of 35.74 and an exact-match accuracy of 13.8% on CoNaLa, while LUKE-Marian attained a BLEU score of 89.34 and an exact-match accuracy of 78.50% on DJANGO. An implementation of this work is available at https://github.com/AhmedSSoliman/Leveraging-Pretrained-Language-Models-for-Code-Generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
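The two reported metrics can be sketched under simplifying assumptions: exact-match accuracy as defined, plus a unigram-precision stand-in for BLEU (real BLEU combines clipped 1-4-gram precisions with a brevity penalty, so this is only the flavor of the metric):

```python
# Exact match: fraction of hypotheses identical to their reference.
# Unigram precision: fraction of hypothesis tokens present in the
# reference; a simplified stand-in for BLEU, for illustration only.
def exact_match(refs, hyps):
    return sum(r.strip() == h.strip() for r, h in zip(refs, hyps)) / len(refs)

def unigram_precision(ref, hyp):
    ref_toks, hyp_toks = ref.split(), hyp.split()
    matches = sum(1 for t in hyp_toks if t in ref_toks)
    return matches / len(hyp_toks)

refs = ["x = a + b", "print ( x )"]   # illustrative reference snippets
hyps = ["x = a + b", "print ( y )"]   # illustrative model outputs
print(exact_match(refs, hyps), unigram_precision(refs[1], hyps[1]))
# → 0.5 0.75
```

The gap between the two numbers mirrors the paper's results: a hypothesis can score high on n-gram overlap while still failing the much stricter exact-match criterion.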
35. Integrating green computing into rational unified process for sustainable development goals: a comprehensive approach.
- Author
-
Firmansyah, Filan, Sudirman, M. Yoga Distra, and Putra, Rakhmadi Irfansyah
- Subjects
SOFTWARE development tools ,SUSTAINABLE development ,GREEN technology ,COMPUTER software development ,COMPUTER software testing ,REQUIREMENTS engineering - Abstract
This research explores the incorporation of green computing variables into the rational unified process (RUP) methodology, focusing on sustainable development goal (SDG) 12: responsible consumption and production. The study is supported by three additional papers identified using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) method. It aims to promote eco-friendly software development practices and tools (artifacts) aligned with green computing principles to support the SDGs throughout the RUP development phases. We conducted a thorough matrix analysis of existing green computing adaptability within RUP, yielding key findings: a system charter for inception, a system requirement specification for elaboration, software development results for construction, and a software test report/user acceptance test for transition. As a result, we have compiled comprehensive phase-specific documents, emphasizing the need for educational initiatives to foster green computing adoption among developers. This study advocates cross-disciplinary collaboration to ensure the successful implementation of eco-friendly software development processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Best Agile method selection approach at workplace.
- Author
-
Merzouk, Soukaina, Jabir, Brahim, Marzak, Abdelaziz, and Sael, Nawal
- Subjects
AGILE software development ,DECISION trees ,COMPUTER software development ,BUSINESS process modeling ,PROJECT managers ,CLIENT satisfaction ,CUSTOMER satisfaction - Abstract
Selecting the most suitable agile software development method is a challenging task due to the variety of available methods, each with its strengths and weaknesses. To achieve project goals effectively, factors such as project needs, team size, complexity, and customer involvement should be carefully evaluated. Choosing the appropriate agile method is crucial for achieving high client satisfaction and effective team management, but it can be difficult for project managers and higher-level management officials. This paper presents a solution that aims to help them select the most suitable software development method for their project. The solution includes a pre-project management approach model and a decision tree that considers the unique requirements of the project. In the results of the proposed solution, Scrum was found to be suitable for both small and large projects, on the condition that roles and responsibilities are clearly defined and that the approach is people-centric. Furthermore, high-risk mitigation measures should be added for small projects. To facilitate the use of our model, a software application has been developed which implements the decision tree. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
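A pre-project decision tree like the one described above can be pictured as a small rule function. The factors, thresholds, and outcomes below are hypothetical illustrations of the idea, not the rules from the paper's actual tree:

```python
def recommend_agile_method(team_size, complexity, customer_involvement,
                           safety_critical=False):
    """Toy decision tree for agile method selection.
    All branches and thresholds here are illustrative assumptions."""
    if safety_critical:
        return "Plan-driven hybrid"
    if customer_involvement == "high" and team_size <= 9:
        return "XP"
    if complexity == "high" or team_size > 9:
        return "Scrum (scaled, with clearly defined roles)"
    return "Scrum"

print(recommend_agile_method(team_size=6, complexity="low",
                             customer_involvement="low"))  # Scrum
```

Encoding the tree as code makes the selection repeatable and easy to embed in a tool, which matches the paper's point about implementing the tree in a software application.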
37. Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation.
- Author
-
Tosi, Davide
- Subjects
GENERATIVE artificial intelligence ,LANGUAGE models ,COMPUTER software development ,ENGINES ,ARTIFICIAL intelligence ,SOFTWARE measurement - Abstract
The advent of Generative Artificial Intelligence is opening essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This issue is particularly pressing in the domain of software development, where generative AI seems to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) engines, and the generated code is evaluated for correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams, but only under the constant supervision of software experts. Currently, LLM engines need human-expert support to produce running code of good quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A bibliometric analysis of Agile software development publications originating from Turkey.
- Author
-
Ozkan, Necmettin, Gurgen Erdogan, Tugba, Bal, Sevval, and Gök, Mehmet Şahin
- Subjects
- *
AGILE software development , *BIBLIOMETRICS , *COMPUTER software development , *SECONDARY research , *RESEARCH personnel - Abstract
Agile software development has reached wide adoption in various countries, including Turkey, even though Agile's original cultural background differs from theirs. In Turkey, many organizations have increasingly adopted Agile approaches in their software development processes. This interest in the country's software industry parallels what the academic literature on Agile in the country exhibits. However, despite the prevalence of Agile in Turkey, there is a lack of sufficient secondary research and comprehensive reviews on Agile in Turkey, which poses a significant necessity for further investigation. Considering this gap, we performed a quantitative bibliometric analysis of Agile software development publications produced by Turkish organizations, taking a holistic and broad approach for both scholars and practitioners. We provide a summary of relevant academic studies that emerged in Agile research in Turkey by focusing on many aspects, including bibliometric properties of papers, researchers, affiliations, venues, and thematic contents, separated into 15 sub-research questions. After delivering results based on the questions, we discuss the results and findings of our study and present implications regarding the findings. The main contributions of our work are twofold. First, the paper may help readers to form a quick idea, understand the subject, and gain insight from a large volume of scientific data. Second, the paper can help readers use these analyses to shape future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Stress, motivation, and performance in global software engineering.
- Author
-
Suárez, Julio and Vizcaíno, Aurora
- Subjects
- *
SOFTWARE engineering , *COMPUTER software development , *MOTIVATION (Psychology) , *VIRTUAL work teams - Abstract
The objective of this study is to analyze current knowledge regarding what causes stress in or motivates developers, how these two aspects are related to each other, how they in turn affect developers' performance in the sphere of Global Software Development, and how they can be controlled. This paper presents the results obtained after conducting a systematic mapping study of the literature in order to analyze how stress, motivation, and performance affect project members in Global Software Development teams. We carried out a systematic mapping of published studies dealing with stress, motivation, and performance in global software engineering. A total of 118 papers dealing with this subject were found. The literature analyzed provided a relatively significant quantity of data referring to the impact that the characteristics of distributed software development projects have on the performance and productivity of teams, along with the actions taken to improve that performance. However, when focusing on the analysis of the impact of this type of project on team members' motivation, and on the actions that can be taken to improve that motivation, we discovered that the number of works decreases considerably, and that works referring to the impact of this kind of development on developers' stress were virtually non-existent, as were those concerning ways to reduce that stress. We are, therefore, of the opinion that it is necessary to carry out in-depth research into the aspects of working in distributed teams that may have a negative impact on developers' levels of motivation and stress, along with what could be beneficial in order to improve levels of motivation and decrease levels of stress. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. FSDP: Frequent Software Defects Prediction Based on Defect Correlation Learning for Quality Software Development.
- Author
-
Reddy, Sareddy Shiva and Pabboju, Suresh
- Subjects
COMPUTER software quality control ,COMPUTER software development ,SUPPORT vector machines ,COMPUTER software ,SYSTEMS software - Abstract
Software has become an essential and important part of every domain system. Developing quality software is critical to maintaining a stable and secure system. Most existing software defect prediction tasks focus on the various kinds of defects left over in the software system, but they do not focus on the most common and frequent software defects that developers introduce and that have a significant effect on the quality of software development. These unnoticed defects have a considerable impact on functionality as well as on the development time, effort, and cost of the software. This paper proposes a frequent software defects prediction (FSDP) mechanism based on a defect correlation learning method (CLM) utilizing various defect metrics. The aim of this work is to assist developers in accurately identifying software defects and to support project managers in ensuring software quality by minimizing the presence of defect-prone code during development. The evaluation of FSDP was performed using NASA datasets in comparison with the conventional Naive Bayes, Support Vector Machine, and Random Forest classifiers, as well as state-of-the-art methods, measuring accuracy, precision, recall, and F-score to understand the impact on prediction accuracy. The proposed FSDP achieves 98.88% accuracy and 98.86% precision compared to state-of-the-art classifier methods, indicating the effectiveness of the proposed approach in defect prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
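The accuracy, precision, recall, and F-score used to evaluate FSDP follow the standard definitions over a binary confusion matrix. A minimal sketch (the counts below are made up, not the paper's results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F-score from a binary confusion matrix."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    # Harmonic mean of precision and recall.
    f_score   = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_score

acc, prec, rec, f1 = classification_metrics(tp=90, fp=2, fn=3, tn=105)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```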
41. STANDARDS IN THE IT INDUSTRY – THE DEVELOPERS’ PERSPECTIVE.
- Author
-
ROGIŃSKI, Mikołaj
- Subjects
INFORMATION technology industry ,INFORMATION technology ,GROUNDED theory ,COMPUTER software development - Abstract
Purpose: The aim of this article was to present a typology of information technology standards and to explore their importance for programmers. Design/methodology/approach: The research was exploratory in nature and based on grounded theory and ethnography. The tool used to collect data was interviews. Findings: On the basis of the research, it is concluded that standards were of utmost importance to the respondents and enabled them to work efficiently. Research limitations/implications: The conducted research was qualitative and inductive. For this reason, there is limited possibility of generalizing the results. Originality/value: The paper presents important findings that might increase the work efficiency of programmers. Additionally, the research was conducted using an approach relatively uncommon in the IT and management fields (grounded theory, ethnography, qualitative methods, interviews). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. A Secure and Cost-Effective Training Framework Atop Serverless Computing for Object Detection in Blasting Sites.
- Author
-
Tianming Zhang, Zebin Chen, Haonan Guo, Bojun Ren, Quanmin Xie, Mengke Tian, and Yong Wang
- Subjects
OBJECT recognition (Computer vision) ,BLASTING ,COMPUTER software development ,MOBILE robots ,RESEARCH personnel - Abstract
The data analysis of blasting sites has always been a research goal for relevant researchers. The rise of mobile blasting robots has aroused many researchers' interest in machine learning methods for target detection in the field of blasting. Serverless computing can provide a variety of computing services for people without hardware foundations or rich software development experience, which has aroused interest in how to use it in the field of machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Based on data parallelism, data aggregation and training synchronization in Function as a Service (FaaS) are effectively realized. The application also encrypts the data set, effectively reducing the risk of data leakage. We rent a cloud server and a Lambda, and then conduct experiments to evaluate our application. Our results indicate the effectiveness, rapidity, and economy of distributed training on FaaS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Harnessing Test-Oriented Knowledge Graphs for Enhanced Test Function Recommendation.
- Author
-
Liu, Kaiqi, Wu, Ji, Sun, Qing, Yang, Haiyan, and Wan, Ruiyuan
- Subjects
KNOWLEDGE graphs ,KNOWLEDGE gap theory ,COMPUTER software development ,TEST systems - Abstract
Application Programming Interfaces (APIs) have become common in contemporary software development, and many automated API recommendation methods have been proposed. However, these methods make little use of domain knowledge, giving rise to challenges like the "cold start" and "semantic gap" problems. Consequently, they are unsuitable for test function recommendation, which recommends test functions for test engineers to implement test cases formed from various test steps. This paper introduces an approach named TOKTER, which recommends test functions by leveraging test-oriented knowledge graphs. Such a graph contains domain concepts, and their relationships, related to the system under test and the test harness, and is constructed from the corpus data of the concerned test project. TOKTER harnesses the semantic associations between test steps (or queries) and test functions by considering literal descriptions, test function parameters, and historical data. We evaluated TOKTER on an industrial dataset and compared it with three state-of-the-art approaches. Results show that TOKTER significantly outperformed the baselines by margins of at least 36.6% in mean average precision (MAP), 19.6% in mean reciprocal rank (MRR), and 1.9% in mean recall (MR) for the top-10 recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
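The MAP and MRR metrics used to compare TOKTER against the baselines have standard definitions over ranked result lists. A minimal sketch with hypothetical test-function names:

```python
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """MRR: mean over queries of 1/rank of the first relevant item."""
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(ranking, start=1):
            if item in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_lists)

def average_precision(ranking, relevant, k=10):
    """AP@k for one query; MAP is the mean of AP over all queries."""
    hits, score = 0, 0.0
    for rank, item in enumerate(ranking[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k) if relevant else 0.0

# Hypothetical recommendations for two test-step queries.
rankings = [["check_power", "reset_board"], ["send_cmd", "read_log"]]
relevant = [{"reset_board"}, {"send_cmd"}]
print(mean_reciprocal_rank(rankings, relevant))     # (1/2 + 1) / 2 = 0.75
print(average_precision(rankings[0], relevant[0]))  # 0.5
```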
44. Assessing System Quality Changes during Software Evolution: The Impact of Design Patterns Explored via Dependency Analysis.
- Author
-
Hsu, Kuo-Hsun, Szu-Tu, Hua-Chieh, and Tsai, Chia-Hsing
- Subjects
SOFTWARE architecture ,COMPUTER software development ,DESIGN software ,COMPUTER software ,COMPUTER software quality control - Abstract
Design patterns provide solutions to recurring problems in software design and development, promoting scalability, readability, and maintainability. While past research has focused on the utilization and performance of design patterns, there is limited insight into their impact on program evolution. Dependency signifies relationships between program elements, reflecting a program's structure and interaction. High dependency indicates complexity and potential flaws, hampering system quality and maintenance. This paper presents how design patterns influence software evolution by analyzing dependencies, using the Abstract Syntax Tree (AST) to examine dependency patterns during evolution. We employed three widely adopted design patterns from the Gang of Four (GoF) as experimental examples. The results show that design patterns effectively reduce dependencies, lowering system complexity and enhancing quality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
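As a rough illustration of AST-based dependency analysis, the sketch below uses Python's standard-library `ast` module; the notion of "dependency" here (imported modules plus attribute accesses on names) is a simplification assumed for the example, not the paper's metric:

```python
import ast

def count_dependencies(source):
    """Collect external names a snippet depends on, as a rough
    proxy for its outgoing dependencies."""
    tree = ast.parse(source)
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
        elif isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
            # e.g. os.getcwd -> "os.getcwd"
            deps.add(f"{node.value.id}.{node.attr}")
    return deps

code = "import os\nimport json\nprint(os.getcwd(), json.dumps({}))"
print(sorted(count_dependencies(code)))  # ['json', 'json.dumps', 'os', 'os.getcwd']
```

Comparing such dependency sets before and after a refactoring gives a crude view of whether a pattern reduced coupling.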
45. Commit-Level Software Change Intent Classification Using a Pre-Trained Transformer-Based Code Model.
- Author
-
Heričko, Tjaša, Šumak, Boštjan, and Karakatič, Sašo
- Subjects
NATURAL language processing ,TRANSFORMER models ,SOFTWARE maintenance ,COMPUTER software ,COMPUTER software development ,SOURCE code - Abstract
Software evolution is driven by changes made during software development and maintenance. While source control systems effectively manage these changes at the commit level, the intent behind them is often inadequately documented, making it challenging to understand their rationale. Existing commit intent classification approaches, largely reliant on commit messages, only partially capture the underlying intent, predominantly due to the messages' inadequate content and neglect of the semantic nuances in code changes. This paper presents a novel method for extracting semantic features from commits based on modifications in the source code, where each commit is represented by one or more fine-grained conjoint code changes, e.g., file-level or hunk-level changes. To address the unstructured nature of code, the method leverages a pre-trained transformer-based code model, further trained through task-adaptive pre-training and fine-tuning on the downstream task of intent classification. This fine-tuned, task-adapted, pre-trained code model is then utilized to embed the fine-grained conjoint changes in a commit, which are aggregated into a unified commit-level vector representation. The proposed method was evaluated using two BERT-based code models, i.e., CodeBERT and GraphCodeBERT, and various aggregation techniques on data from open-source Java software projects. The results show that the proposed method can be used to effectively extract commit embeddings as features for commit intent classification, and that it outperforms current state-of-the-art methods of code commit representation for intent categorization in terms of the software maintenance activities undertaken by commits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
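The aggregation step, combining fine-grained change embeddings into one commit-level vector, can be sketched with two common pooling techniques (mean and max); the vectors and dimensionality below are toy assumptions, not the paper's configuration:

```python
def aggregate_commit_embedding(change_embeddings, how="mean"):
    """Combine per-change vectors (file- or hunk-level) into one
    commit-level vector by mean or max pooling."""
    if not change_embeddings:
        raise ValueError("commit has no code changes")
    dim = len(change_embeddings[0])
    if how == "mean":
        return [sum(vec[i] for vec in change_embeddings) / len(change_embeddings)
                for i in range(dim)]
    if how == "max":
        return [max(vec[i] for vec in change_embeddings) for i in range(dim)]
    raise ValueError(f"unknown aggregation: {how}")

hunks = [[1.0, 3.0], [2.0, 1.0]]  # two toy hunk-level embeddings
print(aggregate_commit_embedding(hunks))         # [1.5, 2.0]
print(aggregate_commit_embedding(hunks, "max"))  # [2.0, 3.0]
```

In practice the per-change vectors would come from the fine-tuned code model; the pooling itself is model-agnostic.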
46. Exploring the Potential of Pre-Trained Language Models of Code for Automated Program Repair.
- Author
-
Hao, Sichong, Shi, Xianjun, and Liu, Hongwei
- Subjects
LANGUAGE models ,DEBUGGING ,COMPUTER software development ,SOURCE code ,CURRICULUM planning - Abstract
In the realm of software development, automated program repair (APR) emerges as a pivotal technique, autonomously debugging faulty code to boost productivity. Despite the notable advancements of large pre-trained language models of code (PLMCs) in code generation, their efficacy in complex tasks like APR remains suboptimal. This limitation is attributed to the generic development of PLMCs, whose specialized potential for APR is yet to be fully explored. In this paper, we propose a novel approach designed to enhance PLMCs' APR performance through source code augmentation and curriculum learning. Our approach employs code augmentation operators to generate a spectrum of syntactically varied yet semantically congruent bug-fixing programs, thus enriching the dataset's diversity. Furthermore, we design a curriculum learning strategy, enabling PLMCs to develop a deep understanding of program semantics from these enriched code variants, thereby refining their APR fine-tuning prowess. We apply our approach across different PLMCs and systematically evaluate it on three benchmarks: BFP-small, BFP-medium, and Defects4J. The experimental results show that our approach outperforms both the original models and existing baseline methods, demonstrating the promise of adapting PLMCs for code debugging in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. ChainAgile: A framework for the improvement of Scrum Agile distributed software development based on blockchain.
- Author
-
Qureshi, Junaid Nasir and Farooq, Muhammad Shoaib
- Subjects
AGILE software development ,SCRUM (Computer software development) ,BLOCKCHAINS ,ELECTRONIC wallets ,COMPUTER software development ,LATE payment - Abstract
Software development based on Scrum Agile in a distributed development environment plays a pivotal role in the contemporary software industry by facilitating software development across geographic boundaries. However, the frameworks previously used to address challenges such as communication and collaboration in scrum agile distributed software development (SADSD) were notably inadequate in transparency, security, traceability, geographically dispersed work agreements, geographically dispersed teamwork effectiveness, and trust. These deficiencies frequently resulted in delays in software development and deployment, customer dissatisfaction, canceled agreements, project failures, and disputes over payments between customers and development teams. To address these challenges, this paper proposes a new framework called ChainAgile, which leverages blockchain technology. ChainAgile employs a private Ethereum blockchain to facilitate the execution of smart contracts. These smart contracts cover a range of functions, including acceptance testing, secure payments, requirement verification, task prioritization, sprint backlog management, user story design and development, and the automated distribution of payments to development teams via digital wallets. Moreover, in the ChainAgile framework, smart contracts also play a pivotal role in automatically imposing penalties on customers for late or missing payments and on developers for completing tasks past their deadlines. Furthermore, ChainAgile addresses the scalability limitations intrinsic to blockchain technology by using the InterPlanetary File System (IPFS) as an off-chain storage mechanism.
The experimental results conclusively show that this innovative approach substantially improves transparency, traceability, coordination, communication, security, and trust for both customers and developers engaged in scrum agile distributed software development (SADSD). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. A Solution for Submitting Expenses.
- Author
-
Monteiro, Tiago, Abrantes, Steven, and Ratinho, Maria
- Subjects
COMPUTER software development ,COST ,MODERN society ,MOBILE apps ,POWER tools ,SOFTWARE frameworks - Abstract
Given the escalating and rapid advancement of technology in recent decades, modern society has progressively recognized the need for, and practicality of, having applications on personal technological devices. These applications are intended to simplify the execution of routine tasks and, consequently, save time. This article presents the development of a mobile application designed for submitting expense reports, while also ensuring its adaptability to the needs of future customers across diverse industries. The primary goal of this application is to automate and streamline the process of expense reporting, thereby eliminating the need for the manual completion of paper forms. This paper describes the steps and details of the development of the app using the Power Apps platform, one of the most popular and renowned frameworks for software development using the Low-Code approach. In addition, the use of a technological tool such as Power Apps shows the importance of innovation and process automation for today's reality, especially in times of change and challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Efficient Cross-Project Software Defect Prediction Based on Federated Meta-Learning.
- Author
-
Chen, Haisong, Yang, Linlin, and Wang, Aili
- Subjects
MACHINE learning ,COMPUTER software development ,COMPUTER software ,FORECASTING ,LOCAL knowledge ,ITERATIVE learning control - Abstract
Software defect prediction is an important part of software development, which aims to use existing historical data to predict future software defects. Focusing on model performance and communication efficiency in cross-project software defect prediction, this paper proposes an efficient communication-based federated meta-learning (ECFML) algorithm. The lightweight MobileViT network is used as the meta-learner of the Model-Agnostic Meta-Learning (MAML) algorithm. By learning common knowledge from the local data of multiple clients and then fine-tuning the model, the number of unnecessary iterations is reduced, and communication efficiency is improved while the number of parameters is reduced. Gradient information is encrypted using differential privacy based on the Laplace mechanism, and the optimal privacy budget is determined through experiments. Experiments on three public datasets (AEEEM, NASA, and Relink) verified the effectiveness of ECFML in terms of parameter count, convergence, and model performance for cross-project software defect prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
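As background for the federated part of ECFML, the sketch below shows generic FedAvg-style weighted parameter averaging in plain Python; it is not the paper's MAML/MobileViT pipeline, and the client vectors are made up:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weighted mean of client parameter
    vectors, weighted by each client's local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[0.0, 2.0], [4.0, 6.0]]  # toy parameter vectors from two projects
sizes   = [1, 3]                    # local sample counts per client
print(federated_average(clients, sizes))  # [3.0, 5.0]
```

The server would broadcast the averaged parameters back to clients for the next round; a meta-learning variant like ECFML additionally fine-tunes locally after aggregation.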
50. EEG as a potential ground truth for the assessment of cognitive state in software development activities: A multimodal imaging study.
- Author
-
Medeiros, Júlio, Simões, Marco, Castelhano, João, Abreu, Rodolfo, Couceiro, Ricardo, Henriques, Jorge, Castelo-Branco, Miguel, Madeira, Henrique, Teixeira, César, and de Carvalho, Paulo
- Subjects
ELECTROENCEPHALOGRAPHY ,COMPUTER software development ,DIAGNOSTIC imaging ,INSULAR cortex ,HUMAN error ,DEBUGGING ,SHORT-term memory - Abstract
Cognitive human error and a recent cognitive taxonomy of the human error causes of software defects support the intuitive idea that, for instance, mental overload, attention slips, and working memory overload are important human causes of software bugs. In this paper, we approach the EEG as a reliable surrogate for an fMRI-based reference of the programmer's cognitive state, to be used in situations where heavy imaging techniques are infeasible. The idea is to use EEG biomarkers to validate other, less intrusive physiological measures that can be easily recorded by wearable devices and are useful in assessing the developer's cognitive state during software development tasks. Our EEG study, with the support of fMRI, presents an extensive and systematic analysis, inspecting metrics and extracting relevant information about the most robust features, the best EEG channels, and the best hemodynamic time delay in the context of software development tasks. From the EEG-fMRI similarity analysis performed, we found significant correlations between a subset of EEG features and the insula region of the brain, which has been reported as a region highly involved in demanding cognitive tasks, such as software development tasks. We conclude that, despite clear inter-subject variability in the best EEG features and hemodynamic time delay, the most robust and predominant EEG features across all subjects are related to the Hjorth parameter Activity and the Total Power feature, from EEG channels F4, FC4, and C4, considering in most cases a hemodynamic time delay of 4 seconds in the hemodynamic response function. These findings should be taken into account in future EEG-fMRI studies in the context of software debugging. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
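For reference, the Hjorth parameter Activity highlighted in the abstract above is conventionally defined as the variance of the signal; the total-power definition used below (mean squared amplitude) is one common convention and an assumption about the paper's exact formula:

```python
def hjorth_activity(signal):
    """Hjorth Activity: the variance of the signal."""
    mean = sum(signal) / len(signal)
    return sum((x - mean) ** 2 for x in signal) / len(signal)

def total_power(signal):
    """Total power as mean squared amplitude (by Parseval's theorem,
    proportional to the summed spectral power)."""
    return sum(x * x for x in signal) / len(signal)

samples = [1.0, -1.0, 1.0, -1.0]  # toy zero-mean EEG channel
print(hjorth_activity(samples))   # 1.0
print(total_power(samples))       # 1.0
```

For a zero-mean signal the two quantities coincide, which is why they often rank together as robust features.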