9,297 results for "Software engineering"
Search Results
2. Development of dragonfly identification field guide application in Jatimulyo Tourism village integrated with values of local potential: Feasibility and student responses.
- Author
-
Salim, Ishadiyanto and Wibowo, Yuni
- Subjects
- *
HIGH school students , *DRAGONFLIES , *IDENTIFICATION , *BIOLOGY teachers , *READABILITY formulas , *SOFTWARE engineering - Abstract
This study aims to determine the feasibility of, and student responses to, a dragonfly identification field guide mobile application for tenth-grade students in field study activities on biodiversity in Indonesia. This is a research-and-development study following the 4D model, with the stages define, design, develop, and disseminate. The feasibility of the dragonfly identification guide mobile application was assessed by material experts, media experts, and biology teachers, and student responses were gathered to gauge the readability of the product. Validation data were collected with a questionnaire and analyzed using descriptive analysis. The results showed that the dragonfly identification guide application compiled in Jatimulyo Tourism Village has very good quality in the material, presentation, software engineering, usage, and display aspects, and good quality in the language aspect. Readability tests by students yielded a percentage of 85%, a very good response. Therefore, the dragonfly identification field guide integrated with values of local potential in Jatimulyo Tourism Village is feasible for use by senior high school students. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Sharpening Your Tools.
- Author
-
GARFINKEL, SIMSON and STEWART, JON
- Subjects
- *
FORENSIC sciences , *DIGITAL technology , *C++ , *OPEN source software , *SOFTWARE engineering - Abstract
This article details the work done to update a high-performance digital forensics tool, bulk_extractor (BE), to the C++17 programming language. The article describes the complex nature of digital forensics and the tools that support it, gives an overview of BE, and outlines the steps undertaken to update it, which included improving the code quality, removing rarely used functionality, and performance tuning.
- Published
- 2023
- Full Text
- View/download PDF
4. Security for Machine Learning-based Software Systems: A Survey of Threats, Practices, and Challenges.
- Author
-
Chen, Huaming and Babar, M. Ali
- Published
- 2024
- Full Text
- View/download PDF
5. Graph topological transformations in space-filling cell aggregates.
- Author
-
Sarkar, Tanmoy and Krajnc, Matej
- Subjects
- *
CELL transformation , *DATA structures , *KNOWLEDGE graphs , *ORDER-disorder transitions , *PROBLEM solving , *SOFTWARE engineering - Abstract
Cell rearrangements are fundamental mechanisms driving large-scale deformations of living tissues. In three-dimensional (3D) space-filling cell aggregates, cells rearrange through local topological transitions of the network of cell-cell interfaces, which is most conveniently described by the vertex model. Since these transitions are not yet mathematically properly formulated, the 3D vertex model is generally difficult to implement. The few existing implementations rely on highly customized and complex software-engineering solutions, which cannot be transparently delineated and are thus mostly non-reproducible. To solve this outstanding problem, we propose a reformulation of the vertex model. Our approach, called Graph Vertex Model (GVM), is based on storing the topology of the cell network into a knowledge graph with a particular data structure that allows performing cell-rearrangement events by simple graph transformations. Importantly, when these same transformations are applied to a two-dimensional (2D) polygonal cell aggregate, they reduce to a well-known T1 transition, thereby generalizing cell-rearrangements in 2D and 3D space-filling packings. This result suggests that the GVM's graph data structure may be the most natural representation of cell aggregates and tissues. We also develop a Python package that implements GVM, relying on a graph-database-management framework Neo4j. We use this package to characterize an order-disorder transition in 3D cell aggregates, driven by active noise and we find aggregates undergoing efficient ordering close to the transition point. In all, our work showcases knowledge graphs as particularly suitable data models for structured storage, analysis, and manipulation of tissue data. Author summary: Space-filling polygonal and polyhedral packings have been studied as physical models for foams and living tissues for decades. 
One of the main challenges in the field is to mathematically describe the complex topological transformations of the network of cell-cell interfaces that occur during cell rearrangements, accompanying plastic deformations and large-scale cellular flows. Our work addresses this challenge by storing the topology of the network of cell-cell interfaces in a knowledge graph with a specific data structure, uniquely defined by a metagraph. It turns out that this graph technology, also used by tech giants such as Google and Amazon, allows representing topological transformations as graph transformations that are intuitive, easy to visualize, and straightforward to implement computationally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
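The core idea of the GVM abstract — that a cell rearrangement reduces, in 2D, to a T1 transition expressible as a simple graph rewiring — can be sketched without any graph database. The following is a minimal stdlib Python sketch under assumed cell labels A-D around the rearranging interface (the paper's own implementation uses Neo4j and a richer metagraph-defined schema):

```python
# Minimal sketch of a 2D T1 transition on a cell-adjacency graph,
# assuming four cells A-D meet at the rearranging interface:
#   before: A and B share an edge; C and D sit on either side
#   after:  C and D share the edge; A and B are separated
def t1_transition(adj, a, b, c, d):
    """Swap neighbour relations: the (a, b) interface collapses
    and re-expands as a (c, d) interface."""
    adj[a].discard(b); adj[b].discard(a)   # interface a-b disappears
    adj[c].add(d); adj[d].add(c)           # interface c-d appears
    return adj

# adjacency stored as a dict of neighbour sets (a stand-in for the
# knowledge graph the paper describes)
adj = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
       "C": {"A", "B"}, "D": {"A", "B"}}
t1_transition(adj, "A", "B", "C", "D")
```

The point of the paper is that the same kind of local rewrite, applied to the full 3D metagraph, generalizes this 2D operation to space-filling aggregates.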
6. Implementing no-signaling correlations as a service.
- Author
-
Koniorczyk, Mátyás, Naszvadi, Péter, Bodor, András, Hanyecz, Ottó, Adam, Peter, and Pintér, Miklós
- Subjects
- *
WEB-based user interfaces , *QUANTUM computers , *SOFTWARE engineering - Abstract
We deal with no-signaling correlations that include Bell-type quantum nonlocality. We consider a logical implementation using a trusted central server with encrypted connections to clients. We show that in this way it is possible to implement two-party no-signaling correlations in an asynchronous manner. While from the point of view of physics our approach can be considered as the computer emulation of the results of measurements on entangled particles, from the software engineering point of view it introduces a primitive in communication protocols that can be capable of coordinating agents without revealing the details of their actions. We present an actual implementation in the form of a Web-based application programming interface (RESTful Web API). We demonstrate the use of the API via the simple implementation of the Clauser–Horne–Shimony–Holt game. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
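The two-party no-signaling correlation described above can be illustrated with the textbook Popescu-Rohrlich (PR) box, which wins the Clauser-Horne-Shimony-Holt (CHSH) game on every round. A hedged sketch of the trusted-server logic (the actual paper exposes this as a RESTful Web API; the function below is only the correlation primitive, not their interface):

```python
import random

def pr_box(x, y, rng=random):
    """Trusted-server sketch of a Popescu-Rohrlich (PR) box:
    given input bits x, y from the two clients, return output bits
    a, b satisfying a XOR b == x AND y, while each party's output
    alone is uniformly random (hence no signaling)."""
    a = rng.randint(0, 1)
    b = a ^ (x & y)
    return a, b

# the PR box satisfies the CHSH winning condition on every round
for x in (0, 1):
    for y in (0, 1):
        a, b = pr_box(x, y)
        assert (a ^ b) == (x & y)
```

Because `a` is uniform regardless of `y` (and vice versa), neither client can infer the other's input from its own output, which is the no-signaling property the abstract refers to.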
7. Intelligent Learning-Based Methods for Determining the Ideal Team Size in Agile Practices.
- Author
-
Olivares, Rodrigo, Noel, Rene, Guzmán, Sebastián M., Miranda, Diego, and Munoz, Roberto
- Abstract
One of the significant challenges in scaling agile software development is organizing software development teams to ensure effective communication among members while equipping them with the capabilities to deliver business value independently. A formal approach to address this challenge involves modeling it as an optimization problem: given a professional staff, how can they be organized to optimize the number of communication channels, considering both intra-team and inter-team channels? In this article, we propose applying a set of bio-inspired algorithms to solve this problem. We introduce an enhancement that incorporates ensemble learning into the resolution process to achieve nearly optimal results. Ensemble learning integrates multiple machine-learning strategies with diverse characteristics to boost optimizer performance. Furthermore, the studied metaheuristics offer an excellent opportunity to explore their linear convergence, contingent on the exploration and exploitation phases. The results produce more precise definitions for team sizes, aligning with industry standards. Our approach demonstrates superior performance compared to the traditional versions of these algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
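The optimization problem the abstract poses — organizing a staff to optimize communication channels — has a simple counting core. A sketch under one modelling assumption (each pair of teams shares a single inter-team channel; intra-team channels are pairwise among members, n(n-1)/2):

```python
from itertools import combinations

def channels(team_sizes):
    """Count communication channels for a staff split into teams:
    pairwise channels within each team, plus (assumed) one channel
    per pair of teams."""
    intra = sum(k * (k - 1) // 2 for k in team_sizes)
    inter = len(list(combinations(range(len(team_sizes)), 2)))
    return intra + inter

# 12 people: one big team vs. three teams of four
print(channels([12]))       # 66 intra + 0 inter -> 66
print(channels([4, 4, 4]))  # 18 intra + 3 inter -> 21
```

Even this toy count shows why smaller teams dominate, and why the search space over partitions grows quickly enough to motivate the bio-inspired metaheuristics the article applies.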
8. Cross-Project Defect Prediction Based on Domain Adaptation and LSTM Optimization.
- Author
-
Javed, Khadija, Shengbing, Ren, Asim, Muhammad, and Wani, Mudasir Ahmad
- Subjects
- *
SOFTWARE engineering , *PROBABILISTIC generative models , *SUPPORT vector machines , *DATA distribution , *FEATURE selection , *FORECASTING - Abstract
Cross-project defect prediction (CPDP) aims to predict software defects in a target project domain by leveraging information from different source project domains, allowing testers to identify defective modules quickly. However, CPDP models often underperform due to different data distributions between source and target domains, class imbalances, and the presence of noisy and irrelevant instances in both source and target projects. Additionally, standard features often fail to capture sufficient semantic and contextual information from the source project, leading to poor prediction performance in the target project. To address these challenges, this research proposes Smote Correlation and Attention Gated recurrent unit based Long Short-Term Memory optimization (SCAG-LSTM), which first employs a novel hybrid technique that extends the synthetic minority over-sampling technique (SMOTE) with edited nearest neighbors (ENN) to rebalance class distributions and mitigate the issues caused by noisy and irrelevant instances in both source and target domains. Furthermore, correlation-based feature selection (CFS) with best-first search (BFS) is utilized to identify and select the most important features, aiming to reduce the differences in data distribution among projects. Additionally, SCAG-LSTM integrates bidirectional gated recurrent unit (Bi-GRU) and bidirectional long short-term memory (Bi-LSTM) networks to enhance the effectiveness of the long short-term memory (LSTM) model. These components efficiently capture semantic and contextual information as well as dependencies within the data, leading to more accurate predictions. Moreover, an attention mechanism is incorporated into the model to focus on key features, further improving prediction performance. 
Experiments are conducted on apache_lucene, equinox, eclipse_jdt_core, eclipse_pde_ui, and mylyn (AEEEM) and predictor models in software engineering (PROMISE) datasets and compared with active learning-based method (ALTRA), multi-source-based cross-project defect prediction method (MSCPDP), the two-phase feature importance amplification method (TFIA) on AEEEM and the two-phase transfer learning method (TPTL), domain adaptive kernel twin support vector machines method (DA-KTSVMO), and generative adversarial long-short term memory neural networks method (GB-CPDP) on PROMISE datasets. The results demonstrate that the proposed SCAG-LSTM model enhances the baseline models by 33.03%, 29.15% and 1.48% in terms of F1-measure and by 16.32%, 34.41% and 3.59% in terms of Area Under the Curve (AUC) on the AEEEM dataset, while on the PROMISE dataset it enhances the baseline models' F1-measure by 42.60%, 32.00% and 25.10% and AUC by 34.90%, 27.80% and 12.96%. These findings suggest that the proposed model exhibits strong predictive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
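The rebalancing step that SCAG-LSTM builds on can be sketched in a few lines. Note this is a simplified SMOTE-style interpolation between arbitrary minority pairs; real SMOTE interpolates toward k-nearest neighbours, and the paper's hybrid additionally applies edited nearest neighbors (ENN) to remove noisy instances:

```python
import random

def smote_like(minority, n_new, rng=random):
    """Simplified sketch of SMOTE-style oversampling: each synthetic
    sample is a random interpolation between a minority sample and
    another (here: any other) minority sample."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# hypothetical 2-feature vectors for defective (minority-class) modules
defective = [(1.0, 0.2), (1.2, 0.3), (0.9, 0.25)]
new_samples = smote_like(defective, 5)
```

Each synthetic point lies on a segment between two real minority samples, so the minority region is densified without duplicating instances verbatim.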
9. The Real Challenges for Climate and Weather Modelling on its Way to Sustained Exascale Performance: A Case Study using ICON (v2.6.6).
- Author
-
Adamidis, Panagiotis, Pfister, Erik, Bockelmann, Hendryk, Zobel, Dominik, Beismann, Jens-Olaf, and Jacob, Marek
- Subjects
- *
CLIMATE change , *ATMOSPHERIC models , *SOFTWARE engineering , *PERFORMANCE theory , *SOFTWARE architecture , *TASK performance - Abstract
The weather and climate model ICON (ICOsahedral Nonhydrostatic) is being used in high resolution climate simulations, in order to resolve small-scale physical processes. The envisaged performance for this task is 1 simulated year per day for a coupled atmosphere-ocean setup at global 1.2 km resolution. The necessary computing power for such simulations can only be found on exascale supercomputing systems. The main question we try to answer in this article is where to find sustained exascale performance, i. e. which hardware (processor type) is best suited for the weather and climate model ICON and consequently how this performance can be exploited by the model, i. e. what changes are required in ICON's software design so as to utilize exascale platforms efficiently. To this end, we present an overview of the available hardware technologies and a quantitative analysis of the key performance indicators of the ICON model on several architectures. It becomes clear that domain decomposition-based parallelization has reached the scaling limits, leading us to conclude that the performance of a single node is crucial to achieve both better performance and better energy efficiency. Furthermore, based on the computational intensity of the examined kernels of the model it is shown that architectures with higher memory throughput are better suited than those with high computational peak performance. From a software engineering perspective, a redesign of ICON from a monolithic to a modular approach is required to address the complexity caused by hardware heterogeneity and new programming models to make ICON suitable for running on such machines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
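The abstract's conclusion — that memory throughput matters more than peak FLOP/s for ICON's kernels — follows from the standard roofline model, sketched below with illustrative (not ICON-measured) numbers:

```python
def roofline(intensity, peak_flops, bandwidth):
    """Attainable performance (FLOP/s) under the roofline model:
    memory-bound below the ridge point, compute-bound above it.
    intensity is arithmetic intensity in FLOP/byte."""
    return min(peak_flops, intensity * bandwidth)

# illustrative numbers: 10 TFLOP/s peak, 1 TB/s memory bandwidth;
# a low-intensity stencil kernel at 0.5 FLOP/byte is memory-bound
perf = roofline(0.5, 10e12, 1e12)
print(perf / 1e12, "TFLOP/s")  # -> 0.5
```

At 0.5 FLOP/byte the kernel uses only 5% of peak, so doubling memory bandwidth helps while doubling peak FLOP/s does nothing, which is exactly the hardware-selection argument the article makes.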
10. Image‐based communication on social coding platforms.
- Author
-
Nayebi, Maleknaz and Adams, Bram
- Subjects
- *
COMPUTER software developers , *SOCIAL networks , *SOFTWARE analytics , *SOFTWARE engineering , *IMAGE processing , *VIDEOS - Abstract
Visual content in the form of images and videos has taken over general‐purpose social networks in a variety of ways, streamlining and enriching online communications. We are interested to understand if and to what extent the use of images is popular and helpful in social coding platforms. We mined 9 years of data from two popular software developers' platforms: the Mozilla issue tracking system, that is, Bugzilla, and the most well‐known platform for developers' Q/A, that is, Stack Overflow. We further triangulated and extended our mining results by performing a survey with 168 software developers. We observed that, between 2013 and 2022, the number of posts containing image data on Bugzilla and Stack Overflow doubled. Furthermore, we found that sharing images makes other developers engage more and faster with the content. In the majority of cases in which an image is included in a developer's post, the information in that image is complementary to the text provided. Finally, our results showed that when an image is shared, understanding the content without the information in the image is unlikely for 86.9% of the cases. Based on these observations, we discuss the importance of considering visual content when analyzing developers and designing automation tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Empirical exploration of critical challenges of requirements implementation in global software development.
- Author
-
Yaseen, Muhammad
- Subjects
- *
FACE-to-face communication , *RANK correlation (Statistics) , *REQUIREMENTS engineering , *RESEARCH personnel , *COMPUTER software industry , *SOFTWARE engineering , *COMPUTER software development - Abstract
Requirements collection is a difficult and critical phase of the software development life cycle, particularly in a global software development (GSD) environment. In GSD, clients and vendors are physically separated, giving rise to challenges such as lack of face-to-face communication, language differences, cultural variation, and time zone differences. The objective of the current research is to identify critical challenges of requirements engineering in GSD. No prior work has empirically analyzed all possible challenges via a questionnaire survey of software industries. This paper empirically investigates and analyzes the challenges identified in a systematic literature review (SLR) through a questionnaire survey. For this purpose, 50 respondents from different countries were recruited. Thirteen challenges arising during requirements implementation in the context of GSD had previously been identified via the SLR. These challenges were then evaluated using an empirical questionnaire survey. The responses were analyzed by type of respondent, level of experience, and client-vendor perspective. Finally, the challenges were prioritized based on their frequency of occurrence in the SLR and the questionnaire survey. The relationship between the SLR and survey rankings was evaluated using Spearman's correlation coefficient, yielding ρ = 0.835 (p < 0.001), a strong positive correlation with no significant difference between the two outcomes. The implications of this work are both fundamental and practical: the prioritized set of challenges, derived from the SLR and the questionnaire survey, acts as a knowledge base for both researchers and industrial practitioners. This work will help researchers identify challenges in GSD projects and other software engineering areas.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
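The agreement statistic the study reports can be reproduced from first principles. A stdlib sketch of Spearman's rank correlation (no-ties form), applied to hypothetical challenge-frequency rankings, not the paper's actual data:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical challenge frequencies: SLR counts vs. survey counts
slr    = [9, 7, 6, 5, 3]
survey = [8, 7, 5, 6, 2]
print(round(spearman_rho(slr, survey), 3))  # -> 0.9
```

A value near 1, as in the study's ρ = 0.835, means the SLR and survey rank the challenges in nearly the same order.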
12. Stress, motivation, and performance in global software engineering.
- Author
-
Suárez, Julio and Vizcaíno, Aurora
- Subjects
- *
SOFTWARE engineering , *COMPUTER software development , *MOTIVATION (Psychology) , *VIRTUAL work teams - Abstract
The objective of this study is to analyze the current state of knowledge regarding what causes stress in or motivates developers, how these two aspects are related to each other, how they in turn affect performance in the sphere of Global Software Development, and how they can be controlled. This paper presents the results of a systematic mapping study of the literature, conducted to analyze how stress, motivation, and performance affect project members in Global Software Development teams. A total of 118 papers dealing with this subject were found. The literature analyzed provided a relatively significant quantity of data on the impact that the characteristics of distributed software development projects have on the performance and productivity of teams, along with the actions taken to improve that performance. However, when focusing on the impact of this type of project on team members' motivation, and on the actions that can be taken to improve that motivation, we discovered that the number of works decreases considerably, and works on the impact of this kind of development on developers' stress were virtually non-existent, as were those concerning ways to reduce that stress. We are, therefore, of the opinion that in-depth research is needed into the aspects of working in distributed teams that may negatively affect developers' levels of motivation and stress, along with what could be done to improve motivation and decrease stress. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. On the perception of graph layouts.
- Author
-
Grabinger, Lisa, Hauser, Florian, and Mottok, Jürgen
- Subjects
- *
FORENSIC psychology , *EYE tracking , *SOFTWARE engineering , *DISCRETION - Abstract
In the field of software engineering, graph‐based models are used for a variety of applications. Usually, the layout of those graphs is determined at the discretion of the user. This article empirically investigates whether different layouts affect the comprehensibility or popularity of a graph and whether one can predict the perception of certain aspects in the graph using basic graphical laws from psychology (i.e., Gestalt principles). Data on three distinct layouts of one causal graph is collected from 29 subjects using eye tracking and a print questionnaire. The evaluation of the collected data suggests that the layout of a graph does matter and that the Gestalt principles are a valuable tool for assessing partial aspects of a layout. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Forensic experts' view of forensic‐ready software systems: A qualitative study.
- Author
-
Daubner, Lukas, Buhnova, Barbora, and Pitner, Tomas
- Subjects
- *
SYSTEMS software , *DIGITAL forensics , *SOFTWARE failures , *SOFTWARE engineers , *QUALITATIVE research - Abstract
Software engineers widely acknowledge the inclusion of security requirements in the early stages of the development process. However, the need to prepare the software for the failure of the implemented security controls and subsequent investigation of the incident is often not discussed. Forensic-ready software systems represent an evolution of secure systems being designed for the eventual digital forensic investigation. However, their exact properties remain largely unexplored, beyond preliminary high-level conceptualizations of requirements and capabilities. Further obstacles hindering the adoption of forensic-ready software systems are the different priorities and goals of involved parties and a gap in the digital forensics expertise of software engineers. In this paper, we conduct an empirical qualitative study identifying the problems and needs of forensic readiness while framing the notion of an ideal forensic-ready software system and how it should treat potential evidence. To this end, we conducted semi-structured interviews with digital forensics experts on their ideas, experience, and suggestions. The results provide insights into the needs of the experts to facilitate the definition of correct requirements towards forensic-ready software systems to properly support the anticipated investigations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Towards effective feature selection in estimating software effort using machine learning.
- Author
-
Jadhav, Akshay and Kumar Shandilya, Shishir
- Subjects
- *
FEATURE selection , *COMPUTER software industry , *COMPUTER software development , *COMPUTER software , *RANDOM forest algorithms , *MACHINE learning - Abstract
Software effort estimation is a vital process in the software industry for successfully administering the 5Ds of the software development life cycle (SDLC): demand, development, direction, deployment, and designated cost of the software. Software development effort estimation (SDEE) is a prediction mechanism that calculates the effort needed to develop a software product, in order to minimize challenges in the software field. Academics and practitioners are striving to identify which machine learning estimation technique yields more accurate results based on evaluation metrics, datasets, and other pertinent aspects. Feature selection techniques affect accuracy by retaining the main, relevant features in a dataset and eliminating redundant and irrelevant ones. To achieve accurate estimations, this paper applies feature selection algorithms along with various machine learning techniques to predict the desired effort; model performance is measured in terms of prediction accuracy, R2 value, relative error, and mean absolute error. The China and Maxwell datasets are trained on the relevant features obtained via feature selection, and estimation techniques are applied to predict the effort. Performance is compared with the regression models and feature selection techniques used by previous authors. On both datasets, the proposed methodology combining feature selection and estimation models significantly outperforms all regression models applied alone. From the results, it is evident that random forest performs well with the feature selection techniques, obtaining the highest prediction accuracy of 99.33% on the China dataset and 89.47% on the Maxwell dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
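The filter step the abstract relies on can be sketched minimally. This ranks features by absolute correlation with effort, a simplified stand-in for the paper's techniques (full CFS also penalises feature-feature redundancy); the dataset below is hypothetical, not China or Maxwell:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_features(features, effort):
    """Rank features by |correlation with effort| -- a simplified
    filter-style selection sketch."""
    scores = {name: abs(pearson(col, effort))
              for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical effort data: size correlates with effort, team id doesn't
feats = {"kloc": [10, 20, 30, 40], "team_id": [3, 1, 4, 2]}
effort = [12, 21, 33, 39]
print(rank_features(feats, effort))  # 'kloc' ranked first
```

Training the estimator only on the top-ranked features is what the abstract credits for the accuracy gains over plain regression models.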
16. Counteracting sociocultural barriers in global software engineering using group activities.
- Author
-
Yasin, Affan, Fatima, Rubia, Ali Khan, Javed, Liu, Lin, Ali, Raian, and Wang, Jianmin
- Subjects
- *
ACTIVE learning , *CONSCIOUSNESS raising , *REQUIREMENTS engineering , *TIME management , *SOFTWARE engineering , *COMPUTER software development - Abstract
In modern times, internationally organized teams face a number of coordination problems owing to their different physical operating locations. These challenges usually come in temporal, cultural, and linguistic forms. To resolve some of these issues, we need more coordination, teamwork, and shared understanding in the requirements engineering phase. Many approaches have been introduced to overcome the challenges associated with global software engineering (GSE). The objective of this research study is to introduce amateurs to GSE and improve their understanding of its associated challenges through an activity-based learning approach. Our method is primarily targeted toward students who already have theoretical knowledge of the topic but require first-hand experience with GSE. With the aforementioned motivation in mind, we propose, design, and empirically evaluate two different activities that can help enhance awareness of GSE challenges. For each activity, we simulate an environment wherein participants are made to go through various constructed coordination challenges related to communication, time management, team mistrust, linguistic barriers, cultural barriers, and distribution of tasks. The effectiveness of our proposed activities, captured by the extent to which participants were able to deal with GSE challenges, was judged through various techniques including (i) observation, (ii) a post-activity survey questionnaire, and (iii) brainstorming and discussion. We show that the proposed activities were effective in helping students learn and further their understanding of GSE concepts. In particular, discussion sessions and survey questionnaire results reflect their ability to identify critical GSE challenges (specifically related to teams) in a simulated scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Machine learning-based defect prediction model using multilayer perceptron algorithm for escalating the reliability of the software.
- Author
-
Juneja, Sapna, Nauman, Ali, Uppal, Mudita, Gupta, Deepali, Alroobaea, Roobaea, Muminov, Bahodir, and Tao, Yuning
- Subjects
- *
SOFTWARE reliability , *SOFTWARE engineering , *COMPUTER software quality control , *PREDICTION models , *COMPUTER software testing , *SYSTEM failures - Abstract
Even with precise planning, proper documentation, and proper process control in software development, some errors are inevitable in the software environment. These software flaws can lead to quality deterioration, which can be the main reason behind system failure. As the whole world, especially developing countries, depends on software systems, it is very important to focus on the reliability aspect. Nowadays, sophisticated systems require concerted efforts for managing and reducing shortcomings in software engineering, but these efforts demand more cost, more money, and more time. Software error prediction is the most helpful step in the testing stage of the software development life cycle. It identifies components or parts of the code where an error may occur and which require broad testing, so that test resources can be used efficiently. Software error assessment reduces the effort of testing the software by helping testers locate the actual problem and classify different classes of errors in the system. Error estimators are widely used in organizations to evaluate software in order to save time, improve the quality of software and testing, and optimize resources to meet timelines. Machine learning supports fault prediction by collecting training data from various edge devices, available on Kaggle, and thus helps improve the reliability of the software. The multilayer perceptron shows better results in precision, recall, F1 score, and accuracy than decision tree and Gaussian Naive Bayes, achieving an accuracy of 96.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
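The metrics used to compare the multilayer perceptron against the baselines follow directly from confusion-matrix counts. A short sketch with hypothetical counts (not the paper's data):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from defect-prediction counts:
    tp = defective modules correctly flagged,
    fp = clean modules incorrectly flagged,
    fn = defective modules missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical confusion-matrix counts for one test run
p, r, f = prf1(tp=45, fp=5, fn=15)
print(round(p, 2), round(r, 2), round(f, 2))  # -> 0.9 0.75 0.82
```

F1 is the harmonic mean of precision and recall, which is why it is the headline metric when defective modules are a small minority of the dataset.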
18. Revolutionizing software developmental processes by utilizing continuous software approaches.
- Author
-
Khan, Habib Ullah, Afsar, Waseem, Nazir, Shah, Noor, Asra, Kundi, Mahwish, Maashi, Mashael, and Alshahrani, Haya Mesfer
- Subjects
- *
SOFTWARE engineering , *COMPUTER software developers , *SOFTWARE maintenance , *COMPUTER software development , *CONTINUOUS improvement process , *CONTINUOUS processing - Abstract
The development of smart and innovative software applications in various disciplines has shaped our lives by providing cutting-edge technologies spanning from online to smart and efficient systems. The proliferation of innovative internet-enabled tools has transformed society into a globalized world where individuals can participate on various platforms, collaborate in activities, communicate on issues, and exchange information safely and consistently. Coordination and cooperation are essential in software development: they gather all software developers in one space, encouraging them to discuss goals and work rationally to accomplish the project goal. In recent years, continuous software development and deployment have become increasingly common in software engineering. Continuous software engineering (CSE) is a method that involves a variety of strategies to increase the regularity of new and modified software versions. CSE enables a continuous learning and improvement process through rapid software update iteration by combining continuous integration and delivery. Continuous integration is a method that has arisen in order to remove gaps between development and deployment. Software engineers must handle uncertainty and changing stakeholder requirements, which is possible through continuous software development strategies that manage the overall software cycle and produce high-quality software applications. The proposed study is a systematic review of continuous software development and deployment, focused on four aims: (1) to explore the impacts of continuous development on software, (2) to pinpoint the various tools used to carry out this process, (3) to highlight the challenges faced in adopting continuous approaches for development, and (4) to analyze the phases of continuous software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Interoperability of heterogeneous Systems of Systems: from requirements to a reference architecture.
- Author
-
Sadeghi, Mersedeh, Carenini, Alessio, Corcho, Oscar, Rossi, Matteo, Santoro, Riccardo, and Vogelsang, Andreas
- Subjects
- *
LITERATURE reviews , *SYSTEM of systems , *SOFTWARE engineering , *BEST practices - Abstract
Interoperability stands as a critical hurdle in developing and overseeing distributed and collaborative systems. Thus, it becomes imperative to gain a deep comprehension of the primary obstacles hindering interoperability and the essential criteria that systems must satisfy to achieve it. In light of this objective, in the initial phase of this research, we conducted a survey questionnaire involving stakeholders and practitioners engaged in distributed and collaborative systems. This effort resulted in the identification of eight essential interoperability requirements, along with their corresponding challenges. Then, the second part of our study encompassed a critical review of the literature to assess the effectiveness of prevailing conceptual approaches and associated technologies in addressing the identified requirements. This analysis led to the identification of a set of components that promise to deliver the desired interoperability by addressing the requirements identified earlier. These elements subsequently form the foundation for the third part of our study, a reference architecture for interoperability-fostering frameworks that is proposed in this paper. The results of our research can significantly impact the software engineering of interoperable systems by introducing their fundamental requirements and the best practices to address them, but also by identifying the key elements of a framework facilitating interoperability in Systems of Systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Software business process adaptive approach supporting organization architecture evolution.
- Author
-
Li, Youhuizi, Yin, Yuyu, Li, Yu, Hu, Haijie, Lu, Linyang, and Cao, Jie
- Subjects
- *
BUSINESS software , *BUSINESS process modeling , *SELF-adaptive software , *SOFTWARE maintenance , *SOFTWARE engineering - Abstract
Software maintenance and evolution play an important role in the software engineering field, especially as current software becomes ever more complex and powerful. As an entity that implements business processes and generates revenue, valuable software is composed of business logic and corresponding organization-role interaction interfaces. As an enterprise develops, its organization architecture also evolves, through expansion, cross-department cooperation, and so on. However, existing software process adaptive approaches mainly focus on handling changes in the business (program) logic rather than in the organization structure. Therefore, we propose an adaptive software business process approach that supports organization architecture evolution and automatically migrates run‐time process instances to the latest version. First, a business process adaptation model is designed, which includes the organization layer, the business process layer, and an event layer that connects the two. Based on the model, the impact of organization changes and the modification of the business process model are formalized. On this basis, the business process adaptation approach is designed: according to the dependence between the organization architecture and the business process activities, affected-domain detection algorithms for three basic business process structures and a business process instance migration algorithm are developed. Finally, the feasibility and stability of the proposed system are comprehensively evaluated with synthetic data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Preparation of a Computer Software Program for the Feasibility Study of Livestock Enterprises.
- Author
-
MUNDAN, Durhasan and MUNDAN, İbrahim Talha
- Subjects
- *
COMPUTER software , *DATABASES , *ACCOUNTING software , *PROGRAMMING languages , *FEASIBILITY studies , *SOFTWARE engineering - Abstract
This study was carried out with the aim of developing a software program that enables breeders to make decisions easily during the preparation of feasibility studies for livestock enterprises. For this purpose, 63 enterprises in the Gaziantep and Sanliurfa provinces of Turkey were visited between 2021 and 2022, and all the data obtained were evaluated. The C# programming language was used to develop the software, and a Microsoft SQL Server database was used to store the data. This feasibility program is a software program in which productivity checks are performed for enterprises and their personnel. It can be used easily by enterprises from small to large capacity. Cost calculations are not included in the program due to the economic conditions of the market. As a result, this program, which was prepared taking software engineering techniques into account, will provide great advantages and conveniences for enterprises. With this software program, which performs enterprise efficiency testing, risk factors will be determined and alternatives will be presented. It was concluded that this software will be a program preferred by breeders, since it can be used on all computers and offers different alternatives for establishing enterprises. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Improving Science That Uses Code.
- Author
-
Thimbleby, Harold
- Abstract
As code is now an inextricable part of science it should be supported by competent Software Engineering, analogously to statistical claims being properly supported by competent statistics. If and when code avoids adequate scrutiny, science becomes unreliable and unverifiable because results — text, data, graphs, images, etc — depend on untrustworthy code. Currently, scientists rarely assure the quality of the code they rely on, and rarely make it accessible for scrutiny. Even when available, scientists rarely provide adequate documentation to understand or use it reliably. This paper proposes and justifies ways to improve science using code: 1. Professional Software Engineers can help, particularly in critical fields such as public health, climate change and energy. 2. 'Software Engineering Boards,' analogous to Ethics or Institutional Review Boards, should be instigated and used. 3. The Reproducible Analytic Pipeline (RAP) methodology can be generalized to cover code and Software Engineering methodologies, in a generalization this paper introduces called RAP+. RAP+ (or comparable interventions) could be supported or even required in journal, conference and funding body policies. The paper's Supplemental Material provides a summary of Software Engineering best practice relevant to scientific research, including further suggestions for RAP+ workflows. 'Science is what we understand well enough to explain to a computer.' Donald E. Knuth in A = B [ 1 ] 'I have to write to discover what I am doing.' Flannery O'Connor, quoted in Write for your life [ 2 ] 'Criticism is the mother of methodology.' Robert P. Abelson in Statistics as Principled Argument [ 3 ] 'From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue.' Editorial in Nature [ 4 ] [ABSTRACT FROM AUTHOR]
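One concrete RAP-style practice is recording a provenance fingerprint of the analysis code and its input data, so a published result can later be verified against the exact artifacts that produced it. The sketch below is illustrative only; the function and record names are assumptions, not part of RAP+ as the paper defines it.

```python
import hashlib

def provenance_record(code_text: str, input_data: bytes) -> dict:
    """Hash the analysis code and its input so a published result
    can later be checked against the artifacts that produced it."""
    return {
        "code_sha256": hashlib.sha256(code_text.encode()).hexdigest(),
        "data_sha256": hashlib.sha256(input_data).hexdigest(),
    }

record = provenance_record("mean(x)", b"1,2,3\n")
# Any change to the data (or code) changes the record, flagging a mismatch.
changed = provenance_record("mean(x)", b"1,2,4\n")
assert record["code_sha256"] == changed["code_sha256"]
assert record["data_sha256"] != changed["data_sha256"]
```

Archiving such a record alongside a paper's figures gives reviewers a cheap, automatable scrutiny step of the kind the paper argues for.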
- Published
- 2024
- Full Text
- View/download PDF
23. Optimizing OCR Performance for Programming Videos: The Role of Image Super-Resolution and Large Language Models.
- Author
-
Alahmadi, Mohammad D. and Alshangiti, Moayad
- Subjects
- *
LANGUAGE models , *HIGH resolution imaging , *SOFTWARE engineering , *COMPUTER software development , *SOURCE code - Abstract
The rapid evolution of video programming tutorials as a key educational resource has highlighted the need for effective code extraction methods. These tutorials, varying widely in video quality, present a challenge for accurately transcribing the embedded source code, crucial for learning and software development. This study investigates the impact of video quality on the performance of optical character recognition (OCR) engines and the potential of large language models (LLMs) to enhance code extraction accuracy. Our comprehensive empirical analysis utilizes a rich dataset of programming screencasts, involving manual transcription of source code and the application of both traditional OCR engines, like Tesseract and Google Vision, and advanced LLMs, including GPT-4V and Gemini. We investigate the efficacy of image super-resolution (SR) techniques, namely, enhanced deep super-resolution (EDSR) and multi-scale deep super-resolution (MDSR), in improving the quality of low-resolution video frames. The findings reveal significant improvements in OCR accuracy with the use of SR, particularly at lower resolutions such as 360p. LLMs demonstrate superior performance across all video qualities, indicating their robustness and advanced capabilities in diverse scenarios. This research contributes to the field of software engineering by offering a benchmark for code extraction from video tutorials and demonstrating the substantial impact of SR techniques and LLMs in enhancing the readability and reusability of code from these educational resources. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Contributions of enterprise architecture to software engineering: A systematic literature review.
- Author
-
Martínez‐López, José Antonio, García, Félix, Ruiz, Francisco, and Vizcaíno, Aurora
- Subjects
- *
SOFTWARE architecture , *TECHNICAL literature , *SOFTWARE engineering , *AGILE software development , *INFORMATION technology , *SOFTWARE maintenance , *ENGINEERING models - Abstract
Enterprise architecture is a growing trend that aims to help deal with the complexity of socio‐technical systems such as human organizations, as well as their information technology and systems areas. Nevertheless, the contribution of enterprise architecture to the field of software engineering remains unclear. The purpose of this systematic literature review is to see how enterprise architecture is used in software development and maintenance practice. To this end, we first carried out a search in the SCOPUS database and then organized the papers according to the Software Engineering Body of Knowledge to determine what areas of software engineering are covered by each research study. To understand how enterprise architecture is used, we established a classification based on ISO 42010 and TOGAF. From the systematic literature review, we noticed that the early stages of development are the most impacted by the enterprise architecture. On the other hand, we observed that enterprise architecture is of assistance in the areas of engineering management, engineering processes, and engineering models and methods; these tasks are carried out by teams or managers using different, often agile, development methods or standards. In turn, we found that the most common categories are architecture descriptions; these are often used to facilitate communication and information‐sharing between different stakeholders, in addition to frameworks, which will help to establish common practices in the organization related to the joint use of enterprise architecture and software development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Technical debt (TD) through the lens of Twitter: A survey.
- Author
-
Alfayez, Reem, Winn, Robert, Ding, Yunyan, Alfayez, Ghaida, and Boehm, Barry
- Subjects
- *
MICROBLOGS , *SOFTWARE engineering , *RESEARCH personnel , *SYSTEMS software - Abstract
Technical debt (TD) is a metaphor used to refer to the added software system costs acquired from taking shortcuts. Unfortunately, large amounts of TD can lead to serious consequences, and, thus, the management of TD is essential. Due to TD being a relatively new subject of study, many aspects of TD remain ambiguous. Fortunately, Twitter has been proven to hold a wealth of information on many subjects. As such, this survey study aims to gain a better understanding on how interest in TD has evolved over time and how TD is addressed on Twitter. A total of 128,897 TD‐related tweets were scraped from Twitter and analyzed using a number of proxy measures and Latent Dirichlet Allocation (LDA). The results revealed that interest in TD on Twitter has been generally increasing since the platform's early stages. Furthermore, TD‐related tweets were found to revolve around 11 distinct categories. The TD in games category was discovered to be the most popular category, followed by TD communication and TD repayment. The results highlight that TD is a diverse and overarching topic that contains many potential avenues for further exploration. Software engineering researchers, practitioners, and educators can utilize this study to help steer their TD‐related future efforts. [ABSTRACT FROM AUTHOR]
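The LDA analysis used in the study can be sketched with scikit-learn on a toy corpus. The four tweets and the topic count below are illustrative stand-ins, not the study's 128,897-tweet dataset or its 11 reported categories.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the study's corpus of TD-related tweets.
tweets = [
    "technical debt slows our sprint planning",
    "repaying technical debt before the release",
    "game dev technical debt in the physics engine",
    "communicating technical debt to management",
]

# Bag-of-words counts, then LDA to infer a topic mixture per tweet.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)

assert doc_topics.shape == (4, 2)           # one topic mixture per tweet
assert abs(doc_topics[0].sum() - 1) < 1e-6  # mixtures are probability vectors
```

At the study's scale, the inferred topics are what get manually interpreted into named categories such as "TD in games" or "TD repayment".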
- Published
- 2024
- Full Text
- View/download PDF
26. Problems of integration between disciplines in the software engineering undergraduates preparation.
- Author
-
Yusupov, Firnafas, Yusupov, Davronbek, and Takhirova, Gulhayo
- Subjects
- *
SOFTWARE engineering , *INFORMATION & communication technologies , *TRAINING of engineers , *COMPUTER science , *UNDERGRADUATES , *SCIENTIFIC computing - Abstract
As an example of the integration of disciplines, this article focuses on the issues of combining knowledge, skills, and practical experience at all levels of training software engineering specialists, and of synthesizing knowledge directed at specific field goals. At the same time, modern information and communication technologies are considered as a means of integration that connects mathematics and computer science with each other. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A new proposed method for determining software requirements.
- Author
-
Hussien, Nadia Mahmood, Mohialden, Yasmin Makki, Allah, Hanan Abed Alwally Abed, and Joshi, Kapil
- Subjects
- *
SOFTWARE engineering , *SYSTEMS software , *ENGINEERING models , *RESERVATION systems , *REQUIREMENTS engineering , *COMPUTER software - Abstract
The first step in developing a system is to define it (requirement specification). In this paper, combining the incremental model with the object-oriented software engineering model led to a new way of developing software system requirements based on the system's goal. This study focused on the earliest and most critical stage of a software system's life cycle: creating the software specifications that will be used to ensure its success. These requirements must be carefully examined when developing a trustworthy software system. The method was used to design and define requirements for a COVID-19 social-distancing online reservation system, and it proved its efficiency in describing what had to be done and the subsequent processes of design, building, and development. The system was developed and enhanced during COVID-19 and was then used to demonstrate how well the method functions. [ABSTRACT FROM AUTHOR]
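A goal-based requirement record of the kind the method implies might look like the sketch below. The paper does not publish a schema, so every field and example value here is a hypothetical illustration of tracing a requirement back to a system goal and forward to delivery increments.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str

@dataclass
class Requirement:
    rid: str
    description: str
    goal: Goal  # each requirement traces back to a system goal
    increments: list = field(default_factory=list)  # incremental-model deliveries

goal = Goal("limit venue occupancy for social distancing")
req = Requirement(
    rid="R1",
    description="Reject reservations once capacity reaches the distancing limit",
    goal=goal,
)
req.increments.append("increment 1: capacity check")
assert req.goal.name.startswith("limit")
```

Linking each requirement to a goal and to the increment that delivers it is what lets the incremental and object-oriented views be combined, as the abstract describes.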
- Published
- 2024
- Full Text
- View/download PDF
28. Software factory a higher education road map.
- Author
-
Meza, Jaime, Cardenas, Leticia Vaca, Torres, Christian, Mendoza, Karina, and Veliz, Vicente
- Subjects
- *
EDUCATION software , *COMPUTER software industry , *HIGHER education , *ARTIFICIAL intelligence , *DATA scrubbing , *COMPUTER software testing , *SOFTWARE engineering - Abstract
For a long time, software engineering has been considered a key driver of development and business support. Software development carries issues such as high costs and failed projects. These issues have opened new challenges for scholars and researchers; however, the software crisis continues today. Software Factories present a solution to improve software production and to face those challenges, especially the requirements of newly developed AI-based applications. This work aimed to develop a road map for building Software Factories inside higher education institutions. A design-based research (DBR) methodological approach was used to build the proposed model. The approach includes four steps: i) design, ii) test, iii) evaluate, and iv) reflect; these steps are repeated until the best outcomes and learning are obtained, and a couple of iterations were executed to test the model. Early outcomes showed that: i) building business software components has been the basis for developing Software Factories (SF); hence, an SF could increase students' professional skills while providing a broad range of opportunities for linking with industry. ii) The recommended process for building components comprises a set of steps: model requirements, data collection, data cleaning, data labeling, feature engineering, model training, model evaluation, model deployment, and model monitoring. iii) During testing, students showed an increase in their academic and professional skills in the software industry field. In conclusion, new trends and fast-changing technical environments foster the development of components; therefore, a stronger nexus between higher education and software development practice should be created to improve software mass production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering.
- Author
-
Lu, Qinghua, Zhu, Liming, Xu, Xiwei, Whittle, Jon, Zowghi, Didar, and Jacquet, Aurelie
- Published
- 2024
- Full Text
- View/download PDF
30. Requirements and software engineering for automotive perception systems: an interview study.
- Author
-
Habibullah, Khan Mohammad, Heyn, Hans-Martin, Gay, Gregory, Horkoff, Jennifer, Knauss, Eric, Borg, Markus, Knauss, Alessia, Sivencrona, Håkan, and Li, Polly Jing
- Subjects
- *
REQUIREMENTS engineering , *AUTOMOTIVE engineering , *AUTOMOBILE engineers , *INDUSTRIAL safety , *THEMATIC analysis , *SOFTWARE engineering , *AUTOMOBILE driving simulators - Abstract
Driving automation systems, including autonomous driving and advanced driver assistance, are an important safety-critical domain. Such systems often incorporate perception systems that use machine learning to analyze the vehicle environment. We explore new or differing topics and challenges experienced by practitioners in this domain, which relate to requirements engineering (RE), quality, and systems and software engineering. We have conducted a semi-structured interview study with 19 participants across five companies and performed thematic analysis of the transcriptions. Practitioners have difficulty specifying upfront requirements and often rely on scenarios and operational design domains (ODDs) as RE artifacts. RE challenges relate to ODD detection and ODD exit detection, realistic scenarios, edge case specification, breaking down requirements, traceability, creating specifications for data and annotations, and quantifying quality requirements. Practitioners consider performance, reliability, robustness, user comfort, and—most importantly—safety as important quality attributes. Quality is assessed using statistical analysis of key metrics, and quality assurance is complicated by the addition of ML, simulation realism, and evolving standards. Systems are developed using a mix of methods, but these methods may not be sufficient for the needs of ML. Data quality methods must be a part of development methods. ML also requires a data-intensive verification and validation process, introducing data, analysis, and simulation challenges. Our findings contribute to understanding RE, safety engineering, and development methodologies for perception systems. This understanding and the collected challenges can drive future research for driving automation and other ML systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Development of Data Acquisition Software for Electromagnetic Instruments in Landslide Detection.
- Author
-
Li, Bin, Xu, Qiang, Liu, Tian-Xiang, Cheng, Qiang, Tang, Min-gao, Zheng, Guang, and Lei, Hang
- Subjects
- *
LANDSLIDES , *ACQUISITION of data , *GEOLOGICAL research , *DIGITAL signal processing , *ENGINEERING geology , *SOFTWARE engineering - Abstract
Rapid societal development and increased engineering construction have exacerbated the disturbance of the geological environment. The impact of extreme climatic factors has grown, leading to a surge in geological disasters, with landslides emerging as particularly significant. Consequently, fundamental research in geological disaster detection or monitoring necessitates an in-depth study of the physical phenomena accompanying landslides' development, evolution, and occurrence. Exploring the signal characteristics associated with landslides is crucial to indirectly understanding their development and change processes—a scientific question deserving thorough exploration. Despite this research's importance, there is a notable gap in the investigation of the key design and specific implementation of electromagnetic instruments tailored for landslide detection. This gap is particularly pronounced in designing and implementing data acquisition software for electromagnetic instruments. This interdisciplinary research draws on theoretical frameworks from embedded computer science, software engineering, digital signal processing technology, geophysics, and engineering geology. It focuses on developing specialized data acquisition application software for landslide detection or monitoring, contributing to the scientific understanding of landslide development and providing independent intellectual property in the electromagnetic wave signal detection field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Seamless Function-Oriented Mechanical System Architectures and Models.
- Author
-
Wyrwich, Christian, Boelsen, Kathrin, Jacobs, Georg, Zerwas, Thilo, Höpfner, Gregor, Konrad, Christian, and Berroth, Joerg
- Subjects
- *
SYSTEMS engineering , *NEW product development , *PARAMETRIC modeling , *MECHANICAL models , *SOFTWARE engineering , *PRODUCT design - Abstract
One major challenge of today's product development is to master the constantly increasing product complexity driven by the interactions between different disciplines, like mechanical, electrical and software engineering. An approach to master this complexity is function-oriented model-based systems engineering (MBSE). In order to guide the developer through the process of transferring requirements into a final product design, MBSE methods are essential. However, especially in mechanics, function-oriented product development is challenging, as functionality is largely determined by the physical effects that occur in the contacts of physical components. Currently, function-oriented MBSE methods enable either the modeling of contacts or of structures as part of physical components. To create seamless function-oriented mechanical system architectures, a holistic method for modeling contacts, structures and their dependencies is needed. Therefore, this paper presents an extension of the motego method to model structures, by which the seamless parametric modeling of function-oriented mechanical system architectures from requirements to the physical product is enabled. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. CodeBERT‐Attack: Adversarial attack against source code deep learning models via pre‐trained model.
- Author
-
Zhang, Huangzhao, Lu, Shuai, Li, Zhuo, Jin, Zhi, Ma, Lei, Liu, Yang, and Li, Ge
- Subjects
- *
SOURCE code , *DEEP learning , *NATURAL language processing , *COMPUTER vision , *SOFTWARE engineering , *COMPUTER programming education - Abstract
Over the past few years, the software engineering (SE) community has widely employed deep learning (DL) techniques in many source code processing tasks. Similar to other domains like computer vision and natural language processing (NLP), the state‐of‐the‐art DL techniques for source code processing can still suffer from adversarial vulnerability, where minor code perturbations can mislead a DL model's inference. Efficiently detecting such vulnerability to expose the risks at an early stage is an essential step and of great importance for further enhancement. This paper proposes a novel black‐box effective and high‐quality adversarial attack method, namely CodeBERT‐Attack (CBA), based on the powerful large pre‐trained model (i.e., CodeBERT) for DL models of source code processing. CBA locates the vulnerable positions through masking and leverages the power of CodeBERT to generate natural, text‐preserving perturbations. We turn CodeBERT against DL models and further fine‐tuned CodeBERT models for specific downstream tasks, and successfully mislead these victim models to erroneous outputs. In addition, taking the power of CodeBERT, CBA is capable of effectively generating adversarial examples that are less perceptible to programmers. Our in‐depth evaluation on two typical source code classification tasks (i.e., functionality classification and code clone detection) against the most widely adopted LSTM and the powerful fine‐tuned CodeBERT models demonstrate the advantages of our proposed technique in terms of both effectiveness and efficiency. Furthermore, our results also show (1) that pre‐training may help CodeBERT gain resilience against perturbations further, and (2) certain pre‐training tasks may be beneficial for adversarial robustness. [ABSTRACT FROM AUTHOR]
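The core idea — a small, behavior-preserving code perturbation that may still flip a model's prediction — can be illustrated with a whole-word identifier rename. This is a deliberately simplified stand-in: CBA itself masks vulnerable positions and queries CodeBERT for natural substitutions, which this sketch does not reproduce.

```python
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Apply a semantics-preserving perturbation: rename one identifier.
    CBA instead masks vulnerable positions and asks CodeBERT for natural
    substitutions; a whole-word rename is a minimal stand-in."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

original = "def total(values):\n    result = sum(values)\n    return result\n"
perturbed = rename_identifier(original, "result", "acc")

# The perturbed program still compiles and behaves identically, yet such
# tiny edits can mislead a brittle source-code classifier.
compile(perturbed, "<perturbed>", "exec")
assert perturbed != original
assert "acc = sum(values)" in perturbed
```

An attack then searches over such candidate perturbations, keeping the one that changes the victim model's output while remaining inconspicuous to programmers.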
- Published
- 2024
- Full Text
- View/download PDF
34. Multimedia resources as a support for requirements engineering and software maintenance.
- Author
-
Santos, Anne Caroline Melo, Júnior, Methanias Colaço, and de Carvalho Andrade, Edna
- Subjects
- *
SOFTWARE engineering , *REQUIREMENTS engineering , *SOFTWARE maintenance , *SCIENTIFIC method , *COMPUTER software development , *MULTIMEDIA systems - Abstract
Textual documentation is frequently used in the software development process to outline features and behaviors of an application. For some people, textual descriptions may not be enough to understand what is being developed. In this scenario, multimedia resources appear as an option for software documentation, providing other ways to observe and interpret information. Objective: To identify and characterize the approaches and techniques which promote the use of multimedia in requirements engineering (RE) to support software development and maintenance. Method: A systematic mapping was conducted to find the primary studies in the literature and collect evidence for directing future research. Results: Only 27.66% of the approaches found validated their solutions through controlled experiments, showing the need to increase the use of the scientific method in this area, with replications of studies that will allow evaluating whether other researchers independently arrive at the same results. In this context, the approaches/techniques identified were TRECE, MURMER, Wiki System Multimedia, Storytelling, Virtual World Environment, VisionCatcher, PRESTO4U, ReqVidA, CrowdRE, AVW, The Software Cinema Technique, Dolli Project, UTOPIA, and approaches without explicit names, which, as a rule, use multimedia resources as an additional support. Conclusions: There was a favorable consensus regarding the use of multimedia in RE. The selected studies proved favorable to the adoption of media to persist and store the requirements of a system. Moreover, multimedia resources can improve the process of understanding the code and decrease evolution and maintenance costs. General terms are design, documentation, experimentation, human factors, multimedia, reliability, software engineering, verification, and security. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Investigating the Maturity of RE Practices and the Adoption of Human Values in Industry from the Perspective of Software Engineering Practitioners.
- Author
-
Alwadani, Rawabi and Baslyman, Malak
- Subjects
- *
SOFTWARE engineering , *VALUE engineering , *VALUE capture , *COMPUTER software industry , *REQUIREMENTS engineering , *APPLICATION software - Abstract
In the past, the focus of developing software applications was mainly on collecting, analyzing, and implementing user and business requirements. Nowadays, with the unlimited variety of software applications that serve the same purpose, it has become essential to go beyond user requirements to incorporate their emotions and values to ensure the use of those applications. However, the paucity of addressing the incorporation of human values into software engineering practices, in the literature and in the industry, makes it challenging to understand how to do it. Hence, in this study, we attempted to understand the level of adopting human values in software engineering activities, perceived usefulness, opportunities, and challenges in practice. In addition, we empirically investigated the relationship between the maturity level of the Requirements Engineering (RE) practices and the adoption of human values. To achieve those goals, we designed a survey that was distributed to software industry practitioners; 51 complete responses were received. The results showed that there is a positive relationship between the maturity level of RE and the adoption of human values. Also, most participants agreed that incorporating human values into the software design cycle is important; however, the lack of proven effective techniques and practices to capture and analyze the values are two of the main obstacles to adopting human values in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. State-of-the-Art Research in Blockchain of Things for HealthCare.
- Author
-
Almalki, Jameel
- Subjects
- *
BLOCKCHAINS , *SOFTWARE engineering , *TELEMEDICINE - Abstract
Existing blockchain approaches exhibit a diverse set of dimensions, while IoT-based health care applications manifest a wide variety of requirements. The state of the art in blockchain with respect to existing IoT-based approaches for the healthcare domain has been investigated only to a limited extent. The purpose of this survey paper is to analyze current state-of-the-art blockchain work in several IoT disciplines, with a focus on the health sector. This study also attempts to demonstrate the prospective use of blockchain in healthcare, as well as the obstacles and future paths of blockchain development. Furthermore, the fundamentals of blockchain are thoroughly explained to appeal to a diverse audience. Moreover, we analyzed state-of-the-art studies from several IoT disciplines for eHealth, as well as the research gaps and obstacles in applying blockchain to IoT, which are highlighted and explored in the paper together with suggested alternatives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Large language models: a primer and gastroenterology applications.
- Author
-
Shahab, Omer, El Kurdi, Bara, Shaukat, Aasma, Nadkarni, Girish, and Soroush, Ali
- Subjects
- *
LANGUAGE models , *GASTROENTEROLOGISTS , *CHATGPT , *ARTIFICIAL intelligence , *TECHNOLOGICAL innovations , *SOFTWARE engineering - Abstract
Over the past year, the emergence of state-of-the-art large language models (LLMs) in tools like ChatGPT has ushered in a rapid acceleration in artificial intelligence (AI) innovation. These powerful AI models can generate tailored and high-quality text responses to instructions and questions without the need for labor-intensive task-specific training data or complex software engineering. As the technology continues to mature, LLMs hold immense potential for transforming clinical workflows, enhancing patient outcomes, improving medical education, and optimizing medical research. In this review, we provide a practical discussion of LLMs, tailored to gastroenterologists. We highlight the technical foundations of LLMs, emphasizing their key strengths and limitations as well as how to interact with them safely and effectively. We discuss some potential LLM use cases for clinical gastroenterology practice, education, and research. Finally, we review critical barriers to implementation and ongoing work to address these issues. This review aims to equip gastroenterologists with a foundational understanding of LLMs to facilitate a more active clinician role in the development and implementation of this rapidly emerging technology. Plain language summary: Large language models in gastroenterology: a simplified overview for clinicians This text discusses the recent advancements in large language models (LLMs), like ChatGPT, which have significantly advanced artificial intelligence. These models can create specific, high-quality text responses without needing extensive training data or complex programming. They show great promise in transforming various aspects of clinical healthcare, particularly in improving patient care, medical education, and research. This article focuses on how LLMs can be applied in the field of gastroenterology. It explains the technical aspects of LLMs, their strengths and weaknesses, and how to use them effectively and safely. 
The text also explores how LLMs could be used in clinical practice, education, and research in gastroenterology. Finally, it discusses the challenges in implementing these models and the ongoing efforts to overcome them, aiming to provide gastroenterologists with the basic knowledge needed to engage more actively in the development and use of this emerging technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Enhancing conceptual models with computational capabilities: A methodical approach to executable integrative modeling.
- Author
-
Levi‐Soskin, Natali, Marwedel, Stephan, Jbara, Ahmad, and Dori, Dov
- Abstract
The lack of a common executable modeling framework that integrates systems engineering, software design, and other engineering domains is a major impediment to seamless product development processes. Our research aims to overcome this system‐software modeling gap by integrating computational, software‐related, and model execution capabilities into OPM‐based conceptual modeling, resulting in a holistic unified executable quantitative‐qualitative modeling framework. The gap is overcome via a Methodical Approach to Executable Integrative Modeling—MAXIM, an extension of OPM, which was standardized as ISO 19450:2015. We present the principles of MAXIM and demonstrate its operation within OPCloud—a web‐based collaborative conceptual OPM modeling framework. As a proof of concept, a model of an Airbus civil aircraft landing gear braking system is constructed and executed. Using MAXIM, engineers from five domains can collaborate at the very early phase of system development and jointly construct a unified model that fuses qualitative and quantitative aspects of the various disciplines. This case study illustrates an important first step towards satisfying the critical and growing need to integrate systems engineering with software computations into a unified framework that enables a smooth transition from high‐level architecting to detailed, discipline‐oriented design. Such a framework is key to agile yet robust future development of software‐intensive systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. TIME HISTORY NONLINEAR ANALYSIS FOR 2D MODELIZATION OF AN EXISTING BUILDING USING FLEXIBILITY AND DISPLACEMENT-BASED FORMULATION.
- Author
-
Belgasmia, Mourad, Moussaoui, Sabah, and Chabane, Rebadj
- Subjects
- *
NONLINEAR analysis , *DISTRIBUTED computing , *CHAIN restaurants , *ORDER picking systems , *SOFTWARE engineering , *REACTION time - Abstract
The object of research is a distributed order processing system for a restaurant chain. The subject of the research is the analysis of the use of Redis for managing event queues in distributed systems. When implementing a distributed order processing system for a restaurant chain with a possible load of up to 20,000 users per day, the Redis system was used, and the management of 9 distributed subsystems was organized through Redis. This solution increased the performance of the system under heavy load (from 50 transactions per second), but in some cases the response time of the system was longer than without Redis. When operating systems that use Redis, it is necessary to take into account the amount of data Redis will hold, since it must not exceed the available RAM, as well as the absence of differentiation into users and groups and the absence of a query language, which is replaced by a key-value scheme. This research is aimed at analyzing the operation of the system during trial operation under real load. We compared the operation of the configured system with Redis enabled and disabled. The main indicators for the analysis were the system response time and the maximum request execution time. The research was carried out over 2 weeks: in the first week the system ran with Redis disabled, and in the second with Redis enabled. We selected 2 days with a similar load on the system. Especially indicative are the results of comparing the durations of the longest queries, which show an almost constant duration when Redis is enabled. The hypothesis of an increase in the system response time at low loads was confirmed, but this value not only leveled off at a load of 500 unique users, it also became lower at loads of 1000 unique users. [ABSTRACT FROM AUTHOR]
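The key-value queue pattern this abstract describes can be sketched without a live Redis server. Below is a minimal in-memory stand-in for a Redis list used as a per-subsystem event queue (the LPUSH/RPOP pattern); the class and queue names are illustrative, not taken from the paper:

```python
from collections import deque

class InMemoryEventQueue:
    """Stand-in for Redis lists used as event queues (LPUSH/RPOP pattern).

    Real Redis adds blocking pops, persistence options, and network access,
    but the semantics sketched here are the same: one named queue per
    subsystem, producers push to the head, consumers pop from the tail (FIFO).
    """

    def __init__(self):
        self._queues = {}  # queue name -> deque of events

    def lpush(self, queue, event):
        # Producers (e.g. the ordering front end) push new events to the head.
        self._queues.setdefault(queue, deque()).appendleft(event)

    def rpop(self, queue):
        # Consumers (e.g. a kitchen subsystem) pop the oldest event, or None.
        q = self._queues.get(queue)
        return q.pop() if q else None

    def llen(self, queue):
        return len(self._queues.get(queue, ()))


broker = InMemoryEventQueue()
broker.lpush("orders:kitchen", {"order_id": 1, "items": ["pizza"]})
broker.lpush("orders:kitchen", {"order_id": 2, "items": ["salad"]})

first = broker.rpop("orders:kitchen")  # FIFO: order 1 comes out first
```

Note that this mirrors one of the trade-offs the abstract mentions: everything lives in memory, so the data volume is bounded by available RAM.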
- Published
- 2024
- Full Text
- View/download PDF
40. Balanced knowledge distribution among software development teams—Observations from open‐ and closed‐source software development.
- Author
-
Shafiq, Saad, Mayr‐Dorn, Christoph, Mashkoor, Atif, and Egyed, Alexander
- Abstract
Summary: In software development, developer turnover is among the primary reasons for project failures, leading to a great void of knowledge and strain for newcomers. Unfortunately, no established methods exist to measure how problem domain knowledge is distributed among developers. Awareness of how this knowledge evolves and is owned by key developers in a project helps stakeholders reduce risks caused by turnover. To this end, this paper introduces a novel, realistic representation of problem domain knowledge distribution: the ConceptRealm. To construct the ConceptRealm, we employ a latent Dirichlet allocation model to represent textual features obtained from 300 K issues and 1.3 M comments from 518 open‐source projects. We analyze whether newly emerged issues and developers share similar concepts and how aligned the individual developers' concepts are with the team's over time. We also investigate the impact of leaving developers on the frequency of concepts. Finally, we evaluate the soundness of our approach on a closed‐source software project, allowing the validation of the results from a practical standpoint. We find that the ConceptRealm can represent the problem domain knowledge within a project and can be utilized to predict the alignment of developers with issues. We also observe that projects exhibit many keepers independent of project maturity and that abruptly leaving keepers correlates with a decline of their core concepts, as the remaining developers cannot quickly familiarize themselves with those concepts. [ABSTRACT FROM AUTHOR]
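The alignment idea here, comparing a developer's concept distribution with the team's, reduces to a similarity measure over topic-weight vectors. A minimal sketch with cosine similarity over made-up topic weights; the ConceptRealm itself is built with LDA over issue text, which is not reproduced here, and the developer names and weights are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two concept-weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def team_concept_vector(developer_vectors):
    """Team-level concept distribution: the mean of member vectors."""
    n = len(developer_vectors)
    return [sum(v[i] for v in developer_vectors) / n
            for i in range(len(developer_vectors[0]))]

# Hypothetical LDA topic weights for three developers over four concepts.
devs = {
    "alice": [0.7, 0.1, 0.1, 0.1],
    "bob":   [0.6, 0.2, 0.1, 0.1],
    "carol": [0.1, 0.1, 0.2, 0.6],
}
team = team_concept_vector(list(devs.values()))
alignment = {name: cosine_similarity(v, team) for name, v in devs.items()}
```

In this toy data, "carol" is the sole owner of the fourth concept, so her low team alignment flags her as a potential keeper whose departure would leave that concept uncovered.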
- Published
- 2024
- Full Text
- View/download PDF
41. Improving Software Engineering Students’ Creative Thinking and Motivation Using Practical Prototyping and Innovation Techniques.
- Author
-
Kozov, Vasil, Minev, Ekaterin, and Andreeva, Magdalena
- Subjects
- *
ENGINEERING students , *SOFTWARE engineering , *CREATIVE thinking , *ACADEMIC motivation , *REQUIREMENTS engineering - Abstract
Traditionally, university students lack motivation in subjects that focus on documentation and theory, a problem that only deepens with each new generation. A practical workshop approach has been implemented in the subject "Analysing system requirements and specifications", and its place in the curriculum is explained. A technique for developing innovation and prototypes, used by Google to foster motivation, is described, and the method of its implementation is thoroughly documented. A brief experiment in the form of a workshop is described and the gathered data are analysed. A survey on student feedback is conducted and the results are discussed. The influence on the improvement of students' soft skills is evaluated. An observation is made on using the methodology as an introductory, ice-breaking workshop for engineering students. The conclusions based on the feedback data and discussions with students show that the methodology is successful and that student motivation and attendance have increased. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Report on the state of the SoSyM journal (2023 summary).
- Author
-
Challita, Stéphanie, Combemale, Benoit, Ergin, Huseyin, Gray, Jeff, Rumpe, Bernhard, and Schindler, Martin
- Subjects
- *
MACHINE learning , *SOFTWARE engineering , *BUSINESS process modeling - Abstract
This article provides a summary of the state of the Software & Systems Modeling (SoSyM) journal in 2023. It highlights the publication of new papers and the addition of new editors to the journal. The retirement of two esteemed editors is also mentioned, with gratitude expressed for their contributions. The article includes statistics on the number of papers published, the impact factor of the journal, and the number of submissions and downloads. It announces the recipients of the journal's ten-year most influential paper awards and lists the papers presented at the MODELS conference. The article concludes by expressing appreciation to the reviewers for their contributions. A separate accompanying document conveys gratitude and recognition from the software and systems modeling community: it thanks all the reviewers who have assisted the modeling community, announces the recipients of the Best Reviewers of 2023 award, and lists all the reviewers who contributed their expertise to the journal over the past year. The content of the current issue is briefly mentioned, and the community wishes everyone a joyful New Year and encourages readers to explore the journal's archive. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
43. Business process modeling language selection for research modelers.
- Author
-
Farshidi, Siamak, Kwantes, Izaak Beer, and Jansen, Slinger
- Subjects
- *
BUSINESS process modeling , *LANGUAGE research , *BUSINESS process management , *SOFTWARE engineering , *INDUSTRIAL engineering - Abstract
Business process modeling is a crucial aspect of domains such as Business Process Management and Software Engineering. The availability of various BPM languages in the market makes it challenging for process modelers to select the best-fit BPM language for a specific process modeling task. A decision model is necessary to systematically capture and make scattered knowledge on BPM languages available for reuse by process modelers and academics. This paper presents a decision model for the BPM language selection problem in research projects. The model contains mappings of 72 BPM features to 23 BPM languages. We validated and refined the decision model through 10 expert interviews with domain experts from various organizations. We evaluated the efficiency, validity, and generality of the decision model by conducting four case studies of academic research projects with their original researchers. The results confirmed that the decision model supports process modelers in the selection process by providing more insights into the decision process. Based on the empirical evidence from the case studies and domain expert feedback, we conclude that having the knowledge readily available in the decision model supports academics in making more informed decisions that align with their preferences and prioritized requirements. Furthermore, the captured knowledge provides a comprehensive overview of BPM languages, features, and quality characteristics that other researchers can employ to tackle future research challenges. Our observations indicate that BPMN is a commonly used modeling language for process modeling. Therefore, it is more sensible for academics to explain why they did not select BPMN than to discuss why they chose it for their research project(s). [ABSTRACT FROM AUTHOR]
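The core of such a decision model, mapping language features to prioritized requirements, amounts to a weighted feature-match score. A sketch under the assumption that features are boolean and priorities are numeric weights; the feature names and scores below are invented for illustration and do not reproduce the paper's actual 72-feature/23-language mappings:

```python
def score_language(language_features, requirements):
    """Weighted sum of the requirement weights a language satisfies."""
    return sum(weight for feature, weight in requirements.items()
               if feature in language_features)

def rank_languages(catalog, requirements):
    """Rank candidate BPM languages by descending match score."""
    scored = {name: score_language(feats, requirements)
              for name, feats in catalog.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical feature sets and project priorities.
catalog = {
    "BPMN":       {"swimlanes", "events", "tool_support", "execution"},
    "EPC":        {"events", "functions"},
    "Petri nets": {"execution", "formal_semantics"},
}
requirements = {"tool_support": 3, "execution": 2, "formal_semantics": 2}

ranking = rank_languages(catalog, requirements)  # best-fit language first
```

The point of capturing the mappings explicitly is that changing the requirement weights re-ranks the candidates transparently, which is what lets modelers justify (or argue against) the default choice of BPMN.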
- Published
- 2024
- Full Text
- View/download PDF
44. Modelling guidance in software engineering: a systematic literature review.
- Author
-
Chakraborty, Shalini and Liebel, Grischa
- Subjects
- *
TECHNICAL literature , *SOFTWARE engineering , *COMPUTER software industry , *SYSTEMS software - Abstract
Despite potential benefits in Software Engineering, adoption of software modelling in industry is low. Technical issues such as tool support have received significant research attention, but individual guidance and training have received little. As a first step towards providing the necessary guidance in modelling, we conduct a systematic literature review to explore the current state of the art. We searched academic literature for guidance on model creation and selected 35 papers for full-text screening through three rounds of selection. We find research on model creation guidance to be fragmented, with inconsistent usage of terminology and a lack of empirical validation or supporting evidence. We outline the different dimensions commonly used to provide guidance on software and system model creation. Additionally, we provide definitions of the three terms modelling method, style, and guideline, as current literature lacks a well-defined distinction between them. These definitions can help distinguish between important concepts and provide precise modelling guidance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Quo Vadis modeling?: Findings of a community survey, an ad-hoc bibliometric analysis, and expert interviews on data, process, and software modeling.
- Author
-
Michael, Judith, Bork, Dominik, Wimmer, Manuel, and Mayr, Heinrich C.
- Subjects
- *
BIBLIOMETRICS , *SOFTWARE measurement , *COMPUTER software , *COMPUTER software development , *SYSTEMS software , *SCIENTIFIC community - Abstract
Models are the key tools humans use to manage complexity in description, development, and analysis. This applies to all scientific and engineering disciplines and in particular to the development of software and data-intensive systems. However, different methods and terminologies have become established in the individual disciplines, even in the sub-fields of Informatics, which raises the need for a comprehensive and cross-sectional analysis of the past, present, and future of modeling research. This paper aims to shed some light on how different modeling disciplines emerged and what characterizes them with a discussion of the potential toward a common modeling future. It focuses on the areas of software, data, and process modeling and reports on an analysis of the research approaches, goals, and visions pursued in each, as well as the methods used. This analysis is based on the results of a survey conducted in the communities concerned, on a bibliometric study, and on interviews with a prominent representative of each of these communities. The paper discusses the different viewpoints of the communities, their commonalities and differences, and identifies possible starting points for further collaboration. It further discusses current challenges for the communities in general and modeling as a research topic in particular and highlights visions for the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Exploring the dual bachelor's degree training model of preventive medicine + software engineering.
- Author
-
曾欣, 何晓琴, 张韬, 赵星, 肖雄, 李伟, 丁林, 潘杰, 王国庆, 刘巧兰, 林萍, 洪玫, and 裴晓方
- Abstract
Objective To explore the training mode and mechanism for compound talents in preventive medicine and software engineering. Methods To adapt to the trend of multidisciplinary and new-technology integration under the background of new medical science construction and the strategic need of building a healthy China, the West China School of Public Health and the School of Computer Science (School of Software) of Sichuan University established a dual bachelor's degree talent training program in Preventive Medicine and Software Engineering, with a cross-disciplinary teaching expert team, integrated courses, and internship bases, through extensive research and expert verification, in accordance with the requirements of the National Standards for Undergraduate Professional Teaching Quality and the Management Measures for Bachelor's Degree Authorization and Granting. The program recruits dual bachelor's degree students, strengthens student guidance and teaching feedback, and continuously summarizes and improves the talent training mode. Results Three cohorts of dual bachelor's degree students in Preventive Medicine + Software Engineering have been recruited, a teaching team integrating medicine and engineering has been established, seven integrated courses have been formed, professional training objectives and curriculum systems have been established and continuously improved, and two medical research and practical teaching innovation bases suitable for dual bachelor's degree students have been established. Student adaptability and satisfaction have continuously improved. Conclusion A dual bachelor's degree training model of preventive medicine + software engineering with the characteristics of public health and software has been initially established, providing an important theoretical and practical reference for reforming medical-engineering integrated training modes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Software effort estimation modeling and fully connected artificial neural network optimization using soft computing techniques.
- Author
-
Kassaymeh, Sofian, Alweshah, Mohammed, Al-Betar, Mohammed Azmi, Hammouri, Abdelaziz I., and Al-Ma'aitah, Mohammad Atwah
- Subjects
- *
SOFT computing , *GREY Wolf Optimizer algorithm , *COMPUTER software development , *COMPUTER software , *METAHEURISTIC algorithms , *SOFTWARE engineering , *ARTIFICIAL neural networks - Abstract
In software engineering, the planning and budgeting stages of a software project are of great importance to all stakeholders, including project managers as well as clients. The estimated costs and scheduling time needed to develop any software project before and/or during startup form the basis of a project's success. The main objective of software estimation techniques is to determine the actual effort and/or time required for project development. The use of machine learning methods to address the estimation problem has, in general, proven remarkably successful for many engineering problems. In this study, a hybrid of a fully connected neural network (FCNN) and the gray wolf optimizer (GWO) metaheuristic, called GWO-FC, is proposed to tackle the software development effort estimation (SEE) problem. The GWO is integrated with the FCNN to optimize the FCNN parameters in order to enhance the accuracy of the obtained results by improving the FCNN's ability to explore the parameter search space and avoid falling into local optima. The proposed technique was evaluated on various benchmark SEE datasets. Furthermore, various recent algorithms from the literature were employed to verify GWO-FC performance. In terms of accuracy, comparative outcomes reveal that GWO-FC performs better than other methods on most datasets and evaluation criteria. Experimental outcomes reveal the strong potential of the GWO-FC method to achieve reliable estimation results. [ABSTRACT FROM AUTHOR]
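The GWO update rule, moving each candidate toward the three best solutions found so far (alpha, beta, delta) with a step size that shrinks over the iterations, can be sketched on a toy objective. This is a simplified illustration, not the paper's GWO-FC: in the paper, positions would encode FCNN weights and the fitness an effort-estimation error, whereas here a plain sphere function stands in:

```python
import random

def gwo_minimize(fitness, dim, n_wolves=12, iters=200, bound=5.0, seed=42):
    """Minimal grey wolf optimizer for a generic fitness function.

    Each wolf moves toward the three best-so-far solutions; the control
    parameter `a` decays from 2 to 0, shifting the search from
    exploration to exploitation.
    """
    rng = random.Random(seed)
    wolves = [[rng.uniform(-bound, bound) for _ in range(dim)]
              for _ in range(n_wolves)]
    leaders = sorted(wolves, key=fitness)[:3]  # alpha, beta, delta
    for t in range(iters):
        a = 2.0 * (1 - t / iters)  # exploration factor: 2 -> 0
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                steps = []
                for leader in leaders:
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    steps.append(leader[d] - A * abs(C * leader[d] - wolves[i][d]))
                new.append(sum(steps) / 3.0)  # average pull of the 3 leaders
            wolves[i] = new
        # Greedily keep the three best positions seen so far as leaders.
        leaders = sorted(leaders + wolves, key=fitness)[:3]
    return leaders[0]

# Toy objective: sphere function, global minimum 0 at the origin.
best = gwo_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In GWO-FC the same loop would wrap an FCNN training evaluation, which is exactly where the "avoid local optima" claim comes from: the wolves sample many weight configurations instead of following a single gradient path.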
- Published
- 2024
- Full Text
- View/download PDF
48. Digital Twin Prototypes for Supporting Automated Integration Testing of Smart Farming Applications †.
- Author
-
Barbie, Alexander, Hasselbring, Wilhelm, and Hansen, Malte
- Subjects
- *
DIGITAL twins , *AGRICULTURAL technology , *AGRICULTURE , *SOFTWARE engineering , *PROTOTYPES , *COMPUTER software development , *SOURCE code - Abstract
Industry 4.0 marks a major technological shift, revolutionizing manufacturing with increased efficiency, productivity, and sustainability. This transformation is paralleled in agriculture through smart farming, employing similar advanced technologies to enhance agricultural practices. Both fields demonstrate a symmetry in their technological approaches. Recent advancements in software engineering and the digital twin paradigm are addressing the challenge of creating embedded software systems for these technologies. Digital twins allow full development of software systems before physical prototypes are made, exemplifying a cost-effective method for Industry 4.0 software development. Our digital twin prototype approach mirrors software operations within a virtual environment, integrating all sensor interfaces to ensure accuracy between emulated and real hardware. In essence, the digital twin prototype acts as a prototype of its physical counterpart, effectively substituting it for automated testing of physical twin software. This paper discusses a case study applying this approach to smart farming, specifically enhancing silage production. We also provide a lab study for independent replication of this approach. The source code for a digital twin prototype of a PiCar-X by SunFounder is available open-source on GitHub, illustrating how digital twins can bridge the gap between virtual simulations and physical operations, highlighting the symmetry between physical and digital twins. [ABSTRACT FROM AUTHOR]
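The core mechanism described above, one sensor interface with interchangeable real and emulated backends so the control software cannot tell which it is talking to, can be sketched with a small abstraction. The class and function names below are hypothetical; the actual PiCar-X digital twin prototype lives in the open-source GitHub repository the abstract cites:

```python
from abc import ABC, abstractmethod

class DistanceSensor(ABC):
    """Interface shared by the physical twin and its digital twin prototype."""

    @abstractmethod
    def read_cm(self) -> float:
        """Return the measured distance in centimeters."""

class EmulatedDistanceSensor(DistanceSensor):
    """Digital-twin stand-in: replays scripted measurements for testing."""

    def __init__(self, scripted_readings):
        self._readings = iter(scripted_readings)

    def read_cm(self):
        return next(self._readings)

def obstacle_ahead(sensor: DistanceSensor, threshold_cm: float = 30.0) -> bool:
    """Control logic under test: identical against real or emulated hardware."""
    return sensor.read_cm() < threshold_cm

# Automated integration test driven entirely by the emulated sensor.
sensor = EmulatedDistanceSensor([120.0, 45.0, 12.5])
decisions = [obstacle_ahead(sensor) for _ in range(3)]
```

Because `obstacle_ahead` depends only on the interface, the same control code can later be wired to a real ultrasonic sensor without modification, which is what makes the digital twin prototype a valid substitute for the physical twin in automated tests.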
- Published
- 2024
- Full Text
- View/download PDF
49. Early career software developers and work preferences in software engineering.
- Author
-
Ahmad, Muhammad Ovais, Ahmad, Iftikhar, and Qayum, Fawad
- Abstract
Context: Software engineering researchers and practitioners have echoed the need for investigations to better understand the engineers who develop software and services. Current studies report significant associations between the personalities of software engineers and their work preferences; however, few studies use psychometric measurements in software engineering. Objective: We aim to evaluate the attitudes of early‐stage software engineers and investigate the link between their personalities and work preferences. Method: We collected extensive psychometric data from 303 graduate‐level students in Computer Science programs at four Pakistani and one Swedish university using the Five‐Factor Model. The statistical analysis investigated associations between various personality traits and work preferences. Results: The data support the existence of two clusters of software engineers, one of which is rated more highly across the board. Numerous correlations exist between personality traits and the types of work software developers prefer. For instance, those who exhibit greater levels of emotional stability, agreeableness, extroversion, and conscientiousness prefer working on technical activities on a set timetable. Similar relationships between personalities and occupational choices are also evident in earlier studies. More neuroticism is reported in female respondents than in male respondents. Higher intelligence was demonstrated by those who worked on the "entire development process" and "technical components of the project." Conclusion: When assigning project tasks to software engineers, managers might use the statistically significant relationships that emerged from the analysis of personality attributes. It would be beneficial to construct effective teams by taking personality factors such as extraversion and agreeableness into consideration.
The study techniques and analytical tools we use may identify subtle relationships and reflect distinctions across various groups and populations, making them valuable resources for both future academic research and industrial practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Systematic analysis of software development in cloud computing perceptions.
- Author
-
Khan, Habib Ullah, Ali, Farhad, and Nazir, Shah
- Abstract
Cloud computing is characterized as a shared computing and communication infrastructure. It encourages the efficient and effective development processes carried out in various organizations. Cloud computing offers both possibilities and solutions for outsourcing and managing software development operations across distinct geographies, and it is adopted by organizations and application developers for developing quality software. The cloud significantly helps manage the complexity involved in developing and designing quality software. Software development organizations prefer cloud computing for outsourcing tasks because of its available and scalable nature. Cloud computing is an ideal choice for developing modern software, as it provides a completely new way of building real‐time, cost‐effective, efficient, and quality software. Tenants (providers, developers, and consumers) are provided with platforms, software services, and infrastructure on a pay‐per‐use basis. Cloud‐based software services are becoming increasingly popular, as observed by their widespread use, and the cloud computing approach has drawn the interest of researchers and business because of its ability to provide a flexible and resourceful platform for development and deployment. To form a cohesive understanding of the analyzed problems and of solutions to improve software quality, the existing literature on cloud‐based software development should be analyzed and synthesized systematically. Keyword strings were formulated for finding relevant research articles in journals, book chapters, and conference papers, and research articles published between 2011 and 2021 in various scientific databases were extracted and analyzed to retrieve the relevant studies.
A total of 97 research publications are examined in this SLR and are evaluated to be appropriate studies in explaining and discussing the proposed topic. The major emphasis of the presented systematic literature review (SLR) is to identify the participating entities of cloud‐based software development, challenges associated with adopting cloud for software developmental processes, and its significance to software industries and developers. This SLR will assist organizations, designers, and developers to develop and deploy user‐friendly, efficient, effective, and real time software applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF