19 results for "Andreas Vogelsang"
Search Results
2. CATE: CAusality Tree Extractor from Natural Language Requirements
- Author
-
Julian Frattini, Andreas Vogelsang, Jannik Fischbach, and Noah Jadallah
- Subjects
FOS: Computer and information sciences, Computation and Language (cs.CL), Information Retrieval (cs.IR), Parsing, Binary tree, Computer science, Semantics, Causality, Tree (data structure), Tree structure, Artificial intelligence, Sentence, Natural language, Natural language processing - Abstract
Causal relations (If A, then B) are prevalent in requirements artifacts. Automatically extracting causal relations from requirements holds great potential for various RE activities (e.g., the automatic derivation of suitable test cases). However, we lack an approach capable of extracting causal relations from natural language with reasonable performance. In this paper, we present our tool CATE (CAusality Tree Extractor), which is able to parse the composition of a causal relation as a tree structure. CATE not only provides an overview of the causes and effects in a sentence but also reveals their semantic coherence by translating the causal relation into a binary tree. We encourage fellow researchers and practitioners to use CATE at https://causalitytreeextractor.com/
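The kind of tree CATE produces can be illustrated with a minimal sketch. This is an invented toy parser for one fixed sentence pattern, not the actual CATE implementation, which handles far more varied phrasings:

```python
# Toy illustration (not CATE itself): turn a simple "If ..., then ..."
# sentence into a binary tree of causes and the resulting effect.

def parse_causal(sentence: str):
    """Parse 'If A and B, then C' into a nested binary tree, or None."""
    s = sentence.strip().rstrip(".")
    if not s.lower().startswith("if ") or ", then " not in s:
        return None
    cond, effect = s[3:].split(", then ", 1)

    def split_conjunction(text: str):
        # Recursively split conjunctions into binary nodes.
        for conj in (" and ", " or "):
            if conj in text:
                left, right = text.split(conj, 1)
                return (conj.strip(), split_conjunction(left), split_conjunction(right))
        return text.strip()

    return ("causes", split_conjunction(cond), effect.strip())

tree = parse_causal("If the user is logged in and the cart is not empty, then checkout is enabled.")
```

A non-causal sentence simply yields `None`, mirroring the idea that only causal relations are extracted.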
- Published
- 2021
- Full Text
- View/download PDF
3. Transfer Learning for Mining Feature Requests and Bug Reports from Tweets and App Store Reviews
- Author
-
Jannik Fischbach, Andreas Vogelsang, Julian Frattini, Dominik Spies, and Pablo Restrepo Henao
- Subjects
FOS: Computer and information sciences, Computation and Language (cs.CL), Software Engineering (cs.SE), Automated mining, Information retrieval, Computer science, Deep learning, App store, Social media analytics, Software bug, Feature (machine learning), Artificial intelligence, Transfer of learning - Abstract
Identifying feature requests and bug reports in user comments holds great potential for development teams. However, automated mining of RE-related information from social media and app stores is challenging since (1) about 70% of user comments contain noisy, irrelevant information, (2) the amount of user comments grows daily, making manual analysis infeasible, and (3) user comments are written in different languages. Existing approaches build on traditional machine learning (ML) and deep learning (DL) but fail to detect feature requests and bug reports with the high Recall and acceptable Precision that are necessary for this task. In this paper, we investigate the potential of transfer learning (TL) for the classification of user comments. Specifically, we train both monolingual and multilingual BERT models and compare their performance with state-of-the-art methods. We found that monolingual BERT models outperform existing baseline methods in the classification of English App Reviews as well as English and Italian Tweets. However, we also observed that the application of heavyweight TL models does not necessarily lead to better performance. In fact, our multilingual BERT models perform worse than traditional ML methods.
- Published
- 2021
- Full Text
- View/download PDF
4. Welcome to the First International Workshop on Requirements Engineering for Explainable Systems (RE4ES)
- Author
-
Wasja Brunotte, Timo Speith, Larissa Chazette, Verena Klös, Eric Knauss, and Andreas Vogelsang
- Subjects
Engineering, Engineering management, Requirements engineering - Abstract
Welcome to the First International Workshop on Requirements Engineering for Explainable Systems (RE4ES), where we aim to advance requirements engineering (RE) for explainable systems, foster interdisciplinary exchange, and build a community. On the one hand, we believe that the methods and techniques of the RE community can add much value to explainability research. On the other hand, we have to ensure that we develop techniques fitted to the needs of other communities. This first workshop explores synergies between the RE community and other communities already researching explainability. To this end, we have based our agenda on a mix of paper presentations from authors in different domains, one keynote from industry and one from research, as well as interactive activities to stimulate lively discussions.
- Published
- 2021
- Full Text
- View/download PDF
5. Cases for Explainable Software Systems: Characteristics and Examples
- Author
-
Verena Klös, Andreas Vogelsang, and Mersedeh Sadeghi
- Subjects
Computer science, Taxonomy (general), Benchmark (computing), Requirements elicitation, Intelligent decision support system, Software system, Data science - Abstract
The need for systems to explain their behavior to users has become more evident with the rise of complex technologies such as machine learning and self-adaptation. In general, the need for an explanation arises when the behavior of a system does not match the user's expectations. However, there may be several reasons for a mismatch, including errors, goal conflicts, or multi-agent interference. Given these various situations, we need precise and agreed descriptions of explanation needs as well as benchmarks to align research on explainable systems. In this paper, we present a taxonomy that structures needs for an explanation according to these different reasons. We focus on explanations that improve the user's interaction with the system. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the requirements elicitation for the explanation capabilities of interactive intelligent systems, and our explanation cases build the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community to align future research on explainable systems.
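A taxonomy of this shape can be pictured as a small tree of mismatch reasons with scenario texts at the leaves. The reasons and scenario strings below are invented for illustration; they are not the paper's actual taxonomy:

```python
# Hypothetical mini-taxonomy: explanation needs keyed by the reason
# for the expectation mismatch, with a scenario text at each leaf.

taxonomy = {
    "mismatch": {
        "error": "The system behaved incorrectly and should explain the fault.",
        "goal_conflict": "Two system goals could not both be satisfied.",
        "multi_agent_interference": "Another agent changed the situation.",
    }
}

def explanation_case(reason):
    """Return the scenario text for a leaf node, or None if unknown."""
    return taxonomy["mismatch"].get(reason)

case = explanation_case("goal_conflict")
```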
- Published
- 2021
- Full Text
- View/download PDF
6. Destination Prediction Based on Partial Trajectory Data
- Author
-
Ibrahim Emre Göl, Christoph Lingenfelder, Patrick Ebel, and Andreas Vogelsang
- Subjects
FOS: Computer and information sciences, Machine Learning (cs.LG), Machine Learning (stat.ML), Artificial Intelligence (cs.AI), Computer science, Mean squared error, Contextual design, Artificial neural network, Recurrent neural network, Navigation system, Global Positioning System, Trajectory, Tree (data structure), Data mining - Abstract
Two-thirds of the people who buy a new car prefer to use a substitute instead of the built-in navigation system. However, for many applications, knowledge about a user's intended destination and route is crucial. For example, suggestions for available parking spots close to the destination can be made, or ride-sharing opportunities along the route can be facilitated. Our approach predicts probable destinations and routes of a vehicle based on the most recent partial trajectory and additional contextual data. The approach follows a three-step procedure: First, a $k$-d tree-based space discretization is performed, mapping GPS locations to discrete regions. Second, a recurrent neural network is trained to predict the destination based on partial sequences of trajectories. The neural network produces destination scores, signifying the probability of each region being the destination. Finally, the routes to the most probable destinations are calculated. To evaluate the method, we compare multiple neural architectures and present the experimental results of the destination prediction. The experiments are based on two public datasets of non-personalized, timestamped GPS locations of taxi trips. The best-performing models were able to predict the destination of a vehicle with a mean error of 1.3 km and 1.43 km, respectively. Comment: 2020 IEEE Intelligent Vehicles Symposium
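The first step, the k-d tree space discretization, can be sketched as follows. This is a simplified toy illustration with invented coordinates and thresholds, not the authors' implementation:

```python
# Sketch: recursively split 2-D GPS points into a k-d tree; each leaf
# of the tree becomes one discrete region.

def build_kdtree(points, depth=0, max_points=2):
    """Recursively split points into a k-d tree; leaves become regions."""
    if len(points) <= max_points:
        return {"leaf": True, "points": points}
    axis = depth % 2                      # alternate longitude / latitude
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"leaf": False, "axis": axis, "split": pts[mid][axis],
            "left": build_kdtree(pts[:mid], depth + 1, max_points),
            "right": build_kdtree(pts[mid:], depth + 1, max_points)}

def region_of(tree, point, path=""):
    """Map a GPS point to a region id (the path to the leaf it falls into)."""
    if tree["leaf"]:
        return path or "root"
    branch = "L" if point[tree["axis"]] < tree["split"] else "R"
    child = tree["left"] if branch == "L" else tree["right"]
    return region_of(child, point, path + branch)

pts = [(13.40, 52.52), (13.38, 52.51), (2.35, 48.86), (2.29, 48.87), (-0.13, 51.51)]
tree = build_kdtree(pts)
```

Nearby points map to the same region id, which is what lets the subsequent network predict regions rather than raw coordinates.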
- Published
- 2020
- Full Text
- View/download PDF
7. Trace Link Recovery using Semantic Relation Graphs and Spreading Activation
- Author
-
Andreas Vogelsang and Aaron Schlutter
- Subjects
Semantic Relation Graph, Spreading Activation, Traceability, Natural language requirements, Computer science, Graph, Lag, Predictive power, Data mining, Explicit knowledge, Natural Language, Semantic relation - Abstract
Trace Link Recovery tries to identify and link related existing requirements with each other to support further engineering tasks. Existing approaches are mainly based on algebraic Information Retrieval or machine learning. Machine-learning approaches usually demand reasonably large and labeled datasets for training. Algebraic Information Retrieval approaches, like the distance between tf-idf scores, also work on smaller datasets without training but are limited in providing explanations for trace links. In this work, we present a Trace Link Recovery approach that is based on an explicit representation of the content of requirements as a semantic relation graph and uses Spreading Activation to answer trace queries over this graph. Our approach is fully automated, including an NLP pipeline to transform unrestricted natural language requirements into a graph. We evaluate our approach on five common datasets. Depending on the selected configuration, the predictive power varies strongly. With the best tested configuration, the approach achieves a mean average precision of 40% and a Lag of 50%. Even though the predictive power of our approach does not outperform state-of-the-art approaches, we think that an explicit knowledge representation is an interesting artifact to explore in Trace Link Recovery approaches to generate explanations and refine results.
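The spreading-activation idea can be illustrated with a minimal sketch over a toy semantic relation graph. The graph contents, decay factor, and requirement names below are invented:

```python
# Sketch: propagate activation energy from a query requirement through
# a small graph; nodes that accumulate energy are trace-link candidates.

def spread(graph, start, decay=0.5, steps=3):
    """Propagate activation from `start` through the graph with decay."""
    activation = {node: 0.0 for node in graph}
    activation[start] = 1.0
    for _ in range(steps):
        nxt = dict(activation)
        for node, energy in activation.items():
            if energy <= 0 or not graph[node]:
                continue
            share = energy * decay / len(graph[node])  # split energy evenly
            for neigh in graph[node]:
                nxt[neigh] += share
        activation = nxt
    return activation

graph = {
    "REQ-1": ["user", "login"],
    "REQ-2": ["login", "password"],
    "user": ["REQ-1"],
    "login": ["REQ-1", "REQ-2"],
    "password": ["REQ-2"],
}
scores = spread(graph, "REQ-1")
```

Because "REQ-2" shares the concept "login" with "REQ-1", activation flows to it, suggesting a trace link that can be explained by the connecting path.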
- Published
- 2020
- Full Text
- View/download PDF
8. Towards Self-Explainable Cyber-Physical Systems
- Author
-
Joel Greenyer, Andreas Wortmann, Andreas Vogelsang, Christoph Sommer, Verena Klös, Francisco J. Chiyah Garcia, Mathias Blumreiter, and Maike Schwammberger
- Subjects
FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer science, Stakeholder, Cyber-physical system, Design methods, Data science - Abstract
With the increasing complexity of Cyber-Physical Systems, their behavior and decisions become increasingly difficult for users and other stakeholders to understand. Our vision is to build self-explainable systems that can, at run-time, answer questions about the system's past, current, and future behavior. As no design methodology or reference framework yet exists for building such systems, we propose the Monitor, Analyze, Build, Explain (MAB-EX) framework for building self-explainable systems that leverage requirements and explainability models at run-time. The basic idea of MAB-EX is to first Monitor and Analyze a certain behavior of a system, then Build an explanation from explanation models, and convey this EXplanation in a suitable way to a stakeholder. We also take into account that new explanations can be learned, by updating the explanation models, should new and yet unexplainable behavior be detected by the system.
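One pass through the MAB-EX loop can be sketched schematically. The event format, behavior names, and explanation texts below are invented placeholders, not the authors' framework:

```python
# Schematic sketch of one Monitor-Analyze-Build-EXplain pass, including
# the learning case for yet-unexplainable behavior.

def mab_ex_step(event, explanation_models):
    """Monitor an event, analyze it, build and return an explanation;
    register a placeholder model when the behavior is still unexplainable."""
    observed = event                                  # Monitor
    behavior = observed.get("behavior")               # Analyze
    if behavior not in explanation_models:            # learn new model
        explanation_models[behavior] = f"no explanation yet for '{behavior}'"
    return explanation_models[behavior]               # Build + EXplain

models = {"emergency_brake": "An obstacle was detected ahead."}
first = mab_ex_step({"behavior": "emergency_brake"}, models)
second = mab_ex_step({"behavior": "lane_change"}, models)
```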
- Published
- 2019
- Full Text
- View/download PDF
9. Automated Generation of Test Models from Semi-Structured Requirements
- Author
-
Dietmar Freudenstein, Maximilian Junker, Jannik Fischbach, and Andreas Vogelsang
- Subjects
FOS: Computer and information sciences, Machine Learning (cs.LG), Machine Learning (stat.ML), Software Engineering (cs.SE), Information Retrieval (cs.IR), Model-based testing, Business rule, Computer science, Programming language, Test case, Natural language - Abstract
[Context:] Model-based testing is an instrument for the automated generation of test cases. It requires identifying requirements in documents, understanding them syntactically and semantically, and then translating them into a test model. One lightweight language for these test models is the Cause-Effect-Graph (CEG), which can be used to derive test cases. [Problem:] The creation of test models is laborious, and we lack an automated solution that covers the entire process from requirement detection to test model creation. In addition, the majority of requirements are expressed in natural language (NL), which is hard to translate into test models automatically. [Principal Idea:] We build on the fact that not all NL requirements are equally unstructured. We found that 14 % of the lines in requirements documents of our industry partner contain "pseudo-code"-like descriptions of business rules. We apply Machine Learning to identify such semi-structured requirements descriptions and propose a rule-based approach for their translation into CEGs. [Contribution:] We make three contributions: (1) an algorithm for the automatic detection of semi-structured requirements descriptions in documents, (2) an algorithm for the automatic translation of the identified requirements into a CEG, and (3) a study demonstrating that our proposed solution leads to 86 % time savings for test model creation without loss of quality.
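The step from a CEG to test cases can be illustrated with a minimal sketch. The business rule and the exhaustive enumeration strategy are invented for illustration; the paper's rule-based translation is more elaborate:

```python
# Sketch: derive test cases from a tiny Cause-Effect-Graph in which
# boolean causes are combined by a single AND or OR into one effect.

from itertools import product

def derive_test_cases(causes, combinator):
    """Enumerate cause assignments and the expected effect value."""
    cases = []
    for values in product([False, True], repeat=len(causes)):
        assignment = dict(zip(causes, values))
        expected = all(values) if combinator == "AND" else any(values)
        cases.append((assignment, expected))
    return cases

# Hypothetical business rule: "IF age >= 18 AND has_licence THEN may_drive"
cases = derive_test_cases(["age >= 18", "has_licence"], "AND")
```

For realistic CEGs one would prune this enumeration (classic cause-effect-graphing heuristics), since the number of assignments grows exponentially with the number of causes.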
- Published
- 2019
- Full Text
- View/download PDF
10. Microservice Architectures for Advanced Driver Assistance Systems: A Case-Study
- Author
-
Andreas Vogelsang, Christian Berger, Ola Benderius, and Jannik Lotz
- Subjects
FOS: Computer and information sciences, Software Engineering (cs.SE), Service (systems architecture), Computer science, Automotive industry, Advanced driver assistance systems, Software, Software system, Software architecture, Software engineering, Agile software development, Automotive software - Abstract
The technological advancements of recent years have steadily increased the complexity of vehicle-internal software systems, and the ongoing development towards autonomous driving will further aggravate this situation. This is leading to a level of complexity that is pushing the limits of existing vehicle software architectures and system designs. By changing the software structure to a service-based architecture, companies in other domains successfully managed the rising complexity and created a more agile and future-oriented development process. This paper presents a case study investigating the feasibility and possible effects of changing the software architecture of a complex driver assistance function to a microservice architecture. The complete procedure is described, starting with the description of the software environment and the corresponding requirements, followed by the implementation and the final testing. In addition, this paper provides a high-level evaluation of the microservice architecture for the automotive use case. The results show that microservice architectures can reduce complexity and time-consuming process steps and prepare automotive software systems for upcoming challenges, as long as the principles of microservice architectures are carefully followed.
- Published
- 2019
- Full Text
- View/download PDF
11. Automatic Glossary Term Extraction from Large-Scale Requirements Specifications
- Author
-
Kerstin Hartig, Andreas Vogelsang, Tim Gemkow, and Miro Conzelmann
- Subjects
Glossary, Glossary term extraction, Deep linguistic processing, Natural language processing, Requirements engineering, Crowd RE, Computer science, Ground truth, Artificial intelligence - Abstract
Creating glossaries for large corpora of requirements is an important but expensive task. Glossary term extraction methods often focus on achieving a high recall rate and, therefore, favor linguistic processing for extracting glossary term candidates while neglecting the benefits of reducing the number of candidates with statistical filter methods. However, especially for large datasets, a reduction of the likewise large number of candidates may be crucial. This paper demonstrates how to automatically extract relevant domain-specific glossary term candidates from a large body of requirements, the CrowdRE dataset. Our hybrid approach combines linguistic processing and statistical filtering for extracting and reducing glossary term candidates. In a twofold evaluation, we examine the impact of our approach on the quality and quantity of the extracted terms. We provide a ground truth for a subset of the requirements and show that a substantial degree of recall can be achieved. Furthermore, we advocate requirements coverage as an additional quality metric to assess the term reduction that results from our statistical filters. The results indicate that with a careful combination of linguistic and statistical extraction methods, a fair balance between later manual effort and a high recall rate can be achieved.
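The hybrid extract-then-filter idea can be sketched as follows. The naive bigram extraction here merely stands in for real linguistic processing, and the requirements and frequency threshold are invented:

```python
# Sketch: collect term candidates "linguistically" (naive word and
# adjacent-bigram extraction), then reduce them with a statistical
# frequency filter.

from collections import Counter
import re

def candidate_terms(requirements):
    """Count unigram and bigram candidates in lower-cased requirement text."""
    counts = Counter()
    for req in requirements:
        words = re.findall(r"[a-z]+", req.lower())
        counts.update(words)
        counts.update(" ".join(pair) for pair in zip(words, words[1:]))
    return counts

def glossary_candidates(requirements, min_freq=2):
    """Statistical filter: keep candidates occurring at least min_freq times."""
    return {t for t, c in candidate_terms(requirements).items() if c >= min_freq}

reqs = [
    "The smart home shall dim the lights at night.",
    "The smart home shall lock the door at night.",
]
terms = glossary_candidates(reqs)
```

Recurring domain terms such as "smart home" survive the filter, while one-off words are pruned, which is the candidate-reduction effect the paper measures via requirements coverage.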
- Published
- 2018
- Full Text
- View/download PDF
12. Welcome from the Organizers
- Author
-
Pradeep K. Murukannaiah, Andreas Vogelsang, Rachel Harrison, and Eduard C. Groen
- Subjects
Engineering, Engineering management, Requirements engineering - Abstract
We would like to welcome you to the 5th International Workshop on Artificial Intelligence for Requirements Engineering (AIRE'18). This interdisciplinary workshop is intended to explore and extend the synergies between Artificial Intelligence and Requirements Engineering. Our objective is to discover Requirements Engineering areas that may benefit from the application of AI tools and techniques. We intend to inspire a new and broad community for interdisciplinary discussions concerning novel research directions for Requirements Engineering and Artificial Intelligence.
- Published
- 2018
- Full Text
- View/download PDF
13. Preface to AIRE 2017
- Author
-
Henning Femmer and Andreas Vogelsang
- Published
- 2017
- Full Text
- View/download PDF
14. Message from the WASA 2017 Organizing Committee
- Author
-
Miroslaw Staron, Andreas Vogelsang, Yaping Luo, Harald Altinger, and Yanja Dajsuren
- Subjects
Engineering, Social software engineering, Software, Systems engineering, Software system, View model, Software architecture, Software engineering - Published
- 2017
- Full Text
- View/download PDF
15. Characterizing Implicit Communal Components as Technical Debt in Automotive Software Systems
- Author
-
Maximilian Junker, Andreas Vogelsang, and Henning Femmer
- Subjects
Engineering, Empirical research, Code refactoring, Technical debt, Component (UML), Software system, Layer (object-oriented design), Empirical evidence, Software engineering, Automotive software - Abstract
Automotive software systems are often characterized by a set of features that are implemented through a network of communicating components. It is common practice to implement or adapt features by an ad hoc (re)use of signals that originate from components of another feature. Thereby, over time, some components become so-called implicit communal components. These components increase the necessary effort for several development activities because they introduce feature dependencies. Refactoring implicit communal components reduces this effort but also costs refactoring effort. In this paper, we provide empirical evidence that implicit communal components exist in industrial automotive systems. For two cases, we show that less than 10% of the components are responsible for more than 90% of the feature dependencies. Secondly, we propose a refactoring approach for implicit communal components, which makes them explicit by moving them to a dedicated platform component layer. Finally, we characterize implicit communal components as technical debt, which is a metaphor for suboptimal solutions having short-term benefits but causing a long-term negative impact. With this metaphor, we describe the trade-off between accepting the negative effects of implicit communal components and spending the necessary refactoring costs.
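The reported concentration (less than 10% of components responsible for more than 90% of feature dependencies) corresponds to a simple measurement that can be sketched as follows; the component data are invented:

```python
# Sketch: what fraction of all feature dependencies is caused by the
# top `component_share` of components, ranked by dependency count?

def dependency_concentration(dep_counts, component_share=0.1):
    """Fraction of feature dependencies attributable to the top components."""
    ranked = sorted(dep_counts.values(), reverse=True)
    top_n = max(1, int(len(ranked) * component_share))
    total = sum(ranked)
    return sum(ranked[:top_n]) / total if total else 0.0

# Ten hypothetical components; one "communal" component dominates.
deps = {f"C{i}": 1 for i in range(1, 10)}
deps["C10"] = 91
share = dependency_concentration(deps, 0.1)
```

Here the single most-connected component accounts for 91% of all dependencies, the kind of skew the paper reports for its two industrial cases.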
- Published
- 2016
- Full Text
- View/download PDF
16. Systematic elicitation of mode models for multifunctional systems
- Author
-
Henning Femmer, Andreas Vogelsang, and Christian Winkler
- Subjects
Engineering, Requirements engineering, Requirement, Problem domain, Systems engineering, Software requirements specification, Requirements analysis, Feature-oriented domain analysis, Distributed computing - Abstract
Many requirements engineering approaches structure and specify requirements based on the notion of modes or system states. The set of all modes is usually considered as the mode model of a system or problem domain.
- Published
- 2015
- Full Text
- View/download PDF
17. A model-based approach to innovation management of automotive control systems
- Author
-
Andreas Vogelsang, Steffen Fuhrmann, and Mario Gleirscher
- Subjects
Engineering, Engineering management, Automotive control systems, Automotive industry, Innovation management, Systems engineering, Platform development, Technology management, Domain (software engineering) - Abstract
Innovation management of automotive control systems is a challenging issue, not least because of the need to handle short iterative life cycles and families of these systems. After collecting experience in control engineering from a three-year collaboration in the automotive domain, we conclude that innovation management is only weakly aligned with feature and platform development. This situation makes it difficult to assure the feasibility of innovations and to identify potentials for innovations. We present a novel approach to integrating innovation management with requirements and technology management by using behavioral system models. We investigate requirements-based and technology-based innovation. To the best of our knowledge, this is the first approach to using such models for managing feature and platform innovations in an automotive company. We discuss an example to illustrate how our approach can be applied.
- Published
- 2014
- Full Text
- View/download PDF
18. Local deposition of plasma-polymerized films at atmospheric pressure
- Author
-
Ansgar Schmidt-Bleker, Joern Winter, Martin Polak, Andreas Vogelsang, K.-D. Weltmann, Antje Quade, Stephan Reuter, and Katja Fricke
- Subjects
Carbon film, Materials science, Atmospheric pressure, Chemical engineering, Plasma-enhanced chemical vapor deposition, Deposition (phase transition), Atmospheric-pressure plasma, Combustion chemical vapor deposition, Thin film, Plasma processing - Abstract
Recently reported progress regarding thin-film deposition under atmospheric-pressure conditions has led to increased interest in its application in optics, semiconductor production, and the automotive and medical industries. Therefore, extensive research has been performed on the development of atmospheric-pressure plasma sources for thin-film deposition. Miniaturized non-thermal atmospheric-pressure plasma jets represent a suitable tool for local surface coating and thus for the preparation of chemical micro-patterns. Consequently, investigations into the feasibility of plasma jets in surface engineering for customer-specific requirements are of interest. So far, two atmospheric-pressure plasma jets with different geometries have been developed that can be used for these purposes [1-2]. In these set-ups, the supply of the precursor can be realized in different ways: (I) the mixture of carrier gas and precursor is introduced into the main flow downstream of the active discharge, or (II) a cap is used that was built to control and tailor the gas curtain that can diffuse into the effluent of the jet [2-3]. In the present paper, results are given of an experimental study on plasma-enhanced chemical vapor deposition under atmospheric-pressure conditions. Emphasis is placed on depositing films that exhibit either hydrophilic (e.g., nitrogen-rich coatings) or hydrophobic (e.g., Teflon-like coatings) surface properties. The chemical structure of these films, measured by X-ray photoelectron spectroscopy, as well as their wettability, will be shown and discussed. Deposition rates have been determined by weighing. By controlling the deposition conditions, film growth rates of 6-43 nm s-1 have been obtained for fluorine-rich films, for example.
- Published
- 2013
- Full Text
- View/download PDF
19. 5th International Workshop on Artificial Intelligence for Requirements Engineering, AIRE@RE 2018, Banff, AB, Canada, August 21, 2018
- Author
-
Eduard C. Groen, Rachel Harrison, Pradeep K. Murukannaiah, and Andreas Vogelsang
- Published
- 2018