34 results for "Liming Zhu"
Search Results
2. Prompt-tuned Code Language Model as a Neural Knowledge Base for Type Inference in Statically-Typed Partial Code
- Author
-
Qing Huang, Zhiqiang Yuan, Zhenchang Xing, Xiwei Xu, Liming Zhu, and Qinghua Lu
- Subjects
Software Engineering (cs.SE), FOS: Computer and information sciences - Abstract
Partial code usually involves non-fully-qualified type names (non-FQNs) and undeclared receiving objects. Resolving the FQNs of these non-FQN types and undeclared receiving objects (referred to as type inference) is a prerequisite to effective search and reuse of partial code. Existing dictionary-lookup based methods build a symbolic knowledge base of API names and code contexts, which involves significant compilation overhead and is sensitive to unseen API names and code context variations. In this paper, we formulate type inference as a cloze-style fill-in-blank language task. Built on source code naturalness, our approach fine-tunes a code masked language model (MLM) as a neural knowledge base of code elements with a novel "pre-train, prompt and predict" paradigm from raw source code. Our approach is lightweight and has minimal requirements on code compilation. Unlike existing symbolic name and context matching for type inference, our prompt-tuned code MLM packs FQN syntax and usage in its parameters and supports fuzzy neural type inference. We systematically evaluate our approach on a large amount of source code from GitHub and Stack Overflow. Our results confirm the effectiveness of our approach design and its practicality for partial code type inference. As the first of its kind, our neural type inference method opens the door to many innovative ways of using partial code. Accepted by ASE 2022.
- Published
- 2022
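The core reformulation in this abstract can be illustrated with a minimal sketch: a non-FQN usage is rewritten as a fill-in-blank prompt, and a knowledge base ranks candidate FQNs for the simple name. The prompt format, the toy knowledge base, and the frequency-based ranking below are illustrative assumptions, not the paper's actual prompt template or model.

```python
# Sketch of framing type inference as a cloze-style fill-in-blank task.
# The prompt format and candidate ranking are illustrative assumptions.

def build_cloze_prompt(partial_code: str, simple_name: str) -> str:
    """Rewrite a non-FQN usage as a fill-in-blank prompt for a code MLM."""
    return f"{partial_code}\n// {simple_name} is the simple name of [MASK].{simple_name}"

def rank_fqn_candidates(simple_name: str, knowledge: dict) -> list:
    """Stand-in for the neural knowledge base: rank known FQNs whose last
    segment matches the simple name (a symbolic approximation of the MLM)."""
    matches = [fqn for fqn in knowledge if fqn.split(".")[-1] == simple_name]
    return sorted(matches, key=knowledge.get, reverse=True)

# Toy "knowledge base" of FQNs with observed usage frequencies.
kb = {"java.util.List": 120, "java.awt.List": 7, "java.io.File": 80}

prompt = build_cloze_prompt("List items = getItems();", "List")
candidates = rank_fqn_candidates("List", kb)
print(candidates[0])  # most likely FQN for the simple name "List"
```

In the actual approach, the masked position would be filled by the prompt-tuned code MLM rather than a frequency lookup, which is what enables fuzzy inference for unseen contexts.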
3. KGAMD: an API-misuse detector driven by fine-grained API-constraint knowledge graph
- Author
-
Xiaoxue Ren, Xiwei Xu, Liming Zhu, Xinyuan Ye, Jianling Sun, Xin Xia, and Zhenchang Xing
- Subjects
Application programming interface, Computer science, Programming language, Exception handling, Software development, Information extraction, Documentation, Debugging, Codebase - Abstract
Application Programming Interfaces (APIs) typically come with usage constraints. The violations of these constraints (i.e., API misuses) can cause significant problems in software development. Existing methods mine frequent API usage patterns from a codebase to detect API misuses. They make a naive assumption that API usage that deviates from the most-frequent API usage is a misuse. However, there is a big knowledge gap between API usage patterns and API usage constraints in terms of comprehensiveness, explainability and best practices. Inspired by this, we propose a novel approach named KGAMD (API-Misuse Detector Driven by Fine-Grained API-Constraint Knowledge Graph) that detects API misuses directly against the API constraint knowledge, rather than API usage patterns. We first construct a novel API-constraint knowledge graph from API reference documentation with open information extraction methods. This knowledge graph explicitly models two types of API-constraint relations (call-order and condition-checking) and enriches return and throw relations with return conditions and exception triggers. Then, we develop the KGAMD tool that utilizes the knowledge graph to detect API misuses. There are three types of frequent API misuses we can detect - missing calls, missing condition checking and missing exception handling - while existing detectors mostly focus on only missing calls. Our quantitative evaluation and user study demonstrate that our KGAMD is promising in helping developers avoid and debug API misuses. Demo video: https://www.youtube.com/watch?v=TN4LtHJ-494. IntelliJ plug-in: https://github.com/goodchar/KGAMD
- Published
- 2021
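The detection idea in this abstract (checking client code against constraint knowledge rather than mined patterns) can be sketched with a toy constraint graph. The relation names and the two example constraints below are illustrative, not taken from the paper's actual knowledge graph.

```python
# Toy sketch of API-misuse detection driven by an API-constraint
# knowledge graph: constraints come from documentation knowledge,
# not from frequent usage patterns. All triples are illustrative.

# Knowledge graph as (api, relation, required-api) triples.
kg = [
    ("Iterator.next", "condition-checking", "Iterator.hasNext"),
    ("FileChannel.close", "call-order", "FileChannel.open"),
]

def detect_misuses(called_apis):
    """Report constraint violations in a set of API calls made by client code."""
    misuses = []
    for api, relation, required in kg:
        if api in called_apis and required not in called_apis:
            kind = ("missing condition checking"
                    if relation == "condition-checking" else "missing call")
            misuses.append((api, kind, required))
    return misuses

# Client code calls next() without checking hasNext() first.
print(detect_misuses(["Iterator.next"]))
```

Because every report is backed by a specific constraint relation, this style of detector can explain *why* a usage is flagged, which is the gap the abstract identifies in pattern-based detectors.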
4. Visual analytics for large networks
- Author
-
Rowan T. Hughes, Dawei Chen, Daniel Filonik, Liming Zhu, Alex Mathews, and Tomasz Bednarz
- Subjects
Visual analytics, Large networks, Computer science, Data science - Published
- 2021
5. Analysing and extending privacy patterns with architectural context
- Author
-
Xiwei Xu, Su Yen Chia, Hye-Young Paik, and Liming Zhu
- Subjects
Computer science, Internet privacy, Software quality, Software, Architectural pattern, Systems design, Software design - Abstract
Privacy is now an increasingly important software quality. Software architects and developers should consider privacy from the early stages of system design to prevent privacy breaches. Both industry and academia have proposed privacy patterns as reusable design solutions to address common privacy problems. However, from the system development perspective, the existing privacy patterns do not provide architectural context to assist software design for privacy. More specifically, the current privacy patterns lack proper analysis with regard to privacy properties - the well-established software traits relating to privacy (e.g., unlinkability, identifiability). Furthermore, the impacts of privacy patterns on other quality attributes such as performance are yet to be investigated. Our paper aims to provide guidance to software architects and developers for considering privacy patterns, by adding new perspectives to the existing privacy patterns. First, we provide a new structural and interaction view of the patterns by relating them to privacy regulation contexts. Then, we analyse the patterns in architectural contexts and map available privacy-preserving techniques for implementing each privacy pattern. We also give an analysis of privacy patterns with regard to their impact on privacy properties, and the trade-off between privacy and other quality attributes.
- Published
- 2021
6. API-misuse detection driven by fine-grained API-constraint knowledge graph
- Author
-
Zhenchang Xing, Xiaoxue Ren, Xin Xia, Liming Zhu, Xinyuan Ye, Jianling Sun, and Xiwei Xu
- Subjects
Java, Computer science, Programming language, Best practice, Exception handling, Knowledge engineering, Software development, Misuse detection, Information extraction, Documentation, Software, Debugging, Codebase - Abstract
API misuses cause significant problems in software development. Existing methods detect API misuses against frequent API usage patterns mined from a codebase. They make a naive assumption that API usage that deviates from the most-frequent API usage is a misuse. However, there is a big knowledge gap between API usage patterns and API usage caveats in terms of comprehensiveness, explainability and best practices. In this work, we propose a novel approach that detects API misuses directly against the API caveat knowledge, rather than API usage patterns. We develop open information extraction methods to construct a novel API-constraint knowledge graph from API reference documentation. This knowledge graph explicitly models two types of API-constraint relations (call-order and condition-checking) and enriches return and throw relations with return conditions and exception triggers. It empowers the detection of three types of frequent API misuses - missing calls, missing condition checking and missing exception handling - while existing detectors mostly focus on only missing calls. As a proof-of-concept, we apply our approach to the Java SDK API Specification. Our evaluation confirms the high accuracy of the extracted API-constraint relations. Our knowledge-driven API misuse detector achieves 0.60 (68/113) precision and 0.28 (68/239) recall for detecting Java API misuses in the API misuse benchmark MuBench. This performance is significantly higher than that of existing pattern-based API misuse detectors. A pilot user study with 12 developers shows that our knowledge-driven API misuse detection is very promising in helping developers avoid API misuses and debug the bugs caused by API misuses.
- Published
- 2020
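The reported figures can be sanity-checked directly from the counts the abstract gives (68 true detections out of 113 reports, against 239 benchmark misuses):

```python
# Worked check of the precision/recall figures quoted in the abstract.
tp, detected, actual = 68, 113, 239
precision = tp / detected   # fraction of reported misuses that are real
recall = tp / actual        # fraction of benchmark misuses that are found
print(round(precision, 2), round(recall, 2))  # 0.6 0.28
```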
7. eXplainable AI (XAI)
- Author
-
Liming Zhu, Cameron Edmond, Tomasz Bednarz, Mashhuda Glencross, Lindsay Wells, and Rowan T. Hughes
- Subjects
Cognitive science, Computer science, Soul, Creativity - Abstract
• Do Machine Learning algorithms have a Soul? • Could they understand everyday reality as we Humans do? • What are the consequences of their Creativity? • Can they help us understand the world better?
- Published
- 2020
8. Seenomaly
- Author
-
Xiwei Xu, Chunyang Chen, Guoqiang Li, Dehai Zhao, Jinshui Wang, Zhenchang Xing, and Liming Zhu
- Subjects
Computer science, Static program analysis, Usability, Animation, Graphical user interface testing, User experience design, Human-computer interaction, Component (UML), Unsupervised learning - Abstract
GUI animations, such as card movement, menu slide in/out, and snackbar display, provide an appealing user experience and enhance the usability of mobile applications. These GUI animations should not violate the platform's UI design guidelines (referred to as design-don't guidelines in this work) regarding component motion and interaction, content appearing and disappearing, and elevation and shadow changes. However, none of the existing static code analysis, functional GUI testing and GUI image comparison techniques can "see" the GUI animations on the screen, and thus they cannot support the linting of GUI animations against design-don't guidelines. In this work, we formulate this GUI animation linting problem as a multi-class screencast classification task, but we do not have sufficient labeled GUI animations to train the classifier. Instead, we propose an unsupervised, computer-vision based adversarial autoencoder to solve this linting problem. Our autoencoder learns to group similar GUI animations by "seeing" lots of unlabeled real-application GUI animations and learning to generate them. As the first work of its kind, we build datasets of synthetic and real-world GUI animations. Through experiments on these datasets, we systematically investigate the learning capability of our model and its effectiveness and practicality for linting GUI animations, and identify the challenges in this linting problem for future work.
- Published
- 2020
9. A RESTful architecture for data exploration as a service
- Author
-
Suhrid Satyal, Liming Zhu, Yun Zhang, Xiwei Xu, and Shiping Chen
- Subjects
Service (systems architecture), Data exploration, Computer science, Service-oriented architecture, Data science, Analytics, Data as a service, Architecture - Abstract
The data analysis process typically starts in an exploration phase, where the goal is to gain an understanding of the underlying data. In this phase, analysts make multiple queries and expect answers from the data services. Existing data services do not meet the needs for online data exploration in practice. Some services only let data analysts pull the whole dataset before analysis. Others allow analysts to make one-off queries and do not provide any guidance for exploring data. In this paper, we address these limitations of data services by proposing the Data Exploration as a Service (DEaaS) approach. Our RESTful service architecture and resource design provide explicit support for interactive data exploration. In addition, we use historical query information and predefined analytics semantics based on a multidimensional data model to recommend resources to analysts and guide them through the exploration process. We evaluate DEaaS using data exploration processes on both synthetic and real-life datasets. The experimental results show that our solution adapts to different data sources and that the proposed resource navigation approach can make DEaaS outperform existing data services in data exploration.
- Published
- 2019
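The resource-navigation idea in this abstract can be sketched in a few lines: each query answer is a resource that carries links recommending follow-up exploration, derived here from query history. The URI scheme and the recommendation rule are assumptions for illustration, not DEaaS's actual design.

```python
# Minimal sketch of RESTful data exploration with history-driven
# resource recommendations. URIs and the toy rule are illustrative.

def explore(dataset: str, dimension: str, history: list) -> dict:
    """Answer a dimension query and recommend next resources from history."""
    history.append(dimension)
    # Toy recommendation rule: suggest dimensions explored earlier in
    # this session, other than the one just queried.
    recommended = sorted(set(history) - {dimension})
    return {
        "self": f"/datasets/{dataset}/dimensions/{dimension}",
        "links": [f"/datasets/{dataset}/dimensions/{d}" for d in recommended],
    }

hist = []
explore("sales", "region", hist)
result = explore("sales", "month", hist)
print(result["links"])  # guidance back to previously explored dimensions
```

The point of the sketch is the hypermedia shape: instead of one-off queries, each response guides the analyst to the next exploration step.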
10. A Pattern Collection for Blockchain-based Applications
- Author
-
Liming Zhu, Ingo Weber, Xiwei Xu, Cesare Pautasso, and Qinghua Lu
- Subjects
Blockchain, Smart contract, Computer science, Data management, Software, Software design pattern, Software system, Software engineering, Software architecture - Abstract
Blockchain is an emerging technology that enables new forms of decentralized software architectures, where distributed components can reach agreements on shared system states without trusting a central integration point. Blockchain provides a shared infrastructure to execute programs, called smart contracts, and to store data. Since blockchain technologies are at an early stage, there is a lack of a systematic and holistic view on designing software systems that use blockchain. We view blockchain as part of a bigger system, which requires patterns for using blockchain in the design of its software architecture. In this paper, we collect a list of patterns for blockchain-based applications. The pattern collection is categorized into four types: interaction with the external world patterns, data management patterns, security patterns and contract structural patterns. Some patterns are designed considering the nature of blockchain and how it can be specifically introduced within real-world applications. Others are variants of existing design patterns applied in the context of blockchain-based applications and smart contracts.
- Published
- 2018
11. Adopting Continuous Delivery and Deployment
- Author
-
Mansooreh Zahedi, Muhammad Ali Babar, Mojtaba Shahin, and Liming Zhu
- Subjects
Engineering, Knowledge management, Continuous delivery, Collaboration, Continuous Delivery and Deployment, Development and Operation Teams, Empirical research, Empirical Software Engineering, Software deployment, Facilitator - Abstract
Context: Continuous Delivery and Deployment (CD) practices aim to deliver software features more frequently and reliably. While some efforts have been made to study different aspects of CD practices, little empirical work has been reported on the impact of CD on team structures, collaboration and team members' responsibilities. Goal: Our goal is to empirically investigate how Development (Dev) and Operations (Ops) teams are organized in the software industry for adopting CD practices. Furthermore, we explore the potential impact of practicing CD on collaboration and team members' responsibilities. Method: We conducted a mixed-method empirical study, which collected data from 21 in-depth, semi-structured interviews in 19 organizations and a survey with 93 software practitioners. Results: There are four common types of team structures (i.e., (1) separate Dev and Ops teams with higher collaboration; (2) separate Dev and Ops teams with facilitator(s) in the middle; (3) small Ops team with more responsibilities for the Dev team; (4) no visible Ops team) for organizing Dev and Ops teams to effectively initiate and adopt CD practices. Our study also provides insights into how software organizations actually improve collaboration among teams and team members for practicing CD. Furthermore, we highlight new responsibilities and skills (e.g., monitoring and logging skills) that are needed in this regard.
- Published
- 2017
12. The Intersection of Continuous Deployment and Architecting Process
- Author
-
Muhammad Ali Babar, Liming Zhu, and Mojtaba Shahin
- Subjects
Engineering, Software development, Software, Software deployment, Reference architecture, DevOps, Software engineering, Software architecture, Reusability - Abstract
Context: Development and Operations (DevOps) is an emerging software industry movement to bridge the gap between software development and operations teams. DevOps supports frequently and reliably releasing new features and products, thus subsuming the Continuous Deployment (CD) practice. Goal: This research aims at empirically exploring the potential impact of CD practice on the architecting process. Method: We carried out a case study involving interviews with 16 software practitioners. Results: We have identified (1) a range of recurring architectural challenges (i.e., highly coupled monolithic architecture, team dependencies, and ever-changing operational environments and tools) and (2) five main architectural principles (i.e., small and independent deployment units, not too much focus on reusability, aggregating logs, isolating changes, and testability inside the architecture) that should be considered when an application is (re-)architected for CD practice. This study also supports that software architecture can better support operations if an operations team is engaged at an early stage of software development to take operational aspects into consideration. Conclusion: These findings provide evidence that software architecture plays a significant role in successfully and efficiently adopting continuous deployment. The findings contribute to establishing an evidential body of knowledge about the state of the art of architecting for CD practice.
- Published
- 2016
13. Continuous validation for data analytics systems
- Author
-
Mark Staples, John Grundy, and Liming Zhu
- Subjects
Computer science, Data stream mining, Corporate governance, Stakeholder, Data science, Data modeling, Domain (software engineering), Software analytics, Software, Analytics, Software verification and validation, DevOps, Risk management - Abstract
From a future history of 2025: Continuous development is common for build/test (continuous integration) and operations (devOps). This trend continues through the lifecycle, into what we call 'devUsage': continuous usage validation. In addition to ensuring systems meet user needs, organisations continuously validate their legal and ethical use. The rise of end-user programming and multi-sided platforms exacerbates validation challenges. A separate trend is the specialisation of software engineering for technical domains, including data analytics. This domain has specific validation challenges. We must validate the accuracy of statistical models, but also whether they have illegal or unethical biases. Usage needs addressed by machine learning are sometimes not specifiable in the traditional sense, and statistical models are often 'black boxes'. We describe future research to investigate solutions to these devUsage challenges for data analytics systems. We will adapt risk management and governance frameworks previously used for software product qualities, use social network communities for input from aligned stakeholder groups, and perform cross-validation using autonomic experimentation, cyber-physical data streams, and online discursive feedback.
- Published
- 2016
14. Making Real Time Data Analytics Available as a Service
- Author
-
Dongyao Wu, Len Bass, Donna Xu, Liming Zhu, and Xiwei Xu
- Subjects
Service (systems architecture), Data model, Analytics, Computer science, Big data, Data as a service, Data architecture, Software architecture, Data science, Data modeling - Abstract
Conducting (big) data analytics in an organization is not just about using a processing framework (e.g. Hadoop/Spark) to learn a model from data currently in a single file system (e.g. HDFS). We frequently need to pipeline real-time data from other systems into the processing framework, and continually update the learned model. The processing frameworks need to be easily invokable for different purposes to produce different models. The model and the subsequent model updates need to be integrated with a product that may require a real-time prediction using the latest trained model. All these need to be shared among different teams in the organization for different data analytics purposes. In this paper, we propose a real-time data-analytics-as-a-service architecture that uses RESTful web services to wrap and integrate data services, dynamic model training services (supported by a big data processing framework), prediction services and the product that uses the models. We discuss the challenges in wrapping big data processing frameworks as services and other architecturally significant factors that affect system reliability, real-time performance and prediction accuracy. We evaluate our architecture using a log-driven system operation anomaly detection system where staleness of data used in model training, speed of model update and prediction are critical requirements.
- Published
- 2015
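The architecture's core concern (prediction always uses the latest trained model, while training runs continually, and staleness is measurable) can be sketched with a shared model registry. The class and service names are illustrative stand-ins, not the paper's components; the "model" here is just a running mean.

```python
# Sketch of the train/publish/predict split behind a real-time
# data-analytics-as-a-service architecture. Names are illustrative.

import time

class ModelRegistry:
    """Shared store so the prediction service always sees the newest model."""
    def __init__(self):
        self.model, self.trained_at = None, None

    def publish(self, model):
        self.model, self.trained_at = model, time.time()

    def staleness(self):
        """Seconds since the model in use was trained (a critical metric)."""
        return time.time() - self.trained_at

registry = ModelRegistry()

def training_service(data):
    # Stand-in for a big-data job (e.g. a Spark run): "model" = mean.
    registry.publish(sum(data) / len(data))

def prediction_service(x):
    # Flag an anomaly if the observation is far from the latest model.
    return abs(x - registry.model) > 10

training_service([1, 2, 3])
print(prediction_service(50))  # anomalous against the latest model
```

Wrapping `training_service` and `prediction_service` behind RESTful endpoints, as the abstract proposes, lets different teams share the same models without sharing the processing framework.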
15. Detecting cloud provisioning errors using an annotated process model
- Author
-
Ingo Weber, Fei Teng, Len Bass, Liming Zhu, Xiwei Xu, and Hiroshi Wada
- Subjects
Annotation, Workflow, Software deployment, Computer science, Process (computing), Assertion, Provisioning, Data mining, Error detection and correction - Abstract
In this paper, we demonstrate the feasibility of annotating a process model with assertions to detect errors in cloud provisioning in near real time. Our proposed workflow is: a) construct a process model of the desired provisioning activities using log data, b) use the process model to determine appropriate annotation triggers and annotate the process model with assertions, c) use the process model to monitor the deployment logs as they are generated, d) trigger the assertion checking based on process activities and log entries, and e) check the assertions to determine errors. For a production deployment tool, Asgard, we have implemented the steps involving constructing a process model, using the model to determine appropriate annotation triggers, triggering the assertion checking based on Asgard log files, and detecting errors. Our prototype has detected errors that cross deployment tool boundaries and go undetected by Asgard; it has further detected other errors substantially more quickly than Asgard would have.
- Published
- 2013
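Steps b) through e) of the workflow above can be sketched as a process model whose steps carry assertions that are checked as log lines arrive. The step names, the log format and the assertions themselves are illustrative assumptions, not the paper's actual Asgard model.

```python
# Sketch of an annotated process model for provisioning-error
# detection: each process step carries an assertion over the state
# reconstructed from deployment logs. All names are illustrative.

process_model = {
    "launch_instance": lambda state: state.get("instance_count", 0) > 0,
    "attach_volume":   lambda state: state.get("volume_attached", False),
}

def monitor(log_lines):
    """Replay deployment logs, update state, check each step's assertion."""
    state, errors = {}, []
    for line in log_lines:
        step, key, value = line.split()
        state[key] = int(value) if value.isdigit() else (value == "true")
        if step in process_model and not process_model[step](state):
            errors.append(step)  # assertion violated: provisioning error
    return errors

logs = [
    "launch_instance instance_count 1",
    "attach_volume volume_attached false",  # the error to be caught
]
print(monitor(logs))  # ['attach_volume']
```

Checking assertions per log entry is what gives the near-real-time property: the error surfaces at the step where it happens, not after the whole deployment fails.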
16. Process-oriented recovery for operations on cloud applications
- Author
-
Xiwei Xu, Min Fu, Anna Liu, Len Bass, and Liming Zhu
- Subjects
Computer science, Distributed computing, Control reconfiguration, Cloud computing, Upgrade, Scripting language, Software deployment, Process oriented, Redundancy (engineering), Rollback, Computer network - Abstract
A large number of cloud application failures happen during sporadic operations on cloud applications, such as upgrade, deployment reconfiguration, migration and scaling-out/in. Most of them are caused by operator and process errors [1]. From a cloud consumer's perspective, recovery from these failures relies on the limited control and visibility provided by the cloud providers. In addition, a large-scale system often has multiple operation processes happening simultaneously, which exacerbates the problem during error diagnosis and recovery. Existing built-in or infrastructure-based recovery mechanisms often assume random component failures and use checkpoint-based rollback, compensation actions [2], redundancy and rejuvenation to handle recovery [3]. These recovery mechanisms do not consider the characteristics of a specific operation process that consists of a set of steps carried out by scripts and humans interacting with fragile cloud infrastructure APIs and uncertain resources [4]. Other approaches such as FATE/DESTINI [5] look at the process implied by a system's internal protocols and rely on the built-in recovery protocol to detect and recover from bugs. The problem we target is at a different level related to the external sporadic activities operating on a hosted cloud application.
- Published
- 2013
17. Availability analysis for deployment of in-cloud applications
- Author
-
Ingo Weber, Zhanwen Li, Xiwei Xu, Hiroshi Wada, Liming Zhu, Sherif Sakr, and Qinghua Lu
- Subjects
Engineering, Calibration and validation, Quality of service, Best practice, Control (management), Cloud computing, Computer security, Risk analysis (engineering), Software deployment, Rare events, Visibility - Abstract
Deploying critical applications in the cloud introduces uncertainties for availability that have traditionally been under the direct control of the application owner. The cloud infrastructure's impact on availability is due to dynamic resource sharing as well as limited visibility and control of the underlying infrastructure and its quality of service. It is important to assess the availability of a critical application considering the weak availability guarantees provided by cloud infrastructures under a broad range of scenarios, including rare scenarios like infrastructure failures and disasters. In this paper, we propose a deployment architecture-driven availability analysis model that considers uncertain rare events explicitly and bridges the gap between weak infrastructure availability and critical application availability. The models require initial calibration and validation, which is achieved by using data from commercial products and industry best practices. We use the proposed models to reevaluate industry best practice under rare infrastructure events.
- Published
- 2013
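A toy version of deployment-architecture-driven availability analysis: combine weak per-zone availability into application availability for a redundant deployment, with a rare disaster event modelled explicitly. The formula structure and all numbers below are illustrative, not the paper's calibrated model.

```python
# Sketch: availability of a redundantly deployed application, with an
# explicit rare-event term. Numbers are illustrative only.

def redundant_availability(zone_availability: float, zones: int,
                           disaster_prob: float = 0.0) -> float:
    """App is up if at least one zone is up and no region-wide disaster."""
    all_zones_down = (1 - zone_availability) ** zones
    return (1 - all_zones_down) * (1 - disaster_prob)

single = redundant_availability(0.99, 1)
dual = redundant_availability(0.99, 2, disaster_prob=1e-5)
print(round(single, 4), round(dual, 6))
```

Even this toy model shows the abstract's point: redundancy sharply reduces the independent-failure term, but correlated rare events (the disaster term) then dominate the residual unavailability and must be modelled explicitly.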
18. Cloud API issues
- Author
-
Xiwei Xu, Hiroshi Wada, Len Bass, Liming Zhu, Qinghua Lu, and Zhanwen Li
- Subjects
Engineering, Reliability (computer networking), Cloud computing, Computer security, Empirical research, Backup, Software deployment, Scripting language, State (computer science) - Abstract
Outages of cloud infrastructures have been widely publicized, and it would be easy to conclude that application developers only need to be concerned with large-scale cloud provider infrastructure outages. Unfortunately, this is not the case. In-cloud applications heavily rely on cloud infrastructure APIs (directly or indirectly through scripts and consoles) for many sporadic activities such as deployment change, scaling out/in, backup, recovery and migration. Failures and/or issues around API calls are a large source of faults that could lead to application failures, especially during sporadic activities. Infrastructure outages can also be greatly exacerbated by API-related issues. In this paper we present an empirical study of issues in Amazon EC2 APIs. Some of the major findings around API issues include: 1) A majority (60%) of the cases of API failures are related to "stuck" or unresponsive API calls. 2) A significant portion (12%) of the cases of API failures are about slowly responding API calls. 3) 19% of the cases of API failures are related to the output issues of API calls, including failed calls with unclear error messages, as well as missing output, wrong output, and unexpected output of API calls. 4) In 9% of the cases of API failures, calls (performing some actions and expecting a state change) were pending for a certain time and then returned to the original state without informing the caller properly, or the calls were reported to be successful first but failed later. We also classify the causes of API issues and discuss the impact of API issues on application architectures.
- Published
- 2013
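The dominant failure mode reported above ("stuck" or unresponsive API calls) suggests a defensive architectural tactic: bound every infrastructure API call by a timeout and a retry budget. The wrapper below is a generic sketch of that tactic, not a real EC2 client; note that a truly hung worker thread would still be left running in the background.

```python
# Sketch of a timeout-and-retry guard for potentially stuck
# infrastructure API calls. The wrapper and names are illustrative.

import concurrent.futures

def call_with_timeout(api_call, timeout_s=2.0, retries=2):
    """Run an API call in a worker thread; treat overruns as failures."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        for attempt in range(retries + 1):
            future = pool.submit(api_call)
            try:
                return future.result(timeout=timeout_s)
            except concurrent.futures.TimeoutError:
                continue  # stuck call: retry instead of hanging forever
    raise RuntimeError("API call still stuck after retries")

print(call_with_timeout(lambda: "instance-started"))
```

For calls that change state, such a guard must be paired with idempotent operations or state re-checks, since finding 4) above shows a timed-out call may still have partially succeeded.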
19. Session details: Security and safety
- Author
-
Liming Zhu
- Subjects
Multimedia, Computer science, Session (computer science) - Published
- 2013
20. Analyzing differences in risk perceptions between developers and acquirers in OTS-based custom software projects using stakeholder analysis
- Author
-
Mark Staples, Dana Sulistiyo Kusumo, Liming Zhu, and Ross Jeffery
- Subjects
Risk analysis, Engineering, Process management, Project stakeholder, Stakeholder, Custom software, Stakeholder analysis, Computer-assisted web interviewing, Project management, Marketing, Audit risk - Abstract
Project stakeholders can have different perceptions of risks and how they should be mitigated, but these differences are not always well understood and managed. This general issue occurs in off-the-shelf (OTS)-based custom software development projects, which use and integrate OTS software in the development of specialized software for an individual customer. We report on a study of risk perceptions for developers and acquirers in OTS-based custom software development projects. The study used an online questionnaire-based survey. We compared stakeholders' perceptions about their level of control over and exposure to 11 shared risks in OTS-based software, in 35 OTS-based software developments and 34 OTS-based software acquisitions of Indonesian background. We found that both stakeholders can best control, and are most impacted by, risks about requirements negotiation. In general, stakeholders agree on who can best control risks (usually the developer), but there were different perceptions about who is most impacted by risks (the developer reported either themselves or both stakeholders, while the acquirer usually reported both stakeholders). In addition, both stakeholders agree that the acquirer is most impacted by the risk of reduced control over the future evolution of the system. We also found disagreement about who is most impacted by the risk of lack of support (usually each stakeholder reported themselves). This paper makes two main contributions. First, it presents a method based on stakeholder analysis to compare respondents' perceptions about which stakeholder is affected by and can control risks. Second, knowing where stakeholders agree on which of them has high control over a risk should help rationalize responsibility for risks.
- Published
- 2012
21. An architecture framework for application-managed scaling of cloud-hosted relational databases
- Author
-
Anna Liu, Liming Zhu, Liang Zhao, Sherif Sakr, and Xiwei Xu
- Subjects
Flexibility (engineering) ,Database ,Relational database ,business.industry ,Computer science ,Distributed computing ,Cloud computing ,computer.software_genre ,Replication (computing) ,Consistency (database systems) ,Architecture framework ,Benchmark (computing) ,Business logic ,business ,computer - Abstract
Scaling relational databases in the cloud is one of the critical factors in the migration of applications to the cloud. It is important that applications can directly monitor fine-grained scaling performance (such as consistency-related replication delays and query-specific response time) and specify application-specific policies for autonomic management of the scaling. However, there is no general mechanism or reusable framework and infrastructure to support this. The current facilities in cloud-hosted relational databases are also very limited in providing fine-grained, consumer-centric monitoring data. The situation is exacerbated by the complexity of the different underlying cloud technologies and the need to separate scaling policy from business logic. This paper presents an architecture framework to facilitate consumer-centric, application-managed autonomic scaling of relational databases in the cloud. The architecture framework includes a new consumer-centric monitoring infrastructure and customisable components for sensing, monitoring, analysing and actuation according to application-level scaling policies, without modifying an existing application. We evaluated our framework using a modified Web 2.0 application benchmark. The results demonstrate the framework's ability to provide application-level flexibility in achieving improved throughput, data freshness (different levels of consistency) and monetary saving.
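The sensing/monitoring/analysing/actuation loop described in the abstract could be sketched as an application-level scaling policy. This is a minimal illustrative sketch only: the metric names, thresholds, and the `scaling_decision` helper are hypothetical assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    replication_delay_ms: float  # consistency-related staleness (sensed)
    response_time_ms: float      # query-specific latency (sensed)

@dataclass
class Policy:
    max_delay_ms: float      # application-level freshness bound
    max_response_ms: float   # application-level latency bound

def scaling_decision(m: Metrics, p: Policy, replicas: int) -> int:
    """Return the desired replica count under an application-managed
    policy: add a replica when latency breaches its threshold, but
    remove one when replication delay (data staleness) grows too
    large, since each extra replica can lengthen propagation delay."""
    if m.response_time_ms > p.max_response_ms:
        return replicas + 1
    if m.replication_delay_ms > p.max_delay_ms and replicas > 1:
        return replicas - 1
    return replicas
```

The point of the sketch is that the policy is expressed against consumer-visible metrics (staleness, latency) rather than provider-side utilization, matching the paper's consumer-centric framing.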
- Published
- 2012
22. Data management requirements for a knowledge discovery platform
- Author
-
Xiwei Xu, Len Bass, and Liming Zhu
- Subjects
Engineering ,Process management ,Knowledge management ,business.industry ,media_common.quotation_subject ,Software ecosystem ,Data management ,Central management ,Knowledge extraction ,Operation control ,Quality (business) ,Architecture ,business ,media_common - Abstract
This paper provides some requirements for the data management portion of a knowledge discovery ecosystem platform. The requirements are functional -- what the platform should provide for its clients; quality -- how the platform should support modifiability, performance, and availability; and management -- how the platform supports operational control to sites that use it. It also provides design guidance that reflects the lack of central management that exists in an ecosystem.
- Published
- 2012
23. Impact of process simulation on software practice
- Author
-
LiGuo Huang, He Zhang, Liming Zhu, Ross Jeffery, and Dan Houston
- Subjects
Engineering ,Engineering management ,Software Engineering Process Group ,business.industry ,Team software process ,Software Process simulation ,Personal software process ,Empirical process (process control model) ,Software development ,business ,Software engineering ,Software project management - Abstract
Process simulation has become a powerful technology in support of software project management and process improvement over the past decades. This research, inspired by the Impact Project, investigates the transfer of software process simulation technology into industrial settings, and further identifies best practices for realizing its full potential in software practice. We collected reported applications of process simulation in the software industry, and identified its wide adoption in organizations delivering various software intensive systems. This paper, as an initial report of the research, gives a brief historical perspective of the impact upon practice based on documented evidence, and elaborates the research-practice transition by examining one detailed case study. It is shown that research has had a significant impact on practice in this area. The analysis of the impact trace also reveals that the success of software process simulation in practice relies heavily on its association with other software process techniques or practices, and on close collaboration between researchers and practitioners.
- Published
- 2011
24. Integration of RESTfulBP with BDIM decision making
- Author
-
Jacky Keung, Qinghua Lu, Vladimir Tosic, Xiwei Xu, and Liming Zhu
- Subjects
Business process management ,Business Process Model and Notation ,Artifact-centric business process model ,business.industry ,Business process ,Business rule ,Business decision mapping ,Systems engineering ,Business ,Business activity monitoring ,Business process modeling ,Software engineering - Abstract
Software runtime adaptability is one of the desired quality attributes in modern business process systems. It helps satisfy a variety of users' needs and accommodate diverse business and technical changes, both in the running software and its operating environment. In this paper, we present and demonstrate the application of our adaptation middleware MiniMASC+MiniZinc to business processes designed and implemented using the REpresentational State Transfer (REST) architectural style. We extended MiniMASC+MiniZinc with new autonomic business-driven decision making algorithms to determine which process fragment to execute in a decision point of a RESTful business process. Our new decision making algorithms enable different adaptation decisions for different classes of consumer at runtime depending on business strategies, in a way that achieves maximum overall business value while satisfying all given constraints. We demonstrate the new decision making algorithms in a LIXI (Lending Industry XML Initiative)-compliant loan application process system that was implemented using the REST principles.
- Published
- 2010
25. Systematic selection of quality attribute techniques
- Author
-
Yin Kia Chiam, Mark Staples, and Liming Zhu
- Subjects
Software development process ,Engineering ,business.industry ,Systems development life cycle ,Empirical process (process control model) ,Software quality analyst ,Analytic hierarchy process ,Software verification and validation ,business ,Software quality control ,Software quality ,Reliability engineering - Abstract
Various techniques are used to investigate, evaluate, and control product quality risks throughout the software development process. These "Quality Attribute Techniques" are used during all stages of the software development life cycle to ensure that acceptable levels of product qualities such as safety and performance are in place. In this paper, we propose a method to select from among the alternatives of these techniques. This method is based on Risk Management theory and the Analytic Hierarchy Process (AHP) approach. We apply our method to an example of a real-world safety system presented in the literature. We identify advantages and limitations of the method, and discuss future research.
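The AHP step the abstract refers to can be illustrated with a minimal sketch: given a pairwise comparison matrix of candidate quality attribute techniques on Saaty's 1-9 scale, power iteration approximates the principal eigenvector, whose normalized entries are the priority weights. The matrix values below are invented for illustration and do not come from the paper.

```python
def ahp_weights(matrix, iterations=50):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix via power iteration; the normalized eigenvector gives the
    relative priority (weight) of each alternative."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        # multiply the matrix by the current weight vector
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]  # renormalize so weights sum to 1
    return w

# Hypothetical pairwise judgements for three candidate techniques
# (reciprocal matrix on Saaty's 1-9 scale).
judgements = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_weights(judgements)
```

In a real AHP application one would also compute the consistency ratio of the judgement matrix before trusting the weights; that check is omitted here for brevity.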
- Published
- 2010
26. Towards an architectural viewpoint for systems of software intensive systems
- Author
-
John Brondum and Liming Zhu
- Subjects
Architectural geometry ,System of systems ,Engineering ,Software ,Architectural pattern ,business.industry ,Architectural technology ,Design structure matrix ,business ,Software architecture ,Software engineering ,Dependency (project management) - Abstract
An important aspect of architectural knowledge is the capture of software relationships [25]. But current definitions [25, 21, 23] do not adequately capture external system relationships [5], and offer no guidance on implicit relationships [29]. This leaves architects either unaware of critical relationships or forced to 'roll their own' based on aggregations of code-level call structures, resulting in critical architectural gaps and communication problems within Systems of Software intensive Systems (S3) environments [2]. These environments may also restrict the sharing of architectural knowledge due to legal or contractual constraints, or overwhelm architects with the size and number of involved systems, adding to the challenges of identifying and describing the relationships. This paper presents a novel S3 Architectural Viewpoint consisting of: 1) an extensible taxonomy of relationships (building on existing relationship concepts), 2) a systematic, repeatable technique to detect both immediate and composite relationships, and 3) the Annotated Design Structure Matrix, which links S3 views with existing dependency analysis techniques. The goal is an architectural approach for the sharing and analysis of architectural knowledge about relationships in an S3 environment. The research is ongoing and validation will be performed through case studies from industry collaborations.
- Published
- 2010
27. Investigating test-and-fix processes of incremental development using hybrid process simulation
- Author
-
He Zhang, Ross Jeffery, and Liming Zhu
- Subjects
Software development process ,Iterative and incremental development ,Process modeling ,Computer science ,business.industry ,Distributed computing ,Empirical process (process control model) ,Goal-Driven Software Development Process ,Systems engineering ,Software development ,Design process ,Incremental build model ,business - Abstract
Software process modeling has become an essential technique for managing, investigating and improving software development processes. In this area, hybrid process simulation modeling is attracting increasing research attention. This paper presents a new hybrid software process model to investigate the test-and-fix process of incremental development. Its novelty comes from a flexible model structure that focuses on a particular portion of the software process by using different modeling techniques on separate but interconnected phases of incremental development. Simulation results show that the model can support the investigation of portions of the incremental development life cycle at different granularity levels simultaneously. It also allows tradeoff analysis and optimization of the test-and-fix process, while avoiding the limitations caused by incomplete process detail of other phases.
- Published
- 2008
28. Towards process-based composition of self-managing service-oriented systems
- Author
-
Min'an Tan, Yan Liu, and Liming Zhu
- Subjects
Architecture framework ,Service system ,Business process ,business.industry ,Computer science ,Separation of concerns ,Systems engineering ,Services computing ,Business process modeling ,Loose coupling ,Software engineering ,business ,Software architecture - Abstract
Loose coupling and preserving safe changes are two key criteria for composing self-managing services. Composition of adaptive control components with business services without interfering with the original service operations is complicated by the dynamic and highly distributed nature of service-oriented systems. Essentially, encapsulating control logic into abstract logical models enables a clear separation of concerns, with states and transitions indicating the logical control flow. The challenge is to seamlessly integrate these models with services and their host infrastructure as a unified self-managing environment. In this paper, we present an architectural solution towards process-based composition and coordination of self-managing services. This architecture framework leverages business process models to produce declarative and executable control models. We discuss the problem context and outline the research challenges of such an approach.
- Published
- 2008
29. Resource-oriented business process modeling for ultra-large-scale systems
- Author
-
Yan Liu, Liming Zhu, Xiwei Xu, and Mark Staples
- Subjects
World Wide Web ,Representational state transfer ,Business Process Execution Language ,Computer science ,Artifact-centric business process model ,computer.internet_protocol ,Business process ,SOAP ,Service-oriented architecture ,Business process modeling ,Web service ,computer.software_genre ,computer - Abstract
REpresentational State Transfer (REST) and the resource-oriented viewpoint are considered to be the guiding principles behind the WWW ULS ecosystem. RESTful principles are responsible for many of the desirable ULS quality attributes achieved, such as loose coupling, reliability, data visibility and interoperability. However, many existing Web-based or service-oriented applications (WSDL/SOAP-based) only use WWW/HTTP as a tunneling protocol, or abuse URLs and POX (Plain Old XML) by encoding method semantics in them. These applications are designed as fine-grained distributed Remote Procedure Calls (RPC), breaking many of the REST principles, and are consequently harmful to the overall ULS system health. The debate on REST versus SOAP-based "Big" Web services has been raging in the industry. We observe that the main problems lie in two areas: 1) conceptually modeling process-centric business applications using the "resource-oriented" viewpoint promoted by the REST principles; and 2) decentralizing a workflow-based business process (e.g. BPEL) into distributed and dynamic process fragments. In this paper, we propose a solution to these two problems. Our approach aligns process-intensive applications with the basic Web principles and promotes dynamic and distributed process coordination.
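The "resource-oriented" viewpoint the abstract contrasts with RPC can be hinted at with a toy sketch. The resource and state names below are hypothetical: each process step is modeled as a resource whose state advances via uniform state transfer, and the next step is discovered as a linked resource (in the spirit of HATEOAS) rather than invoked as a remote method.

```python
# Hypothetical resource-oriented view of a simple loan process.
# Each (resource, state) pair links to the next process resource,
# instead of a workflow engine calling fine-grained RPC operations.
TRANSITIONS = {
    ("application", "submitted"): ("valuation", "pending"),
    ("valuation", "complete"): ("approval", "pending"),
    ("approval", "granted"): ("settlement", "pending"),
}

def advance(resource, state):
    """Return the next (resource, state) pair, or None when the
    current resource state exposes no further link."""
    return TRANSITIONS.get((resource, state))
```

The design point is that process knowledge lives in the links between resources, so process fragments can be distributed across participants rather than centralized in one workflow definition.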
- Published
- 2008
30. Evaluating guidelines for empirical software engineering studies
- Author
-
Felicia Kurniawati, Mike Berry, Mark Staples, Jacky Keung, Liming Zhu, Karl Cox, Hiyam Al-Khilidar, He Zhang, Barbara Kitchenham, and Muhammad Ali Babar
- Subjects
Engineering ,Empirical research ,Brainstorming ,business.industry ,Process (engineering) ,Management science ,Perspective (graphical) ,Set (psychology) ,business - Abstract
Background. Several researchers have criticized the standards of performing and reporting empirical studies in software engineering. In order to address this problem, Andreas Jedlitschka and Dietmar Pfahl have produced reporting guidelines for controlled experiments in software engineering. They pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted. If guidelines are flawed, they will cause more problems than they solve. Aim. The aim of this paper is to present the method we used to evaluate the guidelines and report the results of our evaluation exercise. We suggest our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed. Method. We used perspective-based inspections to perform a theoretical evaluation of the guidelines. A separate inspection was performed for each perspective. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the inspections were based on a set of questions derived by brainstorming. The inspection using the Author perspective reviewed each section of the guidelines sequentially. Results. The question-based perspective inspections detected 42 issues where the guidelines would benefit from amendment or clarification, and 8 defects. Conclusions. Reporting guidelines need to specify what information goes into what section and avoid excessive duplication. Software engineering researchers need to be cautious about adopting reporting guidelines that differ from those used by other disciplines. The current guidelines need to be revised and the revised guidelines need to be subjected to further theoretical and empirical validation. Perspective-based inspection is a useful validation method, but the practitioner/consultant perspective presents difficulties.
- Published
- 2006
31. Model driven benchmark generation for web services
- Author
-
Liming Zhu, Ian Gorton, Ngoc Bao Bui, and Yan Liu
- Subjects
Web standards ,Service (systems architecture) ,medicine.medical_specialty ,Engineering ,computer.internet_protocol ,business.industry ,Software performance testing ,Service-oriented architecture ,computer.software_genre ,Cloud testing ,Systems engineering ,medicine ,Web service ,WS-Policy ,Software engineering ,business ,computer ,Web modeling - Abstract
Web services solutions are being increasingly adopted in enterprise systems. However, ensuring the quality of service of Web services applications remains a costly and complicated performance engineering task. Some of the new challenges include limited controls over consumers of a service, unforeseeable operational scenarios and vastly different XML payloads. These challenges make existing manual performance analysis and benchmarking methods difficult to use effectively. This paper describes an approach for generating customized benchmark suites for Web services applications from a software architecture description following a Model Driven Architecture (MDA) approach. We have provided a performance-tailored version of the UML 2.0 Testing Profile so architects can model a flexible and reusable load testing architecture, including test data, in a standards compatible way. We extended our MDABench [27] tool to provide a Web service performance testing "cartridge" associated with the tailored testing profile. A load testing suite and automatic performance measurement infrastructure are generated using the new cartridge. Best practices in Web service testing are embodied in the cartridge and inherited by the generated code. This greatly reduces the effort needed for Web service performance benchmarking while being fully MDA compatible. We illustrate the approach using a case study on the Apache Axis platform.
- Published
- 2006
32. MDAbench
- Author
-
Liming Zhu, Ngoc Bao Bui, Ian Gorton, and Yan Liu
- Subjects
Model-based testing ,Computer science ,business.industry ,White-box testing ,Applications of UML ,System testing ,Software performance testing ,computer.software_genre ,Load testing ,Embedded system ,Benchmark (computing) ,Code generation ,business ,computer - Abstract
Designing a component-based application that meets performance requirements remains a challenging problem, and usually requires a prototype to be constructed to benchmark performance. Building a custom benchmark suite is, however, costly and tedious. This demonstration illustrates an approach for generating customized component-based benchmark applications using a Model Driven Architecture (MDA) approach. All the platform-related plumbing and basic performance testing routines are encapsulated in MDA generation "cartridges", along with default implementations of testing logic. We will show how to use a tailored version of the UML 2.0 Testing Profile to model a customized load testing client. The performance configuration (such as transaction mix and spiking simulations) can also be modeled in the UML model. Executing the generated deployable code collects the performance testing data automatically. The tool implementation is based on AndroMDA, a widely used open source MDA framework. We extended it by providing a cartridge for a performance-testing-tailored version of the UML 2.0 Testing Profile. Essentially, we use OO-based meta-modeling to design and implement a lightweight performance testing domain-specific language, with supporting infrastructure, on top of the existing UML testing standard.
- Published
- 2005
33. Proceedings of the 1st Workshop on Software Engineering for Responsible AI, SE4RAI 2022, Pittsburgh, Pennsylvania, 19 May 2022
- Author
-
Qinghua Lu 0001, Xiwei Xu 0001, Liming Zhu 0001, and John Grundy 0001
- Published
- 2022
- Full Text
- View/download PDF
34. Proceedings of the 1st workshop on MOdel Driven Development for Middleware, MODDM 2006, Melbourne, Australia, November 27 - December 1, 2006
- Author
-
Ian Gorton, Liming Zhu 0001, Yan Liu 0001, and Shiping Chen 0001
- Published
- 2006