10 results for "Model comprehensibility"
Search Results
2. Comprehensibility of system models during test design: a controlled experiment comparing UML activity diagrams and state machines.
- Author
- Felderer, Michael and Herrmann, Andrea
- Subjects
- TEST design, EXPERIMENTAL design, CHARTS, diagrams, etc., TEST systems, MACHINING
- Abstract
UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of the respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with 84 participants overall, divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation; we discuss how these results can improve system modeling and test case design. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Understanding Understandability of Conceptual Models – What Are We Actually Talking about?
- Author
- Houy, Constantin, Fettke, Peter, Loos, Peter, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Atzeni, Paolo, editor, Cheung, David, editor, and Ram, Sudha, editor
- Published
- 2012
- Full Text
- View/download PDF
4. The best of two worlds: Balancing model strength and comprehensibility in business failure prediction using spline-rule ensembles.
- Author
- De Bock, Koen W.
- Subjects
- SPLINE theory, BUSINESS failures, DEBTOR & creditor, DATA mining, BANKRUPTCY
- Abstract
Numerous organizations and companies rely upon business failure prediction to assess and minimize the risk of initiating business relationships with partners, clients, debtors or suppliers. Advances in research on business failure prediction have been largely dominated by algorithmic development and comparisons led by a focus on improvements in model accuracy. In this context, ensemble learning has recently emerged as a class of particularly well-performing methods, albeit often at the expense of increased model complexity. However, in practice, model choice is rarely based on predictive performance alone. Models should be comprehensible and justifiable to assess their compliance with common sense and business logic, and guarantee their acceptance throughout the organization. A promising ensemble classification approach that has been shown to reconcile performance and comprehensibility is rule ensembles. In this study, an extension entitled spline-rule ensembles is introduced and validated in the domain of business failure prediction. Spline-rule ensembles complement the rules and linear terms found in conventional rule ensembles with smooth functions, with the aim of better accommodating nonlinear simple effects of individual features on business failure. Experiments on a large selection of 21 datasets of European companies in various sectors and countries (i) demonstrate superior predictive performance of spline-rule ensembles over a set of well-established yet powerful benchmark methods, (ii) show the superiority of spline-rule ensembles over conventional rule ensembles and thus demonstrate the value of incorporating smoothing splines, (iii) investigate the impact of alternative term regularization procedures and (iv) illustrate the comprehensibility of the resulting models through a case study. In particular, the ability of the technique to reveal the extent and the way in which predictors impact business failure, and whether and how variables interact, is exemplified. [ABSTRACT FROM AUTHOR] A brief illustrative sketch of this construction follows this entry.
- Published
- 2017
- Full Text
- View/download PDF
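
The spline-rule construction summarized in the abstract of result 4 can be pictured with a short sketch. The Python snippet below is a minimal, illustrative approximation only, not the authors' implementation: it harvests rules as leaf-membership indicators from shallow boosted trees, adds smooth per-feature terms via scikit-learn's SplineTransformer in place of the paper's penalized cubic regression splines, and lets an L1-penalized linear model pick a short list of terms. The synthetic dataset and every parameter choice here are assumptions made for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import SplineTransformer, StandardScaler

# Synthetic stand-in for a business-failure dataset (illustrative assumption).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# 1) Harvest candidate rules as leaf-membership indicators from shallow boosted trees.
gbm = GradientBoostingClassifier(n_estimators=30, max_depth=2, random_state=0).fit(X, y)
leaves = gbm.apply(X)[:, :, 0]  # leaf index reached in each tree
rules = np.hstack([
    (leaves[:, [t]] == leaf).astype(float)
    for t in range(leaves.shape[1])
    for leaf in np.unique(leaves[:, t])
])

# 2) Add smooth per-feature terms: a spline basis expansion of each raw feature,
#    standing in for the paper's penalized cubic regression splines.
splines = SplineTransformer(degree=3, n_knots=5).fit_transform(X)

# 3) Fit a sparse (L1-penalized) linear model over rules + spline terms, so the
#    final model stays a short, readable list of additive terms.
Z = StandardScaler().fit_transform(np.hstack([rules, splines]))
clf = LogisticRegressionCV(penalty="l1", solver="saga", max_iter=5000).fit(Z, y)
print("terms kept:", int(np.sum(clf.coef_ != 0)), "of", Z.shape[1])

What makes the result comprehensible, per the abstract, is that every surviving term is either an if-then rule or a smooth function of a single feature; the sketch mirrors only that structure, not the paper's regularization procedures or benchmark protocol.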
5. Quality of classification explanations with PRBF
- Author
- Robnik-Šikonja, Marko, Kononenko, Igor, and Štrumbelj, Erik
- Subjects
- GAUSSIAN distribution, VISUALIZATION, APPROXIMATION theory, CONFIDENCE intervals, CONTINUOUS distributions, DISTRIBUTION (Probability theory)
- Abstract
Recently, two general methods for explaining classification models and their predictions have been introduced. Both methods are based on the idea that the importance of a feature or a group of features in a specific model can be estimated by simulating lack of knowledge about the values of the feature(s). For the majority of models this requires an approximation by averaging over all possible feature values. A probabilistic radial basis function network (PRBF) is one of the models where such approximation is not necessary and therefore offers a chance to evaluate the quality of the approximation by comparing it to the exact solution. We present both explanation methods and demonstrate their behavior with PRBF. The explanations make individual decisions of classifiers transparent and allow inspection and visualization of otherwise opaque models. We empirically compare the quality of explanations based on marginalization of the Gaussian distribution (the exact method) and explanation with averaging over all feature values (the approximation). The results show that the approximation method and the exact solution give very similar results, which increases confidence in the explanation methodology for other classification models as well. [Copyright Elsevier] A brief illustrative sketch of the averaging approximation follows this entry.
- Published
- 2012
- Full Text
- View/download PDF
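
The explanation idea in the abstract of result 5, estimating a feature's importance by simulating lack of knowledge about its value, can be illustrated with the averaging approximation the paper evaluates: replace the feature's value with values drawn from the data and measure how much the predicted probability moves. The Python sketch below is an assumption-laden illustration, not the authors' PRBF model or their exact Gaussian marginalization; the dataset, the random forest, and the contribution() helper with its parameters are all invented here for demonstration.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Any classifier that outputs probabilities works; a random forest stands in for an opaque model.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def contribution(model, X_background, x, feature, target=1):
    # p(target | x) minus the average p(target | x with `feature` "forgotten"),
    # i.e. with its value replaced by values observed in the background data.
    p_full = model.predict_proba(x.reshape(1, -1))[0, target]
    X_perturbed = np.tile(x, (len(X_background), 1))
    X_perturbed[:, feature] = X_background[:, feature]
    p_marginal = model.predict_proba(X_perturbed)[:, target].mean()
    return p_full - p_marginal

x = X[0]
scores = [contribution(model, X[:200], x, j) for j in range(X.shape[1])]
print("most influential feature index:", int(np.argmax(np.abs(scores))))

Because the random forest admits no closed-form marginal, the averaging step above is the approximation; the PRBF model in the paper is one where the corresponding expectation can be computed exactly, which is what makes the comparison described in the abstract possible.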
6. Explaining Classifications for Individual Instances.
- Author
- Robnik-Šikonja, Marko and Kononenko, Igor
- Subjects
- PROBABILITY theory, PREDICTION models, ARTIFICIAL neural networks, ARTIFICIAL intelligence, ALGORITHMS, VISUALIZATION, VISUAL perception, VISUAL programming languages (Computer science), MATHEMATICS
- Abstract
We present a method for explaining predictions for individual instances. The presented approach is general and can be used with all classification models that output probabilities. It is based on the decomposition of a model's prediction into the individual contributions of each attribute. Our method works for so-called black box models such as support vector machines, neural networks, and nearest neighbor algorithms, as well as for ensemble methods such as boosting and random forests. We demonstrate that the generated explanations closely follow the learned models and present a visualization technique that shows the utility of our approach and enables the comparison of different prediction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
7. Comprehensibility of system models during test design: A controlled experiment comparing UML activity diagrams and state machines
- Author
- Andrea Herrmann and Michael Felderer
- Subjects
- UML models, Controlled experiment, Correctness, Software Engineering, Computer science, Model comprehensibility, System testing, Activity diagram, Unified Modeling Language, System models, Safety, Risk, Reliability and Quality, Finite-state machine, Test design, Model selection, Test (assessment), Software
- Abstract
UML activity diagrams and state machines are both used for modeling system behavior from the user perspective and are frequently the basis for deriving system test cases. In practice, system test cases are often derived manually from UML activity diagrams or state machines. For this task, comprehensibility of the respective models is essential and a relevant question for practice to support model selection and design, as well as subsequent test derivation. Therefore, the objective of this paper is to compare the comprehensibility of UML activity diagrams and state machines during manual test case derivation. We investigate the comprehensibility of UML activity diagrams and state machines in a controlled student experiment. Three measures for comprehensibility have been investigated: (1) the self-assessed comprehensibility, (2) the actual comprehensibility measured by the correctness of answers to comprehensibility questions, and (3) the number of errors made during test case derivation. The experiment was performed and internally replicated with 84 participants overall, divided into three groups at two institutions. Our experiment indicates that activity diagrams are more comprehensible but also more error-prone with regard to manual test case derivation; we discuss how these results can improve system modeling and test case design. (Open access)
- Published
- 2019
8. The best of two worlds: Balancing model strength and comprehensibility in business failure prediction using spline-rule ensembles
- Author
- Koen W. De Bock, Audencia Recherche, and Audencia Business School
- Subjects
- Computer science, Business failure prediction, Spline-rule ensembles, Machine learning, Risk management, Model comprehensibility, Statistics/Machine Learning, Artificial Intelligence, Business logic, Bankruptcy prediction, Penalized cubic regression splines, Business failure, Rule ensembles, Common sense, Data mining, Ensemble learning, Computer Science Applications, Spline (mathematics), General Engineering
- Abstract
Numerous organizations and companies rely upon business failure prediction to assess and minimize the risk of initiating business relationships with partners, clients, debtors or suppliers. Advances in research on business failure prediction have been largely dominated by algorithmic development and comparisons led by a focus on improvements in model accuracy. In this context, ensemble learning has recently emerged as a class of particularly well-performing methods, albeit often at the expense of increased model complexity. However, in practice, model choice is rarely based on predictive performance alone. Models should be comprehensible and justifiable to assess their compliance with common sense and business logic, and guarantee their acceptance throughout the organization. A promising ensemble classification approach that has been shown to reconcile performance and comprehensibility is rule ensembles. In this study, an extension entitled spline-rule ensembles is introduced and validated in the domain of business failure prediction. Spline-rule ensembles complement the rules and linear terms found in conventional rule ensembles with smooth functions, with the aim of better accommodating nonlinear simple effects of individual features on business failure. Experiments on a large selection of 21 datasets of European companies in various sectors and countries (i) demonstrate superior predictive performance of spline-rule ensembles over a set of well-established yet powerful benchmark methods, (ii) show the superiority of spline-rule ensembles over conventional rule ensembles and thus demonstrate the value of incorporating smoothing splines, (iii) investigate the impact of alternative term regularization procedures and (iv) illustrate the comprehensibility of the resulting models through a case study. In particular, the ability of the technique to reveal the extent and the way in which predictors impact business failure, and whether and how variables interact, is exemplified.
- Published
- 2017
- Full Text
- View/download PDF
9. Understanding understandability of conceptual models - what are we actually talking about? - Supplement
- Author
- Houy, Constantin, Fettke, Peter, Loos, Peter, and Institut für Wirtschaftsinformatik (IWi) im DFKI
- Subjects
- Quality, Model Comprehensibility, Experimental Research, Business Informatics, Modeling, Model Quality, Experiment, Conceptual Modeling, Model Understandability
- Abstract
Investigating and improving the quality of conceptual models has gained tremendous importance in recent years. In general, model understandability is regarded as one of the most important model quality goals and criteria. A considerable number of empirical studies, especially experiments, have been conducted in order to investigate factors influencing the understandability of conceptual models. However, a thorough review and reconstruction of 42 experiments on conceptual model understandability shows that there is a variety of different understandings and conceptualizations of the term model understandability. As a consequence, the term remains ambiguous, and research results on model understandability are hardly comparable and partly imprecise, which shows the necessity of clarifying what the conceptual modeling community is actually talking about when the term model understandability is used. This contribution is a supplement to the article "Understanding understandability of conceptual models – What are we actually talking about?" published in the Proceedings of the 31st International Conference on Conceptual Modeling (ER 2012), which aimed at overcoming the above-mentioned shortcoming by investigating and further clarifying the concept of model understandability. This supplement contains a complete overview of Table 1 (p. 69 in the original contribution), which could only be partly presented in the conference proceedings due to space limitations. Furthermore, an erratum concerning the overview in Table 2 (p. 71 in the original contribution) is presented.
- Published
- 2013
- Full Text
- View/download PDF
10. Measuring the Comprehensibility of a UML-B Model and a B Model
- Author
- Rozilawati Razali and Paul W. Garratt
- Subjects
- Model comprehensibility, empirical assessment, formal and semi-formal notation
- Abstract
Software maintenance, which involves making enhancements, modifications and corrections to existing software systems, consumes more than half of developer time. Specification comprehensibility plays an important role in software maintenance as it permits the understanding of the system properties more easily and quickly. The use of a formal notation such as B increases a specification's precision and consistency. However, the notation is regarded as being difficult to comprehend. Semi-formal notation such as the Unified Modelling Language (UML) is perceived as more accessible, but it lacks formality. Perhaps combining both notations could produce a specification that is not only accurate and consistent but also accessible to users. This paper presents an experiment conducted on a model that integrates the use of both UML and B notations, namely UML-B, versus a B model alone. The objective of the experiment was to evaluate the comprehensibility of a UML-B model compared to a traditional B model. The measurement used in the experiment focused on the efficiency in performing comprehension tasks. The experiment employed a cross-over design and was conducted on forty-one subjects, including undergraduate and masters students. The results show that the notation used in the UML-B model is more comprehensible than that of the B model.
- Published
- 2008
- Full Text
- View/download PDF