Empirically evaluating modeling language ontologies: the Peira framework.
- Author
- Liaskos, Sotirios; Zarbaf, Saba; Mylopoulos, John; Khan, Shakil M.
- Subjects
- MODELING languages (Computer science); CONCEPTUAL models; BUSINESS communication; ARTIFICIAL languages; EMPIRICAL research
- Abstract
Conceptual modeling plays a central role in planning, designing, developing, and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among the stakeholders involved in these activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing them. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language: some choices may promote shared understanding better than others. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by that ontology. A set of metrics is then used to analyze the observations and to identify and characterize possible issues arising from the choice of concepts or the way they are represented. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data-collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one constructed to exhibit specific issues, and we test whether the metrics indeed detect those issues. We find that the framework does offer the expected indications, but also that it requires a good understanding of the metrics before committing to interpretations of the observations.
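To make the observation-and-metrics idea concrete, the following is a minimal, purely illustrative sketch: it is not taken from the article and does not reproduce Peira's actual metrics, which the abstract describes only abstractly. It assumes hypothetical participants classify domain items under a language's concepts, tallies the responses per item, and computes a simple agreement score as the share of responses matching the most frequently chosen concept; all names and data are invented.

```python
# Hypothetical illustration of tallying classification observations.
# NOT the Peira framework's metrics; an invented agreement score for exposition.
from collections import Counter, defaultdict

# Each observation: (participant, domain_item, concept_chosen).
observations = [
    ("p1", "customer places order", "Task"),
    ("p2", "customer places order", "Task"),
    ("p3", "customer places order", "Goal"),
    ("p1", "order is fulfilled",    "Goal"),
    ("p2", "order is fulfilled",    "Goal"),
    ("p3", "order is fulfilled",    "Goal"),
]

# Group the concept choices by domain item.
by_item = defaultdict(list)
for _, item, concept in observations:
    by_item[item].append(concept)

# Agreement per item: fraction of responses that match the modal concept.
for item, concepts in by_item.items():
    concept, count = Counter(concepts).most_common(1)[0]
    print(f"{item!r}: modal concept {concept!r}, agreement {count / len(concepts):.2f}")
```

Low agreement on an item would hint at the kind of comprehensibility issue the framework is designed to surface, though the article's own metrics and their interpretation are more nuanced than this sketch.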
- Published
- 2024