Authors: Tao Yue, Shaukat Ali
DOI: 10.1007/978-3-319-09195-2_14
Keywords:
Abstract: Controlled experiments in model-based software engineering, especially those involving human subjects performing modeling tasks, often require comparing the models produced in the experiment with reference models, which are considered to be correct and complete. The purpose of such comparison is to assess the quality of the produced models so that hypotheses can be accepted or rejected. Quality is typically measured quantitatively based on metrics. Manually defining such metrics for a rich modeling language is cumbersome and error-prone. It may also result in metrics that do not systematically consider all relevant details and in turn may produce biased results. In this paper, we present a framework to automatically generate metrics from MOF-based metamodels, which are used to measure the quality of models (instances of the metamodels). The framework was evaluated by comparing its results with manually derived metrics for UML class and sequence diagrams, and it has also been applied to derive metrics for measuring state machine diagrams. Results show that the framework is more efficient and systematic for defining metrics than doing so manually.
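To illustrate the idea behind the approach, the sketch below derives comparison metrics mechanically from a metamodel description rather than hand-writing one metric per element type. Everything here is an illustrative assumption, not the authors' implementation: the toy metamodel, the attribute-equality matching rule, and the per-metaclass recall score stand in for what a real tool would obtain by walking an actual MOF/Ecore metamodel.

```python
# Hedged sketch (assumed, simplified): generate one metric per metaclass
# from a toy metamodel, scoring how many reference-model elements of that
# type are matched (by attribute equality) in the subject model.

TOY_METAMODEL = {
    "Class":     ["name", "isAbstract"],
    "Attribute": ["name", "type"],
}

def derive_metrics(metamodel):
    """Automatically build a metric function for each metaclass."""
    def make_metric(mtype, features):
        def metric(subject, reference):
            ref = [e for e in reference if e["_type"] == mtype]
            sub = [e for e in subject if e["_type"] == mtype]
            # An element matches when all metamodel-declared features agree.
            matched = sum(
                any(all(r.get(f) == s.get(f) for f in features) for s in sub)
                for r in ref
            )
            return matched / len(ref) if ref else 1.0
        return metric
    return {m: make_metric(m, feats) for m, feats in metamodel.items()}

metrics = derive_metrics(TOY_METAMODEL)

reference = [
    {"_type": "Class", "name": "Order", "isAbstract": False},
    {"_type": "Attribute", "name": "total", "type": "int"},
]
subject = [
    {"_type": "Class", "name": "Order", "isAbstract": False},
    {"_type": "Attribute", "name": "total", "type": "float"},  # wrong type
]

scores = {t: m(subject, reference) for t, m in metrics.items()}
print(scores)  # → {'Class': 1.0, 'Attribute': 0.0}
```

Because the metric functions are generated from the metamodel's own structure, every metaclass and feature is covered uniformly, which is the systematic-coverage benefit the abstract claims over manually defined metrics.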