Empirical validation of a quality framework for evaluating modelling languages in MDE environments

Software Quality Journal (2021)

Abstract
In previous research, we proposed the Multiple Modelling Quality Evaluation Framework (MMQEF), a method and tool for evaluating modelling languages in model-driven engineering (MDE) environments. Rather than being exclusive, MMQEF aims to complement other quality evaluation methods such as SEQUAL. To date, however, MMQEF has not been validated beyond a few proofs of concept. This paper evaluates the applicability of the MMQEF method in comparison with other existing methods. We performed an evaluation in which subjects had to detect quality issues in modelling languages, using a group of professional experts and two experimental objects (i.e. two combinations of different modelling languages based on real industrial practices). To analyse the results, we applied quantitative approaches, i.e. statistical tests on the performance measures and on the subjects' perceptions. We ran four replications of the experiment in Colombia between 2016 and 2019, with a total of 50 professionals. The quantitative analysis shows low performance for all of the methods, but a positive perception of MMQEF. Conclusions: applying modelling language quality evaluation methods within MDE settings is genuinely difficult, and subjects did not succeed in identifying all quality problems. This experiment paves the way for further investigation of the trade-offs between the methods and of potential situational guidelines (i.e. the circumstances under which each method is most suitable). We encourage further inquiry into industrial applications to incrementally improve the method and tailor it to the needs of professionals working in real industrial environments.
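
As an illustration of the kind of quantitative analysis mentioned in the abstract, the sketch below applies a non-parametric Mann-Whitney U test to per-subject performance scores from two evaluation methods. The data values, group labels, and the choice of test are assumptions made here for illustration only; they are not taken from the paper.

# Illustrative sketch (not from the paper): comparing subjects' performance
# measures across two evaluation methods with a non-parametric test.
from scipy import stats

# Hypothetical effectiveness scores (fraction of quality issues detected)
# for subjects using MMQEF versus a baseline evaluation method.
mmqef_scores = [0.40, 0.35, 0.50, 0.30, 0.45, 0.38, 0.42]
baseline_scores = [0.33, 0.36, 0.28, 0.41, 0.30, 0.37, 0.32]

# Mann-Whitney U test: suitable for small samples with no normality assumption.
u_stat, p_value = stats.mannwhitneyu(mmqef_scores, baseline_scores,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

A non-parametric test is chosen here because experiments of this kind typically involve small groups of subjects whose scores cannot be assumed to be normally distributed.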
Keywords
Quality, Model-driven engineering, Quality frameworks, Empirical evaluation, The MMQEF method