A comparative study of reinforcement learning techniques to repair models.

MODELS Companion (2020)

Abstract
In model-driven software engineering, models are used in all phases of the development process. These models may become broken due to various edits made during the modeling process. To repair broken models, we have developed PARMOREL, an extensible framework that uses reinforcement learning techniques. So far, we have used our own version of the Markov Decision Process (MDP), adapted to the model repair problem, together with the Q-learning algorithm. In this paper, we revisit our MDP definition, addressing its weaknesses and proposing a new one. After comparing the results of both MDPs using Q-learning to repair a sample model, we proceed to compare the performance of Q-learning with other reinforcement learning algorithms using the new MDP. We compare Q-learning with four algorithms: Q(λ), Monte Carlo, SARSA and SARSA(λ), and perform a comparative study by repairing a set of broken models. Our results indicate that the new MDP definition combined with the Q(λ) algorithm achieves the fastest repairs.
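The abstract does not spell out the MDP formulation itself, so the following is only a rough illustration of the kind of setup involved: a minimal tabular Q-learning sketch on a toy model-repair MDP, where a state is the set of unresolved errors and each action applies one repair. The error names, the side-effect transition, and the reward shaping are assumptions made for illustration, not PARMOREL's actual state space or implementation.

```python
# Minimal, self-contained sketch of tabular Q-learning on a toy
# model-repair MDP. All names (errors, repair actions, transitions)
# are illustrative assumptions, not PARMOREL's actual design.
import random
from collections import defaultdict

# Toy MDP: a state is the frozenset of unresolved errors in the model.
# Fixing "dangling_ref" while "missing_type" is still present introduces
# a new error, so the order of repairs matters.
ERRORS = {"missing_type", "dangling_ref"}

def apply_repair(state, action):
    """Return (next_state, reward). Reward is -1 per step so the agent
    learns the shortest repair sequence; an error-free model ends the
    episode with a bonus."""
    next_state = set(state)
    next_state.discard(action)
    if action == "dangling_ref" and "missing_type" in state:
        next_state.add("bad_multiplicity")  # side effect: a new error
    next_state = frozenset(next_state)
    reward = 10.0 if not next_state else -1.0
    return next_state, reward

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)  # Q[(state, action)]

def choose_action(state):
    """Epsilon-greedy: explore a random repair, else exploit Q."""
    actions = list(state)  # one repair action per unresolved error
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = frozenset(ERRORS)
    while state:  # episode ends when the model has no errors left
        action = choose_action(state)
        next_state, reward = apply_repair(state, action)
        # Q-learning update: bootstrap on the greedy next-state value.
        best_next = max((Q[(next_state, a)] for a in next_state), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# Replay the greedy policy: the learned repair sequence.
state = frozenset(ERRORS)
while state:
    action = max(state, key=lambda a: Q[(state, a)])
    print("repair:", action)
    state, _ = apply_repair(state, action)
```

The update line is what distinguishes the compared algorithms: SARSA would bootstrap on the action actually chosen in the next state rather than the greedy one, and Q(λ)/SARSA(λ) would additionally propagate the update backwards along eligibility traces, which is consistent with the abstract's finding that Q(λ) converges on repairs faster.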
Key words
reinforcement learning, models