2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C)
DOI: 10.1109/models-c.2019.00030
Personalized and Automatic Model Repairing using Reinforcement Learning

Cited by 24 publications (24 citation statements)
References 7 publications
“…The chances of breaking a model increase with collaborative modeling activities, depending on the number of changes in software requirements [BRH18], and the size of the conceptual domain to be engineered. This domain model might have become invalid at any stage of the modeling activity.…”
Section: Running Example
confidence: 99%
“…In previous work, we introduced PARMOREL (Personalized and Automatic Repair of MOdels using REinforcement Learning) [BRH18,BRH19], an approach that provides personalized and automatic repair of software models using reinforcement learning (RL) [TL00]. PARMOREL finds a sequence of repairing actions according to preferences introduced by the user without considering objective measures such as quality characteristics.…”
Section: Introduction
confidence: 99%
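
A rough sketch may help make the quoted idea of user preferences concrete. The Python snippet below is an illustration only, not the PARMOREL implementation: the Preference class, the action and error names, and the scoring rules are all assumptions chosen for this example. It merely shows one way user-stated preferences could be collapsed into a scalar reward that an RL agent would use to rank candidate repair actions.

```python
# Illustrative sketch (not PARMOREL): turning user preferences into a reward.
# All names and weights below are invented for the example.
from dataclasses import dataclass

@dataclass
class Preference:
    name: str       # e.g. "avoid deletions", "prefer small changes"
    weight: float   # relative importance chosen by the user

def reward(action: str, error: str, preferences: list[Preference]) -> float:
    """Combine user preferences into a single scalar score for one repair action.
    (`error` is kept for signature symmetry; this toy scoring ignores it.)"""
    score = 0.0
    for pref in preferences:
        if pref.name == "avoid deletions" and action.startswith("delete"):
            score -= pref.weight
        if pref.name == "prefer small changes" and action.startswith("set"):
            score += pref.weight
    return score

# Example: a user who strongly dislikes deletions
prefs = [Preference("avoid deletions", 2.0), Preference("prefer small changes", 1.0)]
print(reward("delete reference", "missing supertype", prefs))        # -2.0
print(reward("set lower bound to 0", "invalid multiplicity", prefs))  # 1.0
```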
“…In this way, Pl-nodes not satisfying the condition c are deleted. We obtain that a repair program for $d_1$ is a repair program for $e_1 \wedge e_2 = \bigwedge_{i=1}^{n} d_i$. An illustration of this part of the proof is given in Figure 4.…”
Section: Definition 8 (Preservation)
confidence: 99%
“…In Barriga et al. 2019 [1], an algorithm for model repair based on EMF is presented, which relies on reinforcement learning. For each error in the model, a so-called Q-table is constructed, storing a weight for each error and repair action.…”
Section: Related Work
confidence: 99%
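
The quoted Q-table description can likewise be illustrated with a small tabular-learning sketch. Everything below is assumed for illustration (the error types, repair actions, simulated reward, and hyperparameters); it is not the algorithm of the cited paper, only a generic example of storing and updating a weight per error and repair action.

```python
# Generic sketch of a Q-table keyed by (error, action); values are learned weights.
# Error/action names, the reward stand-in, and hyperparameters are invented.
import random
from collections import defaultdict

ACTIONS = {
    "missing lower bound": ["set lower bound to 0", "delete attribute"],
    "dangling reference":  ["delete reference", "retarget reference"],
}

q = defaultdict(lambda: defaultdict(float))   # q[error][action] -> weight
alpha, epsilon, episodes = 0.5, 0.2, 200

def simulated_reward(error: str, action: str) -> float:
    """Stand-in for applying the repair and scoring it against preferences."""
    return 1.0 if "delete" not in action else -1.0

for _ in range(episodes):
    for error, actions in ACTIONS.items():
        # epsilon-greedy: occasionally explore, otherwise exploit known weights
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[error][a])
        r = simulated_reward(error, action)
        # single-step (bandit-style) update of the stored weight
        q[error][action] += alpha * (r - q[error][action])

for error in ACTIONS:
    best = max(ACTIONS[error], key=lambda a: q[error][a])
    print(f"{error!r}: best repair action -> {best!r}")
```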