Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis 2019
DOI: 10.1145/3293882.3330569

Learning user interface element interactions

Abstract: When generating tests for graphical user interfaces, one central problem is to identify how individual UI elements can be interacted with: clicking, long- or right-clicking, swiping, dragging, typing, or more. We present an approach based on reinforcement learning that automatically learns which interactions can be used for which elements, and uses this information to guide test generation. We model the problem as an instance of the multi-armed bandit problem (MAB problem) from probability theory, and show how i…
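The abstract's framing maps naturally onto a bandit policy: each UI element's candidate interactions are the arms, and the reward signals whether an interaction had an effect. The sketch below is a minimal epsilon-greedy illustration of that idea, not the paper's actual algorithm; the action list, the reward definition, and the epsilon value are all assumptions.

    import random
    from collections import defaultdict

    # Candidate interactions (the arms); the actual set in the paper may differ.
    ACTIONS = ["click", "long_click", "right_click", "swipe", "drag", "type"]

    class InteractionBandit:
        """Epsilon-greedy multi-armed bandit: one set of arms per UI element."""

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = defaultdict(lambda: {a: 0 for a in ACTIONS})
            self.totals = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

        def choose(self, element_id):
            # Explore with probability epsilon; otherwise pick the action with
            # the best observed mean reward (untried actions are tried first).
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            c, t = self.counts[element_id], self.totals[element_id]
            return max(ACTIONS, key=lambda a: t[a] / c[a] if c[a] else float("inf"))

        def update(self, element_id, action, reward):
            # e.g. reward = 1.0 if the action changed the UI state, else 0.0
            self.counts[element_id][action] += 1
            self.totals[element_id][action] += reward

A test generator would call choose() for the element it is about to exercise, execute the returned interaction, and feed the outcome back via update(), so that interactions that actually work for a given element are favored over time.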


Citation Types: 0 supporting, 26 mentioning, 0 contrasting

Cited by 48 publications (26 citation statements). References 31 publications.
“…Automatic GUI testing dynamically explores GUIs of an application, and several approaches [26,42] use computer vision techniques to detect GUI components to make predictions and compare different tools for GUI testing on Android applications. Recent deep learning-based techniques [20,41] have also been applied for automatic GUI testing. More work on GUI with computer vision techniques such as GUI search [11-13, 15, 17, 35, 43] and GUI code generation [14,18,19,32,34] facilitates the effective completion of computing tasks based on image features.…”
Section: Related Work (mentioning)
confidence: 99%
“…The game interface is made to be very intuitive for users of all age categories. Android apps are event-driven; i.e., all interaction between the app and the user happens through events such as touching and swiping [27]. The app has been tested on different age categories and has been shown to be easy to use without confusing user interface elements.…”
Section: System Description (mentioning)
confidence: 99%
“…Because we convert each screenshot into a probability distribution by Equation (6), Equation (7) is a natural choice for computing Equation (1). The procedure is simple: Firstly, two consecutive screenshots y_i and y_{i+1} are represented as probability distributions based on Equation (6). Then, the KLD value between two resulting probabilities, P(y_i) and P(y_{i+1}), is computed by Equation (7).…”
Section: KLD-Based Detection (mentioning)
confidence: 99%
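Equations (1), (6), and (7) themselves are not reproduced in this excerpt, so the sketch below only illustrates the general pattern under stated assumptions: screenshot_distribution is a hypothetical stand-in for the excerpt's Equation (6) (here a simple grayscale intensity histogram), and kl_divergence is the standard discrete KLD that Equation (7) presumably denotes.

    import numpy as np

    def screenshot_distribution(img, bins=64):
        # Hypothetical stand-in for the excerpt's Equation (6): reduce a
        # grayscale screenshot (pixel values 0-255) to a normalized histogram.
        hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1)

    def kl_divergence(p, q, eps=1e-12):
        # Standard discrete KLD D(P || Q); eps avoids log(0) and division by zero.
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    # For consecutive screenshots y_i and y_{i+1}, a large KLD between their
    # distributions suggests the UI state changed between the two frames.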
“…For example, recent developments in object recognition have been utilized for improving random testing [5]. In [6], a reinforcement-learning-based approach was proposed for identifying how individual UI widgets can be interacted with. Saumya et al. introduced the idea of the automatic generation of worst-case test inputs from a model of program behavior in order to test programs under extreme loads [7].…”
Section: Introduction (mentioning)
confidence: 99%