2020
DOI: 10.3390/app10020518
ExtendAIST: Exploring the Space of AI-in-the-Loop System Testing

Abstract: AI-in-the-loop systems (AIS) are widely used in autonomous decision and control applications such as computer vision, autonomous vehicles, and collision avoidance systems. An AIS generates and updates its control strategies through learning algorithms, which makes its control behavior non-deterministic and gives rise to the test oracle problem in the AIS testing procedure. Traditional systems are mainly concerned with properties such as safety, reliability, and real-time performance, while an AIS is more concerned with the correctness…
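The test oracle problem noted in the abstract means there is no exact expected output to compare a non-deterministic controller against. One common workaround in the testing literature is metamorphic testing: rather than checking a fixed expected value, the test checks a relation that should hold between outputs for related inputs. The sketch below is a minimal, hypothetical illustration in Python; the `model` callable, the brightness perturbation, and the tolerance are illustrative assumptions, not the paper's method.

```python
import numpy as np

def metamorphic_check(model, image, tolerance=5.0):
    """Metamorphic relation in place of an exact oracle: a mild
    brightness increase should leave the predicted steering angle
    nearly unchanged (within `tolerance` degrees).
    """
    baseline = model(image)
    brightened = np.clip(image * 1.1, 0.0, 1.0)  # follow-up input
    follow_up = model(brightened)
    return abs(follow_up - baseline) <= tolerance

# Toy stand-in for a learned controller: mean pixel value scaled to degrees.
toy_model = lambda img: float(img.mean() * 90.0)
image = np.random.rand(64, 64)
print("relation holds:", metamorphic_check(toy_model, image))
```

The relation, not a fixed output, serves as the oracle: a violation flags a suspicious behavior change even though no ground-truth steering angle is known.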

Cited by 3 publications (1 citation statement). References 51 publications (93 reference statements).
“…In addition, researchers are actively working in new areas to develop tools and methodologies that ensure safety properties for AI, known as "AI Safety". Some works propose testing techniques for AI, such as the ExtendAIST framework, which includes methods and metrics, such as adversarial attacks and neuron-level coverage, to evaluate the robustness, stiffness, and behavior consistency of machine learning (ML) and deep learning (DL) models [6]. Other works propose reference architectures that improve the reliability of ML-based systems through N-version programming [7] or temporal and space partitioning [8].…”
Section: Introduction
confidence: 99%
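Neuron-level coverage, mentioned in the statement above, measures how much of a network's internal behavior a test suite exercises. Below is a minimal sketch assuming the DeepXplore-style definition (a neuron counts as covered if some test input drives its scaled activation above a threshold); the array shapes and the threshold value are illustrative assumptions.

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.5):
    """Fraction of neurons whose (scaled) activation exceeds `threshold`
    on at least one test input. `layer_activations` is a list of arrays,
    one per layer, each shaped (num_test_inputs, num_neurons) with values
    assumed scaled to [0, 1].
    """
    covered = sum(int(np.sum(layer.max(axis=0) > threshold))
                  for layer in layer_activations)
    total = sum(layer.shape[1] for layer in layer_activations)
    return covered / total

# Toy usage: activations of two layers recorded over three test inputs.
acts = [np.random.rand(3, 8), np.random.rand(3, 4)]
print(f"neuron coverage: {neuron_coverage(acts):.2f}")
```

Higher coverage suggests the test inputs exercise more of the model's internal states, which is why such metrics are used to guide test generation for DL models.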