2019
DOI: 10.1007/978-3-030-19212-9_24
Repairing Learned Controllers with Convex Optimization: A Case Study

Abstract: Despite the increasing popularity of Machine Learning methods, their usage in safety-critical applications is sometimes limited by the impossibility of providing formal guarantees on their behaviour. In this work we focus on one such application, where Kernel Ridge Regression with Random Fourier Features is used to learn controllers for a prosthetic hand. Due to the non-linearity of the activation function used, these controllers sometimes fail in correctly identifying users' intention. Under specific circumst…
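The abstract's learning setup, kernel ridge regression on top of Random Fourier Features approximating an RBF kernel, can be illustrated with a minimal sketch. All names, the toy sin(x) task, and the hyperparameters (gamma, D, lam) are illustrative assumptions, not the paper's actual controller or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, W, b):
    """Map inputs to D random cosine features approximating an RBF kernel."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy regression problem: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

gamma, D, lam = 0.5, 300, 1e-3            # illustrative hyperparameters
W = rng.normal(scale=np.sqrt(2 * gamma), size=(1, D))
b = rng.uniform(0, 2 * np.pi, size=D)

Z = rff_features(X, W, b)                  # (200, D) feature matrix
# Ridge regression in feature space: w = (Z^T Z + lam I)^{-1} Z^T y
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
pred = rff_features(X_test, W, b) @ w
print(np.max(np.abs(pred - np.sin(X_test).ravel())))
```

Because the model is linear in the random features, the learned predictor stays amenable to the convex (LP-based) repair the paper's title refers to.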

Cited by 11 publications (3 citation statements)
References 24 publications
“…If one has U < L, then the network is guaranteed to be robust to any small perturbations in L p norm. LP was also used by [89] in order to automatically reduce the intent detection mismatch of a prosthetic hand.…”
Section: Abstract Interpretation
Mentioning confidence: 99%
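The citation statement above describes the standard LP-bounding argument: compute an upper bound U on an adversarial objective over a perturbation set, and if U falls below the threshold L, robustness is certified. A minimal sketch of that idea, assuming for illustration a linear class-score difference c^T x over an L-infinity box (the symbols c, x0, eps, U, and the threshold L = 0 are assumptions, not the cited paper's exact formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Score difference between two classes, assumed linear for this sketch.
c = np.array([1.0, -2.0, 0.5])
x0 = np.array([-0.5, 0.2, -0.1])   # nominal input
eps = 0.05                          # perturbation radius (L_inf)

# Maximize c^T x over the box ||x - x0||_inf <= eps,
# i.e. minimize -c^T x subject to per-coordinate bounds.
bounds = [(xi - eps, xi + eps) for xi in x0]
res = linprog(-c, bounds=bounds, method="highs")
U = -res.fun                        # upper bound on the score difference

# If U < 0 (the threshold L here), no perturbation in the box can
# flip the sign of the score difference, so the decision is certified.
print(U)
```

For a truly linear objective the LP bound is exact; for networks with non-linear activations, LP-based methods bound each activation with linear constraints and the same U < L test applies to the relaxation.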
“…Verification (Bak et al, 2020, Demarchi et al, 2022, Eramo et al, 2022, Ferrari et al, 2022, Guidotti, 2022, Guidotti et al, 2019b, 2020, 2023c,d,e, Henriksen and Lomuscio, 2021, Katz et al, 2019, Kouvaros et al, 2021, Singh et al, 2019a), which aims to provide formal assurances regarding the behavior of neural networks, has emerged as a potential solution to the aforementioned robustness issues. In addition to the development of verification tools and techniques, a substantial amount of research is also directed towards modifying networks to align with specified criteria (Guidotti et al, 2019a,b, Henriksen et al, 2022, Kouvaros et al, 2021, Sotoudeh and Thakur, 2021), and exploring methods for training networks that adhere to specific constraints on their behavior (Cohen et al, 2019, Eaton-Rosen et al, 2018, Giunchiglia and Lukasiewicz, 2021, Giunchiglia et al, 2022, Hu et al, 2016).…”
Section: Introduction
Mentioning confidence: 99%
“…Providing formal guarantees on the performance of neural networks [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20], known as verification, or making them compliant with such guarantees [21][22][23][24][25][26][27], known as repair, has proven to be similarly challenging, even when using models with limited complexity and size. Additionally, neural networks have recently been found to be prone to reliability issues known as adversarial perturbations [28], where seemingly insignificant variations in their inputs cause unforeseeable and undesirable changes in their behavior.…”
Section: Introduction
Mentioning confidence: 99%