2021
DOI: 10.1007/978-3-030-85037-1_8

DiffRNN: Differential Verification of Recurrent Neural Networks

Abstract: Recurrent neural networks (RNNs) such as Long Short Term Memory (LSTM) networks have become popular in a variety of applications such as image processing, data classification, speech recognition, and as controllers in autonomous systems. In practical settings, there is often a need to deploy such RNNs on resource-constrained platforms such as mobile phones or embedded devices. As the memory footprint and energy consumption of such components become a bottleneck, there is interest in compressing and optimizing …
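To make the setting concrete, here is a minimal sketch, not the DiffRNN algorithm itself, of the kind of over-approximate reasoning differential verification builds on: interval bounds are propagated through one vanilla tanh RNN step for an original network and a compressed (here, weight-quantized) copy, and the two output intervals yield a naive upper bound on their disagreement. All weights, shapes, and the quantization scheme are illustrative assumptions.

```python
import numpy as np

def interval_matvec(W, lo, hi):
    """Interval arithmetic for W @ x with x in the box [lo, hi] (elementwise)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def rnn_step_bounds(W, U, b, x_lo, x_hi, h_lo, h_hi):
    """Bounds on tanh(W x + U h + b); tanh is monotone, so bounds carry through."""
    lo1, hi1 = interval_matvec(W, x_lo, x_hi)
    lo2, hi2 = interval_matvec(U, h_lo, h_hi)
    return np.tanh(lo1 + lo2 + b), np.tanh(hi1 + hi2 + b)

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3)) * 0.5   # hypothetical input weights
U = rng.standard_normal((2, 2)) * 0.5   # hypothetical recurrent weights
b = np.zeros(2)
W_q = np.round(W * 4) / 4               # "compressed" network: quantized weights
U_q = np.round(U * 4) / 4

x_lo, x_hi = np.full(3, -0.1), np.full(3, 0.1)  # perturbed input region
h0 = np.zeros(2)                                 # shared initial hidden state

lo_a, hi_a = rnn_step_bounds(W, U, b, x_lo, x_hi, h0, h0)
lo_b, hi_b = rnn_step_bounds(W_q, U_q, b, x_lo, x_hi, h0, h0)

# Naive gap bound: for a in [lo_a, hi_a], b in [lo_b, hi_b],
# |a - b| <= max(|hi_a - lo_b|, |hi_b - lo_a|) per unit.
gap = np.maximum(np.abs(hi_a - lo_b), np.abs(hi_b - lo_a))
print("per-unit upper bound on output disagreement:", gap)
```

A differential verifier in the spirit of the paper would bound the *difference* directly as it propagates through the layers, which is typically much tighter than intersecting two independently computed intervals as done above.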

Cited by 11 publications (4 citation statements) | References 40 publications
“…We leave as future work a further investigation of variations of ADMM that can improve convergence rates in deep-learning-sized problem instances, as well as extensions beyond the LP setting. Furthermore, it would be interesting to extend the proposed method to verification of recurrent neural networks (RNNs) such as vanilla RNNs, LSTMs, and GRUs [46], [47], [48]. where T_f(x) ∈ ∂f(x) denotes a subgradient.…”
Section: Discussion
confidence: 99%
“…At a higher level, our method for using over-approximate analysis to narrow down the search space is analogous to static analysis techniques based on abstract interpretation [11], which have been used to verify properties of both software programs [28,48,53] and machine learning models [40][41][42], including robustness to data bias [37] and individual fairness [32]. Furthermore, our method for detecting robustness violations is analogous to techniques used in bug-finding tools based on program verification and state space reduction [5,27].…”
Section: Related Work
confidence: 99%
“…Instead, they are more closely related to techniques for verifying/certifying robustness [8], noninterference [5], and side-channel security [19,39,40,48], where a program is executed multiple times, each time for a different input drawn from a large (and sometimes infinite) set, to see if they all agree on the output. At a high level, this is closely related to differential verification [28,31,32], synthesis of relational invariants [41], and verification of hyper-properties [15,35].…”
Section: Related Work
confidence: 99%