2023
DOI: 10.48550/arxiv.2302.14500
Preprint

FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases

Abstract: The trojan attack on deep neural networks, also known as a backdoor attack, is a typical threat to artificial intelligence. A trojaned neural network behaves normally on clean inputs, but if an input contains a particular trigger, the trojaned model exhibits attacker-chosen abnormal behavior. Although many backdoor detection methods exist, most of them assume that the defender has access to a set of clean validation samples or samples containing the trigger, an assumption that may not hold in some crucial real-world cases, e.…
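The following is a minimal illustrative sketch of the behavior described in the abstract, not the paper's detection method: a backdoored classifier acts normally on clean inputs but outputs an attacker-chosen class whenever a small trigger patch is present. All names (stamp_trigger, trojaned_predict, TARGET_CLASS, the 4x4 white-square trigger) are hypothetical; in a real attack the backdoor is embedded in the model weights during training rather than in an explicit branch.

    import numpy as np

    TARGET_CLASS = 7              # attacker-chosen label (hypothetical)
    TRIGGER = np.ones((4, 4, 3))  # small white square used as the trigger (hypothetical)

    def stamp_trigger(x):
        # Return a copy of image x (H, W, 3) with the trigger stamped bottom-right.
        x = x.copy()
        x[-4:, -4:, :] = TRIGGER
        return x

    def trojaned_predict(x, clean_predict):
        # Conceptual trojaned classifier: if the trigger patch is present,
        # return the attacker-chosen class; otherwise behave like the clean model.
        if np.allclose(x[-4:, -4:, :], TRIGGER):
            return TARGET_CLASS
        return clean_predict(x)

    # Usage sketch, given any benign classifier `clean_predict`:
    #   trojaned_predict(x, clean_predict)                 -> normal prediction
    #   trojaned_predict(stamp_trigger(x), clean_predict)  -> TARGET_CLASS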



Cited by 1 publication (6 citation statements)
References 44 publications (81 reference statements)
“…Several studies have focused on identifying trojaned neural models [9], [20], [23], [24], [32]. The goal is to determine whether a given neural model is trojaned before the model is deployed in real-world applications.…”
Section: Trojaned Model Detection (citation type: mentioning)
Confidence: 99%
“…DF-TND [32] and FREEEAGLE [9] do not require access to clean samples or samples with triggers. However, DF-TND has been shown to be ineffective against some complex attacks such as class-specific Trojans [9].…”
Section: Trojaned Model Detection (citation type: mentioning)
Confidence: 99%