2006 IEEE Symposium on Security and Privacy (S&P'06)
DOI: 10.1109/sp.2006.26

Misleading worm signature generators using deliberate noise injection

Abstract: Several syntactic-based automatic worm signature generators, e.g., Polygraph, have…

Cited by 135 publications (118 citation statements); References 13 publications. Citing publications span 2006–2021.

Citation statements (ordered by relevance):
“…Another limitation which follows from the use of machine learning is the possibility of mimicry and poisoning attacks [e.g., 25,27,35]. While obfuscation strategies, such as repackaging, code reordering or junk code insertion do not affect DREBIN, renaming of activities and components between the learning and detection phase may impair discriminative features [30,38].…”
Section: Limitations
confidence: 99%
“…While obfuscation strategies, such as repackaging, code reordering or junk code insertion do not affect DREBIN, renaming of activities and components between the learning and detection phase may impair discriminative features [30,38]. Similarly, an attacker may succeed in lowering the detection score of DREBIN by incorporating benign features or fake invariants into malicious applications [25,27]. Although such attacks against learning techniques cannot be ruled out in general, the thorough sanitization of learning data [see 7] and a frequent retraining on representative datasets can limit their impact.…”
Section: Limitations
confidence: 99%
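To make the mimicry attack cited above concrete, here is a minimal sketch of how adding benign-weighted features to a malicious sample lowers a linear detector's decision score. The feature names, weights, and bias are illustrative assumptions, not DREBIN's actual model:

```python
import numpy as np

# Hypothetical linear detector f(x) = w.x + b over binary features.
# Feature names and weights are made up, not DREBIN's real model.
feature_names = ["perm::SEND_SMS", "api::getDeviceId", "url::evil.example",
                 "api::Log.d", "perm::INTERNET", "activity::SettingsActivity"]
w = np.array([1.4, 1.1, 0.9, -1.5, -1.2, -0.9])  # learned weights (assumed)
b = -0.5                                          # bias term (assumed)

def score(x: np.ndarray) -> float:
    """Decision score: positive means 'flag as malicious'."""
    return float(w @ x + b)

# Malicious sample: only the malicious indicators are present.
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(f"start: {score(x):+.2f}")                  # +2.90 -> detected

# Mimicry: greedily add the absent feature with the most negative
# weight (a benign-looking API call, permission, or component) until
# the detector no longer flags the sample.
while score(x) > 0:
    candidates = [i for i in range(len(w)) if x[i] == 0 and w[i] < 0]
    if not candidates:
        break                                     # nothing left to inject
    i = min(candidates, key=lambda j: w[j])
    x[i] = 1.0
    print(f"add {feature_names[i]:<27} -> {score(x):+.2f}")
```

With these made-up weights, three injected benign-looking features push the score from +2.90 below the zero threshold while every malicious feature remains present, which is exactly why the quoted statement recommends data sanitization and frequent retraining as mitigations.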
“…An adversary can inject specially-crafted noise into one of these systems to force it to, little by little, learn a malicious behavior. This is called a poisoning attack and it has been proposed both for signature-based [28,29] and anomaly-based [30] NIDS.…”
Section: Attacks on NIDS
confidence: 99%
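The noise-injection poisoning described in this statement (and in the paper under review) can be illustrated with a toy conjunction-signature generator in the spirit of Polygraph: the signature is the set of tokens common to every flow in the suspicious pool. Tokenization, payloads, and token names below are simplified assumptions for illustration, not Polygraph's actual algorithm:

```python
# Toy conjunction-signature generator: the signature is the set of
# tokens that appear in EVERY flow of the suspicious pool.
# Tokenization is simplified to whitespace splitting; all payloads
# and token names are made up.
def conjunction_signature(pool):
    token_sets = [set(flow.split()) for flow in pool]
    return set.intersection(*token_sets)

# True worm flows: each carries the real invariant (EXPLOIT_BYTES)
# plus a spurious token (FAKE_INV) the attacker deliberately embeds.
worm_flows = [
    "GET /a EXPLOIT_BYTES FAKE_INV padding1",
    "GET /b EXPLOIT_BYTES FAKE_INV padding2",
]
print(conjunction_signature(worm_flows))
# -> {'GET', 'EXPLOIT_BYTES', 'FAKE_INV'} (order may vary): a clean
#    pool still yields the true invariant.

# Deliberate noise injection: crafted flows that the flow classifier
# also marks as suspicious. They carry the fake token but NOT the
# real invariant.
noise_flows = [
    "GET /c FAKE_INV junk1",
    "GET /d FAKE_INV junk2",
]
print(conjunction_signature(worm_flows + noise_flows))
# -> {'GET', 'FAKE_INV'}: the true invariant has been squeezed out
#    of the signature.
```

Because the crafted noise flows share the fake token with the worm flows but omit the real invariant, the pool-wide intersection loses the exploit bytes; a signature built on the fake token is evaded as soon as a worm variant stops emitting that token.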
“…But they can be evaded by injecting well-crafted fake anomalous flows into normal traffic, thereby misleading the signature generation process [30].…”
Section: Related Work
confidence: 99%
“…Hamsa and LESG also proved that the presence of noise in the suspicious pool makes the problem NP-hard [8,26]. Still, little or no attention has been paid to filtering noise in the suspicious pool [30].…”
Section: Related Work
confidence: 99%