Proceedings 2016 Network and Distributed System Security Symposium 2016
DOI: 10.14722/ndss.2016.23199
Pitfalls in Designing Zero-Effort Deauthentication: Opportunistic Human Observation Attacks

Abstract: Deauthentication is an important component of any authentication system. The widespread use of computing devices in daily life has underscored the need for zero-effort deauthentication schemes. However, the quest for eliminating user effort may lead to hidden security flaws in the authentication schemes. As a case in point, we investigate a prominent zero-effort deauthentication scheme, called ZEBRA, which provides an interesting and useful solution to a difficult problem as demonstrated in the original paper…

Cited by 22 publications (24 citation statements); references 22 publications.
“…The attacker can either try to bypass the system by providing data from his own smartwatch, or try to use the victim's smartwatch obtained in some way (e.g., by stealing it, or if the victim leaves it behind). • More Powerful Adversaries: Furthermore, a powerful adversary may be aware of WACA and try to defeat it using special tools and skills, by imitating legitimate users [19], [20] or by launching statistical attacks [21], [22]. This powerful adversary (insider or outsider) can be a human or a trained bot.…”
Section: Design Goals, Assumptions, and Adversary Model
confidence: 99%
“…In this subsection, we evaluate the performance of WACA against two powerful attacks: imitation attacks [19], [20] and statistical attacks [21], [22]. In these attacks, the attacker is aware of WACA and can try to defeat it using special tools and skills.…”
Section: B. Advanced Attacks on WACA with More Powerful Adversaries
confidence: 99%