Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS 2021)
DOI: 10.1145/3460120.3484743
Chunk-Level Password Guessing: Towards Modeling Refined Password Composition Representations

Cited by 17 publications (9 citation statements)
References 46 publications
“…Moreover, the pre-trained language models [85][86][87][88] have been in full swing in the field of natural language processing in recent years. However, only one work [61] has discussed the paradigm of pre-training/finetuning regarding password guessing. Therefore, it might also be a good idea to combine the powerful general language modeling capabilities of pre-trained language models with the existing password guessing efforts.…”
Section: Combining Traditional and Deep Learning Methods (mentioning)
confidence: 99%
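As a concrete illustration of the pre-training/fine-tuning paradigm this statement refers to, the sketch below fine-tunes an off-the-shelf pre-trained language model on a password list and then samples candidate guesses. It is only a minimal example under assumed choices: the HuggingFace transformers library, GPT-2 as the pre-trained model, and a three-password toy training list; none of these specifics come from the cited works.

```python
# Minimal sketch: fine-tune a pre-trained causal LM (GPT-2) on leaked passwords,
# then sample candidate guesses. All model/library choices are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

passwords = ["password123", "qwerty2021", "iloveyou!"]  # stand-in for a real leaked list
batch = tokenizer(passwords, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100         # do not compute loss on padding

model.train()
outputs = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=labels)                       # one fine-tuning step
outputs.loss.backward()
optimizer.step()

# Sample candidate passwords from the fine-tuned model.
model.eval()
prompt = tokenizer(tokenizer.bos_token, return_tensors="pt")
samples = model.generate(prompt["input_ids"], do_sample=True, max_length=16,
                         num_return_sequences=5, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(samples, skip_special_tokens=True))
```

In a realistic setting one would iterate this update over many batches of a large leak and rank generated candidates by model probability; the single step above is only meant to show how the pre-trained weights are reused for password guessing.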
“…As research progressed, related techniques gradually flowed from NLP to other fields, such as computer vision (CV), speech, biology, chemistry, etc. Similarly, there are some Transformer-based approaches in the research related to trawling password guessing [60,61].…”
Section: Recurrent Neural Network (RNN) (mentioning)
confidence: 99%
“…To show this benefit with these distributions, we allow the attacker to attack using TG-I (the most user information), but the defending site to blocklist passwords guessable in 10^6 guesses using TG-I′′, TG-I′′′, or PCFG (i.e., less user information), preventing the user from setting such a password. We select a per-user blocklist of size 10^6 in accordance with blocklist size recommendations (e.g., [34]) and since passwords guessable in 10^6 guesses are typically categorized as weak by password strength meters (e.g., [22], [43]). To estimate the effects of this blocklisting, we formulate P(≤ r) using two line segments, one for r ≤ 10^6 in which P(≤ r) is suppressed by the blocklist, and one for r > 10^6 in which the ground lost when r ≤ 10^6 is recovered.…”
Section: Estimating the Distribution of … (mentioning)
confidence: 99%
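To make the two-line-segment formulation quoted above easier to picture, here is a small illustrative sketch of such a piecewise-linear P(≤ r): the first segment (r ≤ 10^6) is suppressed by the blocklist and the second gradually recovers the lost ground. The parameter names (p_block, p_total, max_guesses) and their values are hypothetical and are not taken from the cited paper.

```python
def piecewise_cdf(r: float,
                  p_block: float = 0.05,     # success probability at the blocklist boundary (hypothetical)
                  p_total: float = 0.60,     # success probability at max_guesses (hypothetical)
                  blocklist_size: float = 1e6,
                  max_guesses: float = 1e14) -> float:
    """Two-line-segment sketch of P(<= r) under a 10^6-guess blocklist.

    Segment 1 (r <= blocklist_size): a shallow slope, since passwords guessable
    within the blocklist are disallowed and the curve is suppressed.
    Segment 2 (r > blocklist_size): a steeper slope through which the ground
    lost in segment 1 is gradually recovered.
    """
    if r <= blocklist_size:
        return p_block * (r / blocklist_size)
    return p_block + (p_total - p_block) * (r - blocklist_size) / (max_guesses - blocklist_size)

# Example: heavily suppressed early on, recovering as the guess budget grows.
print(piecewise_cdf(1e5), piecewise_cdf(1e6), piecewise_cdf(1e13))
```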