Tight and Scalable Side-Channel Attack Evaluations through Asymptotically Optimal Massey-like Inequalities on Guessing Entropy
2021 · DOI: 10.3390/e23111538

Abstract: The bounds presented at CHES 2017 based on Massey’s guessing entropy represent the most scalable side-channel security evaluation method to date. In this paper, we present an improvement of this method, by determining the asymptotically optimal Massey-like inequality and then further refining it for finite support distributions. The impact of these results is highlighted for side-channel attack evaluations, demonstrating the improvements over the CHES 2017 bounds.
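The bounding approach described in the abstract can be illustrated with a minimal sketch (hypothetical helper names, not the authors' code): Massey's classic inequality lower-bounds the guessing entropy G of a distribution from its Shannon entropy H, and the CHES 2017 method and this paper's refinements build on bounds of this shape.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def guessing_entropy(p):
    """Exact guessing entropy: expected number of guesses when
    candidates are tried in decreasing order of probability."""
    q = np.sort(p)[::-1]
    return np.sum((np.arange(len(q)) + 1) * q)

def massey_lower_bound(H):
    """Massey's classic bound G >= 2**(H-2) + 1, valid for H >= 2 bits.
    The paper refines bounds of this Massey-like form."""
    return 2 ** (H - 2) + 1

# Toy posterior over a 4-bit subkey (illustrative only).
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(16))
H = shannon_entropy(p)
G = guessing_entropy(p)
assert H < 2 or massey_lower_bound(H) <= G
```

The practical appeal of entropy-based bounds is scalability: Shannon entropy is additive over independent subkeys, so a bound for a full key can be assembled from per-byte distributions without enumerating the full key space.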

Cited by 7 publications (4 citation statements). References 20 publications.
“…One perspective is to provide similar optimal regions relating two arbitrary randomness measures. Of course, by (6), Fano regions such as H_α vs. P_e can be trivially reinterpreted as regions H_α vs. H_∞ (see, e.g., Figure 2 in [42] for the region H vs. H_∞). Using some more involved derivations, the authors of [46] have investigated the optimal regions H vs. H_2 and, more generally, the authors of [47,48] have investigated the optimal regions between two α-entropies of different orders.…”
Section: Discussion
confidence: 99%
See 1 more Smart Citation
“…One perspective is to provide similar optimal regions relating two arbitrary randomness measures. Of course, by (6), Fano regions such as H α vs. P e can be trivially reinterpreted as regions H α vs. H ∞ (see, e.g., Figure 2 in [42] for the region H vs. H ∞ ). Using some more involved derivations, the authors of [46] have investigated the optimal regions H vs. H 2 and, more generally, the authors of [47,48] have investigated the optimal regions between two αentropies of different orders.…”
Section: Discussionmentioning
confidence: 99%
“…Massey [3] has shown that the guessing entropy G is exponentially increasing as entropy H increases. A recent improved inequality is [5,6] G > …”
Section: Guessing Entropy
confidence: 99%
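The exponential relationship in this statement can be checked in closed form on the uniform distribution (a worked check, not code from the cited papers): for a uniform distribution over N = 2**n keys, H = n bits and the exact guessing entropy is G = (N + 1)/2, roughly 2**(H-1), so Massey's bound 2**(H-2) + 1 is tight up to about a factor of two there.

```python
# Uniform distribution over N = 2**n keys: H = n bits, G = (N + 1) / 2.
for n in [8, 16, 32]:
    N = 2 ** n
    G = (N + 1) / 2          # exact guessing entropy, ~2**(n-1)
    massey = 2 ** (n - 2) + 1  # Massey's classic lower bound
    assert massey <= G
```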
“…Choudary et al [CP17,TCRP21] propose an efficient method to bound Massey's guessing entropy (GM) that scales well with large keys, though GM and rank are different metrics [Gro18]. Radulescu et al [RPC22] show that GM is generally a lower bound to the empirical guessing entropy computed as the average rank over multiple experiments (each using [GGP+15]) or the Guessing Entropy Estimation Algorithm by Zhang et al [ZDF20], which Young et al [YMO22] outperform with classic rank estimation (based on [MOOS15]).…”
Section: Key Rank
confidence: 99%
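The empirical guessing entropy mentioned here is simply the average rank of the correct key over repeated attack experiments. A minimal sketch of that estimator (hypothetical helper names; the cited works use far more sophisticated rank-estimation algorithms for large keys):

```python
import numpy as np

def key_rank(scores, correct_key):
    """Rank (1-based) of the correct key when candidates are
    ordered by decreasing attack score."""
    order = np.argsort(scores)[::-1]
    return int(np.where(order == correct_key)[0][0]) + 1

def empirical_guessing_entropy(score_matrix, correct_key):
    """Average rank of the correct key over repeated experiments."""
    return float(np.mean([key_rank(s, correct_key) for s in score_matrix]))

# Illustrative data: 50 experiments over 256 key candidates,
# with the correct key (42) receiving a score boost.
rng = np.random.default_rng(1)
scores = rng.normal(size=(50, 256))
scores[:, 42] += 2.0
ge = empirical_guessing_entropy(scores, 42)
```

This per-experiment averaging is what distinguishes the empirical guessing entropy from a single-experiment key rank, which is the point of the GM-versus-rank caveat in the quote above.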
“…In general, key ranking algorithms are slow for large keys (see, for example, the comparison by Grosso [Gro18]). Recent work has improved scalability, optimizing the Histogram Enumeration method to obtain linear scaling with respect to the key size [Gro18], or taking the different approach of computing bounds for Massey's guessing entropy [CP17,TCRP21]. We compare MCRank with both approaches, but the comparison with [CP17,TCRP21] is only qualitative, because the Guessing Entropy is a different metric than the rank [Gro18].…”
Section: Very Large Keys
confidence: 99%