2020
DOI: 10.48550/arxiv.2010.08410
Preprint
Ease.ML/Snoopy: Towards Automatic Feasibility Studies for ML via Quantitative Understanding of "Data Quality for ML"

Cited by 1 publication (3 citation statements) | References 0 publications
“…The notion of lowest possible error generalizes the framework proposed by Cai et al [7] in that the lowest error is naturally achieved if the features of all websites are indistinguishable. Estimating the BER or its bounds from finite datasets is an extensively researched problem in the field of machine learning [5,12,15,19,20,50,55,61]. Inspired by Cover and Hart [12], Cherubin reduces the WF problem to a classification task and leverages the error of the Nearest Neighbor classifier as a proxy to estimate a lower bound on the error of any classifier operating on predefined features.…”
Section: Security Estimation of WF Defenses
Citation type: mentioning | Confidence: 99%
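
To make the NN-proxy idea concrete, here is a minimal sketch of such a BER lower-bound estimate. It inverts the Cover-Hart inequality e_NN ≤ R*(2 − c/(c−1)·R*) to bound the Bayes error rate R* from the observed 1-NN hold-out error. The synthetic two-Gaussian dataset, the scikit-learn 1-NN classifier, and the `nn_ber_lower_bound` helper are illustrative assumptions, not details taken from the quoted text or from Cherubin's method.

```python
# Sketch: lower-bound the Bayes error rate (BER) via the 1-NN error,
# following the Cover-Hart relation e_nn <= R* (2 - c/(c-1) * R*).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def nn_ber_lower_bound(X, y, c, test_size=0.3, seed=0):
    """Estimate a lower bound on the BER from the 1-NN hold-out error."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    e_nn = 1.0 - KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr).score(X_te, y_te)
    # Invert the Cover-Hart inequality for c classes; max(0, .) guards
    # against a negative discriminant caused by finite-sample noise.
    return ((c - 1) / c) * (1.0 - np.sqrt(max(0.0, 1.0 - (c / (c - 1)) * e_nn)))

# Toy usage on two overlapping Gaussian classes (hypothetical data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(1.5, 1.0, (500, 2))])
y = np.repeat([0, 1], 500)
print(f"estimated BER lower bound: {nn_ber_lower_bound(X, y, c=2):.3f}")
```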
“…Guided by the reasoning in observation (2) and the fact that the sum of the finite-sample bias and the transformation bias is positive and typically strictly larger than the tightness of the lower-bound estimate, it is natural that, even without knowledge of the additionally induced biases, NN-based estimators can only benefit from deterministic feature transformations in the finite-sample case. A natural consequence is that one can achieve a better estimate of the BER by inspecting not just a single representation, but trying many different transformations and reporting the minimal achieved BER [55]. Note that a theoretical counterpart for NN-based MI estimators (e.g., a negative finite-sample bias) is missing.…”
Section: Theoretical Reasoning
Citation type: mentioning | Confidence: 99%
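
The "minimum over transformations" recipe attributed to [55] could then be sketched as follows, reusing `nn_ber_lower_bound` and the data `X`, `y` from the sketch above. The specific transformations (standardization, a 1-D PCA projection) are hypothetical stand-ins, not the representations evaluated in [55]; the point, per the quoted reasoning, is only that the estimator can benefit from trying additional representations and keeping the smallest estimate.

```python
# Sketch: evaluate the NN-based BER lower bound on several deterministic
# feature representations and report the minimum over all of them.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def min_ber_over_transforms(X, y, c):
    transforms = {
        "identity": lambda Z: Z,
        "standardized": lambda Z: StandardScaler().fit_transform(Z),
        "pca_1d": lambda Z: PCA(n_components=1).fit_transform(Z),
    }
    estimates = {name: nn_ber_lower_bound(t(X), y, c)
                 for name, t in transforms.items()}
    return min(estimates.values()), estimates

best, per_transform = min_ber_over_transforms(X, y, c=2)
for name, est in per_transform.items():
    print(f"{name:>12}: {est:.3f}")
print(f"tightest BER lower-bound estimate: {best:.3f}")
```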