Gimme That Model!: A Trusted ML Model Trading Protocol
2020 · Preprint
DOI: 10.48550/arxiv.2003.00610

Cited by 2 publications (3 citation statements)
References 2 publications
“…As outlined above, consumers of servitized DL models in particular need to be aware of the risks their black-box nature poses and establish similarly strict protocols as with human operators for similar decisions. As the market of AIaaS is only emerging, guidelines for responsible transfer learning have yet to be established (e.g., Amorós et al 2020).…”
Section: Resource Limitations and Transfer Learning
Mentioning confidence: 99%
“…For the latter, a pre-trained general model is tuned for its new task with comparably few specific observations in a process called transfer learning [26]. However, acquiring and using third-party pre-trained models, such as NLP models for chatbots, often means using a black box, which can exhibit any kind of prejudicial behavior, such as local social or geographical biases, or even susceptibility to adversarial attacks [27].…”
Section: AI1: Model and Training Data Selection
Mentioning confidence: 99%
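The statement above describes transfer learning: a pre-trained, third-party model is adapted to a new task using few task-specific observations, even though its internals remain a black box to the consumer. The following is a minimal sketch of that workflow (not taken from the cited papers), assuming a recent PyTorch/torchvision installation; the choice of ResNet-18, the 5-class head, and the dummy batch are purely illustrative.

```python
# Minimal transfer-learning sketch: freeze a pre-trained backbone and train
# only a new task-specific head on a small amount of data.
import torch
import torch.nn as nn
import torchvision.models as models

# Load a third-party pre-trained model; its learned weights are effectively a
# black box to the consumer (ResNet-18 chosen here only for illustration).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze all pre-trained parameters so only the new head is tuned.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new downstream task
# (a hypothetical 5-class problem).
num_classes = 5
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch standing in for the "comparably few
# specific observations" of the new task.
inputs = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(backbone(inputs), labels)
loss.backward()
optimizer.step()
```

Because the backbone stays frozen, any bias or adversarial vulnerability baked into the opaque pre-trained weights carries over unchanged to the downstream task, which is exactly the risk the quoted statement points to.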
“…Likewise, we show how these challenges should be solved and whether they will occur in the short or long term in an intelligent RPA implementation project. … this process also causes trust and compliance concerns [26,27]. In this context, companies need to build trust with vendors and developers that their AI models are unbiased and robust against adversarial attacks [12].…”
Section: Overview of Challenges
Mentioning confidence: 99%