2022
DOI: 10.48550/arxiv.2201.05159
Preprint

Structured access: an emerging paradigm for safe AI deployment

Cited by 5 publications (4 citation statements)
References: 0 publications
“…Another objective would be preventing users from circumventing a model's restrictions to modify or reproduce it. In that regard, Shevlane [39] proposes structured access as an emerging paradigm that constructs a controlled, arm's length interaction between an AI system and its user.…”
Section: Theoretical Proposals to Regulate Frontier AI Models
confidence: 99%
“…Consistent with the recommendations of , we believe research access to publicly benchmark and document these models is necessary, even if the broader practices for model release will differ across model providers. To this end, we recommend patterns of developer-mediated access as potential middlegrounds to ensure these models can be benchmarked transparently as a form of structured model access (Shevlane, 2022).…”
Section: Missing Models
confidence: 99%
“…Additionally, for many contemporary LLMs, there are other individuation choices which must be made. For example, model developers (especially commercial developers) often provide 'structured access' to models (Shevlane, 2022), either via an API or a web application. This means that the function p_θ underlying these models is always evaluated with not only a tokenizer and inference procedure, but various additional elements.…”
Section: Models and Background Conditions: Using LLMs as a Case Study
confidence: 99%