2023
DOI: 10.2139/ssrn.4361607
Auditing Large Language Models: A Three-Layered Approach

Cited by 29 publications
(28 citation statements)
References 0 publications
“…There is a broad range of ways that LLMs can be customised or adapted to specific use-cases. The paradigm upon which LLMs are built is designed for adaptation via transfer learning, where models are first pre-trained, then adapted via fine-tuning or in-context demonstrations for a specific task [161]. Some recent work suggests LLMs require no additional training to 'role-play' as different individuals, adopting their worldview [9], mirroring their play in economic games [91,5] or predicting their voting preferences [11].…”
Section: Customisation Of LLMs Already Happens
“…Machine learning workflows affect personalisation as, in principle, every actor involved in creating an LLM application could exert control over how it is adapted or personalised [161]. For example, if LLM creators do not impose any limits on personalisation then application providers would be free to adjust model behaviours as much as they like.…”
Section: How People Interact With LLMs
confidence: 99%
“…External model audit, i.e. model evaluation by an independent, external auditor for the purpose of providing a judgement (or input to a judgement) about the safety of deploying a model, or training a new one (ARC Evals, 2023; Mökander et al., 2023; Raji et al., 2022b).…”
Section: Model Evaluation As Critical Governance Infrastructure
confidence: 99%