2021
DOI: 10.48550/arxiv.2104.08164
Preprint

Editing Factual Knowledge in Language Models

Abstract: The factual knowledge acquired during pretraining and stored in the parameters of Language Models (LMs) can be useful in downstream tasks (e.g., question answering or textual inference). However, some facts can be incorrectly induced or become obsolete over time. We present KNOWLEDGEEDITOR, a method which can be used to edit this knowledge and, thus, fix 'bugs' or unexpected predictions without the need for expensive retraining or fine-tuning. Besides being computationally efficient, KNOWLEDGEEDITOR does not re…

Cited by 8 publications (14 citation statements)
References 22 publications

“…The provable point repair algorithm [38] finds a provably minimal repair satisfying the safety specification over a finite set of points. Cao et al [5] propose to train a hypernetwork with constrained optimization to modify a fact without affecting the rest of the knowledge; the hypernetwork is then used to predict the weight update at test time.…”
Section: Network Editing (mentioning)
confidence: 99%
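A minimal PyTorch sketch of the hypernetwork idea this statement describes, under the simplifying assumption that the editor conditions only on the gradient of a single target weight matrix and predicts a rank-1 update; the names (EditHyperNetwork, grad_W) are illustrative rather than from the paper, and the constrained training objective used by the actual method is omitted:

```python
import torch
import torch.nn as nn

class EditHyperNetwork(nn.Module):
    """Predicts a rank-1 update alpha * outer(u, v) for one d_out x d_in weight matrix."""

    def __init__(self, d_in: int, d_out: int, hidden: int = 128):
        super().__init__()
        # Condition only on the flattened gradient of the target layer; the real
        # method conditions on richer signals and is trained with a constrained objective.
        self.encoder = nn.Sequential(nn.Linear(d_in * d_out, hidden), nn.Tanh())
        self.to_u = nn.Linear(hidden, d_out)
        self.to_v = nn.Linear(hidden, d_in)
        self.to_alpha = nn.Linear(hidden, 1)

    def forward(self, grad_W: torch.Tensor) -> torch.Tensor:
        h = self.encoder(grad_W.flatten())
        u, v = self.to_u(h), self.to_v(h)
        alpha = torch.tanh(self.to_alpha(h))      # keep the edit small
        return alpha * torch.outer(u, v)          # same shape as the target weight


# At test time: take the gradient of the loss on the single revised fact and
# apply the predicted update to the otherwise frozen layer.
d_in, d_out = 64, 32
layer = nn.Linear(d_in, d_out)
hyper = EditHyperNetwork(d_in, d_out)

x, y = torch.randn(1, d_in), torch.randn(1, d_out)
loss = nn.functional.mse_loss(layer(x), y)        # stand-in for the LM loss on the revision
(grad_W,) = torch.autograd.grad(loss, layer.weight)
with torch.no_grad():
    layer.weight += hyper(grad_W)                 # edited model; the hypernetwork itself is trained separately
```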
“…Constrained finetuning tackles this problem by enforcing a norm-based constraint on the model's weights θ while finetuning the model on the dataset of revisions D, so that it minimally interferes with the facts that should not be modified. However, such a constraint on the parameter space ignores the highly non-linear nature of LMs (De Cao et al., 2021).…”
Section: Finetuning (mentioning)
confidence: 99%
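As an illustration of the norm-based constraint mentioned above (a generic sketch, not any particular paper's recipe; `constrained_finetune_step`, `theta_0`, and `delta` are names introduced here), one simple variant projects the parameters back into an L2 ball around the pretrained weights after every optimizer step:

```python
import torch

def constrained_finetune_step(model, loss_fn, batch, theta_0, optimizer, delta=1e-2):
    """One finetuning step on a batch of revisions from D, followed by a projection
    that keeps each parameter tensor within an L2 ball of radius `delta` around its
    pretrained value (a crude way to 'minimally interfere' with other facts)."""
    optimizer.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), theta_0):
            diff = p - p0
            norm = diff.norm()
            if norm > delta:
                p.copy_(p0 + diff * (delta / norm))   # project back onto the constraint set
    return loss.item()

# theta_0 is a frozen copy of the pretrained parameters:
# theta_0 = [p.detach().clone() for p in model.parameters()]
```

The citation statement's caveat applies to exactly this kind of construction: the ball is defined in parameter space, so a small weight change can still produce a large, unintended change in the model's predictions.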
“…Several works have already approached improving LMs through the lens of KBs: Petroni et al. (2019); Wang et al. (2020); Heinzerling and Inui (2021); Sung et al. (2021). Many of these works range from updating factoids stored within the parameters of LMs (De Cao et al., 2021; Mitchell et al., 2021; Hase et al., 2021) to creating new methods for extracting factual knowledge (Petroni et al., 2019). Despite significant progress towards achieving parity between LMs and KBs, LMs still lack specific aspects that KBs have.…”
Section: Introduction (mentioning)
confidence: 99%
“…In general, GPT-3 output has a loose relationship with reality; nowhere in its training process is truth prioritized over falsehood. Getting GPT-3-type systems to produce more factual output is an active area of research [1,10,13].…”
Section: Article (GPT-3 Text) (mentioning)
confidence: 99%
“…Last year I wrote a book, aimed at a general audience, that explores how data-driven algorithms have impacted the news industry and our ability to separate fact from fiction [6]. This article zeros in on, and amplifies, some of the more mathematical aspects of that story in what I hope will be both informative and engaging to a mathematical audience. As you'll soon see, there are many fun ingredients at play here, ranging from elementary notions (fractions, linear functions, and weighted sums) to intermediate-level concepts (eigenvalues and Shannon information) to sophisticated uses of probability theory, network analysis, and deep learning.…”
Section: Introduction (mentioning)
confidence: 99%