
Memory-Based Model Editing at Scale

Authors :
Mitchell, Eric
Lin, Charles
Bosselut, Antoine
Manning, Christopher D.
Finn, Chelsea
Publication Year :
2022

Abstract

Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of base (pre-trained) models to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but also suffer from insufficient expressiveness: they struggle to accurately model an edit's intended scope (examples affected by the edit), leading to inaccurate predictions for test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact-checking, and dialogue generation. We find that only SERAC achieves high performance on all three problems, consistently outperforming existing approaches to model editing by a significant margin. Code, data, and additional project information will be made available at https://sites.google.com/view/serac-editing.

Comment: ICML 2022. Project site at https://sites.google.com/view/serac-editing
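The abstract describes the SERAC mechanism only at a high level: edits are stored in an explicit memory, and the system reasons over them to modulate the base model's predictions. The sketch below illustrates one plausible reading of that inference flow, assuming a learned scope classifier that decides whether an input falls within the scope of a stored edit and a counterfactual model that conditions on the retrieved edit. All names (SeracEditor, scope_classifier, counterfactual_model) are illustrative placeholders, not the authors' actual API.

```python
# Hypothetical sketch of semi-parametric editing in the style described by the
# abstract: edits live in an explicit memory, a scope classifier routes in-scope
# inputs to a counterfactual model, and out-of-scope inputs fall through to the
# unmodified base model. Component names and signatures are assumptions.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SeracEditor:
    base_model: Callable[[str], str]                # frozen pre-trained model
    scope_classifier: Callable[[str, str], float]   # score(edit, input) in [0, 1]
    counterfactual_model: Callable[[str, str], str] # predict(input, edit)
    memory: List[str] = field(default_factory=list) # explicit store of edits
    threshold: float = 0.5

    def apply_edit(self, edit: str) -> None:
        # Editing only appends to memory; base-model weights are never changed.
        self.memory.append(edit)

    def predict(self, x: str) -> str:
        if self.memory:
            # Retrieve the stored edit most relevant to the input.
            best_edit, best_score = max(
                ((e, self.scope_classifier(e, x)) for e in self.memory),
                key=lambda pair: pair[1],
            )
            if best_score >= self.threshold:
                # In-scope: the counterfactual model overrides the base model.
                return self.counterfactual_model(x, best_edit)
        # Out-of-scope, or no edits stored yet: defer to the base model.
        return self.base_model(x)
```

Because edits accumulate in memory rather than being written into the base model's weights, many edits can coexist without the interference that purely parametric editors exhibit, which is the capacity advantage the abstract claims.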

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2206.06520
Document Type :
Working Paper