
Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models

Authors :
Liu, Xinyu
Zhao, Runsong
Huang, Pengcheng
Xiao, Chunyang
Li, Bei
Wang, Jingang
Xiao, Tong
Zhu, Jingbo
Publication Year :
2024

Abstract

Numerous recent works aim to extend the effective context length of language models, and various methods, tasks, and benchmarks exist to measure a model's effective memorization length. However, through thorough investigation, we find that existing evaluations of memorization capability have limitations. In this work, we survey these limitations extensively and propose a new method, the forgetting curve, to measure the memorization capability of long-context models. The forgetting curve is robust to the tested corpus and experimental settings, does not rely on prompts, and can be applied to models of any size. We apply the forgetting curve to a wide variety of models spanning both transformer and RNN/SSM based architectures. Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raising questions about the effective length of RNN/SSM based models. We also examine how our measurement differs from existing benchmarks and popular metrics across various models. Our code and results are available at https://github.com/1azybug/ForgettingCurve.
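The abstract describes a measurement that maps memorization against context distance but does not spell out the protocol here. The sketch below is only an illustrative assumption of what a forgetting-curve-style evaluation could look like: probe recall accuracy at increasing distances and collect (distance, accuracy) pairs. The `score_recall` interface and the toy decay model are hypothetical, not the authors' actual method; see the linked repository for the real implementation.

```python
# Hypothetical sketch of a forgetting-curve-style measurement:
# for each tested distance d, probe whether the model can still recall
# material placed d tokens earlier in the context, and record accuracy.
import random
from typing import Callable, List, Tuple


def forgetting_curve(
    score_recall: Callable[[int], float],  # assumed interface: recall accuracy at distance d
    distances: List[int],
) -> List[Tuple[int, float]]:
    """Return (distance, recall accuracy) pairs forming the curve."""
    return [(d, score_recall(d)) for d in distances]


if __name__ == "__main__":
    # Toy stand-in for a model: recall decays roughly linearly with distance.
    random.seed(0)
    toy = lambda d: max(0.0, 1.0 - d / 4096 + random.uniform(-0.02, 0.02))
    for d, acc in forgetting_curve(toy, [256, 512, 1024, 2048, 4096]):
        print(f"distance={d:5d}  recall~{acc:.2f}")
```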

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.04727
Document Type :
Working Paper