Large Language Models Are Zero-Shot Time Series Forecasters

Authors:
Gruver, Nate
Finzi, Marc
Qiu, Shikai
Wilson, Andrew Gordon
Publication Year:
2023

Abstract

By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks. To facilitate this performance, we propose procedures for effectively tokenizing time series data and converting discrete distributions over tokens into highly flexible densities over continuous values. We argue the success of LLMs for time series stems from their ability to naturally represent multimodal distributions, in conjunction with biases for simplicity and repetition, which align with the salient features of many time series, such as repeated seasonal trends. We also show how LLMs can naturally handle missing data without imputation through non-numerical text, accommodate textual side information, and answer questions to help explain predictions. While we find that increasing model size generally improves performance on time series, we show that GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers and because of poor uncertainty calibration, which is likely the result of alignment interventions such as RLHF.

Comment: NeurIPS 2023. Code available at: https://github.com/ngruver/llmtime
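The abstract sketches the core mechanism: each value is rendered as a string of digits so that forecasting reduces to next-token prediction, and the model's token probabilities are later converted back into densities over continuous values. The snippet below is a minimal illustration of such a digit-string encoding, not the authors' exact procedure; the separators, precision, and helper names (encode_series, decode_value) are illustrative assumptions, and the reference implementation is in the linked llmtime repository.

# Illustrative sketch of a digit-string encoding for LLM forecasting.
# Constants and names here are assumptions; see the llmtime repo for
# the procedure actually used in the paper.

def encode_series(values, precision=2, time_sep=" , ", digit_sep=" "):
    """Render a numeric series as text so an LLM can forecast it via
    next-token prediction. Each value is written with a fixed number of
    decimal digits, the decimal point is dropped, and the digits are
    space-separated so BPE tokenizers see roughly one token per digit."""
    tokens = []
    for v in values:
        digits = f"{abs(v):.{precision}f}".replace(".", "")
        sign = "-" if v < 0 else ""
        tokens.append(sign + digit_sep.join(digits))
    return time_sep.join(tokens)

def decode_value(text, precision=2, digit_sep=" "):
    """Invert the encoding for a single forecasted value."""
    sign = -1.0 if text.strip().startswith("-") else 1.0
    digits = text.replace("-", "").replace(digit_sep, "").strip()
    return sign * int(digits) / (10 ** precision)

if __name__ == "__main__":
    series = [0.65, 0.72, 0.81]
    print(encode_series(series))   # "0 6 5 , 0 7 2 , 0 8 1"
    print(decode_value("9 0"))     # 0.9

Separating digits with spaces matters for GPT-3-style BPE tokenizers, which otherwise tend to merge multi-digit chunks into single tokens; this is related to the number-tokenization issue the abstract raises as one reason GPT-4 can underperform GPT-3 on these tasks.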

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2310.07820
Document Type:
Working Paper