
Autoformalization with Large Language Models

Authors:
Wu, Yuhuai
Jiang, Albert Q.
Li, Wenda
Rabe, Markus N.
Staats, Charles
Jamnik, Mateja
Szegedy, Christian
Publication Year:
2022

Abstract

Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence. While the long-term goal of autoformalization seemed elusive for a long time, we show that large language models provide new prospects towards this goal. We make the surprising observation that LLMs can correctly translate a significant portion ($25.3\%$) of mathematical competition problems to formal specifications in Isabelle/HOL. We demonstrate the usefulness of this process by improving a previously introduced neural theorem prover via training on these autoformalized theorems. Our methodology results in a new state-of-the-art result on the MiniF2F theorem proving benchmark, improving the proof rate from $29.6\%$ to $35.2\%$.

Comment: 44 pages
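
To give a concrete sense of what "translating to a formal specification" means here: the paper targets Isabelle/HOL, but the minimal sketch below uses Lean 4 with Mathlib and a hypothetical competition-style problem (not one drawn from the paper). The theorem name and the informal statement are illustrative assumptions; the proof is left as `sorry`, since the autoformalization task described is producing the formal statement, not the proof.

```lean
import Mathlib

-- Illustrative example (not taken from the paper):
-- Informal statement: "Show that n^2 + n is even for every natural number n."
-- Below is the kind of formal specification an autoformalization system
-- would be expected to produce from that sentence.
theorem sq_add_self_even (n : ℕ) : Even (n ^ 2 + n) := by
  sorry  -- proof intentionally omitted; only the statement is the translation target
```
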

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2205.12615
Document Type:
Working Paper