
Fine-Tuning Language Models Using Formal Methods Feedback

Authors:
Yang, Yunhao
Bhatt, Neel P.
Ingebrand, Tyler
Ward, William
Carr, Steven
Wang, Zhangyang
Topcu, Ufuk

Publication Year: 2023

Abstract

Although pre-trained language models encode generic knowledge beneficial for planning and control, they may fail to generate appropriate control policies for domain-specific tasks. Existing fine-tuning methods use human feedback to address this limitation; however, sourcing human feedback is labor-intensive and costly. We present a fully automated approach to fine-tuning pre-trained language models for applications in autonomous systems, bridging the gap between generic knowledge and domain-specific requirements while reducing cost. The method synthesizes automaton-based controllers from pre-trained models guided by natural language task descriptions. These controllers are verifiable against independently provided specifications within a world model, which can be abstract or obtained from a high-fidelity simulator. Controllers with higher compliance with the desired specifications receive higher ranks, guiding the iterative fine-tuning process. We provide quantitative evidence, primarily in autonomous driving, to demonstrate the method's effectiveness across multiple tasks. The results indicate an improvement in the percentage of specifications satisfied by the controller from 60% to 90%.
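The ranking step described above can be sketched in a few lines. In this minimal sketch, all names (`rollout`, `compliance`, `rank_controllers`, the toy 1-D world model, and the example specifications) are illustrative assumptions, not the paper's actual implementation: a controller is modeled as a state-to-action policy, a specification as a predicate over the trajectory it induces in the world model, and controllers are ranked by the fraction of specifications their trajectories satisfy.

```python
# Hypothetical sketch: rank candidate controllers by how many
# specifications their simulated trajectories satisfy in a world model.
# None of these names come from the paper; they are for illustration only.

def rollout(controller, step, init_state, horizon=10):
    """Simulate the controller in the world model and collect the trajectory."""
    state, trajectory = init_state, [init_state]
    for _ in range(horizon):
        state = step(state, controller(state))
        trajectory.append(state)
    return trajectory

def compliance(controller, step, init_state, specs):
    """Fraction of specifications satisfied by the induced trajectory."""
    trajectory = rollout(controller, step, init_state)
    return sum(spec(trajectory) for spec in specs) / len(specs)

def rank_controllers(controllers, step, init_state, specs):
    """Order controllers by compliance score; the ranking guides fine-tuning."""
    scored = [(compliance(c, step, init_state, specs), i)
              for i, c in enumerate(controllers)]
    return sorted(scored, reverse=True)

# Toy example: 1-D position, with specifications for staying in bounds
# and making forward progress (loosely evoking a driving task).
step = lambda s, a: s + a                              # trivial world model
specs = [lambda tr: all(0 <= x <= 10 for x in tr),     # stay within bounds
         lambda tr: tr[-1] > tr[0]]                    # make forward progress
controllers = [lambda s: 1, lambda s: -1]              # forward vs. backward
ranking = rank_controllers(controllers, step, 0, specs)
```

In the paper's pipeline, the higher-ranked controllers would then serve as preferred examples in the iterative fine-tuning loop, replacing the human preference labels that ranking-based fine-tuning methods normally require.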

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2310.18239
Document Type: Working Paper