DiLA: Enhancing LLM Tool Learning with Differential Logic Layer

Authors:
Zhang, Yu
Zhen, Hui-Ling
Pei, Zehua
Lian, Yingzhao
Yin, Lihao
Yuan, Mingxuan
Yu, Bei
Publication Year:
2024

Abstract

Considering the challenges faced by large language models (LLMs) in logical reasoning and planning, prior efforts have sought to augment LLMs with access to external solvers. While progress has been made on simple reasoning problems, solving classical constraint satisfaction problems, such as the Boolean Satisfiability Problem (SAT) and the Graph Coloring Problem (GCP), remains difficult for off-the-shelf solvers due to their intricate expressions and exponential search spaces. In this paper, we propose a novel differential logic layer-aided language modeling (DiLA) approach, in which logical constraints are integrated into the forward and backward passes of a network layer, offering another option for LLM tool learning. In DiLA, the LLM transforms the language description into logic constraints and identifies an initial solution of the highest quality, while the differential logic layer iteratively refines the LLM-prompted solution. Using the logic layer as a bridge, DiLA enhances the logical reasoning ability of LLMs on a range of reasoning problems encoded by Boolean variables, guaranteeing the efficiency and correctness of the solution process. We evaluate DiLA on two classic reasoning problems and empirically demonstrate that it consistently outperforms existing prompt-based and solver-aided approaches.

Comment: arXiv admin note: text overlap with arXiv:2305.12295 by other authors
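The abstract does not spell out how the logic layer is implemented. The sketch below is a minimal illustration of the general idea, assuming a common relaxation in which each Boolean variable becomes a sigmoid-parameterized probability and each CNF clause a smooth penalty that vanishes when the clause is satisfied; gradient descent then refines an LLM-proposed initial assignment. The function names (`clause_loss`, `refine_assignment`) and the optimization details are hypothetical and not taken from the paper.

```python
# Minimal sketch of a differentiable logic layer refining a SAT assignment.
# Assumptions (not from the abstract): clauses are relaxed into a smooth
# penalty, and gradient descent refines an LLM-proposed initial guess.
import torch

def clause_loss(probs, clauses):
    """Smooth penalty: a clause is violated iff every literal is false,
    so its relaxed violation is the product of (1 - literal probability)."""
    total = probs.new_zeros(())
    for clause in clauses:  # clause: list of ints, +v / -v (DIMACS-style, 1-indexed)
        lits = torch.stack([probs[abs(l) - 1] if l > 0 else 1 - probs[abs(l) - 1]
                            for l in clause])
        total = total + torch.prod(1 - lits)
    return total

def refine_assignment(initial, clauses, steps=500, lr=0.1):
    """Gradient-descend on relaxed variables, starting from the LLM's guess."""
    # Map the Boolean guess to logits so the sigmoid starts near 0 or 1.
    logits = torch.tensor([2.0 if b else -2.0 for b in initial], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        probs = torch.sigmoid(logits)
        rounded = [bool(p > 0.5) for p in probs]
        # Stop as soon as the rounded assignment satisfies every clause.
        if all(any(rounded[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return rounded
        opt.zero_grad()
        clause_loss(probs, clauses).backward()
        opt.step()
    return [bool(p > 0.5) for p in torch.sigmoid(logits)]

# (x1 v ~x2) & (x2 v x3) & (~x1 v ~x3), with a deliberately poor initial guess.
clauses = [[1, -2], [2, 3], [-1, -3]]
print(refine_assignment([False, False, False], clauses))
```

In this toy relaxation, the backward pass through `clause_loss` plays the role the abstract assigns to the logic layer: gradients from violated clauses push the variable probabilities toward a satisfying assignment, while a good LLM-provided starting point shortens the search. DiLA's actual layer and encoding may differ.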

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2402.11903
Document Type:
Working Paper