
Large Language Models and Logical Reasoning

Authors: Robert Friedman
Source: Encyclopedia, Vol 3, Iss 2, Pp 687-697 (2023)
Publication Year: 2023
Publisher: MDPI AG

Abstract

In deep learning, large language models are typically trained on a text corpus taken as representative of current knowledge. However, natural language is not an ideal form for the reliable communication of concepts. Formal logical statements are preferable, since they support verifiability, reliability, and applicability. Another reason for this preference is that natural language did not develop for an efficient and reliable flow of information and knowledge; it is instead an evolutionary adaptation shaped by a prior set of natural constraints. As a formally structured language, logical statements are also more interpretable. A statement may be expressed informally in natural language, but a formalized logical statement is expected to follow a stricter set of rules, such as the use of symbols for the logic-based operators that connect simple statements into verifiable propositions.
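The abstract's closing point, that logical operators connect simple statements into verifiable propositions, can be illustrated with a minimal sketch that is not from the article itself: a propositional formula such as ((p → q) ∧ p) → q can be checked mechanically over every truth assignment, which is the kind of verification a natural language statement does not admit. The function names below (implies, proposition) are illustrative assumptions.

```python
from itertools import product

# Material implication: p -> q is equivalent to (not p) or q.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# A formalized proposition built from simple statements and logical operators:
# ((p -> q) and p) -> q   (modus ponens stated as a single formula).
def proposition(p: bool, q: bool) -> bool:
    return implies(implies(p, q) and p, q)

# Verify the proposition by exhaustively checking every truth assignment.
for p, q in product([False, True], repeat=2):
    print(f"p={p!s:<5} q={q!s:<5} -> {proposition(p, q)}")

# Every row evaluates to True, so the formula is a tautology: a verifiable proposition.
```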

Details

Language: English
ISSN: 2673-8392
Volume: 3
Issue: 2
Database: Directory of Open Access Journals
Journal: Encyclopedia
Publication Type: Academic Journal
Accession number: edsdoj.852ff97cd068498abb973652b900212b
Document Type: Article
Full Text: https://doi.org/10.3390/encyclopedia3020049