
Leveraging Large Language Models for Wireless Symbol Detection via In-Context Learning

Authors:
Abbas, Momin
Kar, Koushik
Chen, Tianyi
Publication Year:
2024

Abstract

Deep neural networks (DNNs) have made significant strides in tackling challenging tasks in wireless systems, especially when an accurate wireless model is not available. However, when available data is limited, traditional DNNs often yield subpar results due to underfitting. At the same time, large language models (LLMs), exemplified by GPT-3, have remarkably showcased their capabilities across a broad range of natural language processing tasks. But whether and how LLMs can benefit challenging non-language tasks in wireless systems remains unexplored. In this work, we propose to leverage the in-context learning (ICL) ability (a.k.a. prompting) of LLMs to solve wireless tasks in the low-data regime without any training or fine-tuning, unlike DNNs, which require training. We further demonstrate that the performance of LLMs varies significantly when employed with different prompt templates. To address this issue, we employ the latest LLM calibration methods. Our results reveal that using LLMs via ICL generally outperforms traditional DNNs on the symbol demodulation task and yields highly confident predictions when coupled with calibration techniques.

Comment: Accepted at IEEE GLOBECOM 2024
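As a rough illustration of the ICL setup described in the abstract, the sketch below builds a few-shot prompt from simulated pilot pairs for QPSK demodulation over an AWGN channel. The constellation, the prompt wording, and the `query_llm` call are illustrative assumptions; the paper's actual prompt templates and calibration methods are not reproduced here.

```python
import numpy as np

# QPSK constellation (Gray-mapped): bit pairs -> unit-energy complex symbols.
QPSK = {"00": (1 + 1j) / np.sqrt(2), "01": (-1 + 1j) / np.sqrt(2),
        "11": (-1 - 1j) / np.sqrt(2), "10": (1 - 1j) / np.sqrt(2)}

def make_examples(n, snr_db, rng):
    """Simulate n (received sample, transmitted bits) pilot pairs over an AWGN channel."""
    noise_std = 10 ** (-snr_db / 20)
    bits = rng.choice(list(QPSK), size=n)
    rx = np.array([QPSK[b] for b in bits])
    rx = rx + noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return list(zip(rx, bits))

def build_prompt(examples, query_rx):
    """Serialize pilot pairs as few-shot demonstrations, ending with the query sample."""
    lines = ["Map each received sample to its transmitted bit pair."]
    for rx, bits in examples:
        lines.append(f"Input: ({rx.real:+.2f}, {rx.imag:+.2f}) Output: {bits}")
    lines.append(f"Input: ({query_rx.real:+.2f}, {query_rx.imag:+.2f}) Output:")
    return "\n".join(lines)

rng = np.random.default_rng(0)
pilots = make_examples(8, snr_db=10, rng=rng)           # in-context demonstrations
[(query_rx, true_bits)] = make_examples(1, snr_db=10, rng=rng)
prompt = build_prompt(pilots, query_rx)
print(prompt)
print("ground truth:", true_bits)
# prediction = query_llm(prompt)  # hypothetical LLM completion call; no training or fine-tuning
```

In this setup the pilot pairs play the role of the low-data regime's labeled examples: the LLM is never updated, and (per the abstract) a calibration method would additionally be applied to the model's output probabilities to reduce sensitivity to the prompt template.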

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.00124
Document Type:
Working Paper