
Unnatural Languages Are Not Bugs but Features for LLMs

Authors:
Duan, Keyu
Zhao, Yiran
Feng, Zhili
Ni, Jinjie
Pang, Tianyu
Liu, Qian
Cai, Tianle
Dou, Longxu
Kawaguchi, Kenji
Goyal, Anirudh
Kolter, J. Zico
Shieh, Michael Qizhe
Publication Year:
2025

Abstract

Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts; this is often viewed as a bug in aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages - strings that appear incomprehensible to humans but retain semantic meaning for LLMs - contain latent features usable by models. Notably, these latent features generalize across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on par with those trained on natural language, achieving an average win rate of 49.71 on Length-Controlled AlpacaEval 2.0 across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering out noise and inferring contextual meaning from the words that remain.
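
For illustration only: the abstract does not specify how the unnatural instruction variants were constructed, so the sketch below assumes a simple word-order shuffle as one plausible way to turn a natural instruction into a string that is hard for humans to read. The function name scramble_instruction and the shuffle-based transform are hypothetical stand-ins, not the authors' actual procedure.

    import random

    def scramble_instruction(text: str, seed: int = 0) -> str:
        # Hypothetical perturbation (assumption, not the paper's method):
        # shuffle word order so the string looks incomprehensible to a
        # human reader while keeping the same bag of words available
        # to the model.
        rng = random.Random(seed)
        words = text.split()
        rng.shuffle(words)
        return " ".join(words)

    if __name__ == "__main__":
        natural = "Summarize the main findings of the report in three sentences."
        print(scramble_instruction(natural))
        # e.g. "three findings the Summarize sentences. main of report in the"

A transform like this preserves the lexical content while destroying human-readable syntax, which is the kind of contrast the abstract's claim about "filtering noise and inferring contextual meaning" would be tested against.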

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2503.01926
Document Type:
Working Paper