From Tarzan to Tolkien: Controlling the Language Proficiency Level of LLMs for Content Generation
- Source: In Findings of the Association for Computational Linguistics (ACL 2024)
- Publication Year: 2024
Abstract
- We study the problem of controlling the difficulty level of text generated by Large Language Models (LLMs) for contexts where end-users are not fully proficient, such as language learners. Using a novel framework, we evaluate the effectiveness of several key approaches for this task, including few-shot prompting, supervised finetuning, and reinforcement learning (RL), utilising both GPT-4 and open-source alternatives like LLama2-7B and Mistral-7B. Our findings reveal a large performance gap between GPT-4 and the open-source models when using prompt-based strategies. However, we show how to bridge this gap with a careful combination of finetuning and RL alignment. Our best model, CALM (CEFR-Aligned Language Model), surpasses the performance of GPT-4 and other strategies, at only a fraction of the cost. We further validate the quality of our results through a small-scale human study.
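- For illustration only, the sketch below shows one way the few-shot prompting strategy mentioned in the abstract could be set up against a chat-style API to target a CEFR level. The prompt wording, the `build_prompt` helper, and the demonstration example are assumptions made for this sketch, not the authors' actual prompts, models, or data.

```python
# Minimal sketch of prompt-based CEFR-level control via few-shot prompting.
# Illustrative only: prompt text and the demo example are invented placeholders.
from openai import OpenAI  # assumes the official openai Python client is installed

CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def build_prompt(topic: str, level: str,
                 examples: list[tuple[str, str, str]]) -> list[dict]:
    """Build a chat prompt requesting a short story at a target CEFR level.

    `examples` holds (topic, level, story) tuples used as few-shot
    demonstrations; in a setup like the paper's these would be
    level-annotated reference texts.
    """
    assert level in CEFR_LEVELS, f"unknown CEFR level: {level}"
    messages = [{
        "role": "system",
        "content": ("You write short stories for language learners. "
                    "Match the vocabulary and grammar of the requested "
                    "CEFR level exactly."),
    }]
    # Few-shot demonstrations: each one is a (request, level-matched story) pair.
    for ex_topic, ex_level, ex_story in examples:
        messages.append({"role": "user",
                         "content": f"Write a story about '{ex_topic}' at CEFR level {ex_level}."})
        messages.append({"role": "assistant", "content": ex_story})
    # Final request at the desired difficulty level.
    messages.append({"role": "user",
                     "content": f"Write a story about '{topic}' at CEFR level {level}."})
    return messages

if __name__ == "__main__":
    demos = [("a lost dog", "A2",
              "Tom has a small dog. One day the dog runs away. Tom looks for it in the park...")]
    prompt = build_prompt("a trip to the mountains", "B1", demos)
    client = OpenAI()  # requires OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(model="gpt-4", messages=prompt)
    print(reply.choices[0].message.content)
```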
Subjects
- Computer Science - Computation and Language
- Computer Science - Machine Learning
Details
- Database: arXiv
- Journal: In Findings of the Association for Computational Linguistics (ACL 2024)
- Publication Type: Report
- Accession number: edsarx.2406.03030
- Document Type: Working Paper