
OptiSeq: Optimizing Example Ordering for In-Context Learning

Authors:
Bhope, Rahul Atul
Venkateswaran, Praveen
Jayaram, K. R.
Isahagian, Vatche
Muthusamy, Vinod
Venkatasubramanian, Nalini
Publication Year:
2025

Abstract

Developers using LLMs in their applications and agents have provided plenty of anecdotal evidence that in-context learning (ICL) is fragile. In addition to the quantity and quality of examples, we show that the order in which the in-context examples are listed in the prompt affects the output of the LLM and, consequently, its performance. In this paper, we present OptiSeq, which introduces a score based on log probabilities of LLM outputs to prune the universe of possible example orderings in few-shot ICL and recommend the best order(s) by distinguishing between correct and incorrect outputs resulting from different order permutations. Through a detailed empirical evaluation on multiple LLMs, datasets, and prompts, we demonstrate that OptiSeq improves accuracy by 6-10.5 percentage points across multiple tasks.
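To make the idea concrete, below is a minimal, hedged Python sketch of ordering selection via output log probabilities, in the spirit of the abstract. It is not the authors' implementation: `sequence_logprob` is a hypothetical stand-in for whatever LLM scoring interface is used, the averaged correct-label log probability is a simplification of the paper's score (which also contrasts correct and incorrect outputs), and the permutation cap is illustrative.

```python
"""Illustrative sketch (not OptiSeq itself): rank few-shot example orderings
by the log probability an LLM assigns to correct completions on a small
calibration set, then pick the best-scoring ordering."""
from itertools import permutations
from typing import Callable, Sequence, Tuple

Example = Tuple[str, str]  # (input text, label text)


def build_prompt(examples: Sequence[Example], query: str) -> str:
    """Concatenate few-shot examples in the given order, then append the query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"


def score_ordering(
    ordering: Sequence[Example],
    calibration: Sequence[Example],
    sequence_logprob: Callable[[str, str], float],  # hypothetical LLM scorer
) -> float:
    """Average log probability of the gold completions under this ordering."""
    total = 0.0
    for query, gold in calibration:
        total += sequence_logprob(build_prompt(ordering, query), gold)
    return total / len(calibration)


def best_ordering(
    examples: Sequence[Example],
    calibration: Sequence[Example],
    sequence_logprob: Callable[[str, str], float],
    max_candidates: int = 24,  # cap enumeration; the paper prunes orderings
) -> Tuple[Example, ...]:
    """Enumerate candidate permutations and return the highest-scoring one."""
    candidates = list(permutations(examples))[:max_candidates]
    return max(
        candidates,
        key=lambda order: score_ordering(order, calibration, sequence_logprob),
    )
```

In use, `sequence_logprob` would be backed by an LLM API that exposes token log probabilities; the selected ordering is then reused for all subsequent queries of that task.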

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.15030
Document Type:
Working Paper