
Risk-Averse Finetuning of Large Language Models

Authors :
Chaudhary, Sapana
Dinesha, Ujwal
Kalathil, Dileep
Shakkottai, Srinivas
Publication Year :
2025

Abstract

We consider the challenge of mitigating the generation of negative or toxic content by Large Language Models (LLMs) in response to certain prompts. We propose integrating risk-averse principles into LLM fine-tuning to minimize the occurrence of harmful outputs, particularly rare but significant events. By optimizing the risk measure of Conditional Value at Risk (CVaR), our methodology trains LLMs to exhibit superior performance in avoiding toxic outputs while maintaining effectiveness in generative tasks. Empirical evaluations on sentiment modification and toxicity mitigation tasks demonstrate the efficacy of risk-averse reinforcement learning with human feedback (RLHF) in promoting a safer and more constructive online discourse environment.

Comment: NeurIPS 2024
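The abstract does not spell out the paper's training procedure, but the CVaR objective it names has a simple empirical form: instead of the mean reward over sampled completions, optimize the mean of the worst alpha-fraction. The sketch below illustrates only that quantity; the function name `cvar` and the alpha values are illustrative assumptions, not from the paper.

```python
import numpy as np

def cvar(rewards, alpha=0.1):
    """Empirical CVaR at level alpha: the mean of the worst
    alpha-fraction of rewards. Optimizing this instead of the plain
    mean focuses training pressure on the rare, worst-case
    (e.g. most toxic) generations."""
    rewards = np.sort(np.asarray(rewards, dtype=float))
    k = max(1, int(np.ceil(alpha * len(rewards))))  # size of the worst tail
    return rewards[:k].mean()

# Rewards for a batch of sampled completions (illustrative numbers):
batch = [0.9, 0.8, 0.95, -1.0, 0.7, 0.85, -0.5, 0.9, 0.88, 0.6]
print(cvar(batch, alpha=0.2))  # mean of the two lowest rewards: -0.75
```

A mean-reward objective would score this batch near 0.6 despite the two harmful completions; the CVaR objective reports -0.75, so gradient updates target exactly those tail events.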

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.06911
Document Type :
Working Paper