Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning
About Event
🔬 AI4Science on alphaXiv
🗓 Wednesday February 25th 2026 · 3PM PT
🎙 Featuring Xin Qiu
💬 Casual Talk + Open Discussion
🎥 Zoom: Upon Registration
Description: Fine-tuning large language models (LLMs) for downstream tasks is an essential stage of modern AI deployment. Reinforcement learning (RL) has emerged as the dominant fine-tuning paradigm, underpinning many state-of-the-art LLMs. In contrast, evolution strategies (ES) have largely been overlooked due to the widespread belief that they do not scale to modern model sizes. This paper overturns that assumption by demonstrating the first successful application of ES to full-parameter fine-tuning of LLMs at the billion-parameter scale, without dimensionality reduction. ES can indeed search over extremely high-dimensional parameter spaces and outperform established RL implementations across multiple axes, including improved tolerance to long-horizon and delayed rewards, robustness across diverse base LLMs, reduced susceptibility to reward hacking, and improved training stability. These findings suggest that ES is not merely a viable alternative to RL, but a fundamentally different and powerful backpropagation-free post-training paradigm that opens a new direction for LLM fine-tuning beyond current RL-based approaches. The source code is provided at: this https URL.
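To give a flavor of the backpropagation-free idea behind the talk: a basic evolution strategy only needs reward evaluations of perturbed parameter vectors, never gradients of the model itself. The sketch below is a generic, minimal ES loop on a toy objective (standing in for an LLM reward), not the paper's actual method; the rank-based weighting, step sizes, and population size are illustrative assumptions.

```python
import numpy as np

def es_step(theta, reward_fn, sigma=0.1, lr=0.05, pop=50, rng=None):
    """One evolution-strategies update: perturb, evaluate, reward-weighted average.

    Only black-box reward evaluations are needed -- no backpropagation.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = rng.standard_normal((pop, theta.size))           # random perturbations
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    # Rank-normalize rewards for robustness to reward scale and outliers
    # (a common ES trick; an illustrative choice here, not the paper's recipe).
    ranks = rewards.argsort().argsort()
    weights = ranks / (pop - 1) - 0.5
    # Gradient estimate: perturbations weighted by how well they scored.
    grad = (weights @ eps) / (pop * sigma)
    return theta + lr * grad

# Toy stand-in for "fine-tuning": maximize reward = -||theta - target||^2.
target = np.array([1.0, -2.0, 0.5])
reward = lambda th: -np.sum((th - target) ** 2)

theta = np.zeros(3)
rng = np.random.default_rng(42)
for _ in range(300):
    theta = es_step(theta, reward, rng=rng)
```

After a few hundred steps `theta` drifts toward `target` using nothing but scalar rewards, which is the property that lets ES sidestep backprop entirely at scale.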
Check out the full paper here!
Whether you’re working on the frontier of LLMs or just curious about anything AI4Science, we’d love to have you there.
Hosted by alphaXiv
