Publications of Torsten Hoefler
Maciej Besta, Julia Barth, Eric Schreiber, Ales Kubicek, Afonso Catarino, Robert Gerstenberger, Piotr Nyczyk, Patrick Iff, Yueling Li, Sam Houliston, Tomasz Sternal, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, Lukasz Flis, Hannes Eberhard, Zixuan Chen, Hubert Niewiadomski, Torsten Hoefler:

Reasoning Language Models: A Blueprint

(arXiv:2501.11223, Jun. 2025)

Abstract

Reasoning language models (RLMs), also known as Large Reasoning Models (LRMs), such as OpenAI's o1 and o3, DeepSeek-R1, and Alibaba's QwQ, have redefined AI's problem-solving capabilities by extending LLMs with advanced reasoning mechanisms. Yet, their high costs, proprietary nature, and complex architectures - uniquely combining reinforcement learning (RL), search heuristics, and LLMs - present accessibility and scalability challenges. To address these, we propose a comprehensive blueprint that organizes RLM components into a modular framework, based on a survey and analysis of all RLM works. This blueprint incorporates diverse reasoning structures (chains, trees, graphs, and nested forms), reasoning strategies (e.g., Monte Carlo Tree Search, Beam Search), RL concepts (policy, value models and others), supervision schemes (Outcome-Based and Process-Based Supervision), and other related concepts (e.g., Test-Time Compute, Retrieval-Augmented Generation, agent tools). We also provide detailed mathematical formulations and algorithmic specifications to simplify RLM implementation. By showing how schemes like LLaMA-Berry, QwQ, Journey Learning, and Graph of Thoughts fit as special cases, we demonstrate the blueprint's versatility and unifying potential. To illustrate its utility, we introduce x1, a modular implementation for rapid RLM prototyping and experimentation. Using x1 and a literature review, we provide key insights, such as multi-phase training for policy and value models, and the importance of familiar training distributions. Finally, we discuss scalable RLM cloud deployments and we outline how RLMs can integrate with a broader LLM ecosystem. Our work demystifies RLM construction, democratizes advanced reasoning capabilities, and fosters innovation, aiming to mitigate the gap between 'rich AI' and 'poor AI' by lowering barriers to RLM design and experimentation.
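
Example

To make the abstract's terminology concrete, below is a minimal, hypothetical Python sketch of one combination the blueprint covers: a tree-shaped reasoning structure explored with a beam-search reasoning strategy, where a policy proposes next steps and a value model scores partial chains. This is not the paper's x1 implementation; all names (ReasoningNode, beam_search_reasoning, toy_policy, toy_value) are illustrative assumptions, and the stand-in policy/value functions would be LLM calls and a learned (process- or outcome-supervised) value model in a real RLM.

# Hypothetical sketch of a tree reasoning structure + beam-search strategy.
# Not the x1 framework; names and signatures are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReasoningNode:
    """One node in the reasoning tree: a partial chain of reasoning steps."""
    steps: List[str]
    score: float = 0.0  # value-model estimate of how promising this chain is


def beam_search_reasoning(
    question: str,
    propose_steps: Callable[[str, List[str]], List[str]],  # policy: propose next steps
    value: Callable[[str, List[str]], float],               # value model: score a chain
    beam_width: int = 3,
    max_depth: int = 4,
) -> ReasoningNode:
    """Expand the reasoning tree level by level, keeping the top-k chains."""
    beam = [ReasoningNode(steps=[], score=0.0)]
    for _ in range(max_depth):
        candidates = []
        for node in beam:
            for step in propose_steps(question, node.steps):
                chain = node.steps + [step]
                candidates.append(ReasoningNode(chain, value(question, chain)))
        if not candidates:
            break
        candidates.sort(key=lambda n: n.score, reverse=True)
        beam = candidates[:beam_width]  # prune to the beam width
    return beam[0]


if __name__ == "__main__":
    # Toy stand-ins: the policy enumerates two candidate steps, the value
    # model simply prefers longer chains. Purely for demonstration.
    def toy_policy(question, steps):
        return [f"step {len(steps) + 1}a", f"step {len(steps) + 1}b"]

    def toy_value(question, steps):
        return float(len(steps))

    best = beam_search_reasoning("What is 2 + 2?", toy_policy, toy_value)
    print(best.steps, best.score)

Swapping the strategy (e.g., Monte Carlo Tree Search instead of beam search) or the structure (chains or graphs instead of a tree) would only change these two components, which is the modularity the blueprint argues for.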

BibTeX

@article{besta2025reasoning,
  author={Maciej Besta and Julia Barth and Eric Schreiber and Ales Kubicek and Afonso Catarino and Robert Gerstenberger and Piotr Nyczyk and Patrick Iff and Yueling Li and Sam Houliston and Tomasz Sternal and Marcin Copik and Grzegorz Kwaśniewski and Jürgen Müller and Lukasz Flis and Hannes Eberhard and Zixuan Chen and Hubert Niewiadomski and Torsten Hoefler},
  title={{Reasoning Language Models: A Blueprint}},
  journal={arXiv:2501.11223},
  year={2025},
  month={Jun.},
  source={http://www.unixer.de/~htor/publications/},
}

