✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models
Updated Apr 25, 2025
TxAgent: An AI Agent for Therapeutic Reasoning Across a Universe of Tools
Project Page For "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement"
Latest Advances on Long Chain-of-Thought Reasoning
Deep Reasoning Translation via Reinforcement Learning (arXiv preprint 2025); DRT: Deep Reasoning Translation via Long Chain-of-Thought (arXiv preprint 2024)
ToolUniverse is a collection of biomedical tools designed for AI agents
OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement
Official Implementation of "Reasoning Language Models: A Blueprint"
A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models.
a-m-team's exploration in large language modeling
Lightweight replication study of DeepSeek-R1-Zero. Interesting findings include "No Aha Moment", "Longer CoT ≠ Accuracy", and "Language Mixing in Instruct Models".
This repo develops reasoning models for the financial domain, aiming to enhance model capabilities on financial reasoning tasks.
Pure RL to post-train base models for social reasoning capabilities. Lightweight replication of DeepSeek-R1-Zero with the Social IQa dataset (see the rule-based reward sketch at the end of this list).
🔥🔥🔥 Breaking the long thought processes of o1-like LLMs, such as DeepSeek-R1 and QwQ
☁️ KUMO: Generative Evaluation of Complex Reasoning in Large Language Models
An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length is encoded in the model’s representation space.
Reasoning-from-Zero using gemma.JAX.nnx on TPUs
Official code for "Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning", ICLR 2025.
📖 Curated list on the reasoning ability of MLLMs, including OpenAI o1, OpenAI o3-mini, and slow-thinking methods.
Sudoku4LLM is a Sudoku dataset generator for training and evaluating reasoning in Large Language Models (LLMs). It offers customizable puzzles, difficulty levels, and 11 serialization formats to support structured data reasoning and Chain of Thought (CoT) experiments.
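The serialization format is the interesting variable in the Sudoku4LLM entry above: the same puzzle can be rendered as a compact string or as a spatial board, which changes how much parsing the model must do before it can reason. Below is a minimal sketch of two such formats, assuming a 9x9 grid of ints with 0 for blanks; the function names and exact layouts are illustrative, not Sudoku4LLM's actual API.

```python
# Illustrative sketch (not Sudoku4LLM's actual API): two ways to
# serialize a 9x9 Sudoku grid as text for a CoT prompt; 0 = empty cell.
from typing import List

Grid = List[List[int]]

def to_flat_string(grid: Grid) -> str:
    """Serialize row-major into one 81-character string (compact)."""
    return "".join(str(cell) for row in grid for cell in row)

def to_ascii_board(grid: Grid) -> str:
    """Serialize as a human-readable board with 3x3 box separators."""
    lines = []
    for r, row in enumerate(grid):
        if r % 3 == 0 and r > 0:
            lines.append("------+-------+------")
        cells = [str(c) if c else "." for c in row]
        lines.append(" ".join(cells[0:3]) + " | "
                     + " ".join(cells[3:6]) + " | "
                     + " ".join(cells[6:9]))
    return "\n".join(lines)

if __name__ == "__main__":
    # A toy, mostly empty grid just to exercise the serializers.
    grid = [[0] * 9 for _ in range(9)]
    grid[0][0], grid[4][4], grid[8][8] = 5, 3, 7
    print(to_flat_string(grid))
    print(to_ascii_board(grid))
```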
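For the R1-Zero-style social-reasoning replication above, the core mechanism is a rule-based reward combining a format check with exact-match accuracy. The sketch below shows one plausible version for a multiple-choice dataset like Social IQa; the tag format, weights, and function signature are assumptions, not the repo's published recipe.

```python
# Hedged sketch of an R1-Zero-style rule-based reward for multiple-choice
# social reasoning (e.g., Social IQa); the tag format and weights below
# are assumptions, not this repo's published recipe.
import re

THINK_ANSWER = re.compile(
    r"^<think>.*?</think>\s*<answer>(.*?)</answer>\s*$", re.DOTALL
)

def reward(completion: str, gold_choice: str) -> float:
    """Format reward: output must follow <think>...</think><answer>...</answer>.
    Accuracy reward: the extracted answer must match the gold choice."""
    m = THINK_ANSWER.match(completion.strip())
    if m is None:
        return 0.0  # malformed output gets nothing
    fmt = 0.1       # small bonus for well-formed structure (assumed weight)
    acc = 1.0 if m.group(1).strip().lower() == gold_choice.strip().lower() else 0.0
    return fmt + acc
```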