News | Highlights | Installation | Quickstart | Community
PaddleFormers is a Transformer model library built on the PaddlePaddle deep learning framework that combines ease of use with high performance. It provides a unified model definition interface, modular training components, and comprehensive distributed training strategies designed for large language model (LLM) development pipelines, enabling developers to train large models efficiently with minimal complexity in scenarios ranging from academic research to industrial applications.
[2025/06/28] 🎉 PaddleFormers 0.1 is officially released! This initial release supports the SFT and DPO training paradigms, offers configurable distributed training through a unified Trainer API, and integrates PEFT, MergeKit, and Quantization APIs for diverse LLM applications.
Implements 4D parallel strategies through a unified Trainer API, lowering the barrier to distributed LLM training (a configuration sketch follows these highlights).
Integrates Packing dataflow and FlashMask operators for SFT/DPO training, eliminating padding waste and boosting throughput.
Features Unified Checkpoint storage tools for LLMs, enabling training resumption and dynamic resource scaling. It also implements asynchronous storage (up to 95% faster) and Optimizer State Quantization (78% storage reduction), so industrial-scale training meets both efficiency and stability requirements.
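The sketch below illustrates how these highlights come together: 4D parallelism and Unified Checkpoint configured through training arguments. It is a minimal, unverified sketch; the import path and field names (tensor_parallel_degree, pipeline_parallel_degree, sharding, sharding_parallel_degree, unified_checkpoint) follow PaddleNLP-style Trainer conventions and should be treated as assumptions rather than the definitive PaddleFormers interface.

# Minimal sketch only -- import path and argument names are assumed, not verified.
from paddleformers.trainer import Trainer, TrainingArguments  # assumed import path

args = TrainingArguments(
    output_dir="./checkpoints",
    per_device_train_batch_size=1,
    tensor_parallel_degree=2,    # intra-layer (tensor) model parallelism
    pipeline_parallel_degree=2,  # inter-layer pipeline parallelism
    sharding="stage2",           # ZeRO-style sharded data parallelism
    sharding_parallel_degree=2,
    unified_checkpoint=True,     # Unified Checkpoint for resumption and rescaling
)
# model, train_dataset, and tokenizer are placeholders prepared elsewhere.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()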
Requires Python 3.10+
# Install via source code
git clone https://github.com/PaddlePaddle/PaddleFormers.git
cd PaddleFormers
# If you don’t need to train models, you can install only the lightweight basic version of paddleformers.
pip install -e .
# If you need to train models, install paddleformers with the paddlefleet extra,
# choosing the index URL that matches your CUDA version.
# CUDA 12.6
pip install -e '.[paddlefleet]' --extra-index-url https://www.paddlepaddle.org.cn/packages/nightly/cu126/
# CUDA 12.9
pip install -e '.[paddlefleet]' --extra-index-url https://www.paddlepaddle.org.cn/packages/nightly/cu129/
# CUDA 13.0
pip install -e '.[paddlefleet]' --extra-index-url https://www.paddlepaddle.org.cn/packages/nightly/cu130/
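As an optional sanity check after installation, something like the snippet below can confirm that PaddlePaddle and PaddleFormers import correctly. It is illustrative only and not part of the official instructions; the __version__ attribute on paddleformers is assumed.

# Optional post-install sanity check (illustrative, not an official step).
import paddle
import paddleformers  # __version__ attribute assumed to exist

print("PaddlePaddle:", paddle.__version__)
print("PaddleFormers:", paddleformers.__version__)
print("CUDA build available:", paddle.is_compiled_with_cuda())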
This example shows how to load a Qwen model for text generation with the PaddleFormers Auto API:

from paddleformers.transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B-Base", dtype="bfloat16", convert_from_hf=True).eval()
input_features = tokenizer("Give me a short introduction to large language model.", return_tensors="pd")
outputs = model.generate(**input_features, max_new_tokens=128)
print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))

Getting started with supervised fine-tuning (SFT) using PaddleFormers:
paddleformers-cli train examples/config/sft/full.yaml

We welcome all contributions! See CONTRIBUTING.md for guidelines.
This repository's source code is available under the Apache 2.0 License.
