Segmentation fault during LoRA finetuning #1810

@yao-matrix

Description

🐛 Describe the bug

$ git clone https://github.com/huggingface/peft.git
$ pip install -e .
$ cd ./method_comparison/MetaMathQA
$ touch ./experiments/lora/llama-3.2-3B-rank32/adapter_config.json
$ python run.py -v ./experiments/lora/llama-3.2-3B-rank32

A segmentation fault is raised during the first evaluation of the finetuning process, as shown below:

[screenshot: segmentation fault output]
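To help localize where the crash happens, Python's standard-library `faulthandler` can dump the Python-level traceback of every thread when the process receives SIGSEGV. This is a general debugging sketch, not something from the PEFT repo:

```python
import faulthandler
import sys

# Install a handler so a native crash (SIGSEGV, SIGABRT, ...) prints the
# Python traceback of all threads to stderr before the process dies.
faulthandler.enable(file=sys.stderr, all_threads=True)

# The same effect can be had from the command line without editing run.py:
#   python -X faulthandler run.py -v ./experiments/lora/llama-3.2-3B-rank32
print(faulthandler.is_enabled())
```

The resulting traceback usually shows whether the fault occurs inside the evaluation loop (e.g. in a native extension call) rather than in pure-Python code.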

Versions

pytorch-triton-xpu 3.4.0+gitae324eea
torch 2.8.0.dev20250627+xpu
torchao 0.11.0
torchaudio 2.8.0.dev20250627+xpu
torchdata 0.11.0
torchvision 0.23.0.dev20250627+xpu
