Reference implementation of the probabilistic evaluation framework proposed in the paper:
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
Yan Scholten, Stephan Günnemann, Leo Schwinn
International Conference on Learning Representations, ICLR 2025 (Oral)
[ Project page | PDF ]
You can explore our demo notebook, where we demonstrate that greedy evaluations can misleadingly suggest successful unlearning, while our probabilistic evaluations provide more accurate assessments of model capabilities.
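For illustration: greedy evaluation inspects only the single most likely continuation, while a probabilistic evaluation samples many continuations from the model's output distribution. A minimal sketch using Hugging Face transformers (the model name and prompt are illustrative placeholders, not taken from this repository):

# Sketch: greedy vs. sampling-based (probabilistic) evaluation.
# Model name and prompt are placeholders for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
inputs = tokenizer("Who is Harry Potter?", return_tensors="pt")

# Greedy decoding: a single deterministic continuation.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)

# Probabilistic evaluation: draw many samples and assess the whole
# output distribution rather than a single point estimate.
samples = model.generate(**inputs, do_sample=True,
                         num_return_sequences=100, max_new_tokens=20)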
Install the dependencies and configure the environment before running the code:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
The code was tested with Python 3.11.9, pip 24.0, PyTorch 2.3.1+cu118, and CUDA 11.8.
The following steps show how to start unlearning experiments. You can find the implementation of the proposed entropy objective in unlearning/unlearning_trainer.py#66.
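As a rough sketch of what an entropy objective of this kind can look like (the actual implementation is the file referenced above, not this code): the model's predictive entropy on forget-set tokens is maximized, which minimizing the negative mean entropy achieves.

# Illustrative sketch of an entropy objective on forget-set tokens;
# see unlearning/unlearning_trainer.py#66 for the actual implementation.
import torch
import torch.nn.functional as F

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size) model outputs on forget-set inputs.
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)
    # Minimizing the negative mean entropy maximizes predictive entropy.
    return -entropy.mean()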
1. Finetuning on full dataset
cd finetuning
python3 main.py -m -cd=configs -cn=phi
2. Unlearning
Set the path to the previously finetuned models in the configuration files unlearning/configs/phi-*.
cd unlearning
python3 main.py -m -cd=configs -cn=phi-GA
python3 main.py -m -cd=configs -cn=phi-GD
python3 main.py -m -cd=configs -cn=phi-NPO
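The three configs correspond to the Gradient Ascent (GA), Gradient Difference (GD), and Negative Preference Optimization (NPO) unlearning baselines. A hedged sketch of their standard textbook forms, not necessarily the exact losses used in this repository:

# Illustrative sketch of the GA, GD, and NPO baseline objectives.
import torch
import torch.nn.functional as F

def ga_loss(forget_nll: torch.Tensor) -> torch.Tensor:
    # Gradient Ascent: maximize the language-modeling loss on the forget set.
    return -forget_nll

def gd_loss(forget_nll: torch.Tensor, retain_nll: torch.Tensor) -> torch.Tensor:
    # Gradient Difference: ascend on forget data, descend on retain data.
    return -forget_nll + retain_nll

def npo_loss(policy_logp: torch.Tensor, ref_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Negative Preference Optimization: push the policy's log-likelihood on
    # forget sequences below that of a frozen reference model.
    return -(2.0 / beta) * F.logsigmoid(-beta * (policy_logp - ref_logp)).mean()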
Please cite our paper if you use this code in your own work:
@inproceedings{scholten2024probabilistic,
title={A Probabilistic Perspective on Unlearning and Alignment for Large Language Models},
author={Yan Scholten and Stephan G{\"u}nnemann and Leo Schwinn},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=51WraMid8K}
}
Model finetuning and unlearning build upon the TOFU repository, adapted to demonstrate the effectiveness of our method.
For questions and feedback, please contact:
Yan Scholten, Technical University of Munich
Stephan Günnemann, Technical University of Munich
Leo Schwinn, Technical University of Munich
The code by Yan Scholten, Stephan Günnemann and Leo Schwinn is licensed under the MIT license.