
Commit 59396fb

Update README.md
1 parent a76652c commit 59396fb

1 file changed: +3 -3 lines changed

README.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
-# Continual learning for OOD generalization in PLMs of code
+# On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Languages Models of Code
 
-This is the replication package associated with the FSE 23' submission:
+This is the official replication package associated with the FSE 23' submission:
 
 ```On the Usage of Continual Learning for Out-of-Distribution Generalization" in Pre-trained Languages Models of Code```.
 
 In this readme, we provide details on how to setup our codebase for experimenting with continual fine-tuning. We include links to download our datasets and models locally. Unfortunately, we cannot leverage HuggingFace's hub as it does not allow double-blind sharing of repositories.
@@ -115,4 +115,4 @@ python run_inference.py \
 model.model_name_or_path=./models/code-gpt2-small_ft_$method$/exp_$id$
 hydra=output_inference
 ```
-Note that fine-tuning produces five checkpoints, *i.e.*, one after each fine-tuning step. Therefore, you need to specify which checkpoint you want to test, *e.g.*, `exp_0`, `exp_1`, etc.
+Note that fine-tuning produces five checkpoints, *i.e.*, one after each fine-tuning step. Therefore, you need to specify which checkpoint you want to test, *e.g.*, `exp_0`, `exp_1`, etc.
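As a minimal usage sketch of the inference command shown in the hunk above, the `$method$` and `$id$` placeholders are filled with hypothetical values here (a fine-tuning method named `ewc` and the last of the five checkpoints, `exp_4`); the actual method names and checkpoint indices depend on your local setup and are not taken from this commit.

```bash
# Hypothetical invocation: test the final checkpoint (exp_4) of a model
# fine-tuned with a method called "ewc" (both values are illustrative).
python run_inference.py \
    model.model_name_or_path=./models/code-gpt2-small_ft_ewc/exp_4 \
    hydra=output_inference
```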
