
Commit dcd947f

wangliuyi authored and HanqingWangAI committed

Update VLN-PE baseline evaluations

* update vln-pe baseline evaluation
* update vln-pe baseline contents

1 parent bfd7a50

File tree

1 file changed: +38 −0

source/en/user_guide/internnav/quick_start/train_eval.md

Lines changed: 38 additions & 0 deletions
@@ -174,6 +174,7 @@ Currently, we only support training of small VLN models (CMA, RDP, Seq2Seq) in t
```

### Evaluation

#### InternVLA-N1-S2
Currently, we only support evaluating a single System2 on Habitat.

Evaluate on a single GPU:
@@ -187,3 +188,40 @@ For multi-gpu inference, currently we only support inference on SLURM.
```bash
./scripts/eval/eval_system2.sh
```

#### Baseline Models
We provide three small VLN baselines (Seq2Seq, CMA, RDP) for evaluation in the InternUtopia (Isaac Sim) environment.

Download the baseline models:
```bash
# ddppo-models
mkdir -p checkpoints/ddppo-models
wget -P checkpoints/ddppo-models https://dl.fbaipublicfiles.com/habitat/data/baselines/v1/ddppo/ddppo-models/gibson-4plus-mp3d-train-val-test-resnet50.pth

# longclip-B
huggingface-cli download --include 'longclip-B.pt' --local-dir-use-symlinks False --resume-download Beichenzhang/LongCLIP-B --local-dir checkpoints/clip-long

# download R2R-finetuned baseline checkpoints
git clone https://huggingface.co/InternRobotics/VLN-PE && mv VLN-PE/r2r checkpoints/
```
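Before launching an evaluation, it can help to confirm the checkpoints landed where the commands above place them. A minimal sketch (the paths simply mirror the download commands; adjust if your checkpoint layout differs):

```python
from pathlib import Path

# Expected checkpoint locations, mirroring the download commands above.
expected = [
    "checkpoints/ddppo-models/gibson-4plus-mp3d-train-val-test-resnet50.pth",
    "checkpoints/clip-long/longclip-B.pt",
    "checkpoints/r2r",
]

missing = [p for p in expected if not Path(p).exists()]
if missing:
    print("missing:", ", ".join(missing))
else:
    print("all baseline checkpoints in place")
```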
Start the Ray server:
```bash
ray disable-usage-stats
ray stop
ray start --head
```

Start the evaluation server, pointing `--config` at the config of the model you want to evaluate:
```bash
python -m internnav.agent.utils.server --config scripts/eval/configs/h1_xxx_cfg.py
```

Start the evaluation:
```bash
# seq2seq model
./scripts/eval/start_eval.sh --config scripts/eval/configs/h1_seq2seq_cfg.py

# cma model
./scripts/eval/start_eval.sh --config scripts/eval/configs/h1_cma_cfg.py

# rdp model
./scripts/eval/start_eval.sh --config scripts/eval/configs/h1_rdp_cfg.py
```
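The three invocations differ only in the model name embedded in the config path. A small hypothetical helper (`eval_command` is not part of the repo) that builds each command line from that pattern:

```python
# Hypothetical helper: build the start_eval.sh invocation for a baseline.
# The h1_<model>_cfg.py naming pattern follows the commands above.
def eval_command(model: str) -> str:
    return (
        "./scripts/eval/start_eval.sh "
        f"--config scripts/eval/configs/h1_{model}_cfg.py"
    )

for model in ("seq2seq", "cma", "rdp"):
    print(eval_command(model))
```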

The evaluation results are saved to `eval_results.log` under the `output_dir` specified in the config file.
