[TL;DR] Target-Bench is the first benchmark and dataset for evaluating video world models (WMs) on mapless robotic path planning for semantic targets.
If you find our work useful, please star ⭐ our repo!
- Fine-tune code release (Scheduled 05 Dec)
- Benchmark code release
- Dataset release
- Paper release
- Website launch
git clone https://github.com/TUM-AVS/target-bench.git
cd target-bench

Ensure you have miniconda installed.
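As an optional sanity check (not part of the official setup steps), you can confirm that conda is available before creating any environments:

```bash
# Optional: verify that miniconda/conda is installed and on the PATH
conda --version    # prints e.g. "conda 24.x"; if this fails, install miniconda first
```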
You can set up all environments at once or individually. For a quick start with VGGT:
# Install VGGT environment
bash set_env.sh vggt

For other options (installing all environments or specific ones like SpaTracker/ViPE), please refer to docs/env.md.
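docs/env.md is the authoritative reference for those other setups. As a rough sketch, assuming set_env.sh takes the environment name the same way it takes vggt above (and possibly an all shortcut), the remaining options would look like:

```bash
# Assumed invocations, mirroring the vggt example above -- see docs/env.md for the exact arguments
bash set_env.sh spatracker   # SpaTracker environment
bash set_env.sh vipe         # ViPE environment
bash set_env.sh all          # all environments at once (if supported)
```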
Download the benchmark_data (scenarios) and wm_videos (generated videos) into the dataset/ directory:
cd dataset
# Download Benchmark scenarios
huggingface-cli download target-bench/benchmark_data --repo-type dataset --local-dir Benchmark --local-dir-use-symlinks False
# Download World Model generated videos
huggingface-cli download target-bench/wm_videos --repo-type dataset --local-dir wm_videos --local-dir-use-symlinks False
cd ..
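If you only want a quick test rather than the full video set, huggingface-cli download also accepts glob filters. The *.mp4 pattern below is an assumption about how the wm_videos repo stores its files; run it from the repository root:

```bash
# Optional: fetch only a subset of the generated videos (the file pattern is an assumption)
huggingface-cli download target-bench/wm_videos --repo-type dataset \
    --local-dir dataset/wm_videos --include "*.mp4"
```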
Now, the project directory structure should look like this:

target-bench/
├── assets/              # Images and project assets
├── dataset/             # Benchmark data and generated videos
│   ├── Benchmark/       # Benchmark scenarios
│   └── wm_videos/       # Videos generated by world models
├── evaluation/          # Evaluation scripts and configs
├── models/              # Source code for evaluated models
│   ├── spatracker/
│   ├── vggt/
│   └── vipe/
└── pipelines/           # World decoders adapted for each model
    ├── spatracker/
    ├── vggt/
    └── vipe/
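A short, optional check (not from the original instructions) that the folders described above are actually in place before evaluating:

```bash
# Optional sanity check: confirm the dataset and code directories exist as shown above
for d in dataset/Benchmark dataset/wm_videos evaluation models pipelines; do
    if [ -d "$d" ]; then
        echo "found:   $d"
    else
        echo "missing: $d  -- re-check the clone / download steps above"
    fi
done
```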
Run a quick evaluation with 3 scenes using VGGT as the spatial-temporal tool:
conda activate vggt
cd evaluation
python target_eval_vggt.py -n 3

Then you should be able to see the evaluation results and visualizations in the evaluation_results folder.
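The same quick run should work with the other spatial-temporal tools. Only target_eval_vggt.py and the vggt environment are confirmed above; the SpaTracker/ViPE script and environment names below are assumptions inferred from the pipelines/ layout:

```bash
# Assumed script/env names for the other tools -- only the vggt variant is confirmed above
# Run from the repository root
cd evaluation
for tool in vggt spatracker vipe; do
    conda run -n "$tool" python "target_eval_${tool}.py" -n 3
done
```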
If you use Target-Bench in your research, please cite:
@article{wang2025target,
title={Target-Bench: Can World Models Achieve Mapless Path Planning with Semantic Targets?},
author={Wang, Dingrui and Ye, Hongyuan and Liang, Zhihao and Sun, Zhexiao and Lu, Zhaowei and Zhang, Yuchen and Zhao, Yuyu and Gao, Yuan and Seegert, Marvin and Sch{\"a}fer, Finn and others},
journal={arXiv preprint arXiv:2511.17792},
year={2025}
}

This project builds upon the following open-source works: SpaTracker, VGGT, and ViPE.
Please refer to their respective directories for detailed credits and license information.
