SoccerLab is a specialized extension built on the NVIDIA IsaacLab framework, designed for developing, training, and deploying advanced control strategies for legged robots in a dynamic, multi-agent soccer environment.
The platform's core philosophy centers around a Finite State Machine (FSM)-driven architecture, enabling seamless, high-level skill switching for complex tasks like chasing, shooting, and defense.
Note: This repository is under active development and is not yet runnable.
- FSM-Driven Control: Implements modular and robust high-level strategies through parallel state machines (fsmLab); see the sketch after this list.
- Modular Skill Training: Dedicated environments (Train Tasks) for mastering atomic skills such as locomotion, ball tracking, and precise shooting.
- Multi-Agent Environment: Full support for team-based soccer scenarios (Battle Tasks) for cooperative and adversarial policy learning.
- IsaacLab Integration: Leverages IsaacLab's high-fidelity physics simulation, asset management, and Reinforcement Learning (RL) tools.
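The feature list above describes skills being selected by parallel state machines. As a rough illustration of that idea only, the minimal sketch below switches between skill policies based on simple observations; the class name `SkillFSM`, the state names, and the thresholds are hypothetical and are not part of the fsmLab API.

```python
# Minimal sketch of FSM-driven skill switching (illustrative only; not the fsmLab API).
# Each state corresponds to a low-level skill policy; transitions are driven by simple
# observations such as distance to the ball. All names and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class SkillFSM:
    state: str = "chase"
    # Transition table: current state -> rule deciding the next state from observations.
    transitions: Dict[str, Callable[[dict], str]] = field(default_factory=dict)

    def step(self, obs: dict) -> str:
        """Advance the FSM one tick and return the name of the active skill."""
        rule = self.transitions.get(self.state)
        if rule is not None:
            self.state = rule(obs)
        return self.state


# Example transition rules based on ball distance and possession (hypothetical thresholds).
fsm = SkillFSM(
    transitions={
        "chase": lambda o: "shoot" if o["ball_dist"] < 0.3 else "chase",
        "shoot": lambda o: "chase" if o["ball_dist"] > 0.5 else "shoot",
        "defend": lambda o: "chase" if o["has_possession"] else "defend",
    }
)

active_skill = fsm.step({"ball_dist": 0.2, "has_possession": True})  # -> "shoot"
```

In practice, fsmLab runs such machines in parallel, and each active state would gate a dedicated low-level policy trained in the Train Tasks.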
SoccerLab adheres to the IsaacLab extension structure, organizing its components logically:
| Path | Description | Based On/Purpose |
|---|---|---|
| `soccerLab/source/fsmLab` | Control Core: Implements the parallel FSM logic for skill switching. | High-level strategy |
| `soccerLab/source/robotlib` | Robot Configurations: Defines robot-specific physics, observation, and action parameters. | Configuration templates |
| `soccerLab/source/soccerTask` | RL Tasks: Contains both train (single-skill) and battle (multi-agent) environments (see the registration sketch below). | IsaacLab TaskBase |
| `soccerLab/data/assets/assetslib` | Robot Assets: Stores USD models, collision shapes, and inertial properties. | 3D Assets |
| `soccerLab/data/ckpts` | Checkpoints: Stores pre-trained policies and FSM strategy configurations. | Policy storage |
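For orientation, the soccerTask environments would plug into IsaacLab the way other IsaacLab extensions do: each task is registered with Gymnasium so training scripts can look it up by name. The sketch below shows that standard registration pattern only; the task ID `SoccerLab-Chase-v0`, the config path, and the class name are placeholders, not SoccerLab's actual identifiers.

```python
# Sketch of how a SoccerLab task could be exposed as a Gymnasium environment,
# following the registration pattern commonly used by IsaacLab extensions.
# The task ID and config entry point below are hypothetical placeholders.
import gymnasium as gym

gym.register(
    id="SoccerLab-Chase-v0",  # hypothetical task name
    entry_point="isaaclab.envs:ManagerBasedRLEnv",  # IsaacLab's manager-based RL env (module path varies by IsaacLab version)
    disable_env_checker=True,
    kwargs={
        # Points the environment at a task configuration class; this path is a placeholder.
        "env_cfg_entry_point": "soccerTask.train.chase_env_cfg:ChaseEnvCfg",
    },
)
```

With such a registration in place, a task name is all a training entry point needs in order to construct the environment.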
This repository is heavily inspired by, and in parts structurally adapted from, related dynamics and robotics research projects:
- Configuration Library: Renforce-Dynamics/robotlib
- Asset Management: Renforce-Dynamics/assetslib
- Utility & Tracking Components: Renforce-Dynamics/trackerLab
- Finite State Machine: Renforce-Dynamics/fsmLab
Prerequisite: Ensure you have a functioning installation of NVIDIA IsaacLab.
You can integrate SoccerLab into your IsaacLab environment using two primary methods:
The first method installs SoccerLab as a Python package extension, which is typically cleaner for dependency management:
```bash
# Navigate to the SoccerLab directory
cd soccerLab

# Execute the setup script
./scripts/setup_ext.sh
```

The second method links the repository directly into the IsaacLab source directory, which is useful for active development:
```bash
# Execute the linking script
python soccerLab/scripts/setup_isaaclab_link.py
```

To ensure proper code intelligence and type hinting within your development environment, run the setup script for the VSCode configuration:
```bash
python soccerLab/scripts/setup_vscode.py
```

The core functionalities for running simulations (training or playing pre-trained policies) are handled via dedicated scripts that leverage IsaacLab's backend.
| Action | Script | Description |
|---|---|---|
| Training | `soccerLab/scripts/factoryIsaac/train.py` | Starts the Reinforcement Learning process for a specified task (e.g., `locomotion`, `balls`). |
| Playing | `soccerLab/scripts/factoryIsaac/play.py` | Runs a simulation using a pre-trained policy checkpoint (from `data/ckpts/`) or the FSM controller (see the inference sketch below). |
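As background on the Playing row, the checkpoints under `data/ckpts/` are ultimately policies that map observations to actions at each simulation step. The sketch below shows that inference step in isolation, assuming a TorchScript export; the file name, format, and observation size are assumptions, and play.py itself drives this through IsaacLab's simulation loop.

```python
# Illustrative sketch of running inference with a pre-trained policy checkpoint.
# The actual play.py script wires this into the simulation loop via IsaacLab;
# the checkpoint path, TorchScript format, and observation size are assumptions.
import torch

policy = torch.jit.load("soccerLab/data/ckpts/example_policy.pt")  # hypothetical exported policy
policy.eval()

obs = torch.zeros(1, 48)  # placeholder observation vector (size is task-dependent)
with torch.no_grad():
    action = policy(obs)  # joint-space action command for the robot
print(action.shape)
```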
To access and control the simulation via a web browser interface, please refer to the dedicated repository for the viewer implementation:
- Web Viewer: Renforce-Dynamics/labWebView
For deploying the trained FSM strategies onto a physical robot platform, consult the Sim-to-Real deployment guide:
- FSM Sim-to-Real: Renforce-Dynamics/FsmSimDeploy
Contributions are welcome! Please feel free to open issues or submit pull requests.