# TFT-Set4-Gym

A PettingZoo-compatible environment for Teamfight Tactics Set 4, providing a complete simulation of TFT mechanics for reinforcement learning research.

## Features

- Complete TFT Set 4 Simulation: All champions, items, and synergies from TFT Set 4
- PettingZoo Compatible: Standard multi-agent RL environment interface
- Gymnasium Integration: Compatible with modern RL libraries
- Combat Simulation: Detailed combat mechanics and interactions
- Champion Abilities: Full implementation of champion abilities and effects
- Item System: Complete item crafting and effect system
- Multi-agent Support: 8-player games with proper elimination mechanics
## Installation

Install from source:

```bash
git clone https://github.com/Lobotuerk/TFT-Set4-Gym.git
cd TFT-Set4-Gym
pip install -e .
```

Or install from PyPI:

```bash
pip install tft-set4-gym
```

## Quick Start

```python
from tft_set4_gym import parallel_env

# Create environment
env = parallel_env()

# Reset environment
observations, infos = env.reset()

# Game loop
while env.agents:
    # Sample random actions for all agents
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    # Step environment
    observations, rewards, terminations, truncations, infos = env.step(actions)

    # Remove terminated/truncated agents
    env.agents = [agent for agent in env.agents
                  if not (terminations[agent] or truncations[agent])]

env.close()
```

## Single-Agent Training with Stable-Baselines3

```python
from tft_set4_gym import TFTSingleAgentWrapper
from stable_baselines3 import PPO

# Create single-agent wrapper for SB3
env = TFTSingleAgentWrapper()

# Create and train model
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)

# Test trained model
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

## Observation Space

```python
observation_space = Dict({
    'tensor': Box(0.0, 55.0, (5152,), float64),  # Game state vector
    'action_mask': Box(0, 1, (54,), int8)        # Valid action mask
})
```

## Action Space

```python
action_space = MultiDiscrete([7, 37, 10])
# [action_type, target/item_id, position]
```

## Rewards

- Placement-based: Higher placement = higher reward
- Winner: 250 points
- Elimination order: places 2-8 earn (9 - placement) × 25 points
- Example: 1st place = 250, 2nd place = 175, ..., 8th place = 25
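The mapping above can be sketched as a small helper (a sketch for illustration; the actual reward logic lives inside the environment):

```python
def placement_reward(placement: int) -> int:
    """Reward for a final placement, 1 (winner) through 8 (first eliminated)."""
    # The winner gets a flat 250; everyone else scales with elimination order.
    return 250 if placement == 1 else (9 - placement) * 25
```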
## Game Mechanics

### Champions

- 58 unique champions from TFT Set 4
- Star levels: 1⭐, 2⭐, 3⭐ upgrades
- Abilities: Unique champion abilities with mana system
- Origins & Classes: Synergy bonuses (Warlord, Mystic, etc.)
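Synergy activation boils down to counting shared traits across the fielded board. A sketch with a hypothetical data layout (the simulator's internal `Champion` representation will differ) and real Set 4 trait pairings:

```python
from collections import Counter

def active_traits(board, thresholds):
    # Count how many fielded champions share each origin/class,
    # then keep the traits that reach their activation threshold.
    counts = Counter(trait for champ in board for trait in champ["traits"])
    return {t: n for t, n in counts.items() if n >= thresholds.get(t, float("inf"))}

board = [
    {"name": "Garen",    "traits": ["Warlord", "Vanguard"]},
    {"name": "Katarina", "traits": ["Warlord", "Fortune"]},
    {"name": "Jarvan",   "traits": ["Warlord", "Keeper"]},
]
thresholds = {"Warlord": 3, "Vanguard": 2, "Fortune": 3, "Keeper": 2}
result = active_traits(board, thresholds)  # only Warlord reaches its threshold
```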
### Items

- Component items: Basic items that can be combined
- Completed items: Powerful items with unique effects
- Item crafting: Combine components to create completed items
- Spatial items: Items that affect positioning
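Component crafting is an order-independent pair lookup. A sketch with two illustrative recipes (the environment defines the full Set 4 recipe table internally):

```python
from typing import Optional

# Illustrative recipe table; keys are sorted component pairs.
RECIPES = {
    ("B.F. Sword", "B.F. Sword"): "Deathblade",
    ("B.F. Sword", "Chain Vest"): "Guardian Angel",
}

def craft(a: str, b: str) -> Optional[str]:
    # Sorting makes the lookup order-independent: craft(x, y) == craft(y, x).
    return RECIPES.get(tuple(sorted((a, b))))
```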
### Combat

- Auto-chess combat: Champions fight automatically
- Positioning matters: Frontline, backline, corner positioning
- Damage calculation: Complex damage, armor, and resistance system
- Crowd control: Stuns, fears, charms, and other effects
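Armor and magic resist in auto-battlers typically mitigate damage multiplicatively. A sketch assuming the common `100 / (100 + resist)` curve (the environment's actual formula may differ):

```python
def mitigated_damage(raw: float, resist: float) -> float:
    # Each point of armor/MR reduces damage multiplicatively:
    # 100 resist halves incoming damage, 300 quarters it.
    return raw * 100.0 / (100.0 + resist)
```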
### Economy

- Gold management: Income, interest, and spending decisions
- Shop system: Rolling for champions, costs based on level
- Experience: Leveling up increases shop pool and board size
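Round income in standard TFT is base gold plus interest on banked gold (1 per 10, capped at 5) plus any streak bonus. A sketch assuming the simulator follows the standard rules:

```python
def round_income(gold: int, base: int = 5, streak_bonus: int = 0) -> int:
    # Interest: 1 gold per 10 banked, capped at 5 (i.e. at 50+ gold).
    interest = min(gold // 10, 5)
    return base + interest + streak_bonus
```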
## Configuration

```python
from tft_set4_gym import parallel_env

# Custom game settings
env = parallel_env(
    num_players=6,    # 6-player game instead of 8
    max_rounds=30,    # Shorter games
    debug_mode=True   # Enable debug logging
)
```

## Wrappers

```python
from tft_set4_gym.wrappers import (
    RewardShapingWrapper,
    ActionMaskingWrapper,
    ObservationWrapper
)

env = parallel_env()
env = RewardShapingWrapper(env)   # Add intermediate rewards
env = ActionMaskingWrapper(env)   # Enforce action masking
env = ObservationWrapper(env)     # Custom observation format
```

## API Reference

- `parallel_env()`: Create parallel multi-agent environment
- `TFTSingleAgentWrapper`: Single-agent wrapper for SB3 compatibility
- `TFT_Simulator`: Main game simulation engine
- `Player`: Player state and actions
- `Champion`: Individual champion with abilities
- `Game_Round`: Round management and progression
- `Observation`: Environment observation generation
- `utils.py`: Helper functions for game state analysis
- `config.py`: Configuration constants and settings
## Development

Run the test suite:

```bash
pytest tests/
```

Format code and sort imports:

```bash
black tft_set4_gym/
isort tft_set4_gym/
```

## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
## Requirements

- Python: 3.8+
- PettingZoo: 1.24+
- Gymnasium: 0.29+
- NumPy: 1.21+
## Performance

- Environment speed: ~50-200 FPS depending on hardware
- Memory usage: ~200MB per environment
- Vectorization: Supports multiple parallel environments
## Research Applications

This environment is designed for:
- Multi-agent reinforcement learning research
- Game AI development
- Strategic decision making studies
- Curriculum learning experiments
- Meta-learning research
## Citation

If you use this environment in your research, please cite:

```bibtex
@software{tft_set4_gym,
  title={TFT Set 4 Gymnasium Environment},
  author={Lobotuerk},
  url={https://github.com/Lobotuerk/TFT-Set4-Gym},
  year={2025}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Riot Games for creating Teamfight Tactics
- PettingZoo team for the multi-agent RL framework
- Gymnasium team for the RL environment standards
- TFT community for game mechanics documentation
## Support

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki