This repository contains a real-world Soft Actor-Critic (SAC) reinforcement learning setup using:
- UR robot (via RTDE interface)
- ✊ Robotis PRO Series Gripper
- 🔍 FSR force sensors (2-channel analog readout)
- 💻 All controlled directly from a single machine (no ZMQ)
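As a rough illustration of the 2-channel FSR readout, here is a minimal sketch of normalizing raw analog values. The 10-bit ADC range (0–1023) and the function name are assumptions, not the actual `robot/fsr_sensor.py` API:

```python
# Hypothetical sketch: convert a 2-channel FSR analog readout into
# normalized values in [0, 1]. Assumes a 10-bit ADC (0-1023 counts);
# the real robot/fsr_sensor.py module may scale differently.

ADC_MAX = 1023  # assumed 10-bit ADC resolution


def normalize_fsr(a0: int, a1: int) -> tuple[float, float]:
    """Clamp raw ADC counts to [0, ADC_MAX] and scale each channel to [0, 1]."""
    def clamp(v: int) -> int:
        return min(max(v, 0), ADC_MAX)

    return clamp(a0) / ADC_MAX, clamp(a1) / ADC_MAX
```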
⚠️ IMPORTANT: When cloning this repository, rename the folder to `RL_2025` to match internal module paths.
```bash
git clone https://github.com/omletkang/RL-2025.git RL_2025
cd RL_2025
```

Repository structure:

```
RL_2025/
├── robot/          # Hardware interface modules
│   ├── gripper.py
│   ├── ur_robot.py
│   └── fsr_sensor.py
├── rollout.py      # Real-world environment wrapper
├── train_sac.py    # Training script using SAC
├── sac.py          # Soft Actor-Critic implementation
├── run/            # Automatically created for storing logs and checkpoints
└── README.md
```
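To make the layout concrete, here is a hypothetical sketch of the role `rollout.py` plays: wrapping the three hardware modules behind a gym-style environment. All method names (`tcp_z`, `position`, `move`, `read`) are assumptions; the real interfaces in `robot/` may differ:

```python
# Hypothetical sketch of an environment wrapper in the spirit of rollout.py.
# Hardware objects are injected, so the structure can be exercised with stubs;
# the actual classes in robot/ almost certainly expose different methods.

class GraspEnv:
    def __init__(self, robot, gripper, fsr):
        self.robot = robot      # UR robot via RTDE (assumed .tcp_z())
        self.gripper = gripper  # Robotis gripper (assumed .position()/.move())
        self.fsr = fsr          # FSR sensor (assumed .read() -> (a0, a1))

    def _obs(self):
        # Observation layout from the README:
        # [TCP z-height, gripper_pos, FSR A0, FSR A1]
        a0, a1 = self.fsr.read()
        return [self.robot.tcp_z(), self.gripper.position(), a0, a1]

    def step(self, action):
        # Action: normalized gripper command in [-1, 1]; the 0-550 mapping
        # is assumed to happen inside the gripper module.
        self.gripper.move(action)
        return self._obs()
```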
Train SAC on the real robot for 100 episodes:
```bash
python train_sac.py --n_episodes 100
```

To resume from a previous run:

```bash
python train_sac.py --resume run/2025_05_25_1530
```

- Observation: `[TCP z-height, gripper_pos, FSR A0, FSR A1]`
- Action: gripper position (normalized -1 to 1, mapped to 0–550 internally)
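The action mapping described above can be sketched as a simple linear rescaling. The clamping behavior and exact formula are assumptions about the internal implementation:

```python
# Minimal sketch of the documented action mapping: a normalized action in
# [-1, 1] linearly mapped to a gripper position command in [0, 550].
# The clamping and formula are assumptions, not the repository's exact code.

GRIPPER_MAX = 550


def action_to_gripper_pos(action: float) -> float:
    """Linearly map a normalized action in [-1, 1] to [0, GRIPPER_MAX]."""
    action = max(-1.0, min(1.0, action))  # clamp out-of-range policy outputs
    return (action + 1.0) / 2.0 * GRIPPER_MAX
```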
This project is developed in a research setting. Please contact the author for reuse or collaboration.
Seung Hoon Kang (Soft Robotics and Bionics Lab, Seoul National University)
GitHub: omletkang