This guide provides step-by-step instructions for setting up the environment, installing dependencies, recording demonstrations, and running experiments.
```shell
git clone https://github.com/AssistiveRoboticsUNH/Safe_diffusion_policy
conda create --name safe_lfd
conda activate safe_lfd
git clone https://github.com/ubc-vision/vivid123
```
Follow the official PyTorch Installation Guide to install the appropriate version.
Example (for CUDA 11.8):
```shell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
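After installing, you can confirm that the PyTorch build actually sees your GPU before going further. This is a quick sketch (`cuda_summary` is a hypothetical helper, not part of the repo):

```python
# Sanity check (hypothetical helper): confirm the installed PyTorch build
# can see the GPU before proceeding with the rest of the setup.
import torch

def cuda_summary():
    """Return (torch version, CUDA available, device name or None)."""
    available = torch.cuda.is_available()
    name = torch.cuda.get_device_name(0) if available else None
    return torch.__version__, available, name

version, available, name = cuda_summary()
print(f"torch {version} | CUDA available: {available} | device: {name}")
```

If `CUDA available` prints `False`, revisit the PyTorch/CUDA pairing before installing the remaining packages.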
Install the remaining dependencies:

```shell
pip install "diffusers==0.24" transformers accelerate einops kornia imageio[ffmpeg] opencv-python pydantic scikit-image lpips h5py
conda install conda-forge::matplotlib
pip install carvekit --extra-index-url https://download.pytorch.org/whl/cu113
pip install h5py
pip install scipy
pip install omegaconf
pip install dill
pip install scikit-learn
```
Install robomimic from source:

```shell
cd <PATH_TO_YOUR_INSTALL_DIRECTORY>
git clone https://github.com/ARISE-Initiative/robomimic.git
cd robomimic
pip install -e .
```
Install robosuite from source (do not use `pip install robosuite`):

```shell
cd <PATH_TO_INSTALL_DIR>
git clone https://github.com/ARISE-Initiative/robosuite.git
cd robosuite
git checkout v1.4.1
pip install -r requirements.txt
```
Clone diffusion_policy and create its conda environment:

```shell
git clone https://github.com/real-stanford/diffusion_policy
cd diffusion_policy
conda env create -f conda_environment.yaml
```
You can either record a demonstration using the robot or test your installation with our sample dataset.
```shell
pip install gdown
gdown --fuzzy "https://drive.google.com/file/d/1KnVeUR2r97q7at0cCsCNG-uVJqIuRdwl/view?usp=drive_link"
```
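Once the sample dataset is downloaded, you can sanity-check its contents. The sketch below assumes the robomimic HDF5 layout (demonstrations stored under a top-level `data` group); adjust the group names if your file differs:

```python
# Minimal sketch for inspecting the downloaded dataset. Assumes the
# robomimic HDF5 layout (demos under a top-level "data" group).
import h5py

def list_demos(path="demo.hdf5"):
    """Return demo names and, where present, their per-demo sample counts."""
    with h5py.File(path, "r") as f:
        demos = sorted(f["data"].keys())
        counts = {d: f["data"][d].attrs.get("num_samples") for d in demos}
    return demos, counts
```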
Modify the following line in `run_vivid_123_experiment_v1.py`:

```python
dataset_path = "demo.hdf5"
```

Then run the experiment:

```shell
python run_vivid_123_experiment_v1.py
```
To generate safe sets and synthesize trajectories, install the required dependencies:
```shell
pip install mpl-tools
conda install conda-forge::scipy
conda install conda-forge::plotly
conda install conda-forge::pyvista
```
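To illustrate the idea behind safe-set generation, here is a minimal sketch (not the repository's actual method) of one common construction: take the convex hull of end-effector positions visited in the demonstrations and test new points for membership. `build_safe_set` and `is_safe` are hypothetical helpers:

```python
# Sketch: build a "safe set" as the convex hull of demonstrated positions
# and check whether a query point lies inside it. Illustrative only.
import numpy as np
from scipy.spatial import Delaunay

def build_safe_set(points):
    """Triangulate the demonstrated positions (N x 3 array)."""
    return Delaunay(np.asarray(points, dtype=float))

def is_safe(safe_set, query):
    """True if the query point lies inside the demonstrated region."""
    return safe_set.find_simplex(np.asarray(query, dtype=float)[None, :])[0] >= 0
```

Points on the hull boundary are handled by `find_simplex`'s internal tolerance; the actual pipeline may use a different representation (e.g. level sets or learned regions).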
Copy the following files to the diffusion_policy folder:

- train_object_detector_using_visionencoder.ipynb
- safe_image_franka_image_240_320.yaml
- safe_train_franka.ipynb
To train the diffusion policy:

- Edit `safe_image_franka_image_240_320.yaml`.
- Run `safe_train_franka.ipynb`.

To train the object detector with the same vision encoder, run `train_object_detector_using_visionencoder.ipynb`.
Download the demo files from the robomimic website, or collect your own data. Then edit `safe_image_lift_sim_train.yaml` to update the data path.
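If you prefer to update the data path in the config programmatically rather than by hand, a sketch follows. The key name `dataset_path` is an assumption; use whatever key `safe_image_lift_sim_train.yaml` actually stores the path under:

```python
# Sketch: rewrite the dataset path in a YAML training config.
# The key name "dataset_path" is an assumption about the config layout.
import yaml

def set_dataset_path(config_file, new_path, key="dataset_path"):
    """Load the config, replace the data path, and write it back."""
    with open(config_file) as f:
        cfg = yaml.safe_load(f)
    cfg[key] = new_path
    with open(config_file, "w") as f:
        yaml.safe_dump(cfg, f)
    return cfg
```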
Copy the following files to the diffusion_policy folder:

- safe_image_lift_sim_train.yaml
- safe_train.py
- eval_dp_sim.ipynb

Run `safe_train.py` to train the policy, then `eval_dp_sim.ipynb` to evaluate it.
To generate safe sets in the simulation environment, install the dependencies listed above, then run `create_safeset_sim.ipynb`.
- Ensure that you have CUDA installed and properly configured to leverage GPU acceleration.
- If you encounter installation issues, check the package compatibility with your system.
- For PyTorch version compatibility with CUDA, refer to PyTorch's official guide.
This project is licensed under the MIT License. See the LICENSE file for more details.
- Riad Ahmed – Maintainer & Developer
- Moniruzzaman Akash – Contributor
For any issues or queries, feel free to open an issue or reach out at [[email protected]].