GUARDIAN: Guarding Against Uncertainty and Adversarial Risks in Robot-Assisted Surgeries (MICCAI'24 - UNSURE)
Ufaq Khan, Umair Nawaz, Tooba Tehreem Sheikh, Asif Hanif and Mohammad Yaqub
- Oct 03, 2024 : Final code and models will be released soon
- July 15, 2024 : Accepted in UNSURE - MICCAI 2024 🎊 🎉
- Installation
- Models
- Datasets
- Code Structure
- Run Experiments
- Results
- Citation
- Contact
- Acknowledgement
The model depends on the following libraries:
- Python >= 3.5
- sklearn
- PIL
- ivtmetrics
- Developer's framework:
  - For TensorFlow version 1:
    - TF >= 1.10
  - For TensorFlow version 2:
    - TF >= 2.1
  - For PyTorch version:
    - PyTorch >= 1.10.1
    - TorchVision >= 0.11
Steps to install dependencies:
- Create and activate a conda environment:

```shell
conda create --name aiproject python=3.8
conda activate aiproject
```

- Install PyTorch and other dependencies:

```shell
pip install -r requirements.txt
```

We have shown the efficacy of GUARDIAN on two types of models: a recognition model and an object detection model. The detection model is used to validate the cross-task transferability of our attacks.
The dataset folders for CholecT45 should be organized as follows:
This folder includes:
- CholecT45 dataset:
- data: 45 cholecystectomy videos
- triplet: triplet annotations on 45 videos
- instrument: tool annotations on 45 videos
- verb: action annotations on 45 videos
- target: target annotations on 45 videos
- dict: id-to-name mapping files
- a LICENCE file
- a README file
The dataset directory structure:

```
──CholecT45
├───data
│   ├───VID01
│   │   ├───000000.png
│   │   ├───000001.png
│   │   ├───000002.png
│   │   ├───...
│   │   └───N.png
│   ├───VID02
│   │   ├───000000.png
│   │   ├───000001.png
│   │   ├───000002.png
│   │   ├───...
│   │   └───N.png
│   ├───...
│   └───VIDN
│       ├───000000.png
│       ├───000001.png
│       ├───000002.png
│       ├───...
│       └───N.png
├───triplet
│   ├───VID01.txt
│   ├───VID02.txt
│   ├───...
│   └───VIDNN.txt
├───instrument
│   ├───VID01.txt
│   ├───VID02.txt
│   ├───...
│   └───VIDNN.txt
├───verb
│   ├───VID01.txt
│   ├───VID02.txt
│   ├───...
│   └───VIDNN.txt
├───target
│   ├───VID01.txt
│   ├───VID02.txt
│   ├───...
│   └───VIDNN.txt
├───dict
│   ├───triplet.txt
│   ├───instrument.txt
│   ├───verb.txt
│   ├───target.txt
│   └───maps.txt
├───LICENSE
└───README.md
```
| Dataset | Link |
|---|---|
| CholecT45 | Download |
| m2cai16-tool-locations | Download |
The code can be run in training mode (-t), testing mode (-e), or both (-t -e) if you want to evaluate at the end of training:
Simple training on CholecT45 dataset:
```shell
python run.py -t --data_dir="/path/to/dataset" --dataset_variant=cholect45-crossval --version=1
```
You can include more details such as epoch, batch size, cross-validation and evaluation fold, weight initialization, learning rates for all subtasks, etc.:
```shell
python3 run.py -t -e --data_dir="/path/to/dataset" --dataset_variant=cholect45-crossval --kfold=1 --epochs=180 --batch=64 --version=2 -l 1e-2 1e-3 1e-4 --pretrain_dir='path/to/imagenet/weights'
```
All the flags can be seen in the run.py file.
The experimental setup of the published model is described in the paper.
To run evaluation only on a trained model:

```shell
python3 run.py -e --data_dir="/path/to/dataset" --dataset_variant=cholect45-crossval --kfold 1 --batch 32 --version=1 --test_ckpt="/path/to/model-k3/weights"
```
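The flags used in the commands above can be mirrored with a minimal argparse sketch. The flag names are taken from the commands shown here; the defaults and help strings are illustrative assumptions, not the repository's actual values:

```python
import argparse

def build_parser():
    """Argument parser mirroring the run.py flags shown in the commands above."""
    p = argparse.ArgumentParser(prog="run.py")
    p.add_argument("-t", "--train", action="store_true", help="run training")
    p.add_argument("-e", "--test", action="store_true", help="run evaluation")
    p.add_argument("--data_dir", type=str, required=True, help="path to dataset root")
    p.add_argument("--dataset_variant", type=str, default="cholect45-crossval")
    p.add_argument("--kfold", type=int, default=1, help="cross-validation fold")
    p.add_argument("--epochs", type=int, default=180)
    p.add_argument("--batch", type=int, default=64)
    p.add_argument("--version", type=int, default=1)
    p.add_argument("-l", "--lr", nargs="+", type=float,
                   help="learning rates for the subtasks")
    p.add_argument("--pretrain_dir", type=str, default=None)
    p.add_argument("--test_ckpt", type=str, default=None)
    return p
```

For instance, parsing `-t -e --data_dir /data -l 1e-2 1e-3 1e-4` sets both mode flags and yields the three per-subtask learning rates as floats.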
| Dataset | Model | Link |
|---|---|---|
| CholecT45 Cross-Val | Rendezvous | Download |
If you find our work useful, please consider giving a star ⭐ and citation.
@inproceedings{khanguardian,
  title={GUARDIAN: Guarding Against Uncertainty and Adversarial Risks in Robot-Assisted Surgeries},
  author={Khan, Ufaq and Nawaz, Umair and Sheikh, Tooba and Hanif, Asif and Yaqub, Mohammad},
  booktitle={Uncertainty for Safe Utilization of Machine Learning in Medical Imaging-6th International Workshop},
  year={2024}
}

Should you have any questions, please create an issue on this repository or contact us at [email protected]

