This project is detailed in the following paper:

The implementations and results are based on, and generated within, the following file structure:
Before starting, we recommend creating a new conda environment or a virtual environment with Python 3.10+:

```shell
conda create -y -n topo_reg -c conda-forge python=3.11
conda activate topo_reg
```

This project is implemented within our Im2Im Transformation framework. To install it, follow the instructions in the corresponding repository:
```shell
git clone https://github.com/MMV-Lab/mmv_im2im.git
cd mmv_im2im
pip install -e .[all]
```

Then install the remaining dependencies:

```shell
pip install -r requirements.txt
```

If you want to access the implementation of the regularizers, for example to improve them or adapt them to different base loss functions or architectures, you can find the code within the Im2Im Transformation project in the `utils` folder.
With the project installed, you can now train and run inference with the models.
For training, use the provided YAML templates. These files contain clear descriptions of the parameters and training options. Simply set your values, save your configuration, and run:
```shell
run_im2im --config /path/to/your_train_config.yaml
```
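As an orientation, a training configuration might look like the sketch below. The field names here are illustrative assumptions, not the actual schema, so always rely on the provided YAML templates and their parameter descriptions as the source of truth:

```yaml
# Illustrative sketch only -- the exact keys and structure are
# defined by the YAML templates shipped with the project.
mode: train
data:
  data_path: /path/to/training/data   # assumption: location of the paired images
model:
  framework: probabilistic_unet       # assumption: choice of base architecture
trainer:
  params:
    max_epochs: 100                   # assumption: training length
```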
We provide an inference Jupyter notebook to facilitate predictions. It covers both the case where you want to run a prediction with a single trained model and the case where you have multiple trained versions of the same model and want to run inference on the same dataset.

To use it, launch Jupyter Lab, define your model in the provided YAML templates, and follow the instructions in the notebook.
If you prefer to avoid Jupyter Notebook execution, we provide a full CLI version that can be run through:
```shell
python core/inference_cli.py --yaml_path /your/path/ --images_folder /your/path/ --models_folder /your/path/ --multi_output_dir /your/path/ --weight_option last --pipeline_mode multi
```
The inputs are the same as those required in the Jupyter notebook.
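When sweeping several model folders or weight options, it can be convenient to assemble the CLI invocation programmatically. The helper below is a small illustrative sketch (not part of the project) that builds the argument list for the command shown above:

```python
from pathlib import Path


def build_inference_command(yaml_path, images_folder, models_folder,
                            output_dir, weight_option="last",
                            pipeline_mode="multi"):
    """Assemble the argv list for core/inference_cli.py.

    Only the flags shown in the README are used; the paths are
    passed through unchanged.
    """
    return [
        "python", "core/inference_cli.py",
        "--yaml_path", str(Path(yaml_path)),
        "--images_folder", str(Path(images_folder)),
        "--models_folder", str(Path(models_folder)),
        "--multi_output_dir", str(Path(output_dir)),
        "--weight_option", weight_option,
        "--pipeline_mode", pipeline_mode,
    ]


# The resulting list can be passed to subprocess.run(cmd, check=True).
cmd = build_inference_command("config.yaml", "imgs", "models", "out")
```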
For more detailed information on training and inference, please refer to the documentation in our Im2Im Transformation repo.
We provide a model evaluation Jupyter notebook that allows you to extract information about model predictions and compare multiple training runs for a single model.
If you prefer to avoid Jupyter Notebook execution, we provide a full CLI version for each step, which can be run through:
```shell
python core/csv_metric_generation.py --gt-path /your/path/ --predictions-path /your/path/ --eval-class 1
python core/summary_generation.py --csv-path /your/path/
python core/single_model_plots_generation.py --csv-path /your/path/
```
The inputs are the same as those required in the Jupyter notebook.
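To illustrate what the summary step conceptually does, the sketch below averages a per-image metric column from a CSV. The column names (`image`, `dice`) are hypothetical examples and do not necessarily match the files produced by `csv_metric_generation.py`; inspect the generated CSVs for the real schema:

```python
import csv
from statistics import mean


def summarize_metrics(csv_path, metric="dice"):
    """Average a per-image metric column from a CSV file.

    The column name is a hypothetical example; the evaluation
    scripts define the actual columns.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return mean(float(row[metric]) for row in rows)
```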
The training objective combines a base loss function with regularization terms.

Base loss function:
- ELBO (in the case of the Probabilistic U-Net)
- GDL (in the case of the Attention U-Net)

Regularization terms
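In generic terms, such a regularized objective can be written as follows; the weights and the number of regularization terms are placeholders for illustration, not values taken from the paper:

```latex
% Generic form of a regularized training objective (sketch):
% L_base is ELBO or GDL depending on the architecture,
% R_i are the regularization terms and lambda_i their weights.
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{base}} + \sum_{i} \lambda_i \, \mathcal{R}_i
```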
