Merge pull request #20 from SyneRBI/main-backend
main (submission) & petric (backend) framework
casperdcl authored Jun 27, 2024
2 parents a82141d + b4e3d85 commit a8d3155
Showing 9 changed files with 268 additions and 174 deletions.
7 changes: 6 additions & 1 deletion .github/workflows/run.yml
```diff
@@ -8,12 +8,15 @@ jobs:
   check:
     runs-on: ubuntu-latest
     container:
-      image: ghcr.io/synerbi/sirf:latest
+      image: ghcr.io/synerbi/sirf:edge
       options: --user root # https://github.com/actions/checkout/issues/956
     steps:
     - uses: actions/checkout@v4
     - name: Install dependencies
+      shell: bash -el {0}
       run: |
+        source /opt/SIRF-SuperBuild/INSTALL/bin/env_sirf.sh
+        conda install -y tensorboard tensorboardx
         if test -f apt.txt; then
           sudo apt-get update
           xargs -a apt.txt sudo apt-get install -y
@@ -25,7 +28,9 @@ jobs:
         pip install -r requirements.txt
       fi
     - name: Test imports
+      shell: bash -el {0}
       run: |
+        source /opt/SIRF-SuperBuild/INSTALL/bin/env_sirf.sh
         python <<EOF
         from main import Submission, submission_callbacks
         from cil.optimisation.algorithms import Algorithm
```
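The `Test imports` step above feeds a heredoc to `python` as an import smoke test: if any required module is missing, the job fails early. The same idea can be sketched portably (stdlib-only; the module names below are stand-ins for the real `main` and `cil` imports, which are not installable here):

```python
import importlib


def smoke_test(modules):
    """Return the subset of `modules` that fail to import."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:  # ModuleNotFoundError is a subclass
            missing.append(name)
    return missing


# stdlib names stand in for `main` / `cil.optimisation.algorithms`
print(smoke_test(["json", "definitely_not_a_module"]))  # ['definitely_not_a_module']
```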
22 changes: 17 additions & 5 deletions README.md
````diff
@@ -13,21 +13,30 @@ The organisers will provide GPU-enabled cloud runners which have access to large
 ## Layout
 
 Only [`main.py`](main.py) is required.
-[SIRF](https://github.com/SyneRBI/SIRF), [CIL](https://github.com/TomographicImaging/CIL), and CUDA are already installed (using [synerbi/sirf:latest-gpu](https://github.com/synerbi/SIRF-SuperBuild/pkgs/container/sirf)).
+[SIRF](https://github.com/SyneRBI/SIRF), [CIL](https://github.com/TomographicImaging/CIL), and CUDA are already installed (using [synerbi/sirf](https://github.com/synerbi/SIRF-SuperBuild/pkgs/container/sirf)).
 Additional dependencies may be specified via `apt.txt`, `environment.yml`, and/or `requirements.txt`.
 
 - (required) `main.py`: must define a `class Submission(cil.optimisation.algorithms.Algorithm)`
-- `notebook.ipynb`: can be used for experimenting. Runs `main.Submission()` on test `data` with basic `metrics`
 - `apt.txt`: passed to `apt install`
 - `environment.yml`: passed to `conda install`
 - `requirements.txt`: passed to `pip install`
 
+Some `example*.ipynb` notebooks are provided and can be used for experimenting.
+
 ## Organiser setup
 
-The organisers will execute:
+The organisers will effectively execute:
 
 ```sh
 docker run --rm -it -v data:/mnt/share/petric:ro ghcr.io/synerbi/sirf:edge-gpu
+# or ideally ghcr.io/synerbi/sirf:latest-gpu after the next SIRF release!
+conda install tensorboard tensorboardx
 python
 ```
 
 ```python
 from main import Submission, submission_callbacks
+from petric import data, metrics
 assert issubclass(Submission, cil.optimisation.algorithms.Algorithm)
 with Timeout(minutes=5):
     Submission(data).run(np.inf, callbacks=metrics + submission_callbacks)
@@ -37,5 +37,8 @@ with Timeout(minutes=5):
 ```
 
 > To avoid timing out, please disable any debugging/plotting code before submitting!
 > This includes removing any progress/logging from `submission_callbacks`.
 The organisers will have private versions of `data` and `metrics`.
-Smaller test (public) versions of `data` and `metrics` are defined in the [`notebook.ipynb`](notebook.ipynb).
+- `metrics` are described in the [wiki](https://github.com/SyneRBI/PETRIC/wiki), but are not yet part of this repository
+- `data` to test/train your `Algorithm`s is available at https://petric.tomography.stfc.ac.uk/data/ and is likely to grow (more info to follow soon)
+  - fewer datasets will be used by the organisers to provide a temporary leaderboard
+
+Any modifications to `petric.py` are ignored.
````
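The `Timeout(minutes=5)` context manager in the organiser snippet belongs to the (private) petric harness and is not defined in this excerpt. A minimal stand-in based on POSIX interval timers could look like the following (an assumption, not the organisers' actual implementation; Unix-only, since it relies on `SIGALRM`):

```python
import signal


class Timeout:
    """Raise TimeoutError if the `with` body runs longer than `minutes`.

    Hypothetical sketch of the harness's `Timeout`; real behaviour may differ.
    """
    def __init__(self, minutes: float):
        self.seconds = minutes * 60

    def _handler(self, signum, frame):
        raise TimeoutError(f"exceeded {self.seconds} s limit")

    def __enter__(self):
        # install our handler and arm a one-shot real-time timer
        self._old_handler = signal.signal(signal.SIGALRM, self._handler)
        signal.setitimer(signal.ITIMER_REAL, self.seconds)
        return self

    def __exit__(self, *exc):
        # disarm the timer and restore the previous handler
        signal.setitimer(signal.ITIMER_REAL, 0)
        signal.signal(signal.SIGALRM, self._old_handler)
        return False
```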
18 changes: 16 additions & 2 deletions SIRF_data_preparation/BSREM_NeuroLF_Hoffman.py
```diff
@@ -1,3 +1,17 @@
-from BSREM_common import run
+from petric import MetricsWithTimeout, get_data
+from sirf.contrib.BSREM.BSREM import BSREM1
+from sirf.contrib.partitioner import partitioner
 
-run(num_subsets=16, transverse_slice=72)
+data = get_data(srcdir="./data/NeuroLF_Hoffman_Dataset", outdir="./output/BSREM_NeuroLF_Hoffman")
+num_subsets = 16
+data_sub, acq_models, obj_funs = partitioner.data_partition(data.acquired_data, data.additive_term, data.mult_factors,
+                                                            num_subsets, initial_image=data.OSEM_image)
+# WARNING: modifies prior strength with 1/num_subsets (as currently needed for BSREM implementations)
+data.prior.set_penalisation_factor(data.prior.get_penalisation_factor() / len(obj_funs))
+data.prior.set_up(data.OSEM_image)
+for f in obj_funs:  # add prior evenly to every objective function
+    f.set_prior(data.prior)
+
+algo = BSREM1(data_sub, obj_funs, initial=data.OSEM_image, initial_step_size=.3, relaxation_eta=.01,
+              update_objective_interval=10)
+algo.run(5000, callbacks=[MetricsWithTimeout(transverse_slice=72)])
```
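The WARNING comment above concerns prior weighting: the penalisation factor is divided by the number of subset objectives so that, summed over one full pass through all subsets, the total prior strength equals the original. The arithmetic is simply:

```python
num_subsets = 16
full_penalisation_factor = 0.8  # hypothetical original value of get_penalisation_factor()

# what set_penalisation_factor() receives: 1/num_subsets of the original
per_subset = full_penalisation_factor / num_subsets

# each subset objective carries `per_subset` of the prior, so one full
# pass over all 16 objectives applies the original total strength
total_applied = per_subset * num_subsets
assert total_applied == full_penalisation_factor
```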
18 changes: 16 additions & 2 deletions SIRF_data_preparation/BSREM_Vision600_thorax.py
```diff
@@ -1,3 +1,17 @@
-from BSREM_common import run
+from petric import MetricsWithTimeout, get_data
+from sirf.contrib.BSREM.BSREM import BSREM1
+from sirf.contrib.partitioner import partitioner
 
-run(num_subsets=5, transverse_slice=None)
+data = get_data(srcdir="./data/Siemens_Vision600_thorax", outdir="./output/BSREM_Vision600_thorax")
+num_subsets = 5
+data_sub, acq_models, obj_funs = partitioner.data_partition(data.acquired_data, data.additive_term, data.mult_factors,
+                                                            num_subsets, initial_image=data.OSEM_image)
+# WARNING: modifies prior strength with 1/num_subsets (as currently needed for BSREM implementations)
+data.prior.set_penalisation_factor(data.prior.get_penalisation_factor() / len(obj_funs))
+data.prior.set_up(data.OSEM_image)
+for f in obj_funs:  # add prior evenly to every objective function
+    f.set_prior(data.prior)
+
+algo = BSREM1(data_sub, obj_funs, initial=data.OSEM_image, initial_step_size=.3, relaxation_eta=.01,
+              update_objective_interval=10)
+algo.run(5000, callbacks=[MetricsWithTimeout()])
```
137 changes: 0 additions & 137 deletions SIRF_data_preparation/BSREM_common.py

This file was deleted.

18 changes: 16 additions & 2 deletions SIRF_data_preparation/BSREM_mMR_NEMA_IQ.py
```diff
@@ -1,3 +1,17 @@
-from BSREM_common import run
+from petric import MetricsWithTimeout, get_data
+from sirf.contrib.BSREM.BSREM import BSREM1
+from sirf.contrib.partitioner import partitioner
 
-run(num_subsets=7, transverse_slice=72, coronal_slice=109)
+data = get_data(srcdir="./data/Siemens_mMR_NEMA_IQ", outdir="./output/BSREM_mMR_NEMA_IQ")
+num_subsets = 7
+data_sub, acq_models, obj_funs = partitioner.data_partition(data.acquired_data, data.additive_term, data.mult_factors,
+                                                            num_subsets, initial_image=data.OSEM_image)
+# WARNING: modifies prior strength with 1/num_subsets (as currently needed for BSREM implementations)
+data.prior.set_penalisation_factor(data.prior.get_penalisation_factor() / len(obj_funs))
+data.prior.set_up(data.OSEM_image)
+for f in obj_funs:  # add prior evenly to every objective function
+    f.set_prior(data.prior)
+
+algo = BSREM1(data_sub, obj_funs, initial=data.OSEM_image, initial_step_size=.3, relaxation_eta=.01,
+              update_objective_interval=10)
+algo.run(5000, callbacks=[MetricsWithTimeout(transverse_slice=72, coronal_slice=109)])
```
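In these scripts, `partitioner.data_partition` splits the acquired data into `num_subsets` disjoint subsets, each with its own acquisition model and objective function. For intuition, a toy round-robin split of view indices (an illustration only, not SIRF's actual partitioning scheme; 252 views is a made-up count):

```python
def partition_views(num_views: int, num_subsets: int):
    """Round-robin assignment of view indices to `num_subsets` disjoint subsets."""
    return [list(range(start, num_views, num_subsets)) for start in range(num_subsets)]


# e.g. 252 views split into 7 subsets of 36 views each
subsets = partition_views(252, 7)
assert len(subsets) == 7 and all(len(s) == 36 for s in subsets)
```

Round-robin (as opposed to contiguous blocks) spreads the views of each subset evenly over angles, which tends to make subset objectives better-balanced surrogates of the full objective.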
1 change: 0 additions & 1 deletion SIRF_data_preparation/README.md
```diff
@@ -20,4 +20,3 @@ Participants should never have to use these (unless you want to create your own
 ## Helpers
 
 - `PET_plot_functions.py`: plotting helpers
-
```
68 changes: 44 additions & 24 deletions main.py
```diff
@@ -1,32 +1,52 @@
-"""Usage in notebook.ipynb:
-from main import Submission, submission_callbacks
-from cil.optimisation.utilities import callbacks
-data = "TODO"
-metrics = [callbacks.ProgressCallback()]
+"""Main file to modify for submissions. It is used by e.g. example.ipynb and petric.py as follows:
 
-algorithm = Submission(data)
-algorithm.run(np.inf, callbacks=metrics + submission_callbacks)
+>>> from main import Submission, submission_callbacks
+>>> from petric import data, metrics
+>>> algorithm = Submission(data)
+>>> algorithm.run(np.inf, callbacks=metrics + submission_callbacks)
 """
-from cil.optimisation.algorithms import GD
-from cil.optimisation.utilities.callbacks import Callback
+from cil.optimisation.algorithms import Algorithm
+from cil.optimisation.utilities import callbacks
+from petric import Dataset
+from sirf.contrib.BSREM.BSREM import BSREM1
+from sirf.contrib.partitioner import partitioner
 
+assert issubclass(BSREM1, Algorithm)
 
-class EarlyStopping(Callback):
-    def __call__(self, algorithm):
-        if algorithm.x <= -15:  # arbitrary stopping criterion
-            raise StopIteration
 
+class MaxIteration(callbacks.Callback):
+    """
+    The organisers try to `Submission(data).run(inf)` i.e. for infinite iterations (until timeout).
+    This callback forces stopping after `max_iteration` instead.
+    """
+    def __init__(self, max_iteration: int, verbose: int = 1):
+        super().__init__(verbose)
+        self.max_iteration = max_iteration
 
-submission_callbacks = [EarlyStopping()]
+    def __call__(self, algorithm: Algorithm):
+        if algorithm.iteration >= self.max_iteration:
+            raise StopIteration
 
 
-class Submission(GD):
-    def __init__(self, data, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-        # Your code here
-        self.data = data
+class Submission(BSREM1):
+    # note that `issubclass(BSREM1, Algorithm) == True`
+    def __init__(self, data: Dataset, num_subsets: int = 7, update_objective_interval: int = 10):
+        """
+        Initialisation function, setting up data & (hyper)parameters.
+        NB: in practice, `num_subsets` should likely be determined from the data.
+        This is just an example. Try to modify and improve it!
+        """
+        data_sub, acq_models, obj_funs = partitioner.data_partition(data.acquired_data, data.additive_term,
+                                                                    data.mult_factors, num_subsets,
+                                                                    initial_image=data.OSEM_image)
+        # WARNING: modifies prior strength with 1/num_subsets (as currently needed for BSREM implementations)
+        data.prior.set_penalisation_factor(data.prior.get_penalisation_factor() / len(obj_funs))
+        data.prior.set_up(data.OSEM_image)
+        for f in obj_funs:  # add prior evenly to every objective function
+            f.set_prior(data.prior)
 
-    def update(self):
-        # Your code here
-        return super().update()
+        super().__init__(data_sub, obj_funs, initial=data.OSEM_image, initial_step_size=.3, relaxation_eta=.01,
+                         update_objective_interval=update_objective_interval)
 
+
+submission_callbacks = [MaxIteration(660)]
```
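`MaxIteration` relies on CIL's convention that a callback raising `StopIteration` terminates `Algorithm.run`. A dependency-free toy demonstrating that contract (`ToyAlgorithm` is a simplified stand-in for CIL's run loop, not its real implementation):

```python
class MaxIteration:
    """Same logic as the callback above, minus the CIL base class."""
    def __init__(self, max_iteration: int):
        self.max_iteration = max_iteration

    def __call__(self, algorithm):
        if algorithm.iteration >= self.max_iteration:
            raise StopIteration


class ToyAlgorithm:
    """Stand-in for cil.optimisation.algorithms.Algorithm's run loop."""
    def __init__(self):
        self.iteration = 0

    def run(self, iterations, callbacks=()):
        while self.iteration < iterations:
            self.iteration += 1  # one update step
            try:
                for cb in callbacks:
                    cb(self)
            except StopIteration:  # a callback requested early stopping
                break


algo = ToyAlgorithm()
algo.run(float("inf"), callbacks=[MaxIteration(660)])
print(algo.iteration)  # 660, despite "infinite" requested iterations
```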
