
Commit d902e02

ENH: Adapt HelloWorld to run using the Inference Service (#837)
Closes #696. Changes the HelloWorld model to only use a single channel, as this is the format that the [Inference Service](https://github.com/microsoft/InnerEye-Inference/) expects for its inputs. As many tests require the 2-channel data, this PR also creates a new class, `HelloWorld2Channel`, which inherits from `HelloWorld` but uses 2 channels and can be used for testing. This avoids the pain of having to alter all unit and regression tests that relied on the 2-channel data and model. Both models can be trained on the same data, but now the `HelloWorld` model can be run by the Inference Service.
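The `HelloWorld2Channel` class itself is not visible in the two diffs below, so the following is only a sketch of how such a subclass could look, assuming it lives alongside `HelloWorld` and relies on the new `image_channels` keyword handling introduced in this commit:

```python
from typing import Any

# Hypothetical sketch only: HelloWorld2Channel is described in the commit message but
# does not appear in the diffs shown on this page. It assumes HelloWorld now pops
# "image_channels" from its kwargs with a default of ["channel1"], as in the diff below.
class HelloWorld2Channel(HelloWorld):
    """2-channel variant of HelloWorld, kept for unit and regression tests."""

    def __init__(self, **kwargs: Any) -> None:
        # Restore the original 2-channel configuration by overriding the new default.
        super().__init__(image_channels=["channel1", "channel2"], **kwargs)
```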
1 parent: 8f9d823 · commit: d902e02

2 files changed: +13 −5 lines changed


Diff for: InnerEye/ML/configs/segmentation/HelloWorld.py

+3 −2

````diff
@@ -31,12 +31,13 @@ class HelloWorld(SegmentationModelBase):
     * If you want to test that your AzureML workspace is working, please follow the instructions in
     <repo_root>/docs/hello_world_model.md.
 
-    In this example, the model is trained on 2 input image channels channel1 and channel2, and
+    In this example, the model is trained on 1 input image channels "channel1", and
     predicts 2 foreground classes region, region_1.
     """
 
     def __init__(self, **kwargs: Any) -> None:
         fg_classes = ["region", "region_1"]
+        image_channels = kwargs.pop("image_channels", ["channel1"])
         super().__init__(
             # Data definition - in this section we define where to load the dataset from
             local_dataset=full_ml_test_data_path(),
@@ -45,7 +46,7 @@ def __init__(self, **kwargs: Any) -> None:
             architecture="UNet3D",
             feature_channels=[4],
             crop_size=(64, 64, 64),
-            image_channels=["channel1", "channel2"],
+            image_channels=image_channels,
             ground_truth_ids=fg_classes,
             class_weights=equally_weighted_classes(fg_classes, background_weight=0.02),
             mask_id="mask",
````
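The `kwargs.pop` default above is what lets callers and subclasses override the channel list while a plain `HelloWorld` keeps the single-channel configuration that the Inference Service expects. A minimal usage sketch, assuming the import path taken from the file name above:

```python
# Assumed import path, matching the file shown in the diff heading.
from InnerEye.ML.configs.segmentation.HelloWorld import HelloWorld

single = HelloWorld()                                         # defaults to ["channel1"]
double = HelloWorld(image_channels=["channel1", "channel2"])  # override, e.g. for tests
print(single.image_channels, double.image_channels)
```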

Diff for: docs/source/md/hello_world_model.md

+10 −3

````diff
@@ -68,14 +68,21 @@ A "datastore" in AzureML lingo is an abstraction for the ML systems to access fi
 Instructions to create the datastore are given
 [in the AML setup instructions](setting_up_aml.md) in step 5.
 
-## Run the HelloWorld model in AzureML
+## Train the HelloWorld model in AzureML
 
-Double-check that you have copied your Azure settings into the settings file, as described
-[in the AML setup instructions](setting_up_aml.md) in step 6.
+Double-check that you have copied your Azure settings into the settings file, as described [in the AML setup instructions](setting_up_aml.md) in step 6.
 
 Then execute:
 
 ```shell
 conda activate InnerEye
 python InnerEye/ML/runner.py --model=HelloWorld --azureml
 ```
+
+This will submit a training job to your AzureML workspace. You should see a URL for the run output in your terminal. Follow this link to monitor the job in the AzureML portal.
+
+Once the training job completes, it will register a trained HelloWorld model to your workspace. To see this model, navigate to the completed training run and under the "Overview" tab you will see your model and version under "Registered models". It will be in the format `HelloWorld:<Model Version>`.
+
+## (Optional) Run InnerEye-Inference on the HelloWorld model in AzureML
+
+If you wish to faciliate easily running inference on your models, you can set up the [InnerEye-Inference Service](https://github.com/microsoft/InnerEye-Inference/). Follow instructions in the Inference Service README.md to set it up either locally or as an Azure App Service. You will then be able to run the [start](https://github.com/microsoft/InnerEye-Inference/#start) and [monitor](https://github.com/microsoft/InnerEye-Inference/#results) commands, replacing the model name and version with the model trained in the previous step.
````
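If you would rather look up the registered model name and version programmatically than through the portal, one option (not part of this commit, and assuming the `azureml-core` SDK plus a downloaded `config.json` for your workspace) is:

```python
# Hedged example, not part of this commit: assumes azureml-core is installed and a
# config.json for the workspace is present in the working directory.
from azureml.core import Model, Workspace

workspace = Workspace.from_config()          # load workspace settings from config.json
model = Model(workspace, name="HelloWorld")  # fetches the latest registered version
print(f"{model.name}:{model.version}")       # e.g. "HelloWorld:1", as used by the Inference Service
```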
