This repository was archived by the owner on Mar 21, 2024. It is now read-only.
CHANGELOG.md (2 additions, 2 deletions)
@@ -181,7 +181,7 @@ institution id and series id columns are missing.
 - ([#441](https://github.com/microsoft/InnerEye-DeepLearning/pull/441)) Add script to move models from one AzureML workspace to another: `python InnerEye/Scripts/move_model.py`
 - ([#417](https://github.com/microsoft/InnerEye-DeepLearning/pull/417)) Added a generic way of adding PyTorch Lightning
   models to the toolbox. It is now possible to train almost any Lightning model with the InnerEye toolbox in AzureML,
-  with only minimum code changes required. See [the MD documentation](docs/bring_your_own_model.md) for details.
+  with only minimum code changes required. See [the MD documentation](docs/source/md/bring_your_own_model.md) for details.
 - ([#430](https://github.com/microsoft/InnerEye-DeepLearning/pull/430)) Update conversion to 1.0.1 InnerEye-DICOM-RT to
   add: manufacturer, SoftwareVersions, Interpreter and ROIInterpretedTypes.
 - ([#385](https://github.com/microsoft/InnerEye-DeepLearning/pull/385)) Add the ability to train a model on multiple
@@ -354,7 +354,7 @@ console for easier diagnostics.

 #### Fixed

-- When registering a model, it now has a consistent folder structure, described [here](docs/deploy_on_aml.md). This
+- When registering a model, it now has a consistent folder structure, described [here](docs/source/md/deploy_on_aml.md). This
   folder structure is present irrespective of using InnerEye as a submodule or not. In particular, exactly 1 Conda
README.md

 InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with [AzureML](https://docs.microsoft.com/en-gb/azure/machine-learning/), it allows users to train and run inference on the following:

-This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
-It integrates seamlessly with cloud computing in Azure.
+- Segmentation models.
+- Classification and regression models.
+- Any PyTorch Lightning model, via a [bring-your-own-model setup](docs/source/md/bring_your_own_model.md).
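As a rough illustration of the kind of model the bring-your-own-model setup is aimed at, the sketch below shows a minimal PyTorch Lightning module. The class name, layer sizes and metric names here are made up for the example and are not part of the InnerEye API; the actual wiring into the toolbox is described in the linked bring-your-own-model documentation.

```python
# Minimal, illustrative PyTorch Lightning model of the kind that the
# bring-your-own-model setup can train in AzureML. Class and layer choices
# are hypothetical examples, not part of the InnerEye API itself.
import torch
from torch import nn
import pytorch_lightning as pl


class TinyRegressionModel(pl.LightningModule):
    """A small regression model; any LightningModule follows the same pattern."""

    def __init__(self, input_dim: int = 16, learning_rate: float = 1e-3):
        super().__init__()
        self.learning_rate = learning_rate
        self.model = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(), nn.Linear(32, 1))
        self.loss_fn = nn.MSELoss()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)  # logged metrics can be tracked in AzureML
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
```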
-On the modelling side, this toolbox supports
+In addition, this toolbox supports:

-- Segmentation models
-- Classification and regression models
-- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)
-
-On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
-relies on [Azure Machine Learning Services (AzureML)](https://docs.microsoft.com/en-gb/azure/machine-learning/) for execution,
-bookkeeping, and visualization. Taken together, this gives:
-
-- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of
-  the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
-- **Transparency**: All team members have access to each other's experiments and results.
-- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
-  sources of randomness like multithreading are controlled for.
-- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
-  training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
-  nodes can be used to further reduce costs (up to 80% cheaper).
-- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
-
-Despite the cloud focus, all training and model testing works just as well on local compute, which is important for
-model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU
-machines available, you will be able to utilize them with the InnerEye toolbox.
-
-In addition, our toolbox supports:
-
-- Cross-validation using AzureML's built-in support, where the models for
-  individual folds are trained in parallel. This is particularly important for the long-running training jobs
+- Cross-validation using AzureML, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
+- Hyperparameter tuning using [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
 - Building ensemble models.
-- Easy creation of new models via a configuration-based approach, and inheritance from an existing
-  architecture.
+- Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture.
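To make the Hyperdrive item in the list above concrete, the following sketch defines a hyperparameter search with the AzureML v1 SDK (`azureml.train.hyperdrive`). The toolbox builds its own HyperDrive configuration internally; the training script, compute target name, parameter names and metric below are assumptions for illustration only.

```python
# Illustrative HyperDrive search definition using the AzureML v1 SDK.
# Script name, compute target, parameters and metric are hypothetical.
from azureml.core import Environment, ScriptRunConfig
from azureml.train.hyperdrive import (HyperDriveConfig, PrimaryMetricGoal,
                                      RandomParameterSampling, choice, uniform)

# Assumes an environment.yml file describing the training dependencies.
script_config = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    compute_target="gpu-cluster",
    environment=Environment.from_conda_specification("training-env", "environment.yml"),
)

# Each HyperDrive child run receives one sample from this search space.
sampling = RandomParameterSampling({
    "--l_rate": uniform(1e-5, 1e-3),
    "--train_batch_size": choice(2, 4, 8),
})

hyperdrive_config = HyperDriveConfig(
    run_config=script_config,
    hyperparameter_sampling=sampling,
    primary_metric_name="val_loss",
    primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
    max_total_runs=8,
)
# Submitting hyperdrive_config to an AzureML Experiment launches the parallel child runs.
```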

-Once training in AzureML is done, the models can be deployed from within AzureML.
+## Documentation

-## Quick Setup
+For all documentation, including setup guides and APIs, please refer to the [IE-DL Read the Docs site](https://innereye-deeplearning.readthedocs.io/#).

-This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
+## Quick Setup

-### Instructions
+This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/source/md/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.

 1. Clone the InnerEye-DeepLearning repo by running the following command:
@@ -73,46 +46,31 @@ If the above runs with no errors: Congratulations! You have successfully built y
 If it fails, please check the
 [troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).

-## Other Documentation
-
-Further detailed instructions, including setup in Azure, are here:
-
-1. [Setting up your environment](docs/environment.md)
-1. [Setting up Azure Machine Learning](docs/setting_up_aml.md)
-1. [Training a simple segmentation model in Azure ML](docs/hello_world_model.md)
-1. [Creating a dataset](docs/creating_dataset.md)
-1. [Building models in Azure ML](docs/building_models.md)
-1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
-1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
-1. [Model diagnostics](docs/model_diagnostics.md)
-1. [Move a model to a different workspace](docs/move_model.md)
-1. [Working with FastMRI models](docs/fastmri.md)
-1. [Active label cleaning and noise robust learning toolbox](https://github.com/microsoft/InnerEye-DeepLearning/blob/1606729c7a16e1bfeb269694314212b6e2737939/InnerEye-DataQuality/README.md)
-1. [Using InnerEye as a git submodule](docs/innereye_as_submodule.md)
+In combination with the power of AzureML, InnerEye provides the following benefits:
+
+- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically, which can later help filter and find old experiments.
+- **Transparency**: All team members have access to each other's experiments and results.
+- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness are controlled for.
+- **Cost reduction**: Using AzureML, all compute resources (virtual machines, VMs) are requested at the time of starting the training job and freed up at the end. Idle VMs will not incur costs. Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
+- **Scalability**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
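The reproducibility point above rests on pinning every source of randomness before training. As a small, generic illustration of that mechanism (not the toolbox's actual seeding code), this is what controlling those sources looks like with PyTorch Lightning:

```python
# Generic illustration of seeding for reproducible runs; the toolbox's own
# seeding logic may differ in detail.
import pytorch_lightning as pl
import torch

pl.seed_everything(42, workers=True)  # seeds Python, NumPy and PyTorch RNGs, incl. dataloader workers

# Deterministic (if slower) cuDNN kernels remove a further source of run-to-run variation.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```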
-1. [How to do pull requests](docs/pull_requests.md)
-1. [Contributing](docs/contributing.md)
+Despite the cloud focus, InnerEye is designed to be able to run locally too, which is important for model prototyping, debugging, and in cases where the cloud can't be used. Therefore, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.