This repository was archived by the owner on Mar 21, 2024. It is now read-only.

Commit c1b363e

ENH: Move docs to ReadTheDocs (#768)

* 📝 Move docs folder to sphinx-docs
* Trigger build for new URL
* 📝 Fix build to include README + CHANGELOG
* 📝 Add back in link fixing
* 🐛 Fix docs links
* 🚨 📝 Fix markdown linting
* 📝 Change relative links to GitHub ones permanently
* 📝 Replace more relative paths
* ⚡️ 📝 Switch to symlinks
* 📝 Replace README in toctree
* 📝 Update README
* 🐛 Attempt to fix images not rendering
* 🐛 Fix broken links
* Remove IDE settings from gitignore
* ⚡️ Move docs to `docs/` and add Makefile back
* 🙈 Update gitignore
* ♻️ ⚡️ Resolve review comments and change theme
* 📝 🔀 Rebase + markdown linting
* 🔥 Remove build files (again)
* 🙈 Remove pipeline-breaking symlink
* ➕ Add furo to sphinx dependencies
* 📌 Move sphinx deps to environment.yml + lock
* 📝 Improve doc folder structure
* Return to copying instead of symlink
* 📝 Update indexing and titles
* 📝 Address review comments

1 parent 4e12cec · commit c1b363e
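The "Return to copying instead of symlink" step in the message above is typically performed from Sphinx's `conf.py` at build time. A minimal sketch, using the file names that appear in this commit's `.gitignore` entries; the function name and paths are illustrative assumptions, not the repo's actual code:

```python
# Sketch: copy top-level markdown files into the Sphinx source tree at build
# time instead of symlinking them (the commit notes a pipeline-breaking symlink).
# File names and the docs/source/md target mirror this commit's .gitignore
# entries; the function itself is a hypothetical stand-in for the real conf.py.
import shutil
from pathlib import Path


def copy_markdown_sources(repo_root: str, docs_source: str) -> list:
    """Copy README/CHANGELOG/LICENSE from the repo root into docs/source/md."""
    target = Path(docs_source) / "md"
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in ("README.md", "CHANGELOG.md", "LICENSE"):
        src = Path(repo_root) / name
        if src.exists():
            shutil.copyfile(src, target / name)
            copied.append(name)
    return copied
```

Because the copies are generated at build time, they are ignored in git rather than committed, which is exactly what the `.gitignore` change below does.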


49 files changed: +437 additions, −392 deletions

.gitignore

Lines changed: 4 additions & 2 deletions

```diff
@@ -84,8 +84,10 @@ instance/
 .scrapy
 
 # Sphinx documentation
-sphinx-docs/build/
-sphinx-docs/source/md/
+docs/build/
+docs/source/md/CHANGELOG.md
+docs/source/md/README.md
+docs/source/md/LICENSE
 
 # PyBuilder
 target/
```
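The three new `docs/source/md/` entries ignore individual generated files, while `docs/build/` ignores the whole build output directory. A toy matcher showing how these patterns behave (a deliberate simplification of real gitignore semantics, for illustration only):

```python
# Toy illustration of this commit's new .gitignore entries: a pattern ending
# in "/" ignores everything under that directory; otherwise the path must
# match exactly. Real gitignore matching is richer (globs, negation, etc.).
IGNORED = [
    "docs/build/",
    "docs/source/md/CHANGELOG.md",
    "docs/source/md/README.md",
    "docs/source/md/LICENSE",
]


def is_ignored(path: str, patterns=IGNORED) -> bool:
    for pat in patterns:
        if pat.endswith("/"):
            if path.startswith(pat):
                return True
        elif path == pat:
            return True
    return False
```

Note that `docs/source/md/` itself is no longer blanket-ignored (as `sphinx-docs/source/md/` was), so other hand-written markdown pages in that folder stay under version control.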

.readthedocs.yaml

Lines changed: 1 addition & 5 deletions

```diff
@@ -9,11 +9,7 @@ build:
   python: miniconda3-4.7
 
 sphinx:
-  configuration: sphinx-docs/source/conf.py
-
-python:
-  install:
-    - requirements: sphinx-docs/requirements.txt
+  configuration: docs/source/conf.py
 
 conda:
   environment: environment.yml
```
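After this change, the relevant portion of `.readthedocs.yaml` would read roughly as below. Only the keys visible in the diff are shown; other keys in the file (such as its version header) are omitted here:

```yaml
build:
  python: miniconda3-4.7

sphinx:
  configuration: docs/source/conf.py

conda:
  environment: environment.yml
```

Dropping the `python.install.requirements` block works because the sphinx dependencies were moved into `environment.yml`, as the commit message notes, so the conda environment now provides everything the build needs.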

CHANGELOG.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -181,7 +181,7 @@ institution id and series id columns are missing.
 - ([#441](https://github.com/microsoft/InnerEye-DeepLearning/pull/441)) Add script to move models from one AzureML workspace to another: `python InnerEye/Scripts/move_model.py`
 - ([#417](https://github.com/microsoft/InnerEye-DeepLearning/pull/417)) Added a generic way of adding PyTorch Lightning
 models to the toolbox. It is now possible to train almost any Lightning model with the InnerEye toolbox in AzureML,
-with only minimum code changes required. See [the MD documentation](docs/bring_your_own_model.md) for details.
+with only minimum code changes required. See [the MD documentation](docs/source/md/bring_your_own_model.md) for details.
 - ([#430](https://github.com/microsoft/InnerEye-DeepLearning/pull/430)) Update conversion to 1.0.1 InnerEye-DICOM-RT to
 add: manufacturer, SoftwareVersions, Interpreter and ROIInterpretedTypes.
 - ([#385](https://github.com/microsoft/InnerEye-DeepLearning/pull/385)) Add the ability to train a model on multiple
@@ -354,7 +354,7 @@ console for easier diagnostics.
 
 #### Fixed
 
-- When registering a model, it now has a consistent folder structured, described [here](docs/deploy_on_aml.md). This
+- When registering a model, it now has a consistent folder structured, described [here](docs/source/md/deploy_on_aml.md). This
 folder structure is present irrespective of using InnerEye as a submodule or not. In particular, exactly 1 Conda
 environment will be contained in the model.
```
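The link updates repeated throughout this commit follow one mechanical pattern: link targets of the form `docs/<page>.md` become `docs/source/md/<page>.md`. A hypothetical regex rewriter for that pattern (the commit may well have used manual edits or different tooling):

```python
# Sketch of this commit's repeated link updates: markdown links that pointed
# at docs/<page>.md now point at docs/source/md/<page>.md. Illustrative only;
# not the actual tooling used in the commit.
import re


def update_doc_links(markdown: str) -> str:
    """Rewrite relative links from docs/<page>.md to docs/source/md/<page>.md."""
    return re.sub(r"\]\(docs/([\w-]+\.md)\)", r"](docs/source/md/\1)", markdown)
```

The pattern deliberately excludes `/` inside the filename group, so links that already point at `docs/source/md/` (or at external URLs) are left untouched.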

README.md

Lines changed: 26 additions & 68 deletions

```diff
@@ -2,53 +2,26 @@
 
 [![Build Status](https://innereye.visualstudio.com/InnerEye/_apis/build/status/InnerEye-DeepLearning/InnerEye-DeepLearning-PR?branchName=main)](https://innereye.visualstudio.com/InnerEye/_build?definitionId=112&branchName=main)
 
-## Overview
+InnerEye-DeepLearning (IE-DL) is a toolbox for easily training deep learning models on 3D medical images. Simple to run both locally and in the cloud with [AzureML](https://docs.microsoft.com/en-gb/azure/machine-learning/), it allows users to train and run inference on the following:
 
-This is a deep learning toolbox to train models on medical images (or more generally, 3D images).
-It integrates seamlessly with cloud computing in Azure.
+- Segmentation models.
+- Classification and regression models.
+- Any PyTorch Lightning model, via a [bring-your-own-model setup](docs/source/md/bring_your_own_model.md).
 
-On the modelling side, this toolbox supports
+In addition, this toolbox supports:
 
-- Segmentation models
-- Classification and regression models
-- Adding cloud support to any PyTorch Lightning model, via a [bring-your-own-model setup](docs/bring_your_own_model.md)
-
-On the user side, this toolbox focusses on enabling machine learning teams to achieve more. It is cloud-first, and
-relies on [Azure Machine Learning Services (AzureML)](https://docs.microsoft.com/en-gb/azure/machine-learning/) for execution,
-bookkeeping, and visualization. Taken together, this gives:
-
-- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of
-the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
-- **Transparency**: All team members have access to each other's experiments and results.
-- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All
-sources of randomness like multithreading are controlled for.
-- **Cost reduction**: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the
-training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority
-nodes can be used to further reduce costs (up to 80% cheaper).
-- **Scale out**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
-
-Despite the cloud focus, all training and model testing works just as well on local compute, which is important for
-model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU
-machines available, you will be able to utilize them with the InnerEye toolbox.
-
-In addition, our toolbox supports:
-
-- Cross-validation using AzureML's built-in support, where the models for
-individual folds are trained in parallel. This is particularly important for the long-running training jobs
-often seen with medical images.
-- Hyperparameter tuning using
-[Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
+- Cross-validation using AzureML, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
+- Hyperparameter tuning using [Hyperdrive](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters).
 - Building ensemble models.
-- Easy creation of new models via a configuration-based approach, and inheritance from an existing
-architecture.
+- Easy creation of new models via a configuration-based approach, and inheritance from an existing architecture.
 
-Once training in AzureML is done, the models can be deployed from within AzureML.
+## Documentation
 
-## Quick Setup
+For all documentation, including setup guides and APIs, please refer to the [IE-DL Read the Docs site](https://innereye-deeplearning.readthedocs.io/#).
 
-This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
+## Quick Setup
 
-### Instructions
+This quick setup assumes you are using a machine running Ubuntu with Git, Git LFS, Conda and Python 3.7+ installed. Please refer to the [setup guide](docs/source/md/environment.md) for more detailed instructions on getting InnerEye set up with other operating systems and installing the above prerequisites.
 
 1. Clone the InnerEye-DeepLearning repo by running the following command:
 
@@ -73,46 +46,31 @@ If the above runs with no errors: Congratulations! You have successfully built y
 If it fails, please check the
 [troubleshooting page on the Wiki](https://github.com/microsoft/InnerEye-DeepLearning/wiki/Issues-with-code-setup-and-the-HelloWorld-model).
 
-## Other Documentation
-
-Further detailed instructions, including setup in Azure, are here:
-
-1. [Setting up your environment](docs/environment.md)
-1. [Setting up Azure Machine Learning](docs/setting_up_aml.md)
-1. [Training a simple segmentation model in Azure ML](docs/hello_world_model.md)
-1. [Creating a dataset](docs/creating_dataset.md)
-1. [Building models in Azure ML](docs/building_models.md)
-1. [Sample Segmentation and Classification tasks](docs/sample_tasks.md)
-1. [Debugging and monitoring models](docs/debugging_and_monitoring.md)
-1. [Model diagnostics](docs/model_diagnostics.md)
-1. [Move a model to a different workspace](docs/move_model.md)
-1. [Working with FastMRI models](docs/fastmri.md)
-1. [Active label cleaning and noise robust learning toolbox](https://github.com/microsoft/InnerEye-DeepLearning/blob/1606729c7a16e1bfeb269694314212b6e2737939/InnerEye-DataQuality/README.md)
-1. [Using InnerEye as a git submodule](docs/innereye_as_submodule.md)
-1. [Evaluating pre-trained models](docs/hippocampus_model.md)
-
-## Deployment
+## Full InnerEye Deployment
 
 We offer a companion set of open-sourced tools that help to integrate trained CT segmentation models with clinical
 software systems:
 
 - The [InnerEye-Gateway](https://github.com/microsoft/InnerEye-Gateway) is a Windows service running in a DICOM network,
 that can route anonymized DICOM images to an inference service.
 - The [InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference) component offers a REST API that integrates
-with the InnnEye-Gateway, to run inference on InnerEye-DeepLearning models.
+with the InnerEye-Gateway, to run inference on InnerEye-DeepLearning models.
+
+Details can be found [here](docs/source/md/deploy_on_aml.md).
 
-Details can be found [here](docs/deploy_on_aml.md).
+![docs/deployment.png](docs/source/images/deployment.png)
 
-![docs/deployment.png](docs/deployment.png)
+## Benefits of InnerEye-DeepLearning
 
-## More information
+In combiniation with the power of AzureML, InnerEye provides the following benefits:
+
+- **Traceability**: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically, that can later help filter and find old experiments.
+- **Transparency**: All team members have access to each other's experiments and results.
+- **Reproducibility**: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness are controlled for.
+- **Cost reduction**: Using AzureML, all compute resources (virtual machines, VMs) are requested at the time of starting the training job and freed up at the end. Idle VMs will not incur costs. Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
+- **Scalability**: Large numbers of VMs can be requested easily to cope with a burst in jobs.
 
-1. [Project InnerEye](https://www.microsoft.com/en-us/research/project/medical-image-analysis/)
-1. [Releases](docs/releases.md)
-1. [Changelog](CHANGELOG.md)
-1. [Testing](docs/testing.md)
-1. [How to do pull requests](docs/pull_requests.md)
-1. [Contributing](docs/contributing.md)
+Despite the cloud focus, InnerEye is designed to be able to run locally too, which is important for model prototyping, debugging, and in cases where the cloud can't be used. Therefore, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.
 
 ## Licensing
 
```
