
Commit 214d1d1

CI: check dead links in docs (#114)
* CI: check dead links in docs
* fix PL logo
* azure agents
1 parent 8039f19 commit 214d1d1

25 files changed: +33 -34 lines changed


.actions/helpers.py (+1 -1)

@@ -79,7 +79,7 @@
 #
 # ### Great thanks from the entire Pytorch Lightning Team for your interest !
 #
-# ![Pytorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/_static/images/logo.png){height="60px" width="240px"}
+# [![Pytorch Lightning](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.png){height="60px" width="240px"}](https://pytorchlightning.ai)

 """
 TEMPLATE_CARD_ITEM = """

.azure-pipelines/ipynb-publish.yml (+1 -1)

@@ -17,7 +17,7 @@ jobs:
 # - For 60 minutes on Microsoft-hosted agents with a private project or private repository
 timeoutInMinutes: 0

-pool: gridai-spot-pool
+pool: azure-gpus-persist
 # this need to have installed docker in the base image...
 container:
 # base ML image: mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04

.azure-pipelines/ipynb-tests.yml (+1 -1)

@@ -13,7 +13,7 @@ jobs:
 # how much time to give 'run always even if cancelled tasks' before stopping them
 cancelTimeoutInMinutes: 2

-pool: gridai-spot-pool
+pool: azure-gpus-spot
 # this need to have installed docker in the base image...
 container:
 # base ML image: mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04

.github/workflows/ci_docs.yml (+2 -2)

@@ -1,4 +1,4 @@
-name: test Docs
+name: validate Docs

 on: # Trigger the workflow on push or pull request
 pull_request: {}
@@ -71,7 +71,7 @@ jobs:
 working-directory: ./docs
 run: |
 # First run the same pipeline as Read-The-Docs
-make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going"
+make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going" -b linkcheck

 - name: Upload built docs
 uses: actions/upload-artifact@v2
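
The dead-link scan added above relies on Sphinx's linkcheck builder. As a rough illustration of what that builder does, here is a minimal way to run it directly from Python; the docs paths are assumptions for the sketch, not paths taken from this repository.

# Hypothetical local check: invoke Sphinx's linkcheck builder programmatically.
# "docs/source" and "docs/_build/linkcheck" are illustrative paths.
from sphinx.cmd.build import build_main

exit_code = build_main(["-b", "linkcheck", "docs/source", "docs/_build/linkcheck"])
raise SystemExit(exit_code)  # non-zero when Sphinx found broken links or failed to build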

course_UvA-DL/01-introduction-to-pytorch/.meta.yml (+1 -1)

@@ -1,7 +1,7 @@
 title: "Tutorial 1: Introduction to PyTorch"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2021-08-27
+updated: 2021-11-29
 license: CC BY-SA
 description: |
   This tutorial will give a short introduction to PyTorch basics, and get you setup for writing your own neural networks.

course_UvA-DL/01-introduction-to-pytorch/Introduction_to_PyTorch.py (+1 -1)

@@ -4,7 +4,7 @@
 # The following notebook is meant to give a short introduction to PyTorch basics, and get you setup for writing your own neural networks.
 # PyTorch is an open source machine learning framework that allows you to write your own neural networks and optimize them efficiently.
 # However, PyTorch is not the only framework of its kind.
-# Alternatives to PyTorch include [TensorFlow](https://www.tensorflow.org/), [JAX](https://github.com/google/jax#quickstart-colab-in-the-cloud) and [Caffe](http://caffe.berkeleyvision.org/).
+# Alternatives to PyTorch include [TensorFlow](https://www.tensorflow.org/), [JAX](https://github.com/google/jax) and [Caffe](http://caffe.berkeleyvision.org/).
 # We choose to teach PyTorch at the University of Amsterdam because it is well established, has a huge developer community (originally developed by Facebook), is very flexible and especially used in research.
 # Many current papers publish their code in PyTorch, and thus it is good to be familiar with PyTorch as well.
 # Meanwhile, TensorFlow (developed by Google) is usually known for being a production-grade deep learning library.

course_UvA-DL/03-initialization-and-optimization/.meta.yml (+1 -1)

@@ -1,7 +1,7 @@
 title: "Tutorial 3: Initialization and Optimization"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2021-08-27
+updated: 2021-11-29
 license: CC BY-SA
 tags:
 - Image

course_UvA-DL/03-initialization-and-optimization/Initialization_and_Optimization.py (+2 -2)

@@ -323,7 +323,7 @@ def visualize_activations(model, color="C0", print_variance=False):
 # %% [markdown]
 # ## Initialization
 #
-# Before starting our discussion about initialization, it should be noted that there exist many very good blog posts about the topic of neural network initialization (for example [deeplearning.ai](https://www.deeplearning.ai/ai-notes/initialization/), or a more [math-focused blog post](https://pouannes.github.io/blog/initialization/#mjx-eqn-eqfwd_K)).
+# Before starting our discussion about initialization, it should be noted that there exist many very good blog posts about the topic of neural network initialization (for example [deeplearning.ai](https://www.deeplearning.ai/ai-notes/initialization/), or a more [math-focused blog post](https://pouannes.github.io/blog/initialization)).
 # In case something remains unclear after this tutorial, we recommend skimming through these blog posts as well.
 #
 # When initializing a neural network, there are a few properties we would like to have.
@@ -457,7 +457,7 @@ def equal_var_init(model):
 # Besides the variance of the activations, another variance we would like to stabilize is the one of the gradients.
 # This ensures a stable optimization for deep networks.
 # It turns out that we can do the same calculation as above starting from $\Delta x=W\Delta y$, and come to the conclusion that we should initialize our layers with $1/d_y$ where $d_y$ is the number of output neurons.
-# You can do the calculation as a practice, or check a thorough explanation in [this blog post](https://pouannes.github.io/blog/initialization/#mjx-eqn-eqfwd_K).
+# You can do the calculation as a practice, or check a thorough explanation in [this blog post](https://pouannes.github.io/blog/initialization).
 # As a compromise between both constraints, [Glorot and Bengio (2010)](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf?hc_location=ufi) proposed to use the harmonic mean of both values.
 # This leads us to the well-known Xavier initialization:
 #
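
For reference, the harmonic-mean compromise described in this hunk works out to the standard Xavier variance (writing $d_x$ for the number of input neurons, an assumption consistent with the tutorial's notation):

$$\mathrm{Var}(W_{ij}) = \frac{2}{d_x + d_y}, \qquad \text{i.e.}\quad W_{ij} \sim \mathcal{N}\!\left(0,\ \frac{2}{d_x + d_y}\right).$$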

course_UvA-DL/04-inception-resnet-densenet/.meta.yaml (+1 -1)

@@ -1,7 +1,7 @@
 title: "Tutorial 4: Inception, ResNet and DenseNet"
 author: Phillip Lippe
 created: 2021-08-27
-updated: 2021-08-27
+updated: 2021-11-29
 license: CC BY-SA
 tags:
 - Image

course_UvA-DL/04-inception-resnet-densenet/Inception_ResNet_DenseNet.py (+3 -3)

@@ -208,7 +208,7 @@
 # 5. Test loop (`test_step`) which is the same as validation, only on a test set.
 #
 # Therefore, we don't abstract the PyTorch code, but rather organize it and define some default operations that are commonly used.
-# If you need to change something else in your training/validation/test loop, there are many possible functions you can overwrite (see the [docs](https://pytorch-lightning.readthedocs.io/en/stable/lightning_module.html) for details).
+# If you need to change something else in your training/validation/test loop, there are many possible functions you can overwrite (see the [docs](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html) for details).
 #
 # Now we can look at an example of how a Lightning Module for training a CNN looks like:
@@ -322,7 +322,7 @@ def create_model(model_name, model_hparams):
 # Besides the Lightning module, the second most important module in PyTorch Lightning is the `Trainer`.
 # The trainer is responsible to execute the training steps defined in the Lightning module and completes the framework.
 # Similar to the Lightning module, you can override any key part that you don't want to be automated, but the default settings are often the best practice to do.
-# For a full overview, see the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/trainer.html).
+# For a full overview, see the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html).
 # The most important functions we use below are:
 #
 # * `trainer.fit`: Takes as input a lightning module, a training dataset, and an (optional) validation dataset.
@@ -764,7 +764,7 @@ def forward(self, x):
 #
 # The three groups operate on the resolutions $32\times32$, $16\times16$ and $8\times8$ respectively.
 # The blocks in orange denote ResNet blocks with downsampling.
-# The same notation is used by many other implementations such as in the [torchvision library](https://pytorch.org/docs/stable/_modules/torchvision/models/resnet.html#resnet18) from PyTorch.
+# The same notation is used by many other implementations such as in the [torchvision library](https://pytorch.org/vision/0.11/models.html#torchvision.models.resnet18) from PyTorch.
 # Thus, our code looks as follows:
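
The two documentation links touched above point at the LightningModule and Trainer pages; as a reminder of the pattern they describe, here is a minimal, self-contained sketch. The toy model, random data, and hyperparameters are illustrative assumptions, not code from the tutorial.

import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class TinyClassifier(pl.LightningModule):
    """Minimal LightningModule: the model, the training step, and the optimizer."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Random toy data so the sketch runs on its own.
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
trainer = pl.Trainer(max_epochs=1)  # the Trainer supplies the loop, logging and checkpointing defaults
trainer.fit(TinyClassifier(), DataLoader(dataset, batch_size=64))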

course_UvA-DL/05-transformers-and-MH-attention/.meta.yml (+1 -1)

@@ -1,7 +1,7 @@
 title: "Tutorial 5: Transformers and Multi-Head Attention"
 author: Phillip Lippe
 created: 2021-06-30
-updated: 2021-06-30
+updated: 2021-11-29
 license: CC BY-SA
 build: 0
 tags:

course_UvA-DL/05-transformers-and-MH-attention/Transformers_MHAttention.py (+2 -3)

@@ -669,10 +669,9 @@ def forward(self, x):
 # Improved optimizers like [RAdam](https://arxiv.org/abs/1908.03265) have been shown to overcome this issue,
 # not requiring warm-up for training Transformers.
 # Secondly, the iteratively applied Layer Normalization across layers can lead to very high gradients during
-# the first iterations, which can be solved by using
-# [Pre-Layer Normalization](https://proceedings.icml.cc/static/paper_files/icml/2020/328-Paper.pdf)
+# the first iterations, which can be solved by using Pre-Layer Normalization
 # (similar to Pre-Activation ResNet), or replacing Layer Normalization by other techniques
-# ([Adaptive Normalization](https://proceedings.icml.cc/static/paper_files/icml/2020/328-Paper.pdf),
+# (Adaptive Normalization,
 # [Power Normalization](https://arxiv.org/abs/2003.07845)).
 #
 # Nevertheless, many applications and papers still use the original Transformer architecture with Adam,

course_UvA-DL/06-graph-neural-networks/GNN_overview.py (+1 -1)

@@ -805,7 +805,7 @@ def print_results(result_dict):
 # Torch geometric uses a different, more efficient approach: we can view the $N$ graphs in a batch as a single large graph with concatenated node and edge list.
 # As there is no edge between the $N$ graphs, running GNN layers on the large graph gives us the same output as running the GNN on each graph separately.
 # Visually, this batching strategy is visualized below (figure credit - PyTorch Geometric team,
-# [tutorial here](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing#scrollTo=2owRWKcuoALo)).
+# [tutorial here](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb)).
 #
 # <center width="100%"><img src="torch_geometric_stacking_graphs.png" width="600px"></center>
 #
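
The batching strategy described in this hunk can be seen directly with PyTorch Geometric's DataLoader, which stitches the $N$ graphs into one large, disconnected graph. The two tiny graphs below are invented for illustration, and note that older PyG releases expose the loader under torch_geometric.data rather than torch_geometric.loader.

import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Two small graphs with 3 and 4 nodes and 8 node features each (made-up data).
graphs = [
    Data(x=torch.randn(3, 8), edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]])),
    Data(x=torch.randn(4, 8), edge_index=torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])),
]
batch = next(iter(DataLoader(graphs, batch_size=2)))

print(batch.num_graphs)  # 2
print(batch.x.shape)     # torch.Size([7, 8]) - node features of both graphs concatenated
print(batch.batch)       # tensor([0, 0, 0, 1, 1, 1, 1]) - maps every node to its source graph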

flash_tutorials/electricity_forecasting/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: Ethan Harris ([email protected])
 created: 2021-11-23
 updated: 2021-11-23
 license: CC BY-SA
-build: 2
+build: 3
 tags:
 - Tabular
 - Forecasting

lightning_examples/augmentation_kornia/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL/Kornia team
 created: 2021-06-11
 updated: 2021-06-16
 license: CC BY-SA
-build: 2
+build: 3
 tags:
 - Image
 description: |

lightning_examples/basic-gan/.meta.yaml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2020-12-21
 updated: 2021-06-16
 license: CC BY-SA
-build: 3
+build: 4
 tags:
 - Image
 description: |

lightning_examples/cifar10-baseline/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2020-12-21
 updated: 2021-06-16
 license: CC BY-SA
-build: 1
+build: 2
 tags:
 - Image
 description: >

lightning_examples/datamodules/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2020-12-21
 updated: 2021-06-07
 license: CC BY-SA
-build: 1
+build: 2
 description: This notebook will walk you through how to start using Datamodules. With
   the release of `pytorch-lightning` version 0.9.0, we have included a new class called
   `LightningDataModule` to help you decouple data related hooks from your `LightningModule`.

lightning_examples/datamodules/datamodules.py (+1 -1)

@@ -9,9 +9,9 @@
 import torch
 import torch.nn.functional as F
 from pytorch_lightning import LightningDataModule, LightningModule, Trainer
-from pytorch_lightning.metrics.functional import accuracy
 from torch import nn
 from torch.utils.data import DataLoader, random_split
+from torchmetrics.functional import accuracy
 from torchvision import transforms

 # Note - you must have torchvision installed for this example
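
The swap above reflects the metrics API moving out of pytorch_lightning into the standalone torchmetrics package; the call itself stays the same. A tiny usage sketch with invented tensors (recent torchmetrics releases additionally require a task argument):

import torch
from torchmetrics.functional import accuracy

preds = torch.tensor([0, 2, 1, 3])
target = torch.tensor([0, 1, 1, 3])
print(accuracy(preds, target))  # tensor(0.7500) with torchmetrics of this era; newer
                                # versions expect accuracy(preds, target, task="multiclass", num_classes=4)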

lightning_examples/mnist-hello-world/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2020-12-21
 updated: 2021-06-16
 license: CC BY-SA
-build: 1
+build: 2
 tags:
 - Image
 description: In this notebook, we'll go over the basics of lightning by preparing

lightning_examples/mnist-hello-world/hello-world.py (+3 -3)

@@ -79,17 +79,17 @@ def configure_optimizers(self):
 #
 # ### Note what the following built-in functions are doing:
 #
-# 1. [prepare_data()](https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.prepare_data) 💾
+# 1. [prepare_data()](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#prepare-data) 💾
 # - This is where we can download the dataset. We point to our desired dataset and ask torchvision's `MNIST` dataset class to download if the dataset isn't found there.
 # - **Note we do not make any state assignments in this function** (i.e. `self.something = ...`)
 #
-# 2. [setup(stage)](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning-module.html#setup) ⚙️
+# 2. [setup(stage)](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#setup) ⚙️
 # - Loads in data from file and prepares PyTorch tensor datasets for each split (train, val, test).
 # - Setup expects a 'stage' arg which is used to separate logic for 'fit' and 'test'.
 # - If you don't mind loading all your datasets at once, you can set up a condition to allow for both 'fit' related setup and 'test' related setup to run whenever `None` is passed to `stage` (or ignore it altogether and exclude any conditionals).
 # - **Note this runs across all GPUs and it *is* safe to make state assignments here**
 #
-# 3. [x_dataloader()](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning-module.html#data-hooks) ♻️
+# 3. [x_dataloader()](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.hooks.html) ♻️
 # - `train_dataloader()`, `val_dataloader()`, and `test_dataloader()` all return PyTorch `DataLoader` instances that are created by wrapping their respective datasets that we prepared in `setup()`
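
A compact sketch of the three hooks listed above, using the usual MNIST setup; only the data hooks are shown, and the data path, transform, and split sizes are illustrative defaults rather than code copied from the notebook.

from pytorch_lightning import LightningModule
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST


class MNISTExample(LightningModule):
    def prepare_data(self):
        # Download only - runs once, no state assignments here.
        MNIST("./data", train=True, download=True)

    def setup(self, stage=None):
        # Runs on every process - safe to assign datasets to self.
        if stage in (None, "fit"):
            full = MNIST("./data", train=True, transform=transforms.ToTensor())
            self.mnist_train, self.mnist_val = random_split(full, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=64)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=64)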

lightning_examples/mnist-tpu-training/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2020-12-21
 updated: 2021-06-25
 license: CC BY-SA
-build: 0
+build: 1
 tags:
 - Image
 description: In this notebook, we'll train a model on TPUs. Updating one Trainer flag is all you need for that.

lightning_examples/reinforce-learning-DQN/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2021-01-31
 updated: 2021-06-17
 license: CC BY-SA
-build: 1
+build: 2
 tags:
 - RL
 description: |

lightning_examples/text-transformers/text-transformers.py (+2 -2)

@@ -274,7 +274,7 @@ def configure_optimizers(self):
 )

 trainer = Trainer(max_epochs=1, gpus=AVAIL_GPUS)
-trainer.fit(model, dm)
+trainer.fit(model, datamodule=dm)

 # %% [markdown]
 # ### MRPC
@@ -298,7 +298,7 @@ def configure_optimizers(self):
 )

 trainer = Trainer(max_epochs=3, gpus=AVAIL_GPUS)
-trainer.fit(model, dm)
+trainer.fit(model, datamodule=dm)

 # %% [markdown]
 # ### MNLI
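
The change above is about call-site clarity rather than behaviour; assuming the PL 1.x Trainer.fit signature, both forms below work, but the keyword makes it explicit that dm is a LightningDataModule and not a train dataloader:

trainer.fit(model, dm)             # positional: fit() has to detect that dm is a datamodule
trainer.fit(model, datamodule=dm)  # explicit and unambiguous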

sample-template/.meta.yml (+1 -1)

@@ -3,7 +3,7 @@ author: PL team
 created: 2021-06-15
 updated: 2021-06-17
 license: CC
-build: 4
+build: 5
 description: |
   This is a template to show how to contribute a tutorial.
 requirements:

0 commit comments
