Commit 8f29aa9

Fix URLs (#10316)
1 parent 0620fc6 commit 8f29aa9

20 files changed: +43, -43 lines

backends/vulkan/README.md (+1, -1)

@@ -133,7 +133,7 @@ will be executed on the GPU.
 
 
 ::::{note}
-The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/partitioner/supported_ops.py)
+The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194)
 Vulkan partitioner code can be inspected to examine which ops are currently
 implemented in the Vulkan delegate.
 ::::

docs/source/Doxyfile (+6, -5)

@@ -399,9 +399,9 @@ BUILTIN_STL_SUPPORT = NO
 CPP_CLI_SUPPORT = NO
 
 # Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
-# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
-# will parse them like normal C++ but will assume all classes use public instead
-# of private inheritance when no explicit protection keyword is present.
+# https://python-sip.readthedocs.io/en/stable/introduction.html) sources only.
+# Doxygen will parse them like normal C++ but will assume all classes use public
+# instead of private inheritance when no explicit protection keyword is present.
 # The default value is: NO.
 
 SIP_SUPPORT = NO
@@ -1483,8 +1483,9 @@ HTML_INDEX_NUM_ENTRIES = 100
 # output directory. Running make will produce the docset in that directory and
 # running make install will install the docset in
 # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
-# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
-# genXcode/_index.html for more information.
+# startup. See
+# https://developer.apple.com/library/archive/featuredarticles/DoxygenXcode/_index.html
+# for more information.
 # The default value is: NO.
 # This tag requires that the tag GENERATE_HTML is set to YES.
 

docs/source/backends-cadence.md (+3, -3)

@@ -89,7 +89,7 @@ executorch
 
 ***AoT (Ahead-of-Time) Components***:
 
-The AoT folder contains all of the python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implemetations in the other folders.
+The AoT folder contains all of the python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implemetations in the other folders.
 
 ***Operators***:
 
@@ -115,8 +115,8 @@ python3 -m examples.portable.scripts.export --model_name="add"
 ***Quantized Operators***:
 
 The other, more complex model are custom operators, including:
-- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/quantized_linear_op.py#L28). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
-- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/quantized_conv1d_op.py#L36). Convolutions are important in wake word and many denoising models.
+- a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_linear_op.py#L30). Linear is the backbone of most Automatic Speech Recognition (ASR) models.
+- a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_conv1d_op.py#L40). Convolutions are important in wake word and many denoising models.
 
 In both cases the generated file is called `CadenceDemoModel.pte`.

docs/source/backends-vulkan.md (+1, -1)

@@ -133,7 +133,7 @@ will be executed on the GPU.
 
 
 ::::{note}
-The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/partitioner/supported_ops.py)
+The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194)
 Vulkan partitioner code can be inspected to examine which ops are currently
 implemented in the Vulkan delegate.
 ::::

docs/source/conf.py (+1, -1)

@@ -192,7 +192,7 @@
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {
     "python": ("https://docs.python.org/", None),
-    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
+    "numpy": ("https://numpy.org/doc/stable/", None),
     "torch": ("https://pytorch.org/docs/stable/", None),
 }
 

docs/source/new-contributor-guide.md (+2, -2)

@@ -92,8 +92,8 @@ Before you can start writing any code, you need to get a copy of ExecuTorch code
 Depending on how you cloned your repo (HTTP, SSH, etc.), this should print something like:
 
 ```bash
-origin https://github.com/YOUR_GITHUB_USERNAME/executorch.git (fetch)
-origin https://github.com/YOUR_GITHUB_USERNAME/executorch.git (push)
+origin https://github.com/{YOUR_GITHUB_USERNAME}/executorch.git (fetch)
+origin https://github.com/{YOUR_GITHUB_USERNAME}/executorch.git (push)
 upstream https://github.com/pytorch/executorch.git (fetch)
 upstream https://github.com/pytorch/executorch.git (push)
 ```

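For context, a minimal sketch of the standard fork setup that produces the `origin`/`upstream` output shown in this hunk (this is an editor-added illustration, not part of the commit; the placeholder username is yours to substitute):

```bash
# Clone your fork, then register the upstream ExecuTorch repository.
git clone https://github.com/{YOUR_GITHUB_USERNAME}/executorch.git
cd executorch
git remote add upstream https://github.com/pytorch/executorch.git

# Prints the origin (your fork) and upstream (pytorch/executorch) pairs shown above.
git remote -v
```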

docs/source/runtime-profiling.md (+1, -1)

@@ -20,4 +20,4 @@ We provide access to all the profiling data via the Python [Inspector API](model
 - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
 
 
-Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.
+Please refer to the [Developer Tools tutorial](https://pytorch.org/executorch/main/tutorials/devtools-integration-tutorial) for a step-by-step walkthrough of the above process on a sample model.

docs/source/tutorials_source/template_tutorial.py (+1, -1)

@@ -9,7 +9,7 @@
 Template Tutorial
 =================
 
-**Author:** `FirstName LastName <https://github.com/username>`_
+**Author:** `FirstName LastName <https://github.com/{username}>`_
 
 .. grid:: 2
 

docs/source/using-executorch-android.md (+2, -2)

@@ -59,8 +59,8 @@ You can also directly specify an AAR file in the app. We upload pre-built AAR to
 ### Snapshots from main branch
 
 Starting from 2025-04-12, you can download nightly `main` branch snapshots:
-* `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-YYYYMMDD/executorch.aar`
-* `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-YYYYMMDD/executorch.aar.sha256sums`
+* `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar`
+* `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar.sha256sums`
 * Replace `YYYYMMDD` with the actual date you want to use.
 * AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/c66b37d010c88a113560693b14dc6bd112593c11/.github/workflows/android-release-artifacts.yml#L14-L15).
 
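As an editor-added illustration of the snapshot URL pattern in this hunk, here is a minimal sketch of fetching and verifying one nightly build. The date below is illustrative (the docs say snapshots start on 2025-04-12), and the checksum file is assumed to use the standard `sha256sum` format:

```bash
# Hypothetical example date; substitute the snapshot date you actually want.
DATE=20250412
BASE="https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-${DATE}"

# Download the AAR and its checksum file.
curl -LO "${BASE}/executorch.aar"
curl -LO "${BASE}/executorch.aar.sha256sums"

# Verify the download (assumes the .sha256sums file is in sha256sum's checklist format).
sha256sum -c executorch.aar.sha256sums
```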

examples/demo-apps/apple_ios/LLaMA/docs/delegates/xnnpack_README.md (+2, -2)

@@ -73,7 +73,7 @@ python -m examples.models.llama.export_llama --model "llama3_2" --checkpoint <pa
 ```
 For convenience, an [exported ExecuTorch bf16 model](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/llama3_2-1B.pte) is available on Hugging Face. The export was created using [this detailed recipe notebook](https://huggingface.co/executorch-community/Llama-3.2-1B-ET/blob/main/ExportRecipe_1B.ipynb).
 
-For more detail using Llama 3.2 lightweight models including prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2#-llama-3.2-lightweight-models-(1b/3b)-).
+For more detail using Llama 3.2 lightweight models including prompt template, please go to our official [website](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/#-llama-3.2-lightweight-models-(1b/3b)-).
 
 ### For Llama 3.1 and Llama 2 models
 
@@ -134,7 +134,7 @@ BUCK2_RELEASE_DATE="2024-12-16"
 BUCK2_ARCHIVE="buck2-aarch64-apple-darwin.zst"
 BUCK2=".venv/bin/buck2"
 
-curl -LO "https://github.com/facebook/buck2/releases/download/$BUCK2_RELEASE_DATE/$BUCK2_ARCHIVE"
+curl -LO "https://github.com/facebook/buck2/releases/download/${BUCK2_RELEASE_DATE}/${BUCK2_ARCHIVE}"
 zstd -cdq "$BUCK2_ARCHIVE" > "$BUCK2" && chmod +x "$BUCK2"
 rm "$BUCK2_ARCHIVE"
 

examples/llm_pte_finetuning/README.md (+1, -1)

@@ -63,7 +63,7 @@ shuffle: True
 batch_size: 1
 ```
 
-Torchtune supports datasets using huggingface dataloaders, so custom datasets could also be defined. For examples on defining your own datasets, review the [torchtune docs](https://pytorch.org/torchtune/stable/tutorials/datasets.html#hugging-face-datasets).
+Torchtune supports datasets using huggingface dataloaders, so custom datasets could also be defined. For examples on defining your own datasets, review the [torchtune docs](https://pytorch.org/torchtune/stable/basics/text_completion_datasets.html#loading-text-completion-datasets-from-hugging-face).
 
 ### Loss
 

examples/models/deepseek-r1-distill-llama-8B/README.md (+3, -6)

@@ -17,7 +17,7 @@ pip install -U "huggingface_hub[cli]"
 huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Llama-8B --local-dir /target_dir/DeepSeek-R1-Distill-Llama-8B --local-dir-use-symlinks False
 ```
 
-2. Download the [tokenizer.model](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/original/tokenizer.model) from the Llama3.1 repo which will be needed later on when running the model using the runtime.
+2. Download the [tokenizer.model](https://huggingface.co/meta-llama/Llama-3.1-8B/tree/main/original) from the Llama3.1 repo which will be needed later on when running the model using the runtime.
 
 3. Convert the model to pth file.
 ```
@@ -48,16 +48,13 @@ print("saving checkpoint")
 torch.save(sd, "/tmp/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/checkpoint.pth")
 ```
 
-4. Download and save the params.json file
-```
-wget https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/original/params.json -o /tmp/params.json
-```
+4. Download and save the [params.json](https://huggingface.co/meta-llama/Llama-3.1-8B/tree/main/original) file.
 
 5. Generate a PTE file for use with the Llama runner.
 ```
 python -m examples.models.llama.export_llama \
 --checkpoint /tmp/deepseek-ai/DeepSeek-R1-Distill-Llama-8B/checkpoint.pth \
--p /tmp/params.json \
+-p params.json \
 -kv \
 --use_sdpa_with_kv_cache \
 -X \

examples/models/llama3_2_vision/preprocess/test_preprocess.py (+3, -3)

@@ -124,9 +124,9 @@ class TestImageTransform:
 same output as the reference model.
 
 Reference model: CLIPImageTransform
-https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transforms.py#L115
+https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transform.py#L127
 Eager and exported models: _CLIPImageTransform
-https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transforms.py#L26
+https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transform.py#L28
 """
 
 models_no_resize = initialize_models(resize_to_max_canvas=False)
@@ -147,7 +147,7 @@ def prepare_inputs(
 without distortion.
 
 These calculations are done by the reference model inside __init__ and __call__
-https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transforms.py#L115
+https://github.com/pytorch/torchtune/blob/main/torchtune/models/clip/inference/_transform.py#L198
 """
 image_tensor = F.to_dtype(
 F.grayscale_to_rgb_image(F.to_image(image)), scale=True

examples/qualcomm/qaihub_scripts/llama/README.md (+2, -2)

@@ -19,7 +19,7 @@ Note that the pre-compiled context binaries could not be futher fine-tuned for o
 2. Follow instructions in https://huggingface.co/qualcomm/Llama-v2-7B-Chat to export context binaries (will take some time to finish)
 
 ```bash
-# tokenizer.model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/tokenizer.model
+# tokenizer.model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/tree/main
 # tokenizer.bin:
 python -m examples.models.llama.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin
 ```
@@ -54,4 +54,4 @@ Please refer to [Check context binary version](../../README.md#check-context-bin
 ```bash
 # AIHUB_CONTEXT_BINARIES: ${PATH_TO_AIHUB_WORKSPACE}/build/llama_v3_8b_chat_quantized
 python examples/qualcomm/qaihub_scripts/llama/llama3/qaihub_llama3_8b.py -b build-android -s ${SERIAL_NUM} -m ${SOC_MODEL} --context_binaries ${AIHUB_CONTEXT_BINARIES} --tokenizer_model tokenizer.model --prompt "What is baseball?"
-```
+```

examples/qualcomm/scripts/mobilebert_fine_tune.py (+1, -2)

@@ -103,8 +103,7 @@ def get_fine_tuned_mobilebert(artifacts_dir, pretrained_weight, batch_size):
 
 # grab dataset
 url = (
-    "https://raw.githubusercontent.com/susanli2016/NLP-with-Python"
-    "/master/data/title_conference.csv"
+    "https://raw.githubusercontent.com/susanli2016/NLP-with-Python/master/data/title_conference.csv"
 )
 content = requests.get(url, allow_redirects=True).content
 data = pd.read_csv(BytesIO(content))

runtime/core/portable_type/c10/c10/macros/Macros.h (+1, -1)

@@ -241,7 +241,7 @@ using namespace c10::xpu;
 #ifdef __HIPCC__
 // Unlike CUDA, HIP requires a HIP header to be included for __host__ to work.
 // We do this #include here so that C10_HOST_DEVICE and friends will Just Work.
-// See https://github.com/ROCm-Developer-Tools/HIP/issues/441
+// See https://github.com/ROCm/hip/issues/441
 #include <hip/hip_runtime.h>
 #endif
 

scripts/check_urls.sh (+8, -4)

@@ -9,6 +9,7 @@ set -euo pipefail
 
 status=0
 green='\e[1;32m'; red='\e[1;31m'; cyan='\e[1;36m'; yellow='\e[1;33m'; reset='\e[0m'
+user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"
 last_filepath=
 
 while IFS=: read -r filepath url; do
@@ -18,7 +19,7 @@ while IFS=: read -r filepath url; do
 fi
 code=$(curl -gsLm30 -o /dev/null -w "%{http_code}" -I "$url") || code=000
 if [ "$code" -ge 400 ]; then
-code=$(curl -gsLm30 -o /dev/null -w "%{http_code}" -r 0-0 -A "Mozilla/5.0" "$url") || code=000
+code=$(curl -gsLm30 -o /dev/null -w "%{http_code}" -r 0-0 -A "$user_agent" "$url") || code=000
 fi
 if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
 printf "${green}%s${reset} ${cyan}%s${reset}\n" "$code" "$url"
@@ -27,17 +28,20 @@ while IFS=: read -r filepath url; do
 status=1
 fi
 done < <(
-git --no-pager grep --no-color -I -o -E \
-'https?://[^[:space:]<>\")\{\(\$]+' \
+git --no-pager grep --no-color -I -P -o \
+'(?<!git\+)(?<!\$\{)https?://(?![^\s<>\")]*[\{\}\$])[^[:space:]<>\")\[\]\(]+' \
 -- '*' \
 ':(exclude).*' \
 ':(exclude)**/.*' \
 ':(exclude)**/*.lock' \
 ':(exclude)**/*.svg' \
 ':(exclude)**/*.xml' \
+':(exclude)**/*.gradle*' \
+':(exclude)**/*gradle*' \
 ':(exclude)**/third-party/**' \
-| sed 's/[[:punct:]]*$//' \
+| sed -E 's/[^/[:alnum:]]+$//' \
 | grep -Ev '://(0\.0\.0\.0|127\.0\.0\.1|localhost)([:/])' \
+| grep -Ev 'fwdproxy:8080' \
 || true
 )
 
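To illustrate the effect of the tightened extraction pattern above, here is a small standalone sketch (an editor-added illustration, not part of the commit) that runs the same PCRE against two sample lines. It assumes GNU grep with `-P` support, which the updated script also relies on; URLs containing `{`, `}`, or `$` placeholders are skipped, so templated links like the `snapshot-{YYYYMMDD}` URLs elsewhere in this commit no longer trigger false link-check failures:

```bash
#!/usr/bin/env bash
# Standalone illustration of the updated URL-extraction regex (requires GNU grep -P).
pattern='(?<!git\+)(?<!\$\{)https?://(?![^\s<>\")]*[\{\}\$])[^[:space:]<>\")\[\]\(]+'

printf '%s\n' \
  'docs: see https://github.com/pytorch/executorch for details' \
  'template: https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar' \
  | grep -oP "$pattern"
# Expected output: only the first URL; the templated {YYYYMMDD} link is ignored.
```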

setup.py (+2, -2)

@@ -606,8 +606,8 @@ def run(self):
 # be found in the pip package. This is the subset of headers that are
 # essential for building custom ops extensions.
 # TODO: Use cmake to gather the headers instead of hard-coding them here.
-# For example: https://discourse.cmake.org/t/installing-headers-the-modern-
-# way-regurgitated-and-revisited/3238/3
+# For example:
+# https://discourse.cmake.org/t/installing-headers-the-modern-way-regurgitated-and-revisited/3238/3
 for include_dir in [
     "runtime/core/",
     "runtime/kernel/",

util/collect_env.py (+1, -2)

@@ -220,8 +220,7 @@ def get_cudnn_version(run_lambda):
 cudnn_cmd = '{} /R "{}\\bin" cudnn*.dll'.format(where_cmd, cuda_path)
 elif get_platform() == "darwin":
 # CUDA libraries and drivers can be found in /usr/local/cuda/. See
-# https://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html#install
-# https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installmac
+# https://docs.nvidia.com/cuda/archive/10.1/cuda-installation-guide-mac-os-x/index.html#3.2-Install
 # Use CUDNN_LIBRARY when cudnn library is installed elsewhere.
 cudnn_cmd = "ls /usr/local/cuda/lib/libcudnn*"
 else:

util/python_profiler.py (+1, -1)

@@ -44,7 +44,7 @@ def _from_pstat_to_static_html(stats: Stats, html_filename: str):
 html_filename: Output filename in which populated template is rendered
 """
 RESTR = r'(?<!] \+ ")/static/'
-REPLACE_WITH = "https://cdn.rawgit.com/jiffyclub/snakeviz/v0.4.2/snakeviz/static/"
+REPLACE_WITH = "https://cdn.jsdelivr.net/gh/jiffyclub/snakeviz@v0.4.2/snakeviz/static/"
 
 if not isinstance(html_filename, str):
 raise ValueError("A valid file name must be provided.")
