|
984 | 984 | "\n",
|
985 | 985 | "Some are specific for CPU and some are better for GPU.\n",
|
986 | 986 | "\n",
|
987 |
| - "Getting to know which is which can take some time.\n", |
| 987 | + "Getting to know which one can take some time.\n", |
988 | 988 | "\n",
|
989 | 989 | "Generally if you see `torch.cuda` anywhere, the tensor is being used for GPU (since Nvidia GPUs use a computing toolkit called CUDA).\n",
|
990 | 990 | "\n",
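For context, the device check this cell alludes to typically looks like the following (a minimal sketch, assuming a standard PyTorch install; the variable names are illustrative, not the notebook's own):

```python
import torch

# Use CUDA (Nvidia's GPU computing toolkit) if a compatible GPU is available,
# otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Tensors start life on the CPU by default...
tensor = torch.tensor([1.0, 2.0, 3.0])

# ...and can be moved to the target device explicitly.
tensor_on_device = tensor.to(device)
print(tensor_on_device.device)  # "cuda:0" on a GPU machine, "cpu" otherwise
```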
|
|
1901 | 1901 | "id": "bXKozI4T0hFi"
|
1902 | 1902 | },
|
1903 | 1903 | "source": [
|
1904 |
| - "Without the transpose, the rules of matrix mulitplication aren't fulfilled and we get an error like above.\n", |
| 1904 | + "Without the transpose, the rules of matrix multiplication aren't fulfilled and we get an error like above.\n", |
1905 | 1905 | "\n",
|
1906 | 1906 | "How about a visual? \n",
|
1907 | 1907 | "\n",
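For reference, the shape error and transpose fix the corrected sentence describes can be reproduced with a sketch like this (the tensor values are illustrative):

```python
import torch

tensor_A = torch.tensor([[1., 2.],
                         [3., 4.],
                         [5., 6.]])  # shape: (3, 2)

tensor_B = torch.tensor([[7., 10.],
                         [8., 11.],
                         [9., 12.]])  # shape: (3, 2)

# torch.matmul(tensor_A, tensor_B) raises a RuntimeError:
# the inner dimensions (2 and 3) don't match.

# Transposing tensor_B gives shape (2, 3), so the inner dimensions line up.
output = torch.matmul(tensor_A, tensor_B.T)
print(output.shape)  # torch.Size([3, 3])
```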
|
|
1988 | 1988 | "id": "zIGrP5j1pN7j"
|
1989 | 1989 | },
|
1990 | 1990 | "source": [
|
1991 |
| - "> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accomodate to the error? Hint: what did we have to do to `tensor_B` above?" |
| 1991 | + "> **Question:** What happens if you change `in_features` from 2 to 3 above? Does it error? How could you change the shape of the input (`x`) to accommodate to the error? Hint: what did we have to do to `tensor_B` above?" |
1992 | 1992 | ]
|
1993 | 1993 | },
|
1994 | 1994 | {
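One way to explore the question above (a sketch under assumed layer sizes, not the notebook's own cell, so it may spoil the exercise):

```python
import torch
from torch import nn

torch.manual_seed(42)

# With in_features=3, the layer expects inputs whose last dimension is 3.
linear = nn.Linear(in_features=3, out_features=6)

x = torch.tensor([[1., 2.],
                  [3., 4.],
                  [5., 6.]])  # shape: (3, 2) -> linear(x) errors

# Transposing x gives shape (2, 3), which the layer accepts,
# mirroring what was done to tensor_B earlier.
output = linear(x.T)
print(output.shape)  # torch.Size([2, 6])
```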
|
|
2188 | 2188 | "\n",
|
2189 | 2189 | "You can change the datatypes of tensors using [`torch.Tensor.type(dtype=None)`](https://pytorch.org/docs/stable/generated/torch.Tensor.type.html) where the `dtype` parameter is the datatype you'd like to use.\n",
|
2190 | 2190 | "\n",
|
2191 |
| - "First we'll create a tensor and check it's datatype (the default is `torch.float32`)." |
| 2191 | + "First we'll create a tensor and check its datatype (the default is `torch.float32`)." |
2192 | 2192 | ]
|
2193 | 2193 | },
|
2194 | 2194 | {
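The creation cell itself isn't shown in this hunk; it's along the lines of the following (a sketch, the values are assumed):

```python
import torch

# Create a tensor (defaults to torch.float32)
tensor = torch.arange(10., 100., 10.)
print(tensor.dtype)  # torch.float32
```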
|
|
2289 | 2289 | }
|
2290 | 2290 | ],
|
2291 | 2291 | "source": [
|
2292 |
| - "# Create a int8 tensor\n", |
| 2292 | + "# Create an int8 tensor\n", |
2293 | 2293 | "tensor_int8 = tensor.type(torch.int8)\n",
|
2294 | 2294 | "tensor_int8"
|
2295 | 2295 | ]
|
|
3139 | 3139 | "source": [
|
3140 | 3140 | "Just as you might've expected, the tensors come out with different values.\n",
|
3141 | 3141 | "\n",
|
3142 |
| - "But what if you wanted to created two random tensors with the *same* values.\n", |
| 3142 | + "But what if you wanted to create two random tensors with the *same* values.\n", |
3143 | 3143 | "\n",
|
3144 | 3144 | "As in, the tensors would still contain random values but they would be of the same flavour.\n",
|
3145 | 3145 | "\n",
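The mechanism the following cells use for this is a random seed; a minimal sketch (the seed value and shapes are arbitrary):

```python
import torch

RANDOM_SEED = 42  # any fixed integer works

# Setting the seed makes the "random" numbers repeatable.
torch.manual_seed(RANDOM_SEED)
random_tensor_C = torch.rand(3, 4)

# Reset the seed before the second call, otherwise the values differ.
torch.manual_seed(RANDOM_SEED)
random_tensor_D = torch.rand(3, 4)

print(random_tensor_C == random_tensor_D)  # all True
```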
|
|
3220 | 3220 | "It looks like setting the seed worked. \n",
|
3221 | 3221 | "\n",
|
3222 | 3222 | "> **Resource:** What we've just covered only scratches the surface of reproducibility in PyTorch. For more, on reproducibility in general and random seeds, I'd checkout:\n",
|
3223 |
| - "> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exericse would be to read through this for 10-minutes and even if you don't understand it now, being aware of it is important).\n", |
| 3223 | + "> * [The PyTorch reproducibility documentation](https://pytorch.org/docs/stable/notes/randomness.html) (a good exercise would be to read through this for 10-minutes and even if you don't understand it now, being aware of it is important).\n", |
3224 | 3224 | "> * [The Wikipedia random seed page](https://en.wikipedia.org/wiki/Random_seed) (this'll give a good overview of random seeds and pseudorandomness in general)."
|
3225 | 3225 | ]
|
3226 | 3226 | },
|
|