
Commit b69d741: Merge branch 'master' into mraunak/tf_win_clang
2 parents: e944e76 + ff989f0

16 files changed: +198, -194 lines


site/en/guide/migrate/evaluator.ipynb
Lines changed: 1 addition & 1 deletion

@@ -122,7 +122,7 @@
 "\n",
 "In TensorFlow 1, you can configure a `tf.estimator` to evaluate the estimator using `tf.estimator.train_and_evaluate`.\n",
 "\n",
-"In this example, start by defining the `tf.estimator.Estimator` and speciyfing training and evaluation specifications:"
+"In this example, start by defining the `tf.estimator.Estimator` and specifying training and evaluation specifications:"
 ]
 },
 {

site/en/guide/sparse_tensor.ipynb
Lines changed: 1 addition & 1 deletion

@@ -620,7 +620,7 @@
 "\n",
 "However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero. \n",
 "\n",
-"Note: This is generally not the intended usage of `tf.sparse.SparseTensor`s; and you might want to also consier other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
+"Note: This is generally not the intended usage of `tf.sparse.SparseTensor`s; and you might want to also consider other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
 ]
 },
 {
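The note in this hunk distinguishes explicit zeros (known zero scores) from implicit zeros (missing data), and mentions a separate mask tensor as an alternative encoding. A library-agnostic sketch of both ideas, using a plain index-to-value map in place of `tf.sparse.SparseTensor` (all names here are illustrative, not TensorFlow API):

```python
# Sketch: encode scores so that a stored 0.0 is a *known* zero score,
# while an absent index means the score is missing/unknown.

scores = {0: 3.5, 2: 0.0, 5: -1.25}  # index -> known score; index 2 is a known zero
length = 7

def dense_view(sparse, n, fill=0.0):
    """Densify: missing entries collapse to `fill`, losing the distinction."""
    return [sparse.get(i, fill) for i in range(n)]

def known_mask(sparse, n):
    """The separate-mask-tensor idea from the note: True where a value is known."""
    return [i in sparse for i in range(n)]

dense = dense_view(scores, length)
mask = known_mask(scores, length)

# Once densified, index 1 (missing) and index 2 (known zero) look the same...
assert dense[1] == dense[2] == 0.0
# ...but the mask still tells them apart.
assert mask[2] and not mask[1]
```

This is exactly the caution the note raises: a dense (or most sparse) operation sees the two zeros identically, so the mask must be carried alongside the values.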

site/en/guide/tf_numpy_type_promotion.ipynb
Lines changed: 2 additions & 2 deletions

@@ -178,7 +178,7 @@
 "* `f32*` means Python `float` or weakly-typed `f32`\n",
 "* `c128*` means Python `complex` or weakly-typed `c128`\n",
 "\n",
-"The asterik (*) denotes that the corresponding type is “weak” - such a dtype is temporarily inferred by the system, and could defer to other dtypes. This concept is explained more in detail [here](#weak_tensor)."
+"The asterisk (*) denotes that the corresponding type is “weak” - such a dtype is temporarily inferred by the system, and could defer to other dtypes. This concept is explained more in detail [here](#weak_tensor)."
 ]
 },
 {
@@ -449,7 +449,7 @@
 "source": [
 "### WeakTensor Construction\n",
 "\n",
-"WeakTensors are created if you create a tensor without specifing a dtype the result is a WeakTensor. You can check whether a Tensor is \"weak\" or not by checking the weak attribute at the end of the Tensor's string representation."
+"WeakTensors are created if you create a tensor without specifying a dtype the result is a WeakTensor. You can check whether a Tensor is \"weak\" or not by checking the weak attribute at the end of the Tensor's string representation."
 ]
 },
 {
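The corrected sentences above say a "weak" dtype is temporarily inferred and can defer to other dtypes. A toy model of that deferral in plain Python, to make the idea concrete (the `Typed` class and `promote` rule are hypothetical illustrations, not TensorFlow's actual promotion lattice):

```python
# Toy model of weak-dtype deferral: in a binary op, a weakly-typed operand
# takes on the other operand's dtype; two weak operands of the same dtype
# simply keep it.

from dataclasses import dataclass

@dataclass
class Typed:
    value: float
    dtype: str
    weak: bool = False  # dtype was inferred (e.g. from a Python literal)

def promote(a: Typed, b: Typed) -> str:
    """Result dtype of a binary op under the weak-deferral rule (sketch)."""
    if a.weak and not b.weak:
        return b.dtype  # the weak side defers
    if b.weak and not a.weak:
        return a.dtype
    return a.dtype  # simplification: assume matching dtypes otherwise

x = Typed(2.0, "f32", weak=True)  # like a bare Python float: "f32*"
y = Typed(3.0, "f16")
assert promote(x, y) == "f16"     # f32* defers to the strongly-typed f16
```

The point mirrors the notation in the hunk: `f32*` behaves like `f32` only until it meets a strongly-typed operand.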

site/en/guide/versions.md
Lines changed: 0 additions & 2 deletions

@@ -171,12 +171,10 @@ incrementing the major version number for TensorFlow Lite, or vice versa.
 The API surface that is covered by the TensorFlow Lite Extension APIs version
 number is comprised of the following public APIs:
 
-```
 * [tensorflow/lite/c/c_api_opaque.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api_opaque.h)
 * [tensorflow/lite/c/common.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h)
 * [tensorflow/lite/c/builtin_op_data.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/builtin_op_data.h)
 * [tensorflow/lite/builtin_ops.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/builtin_ops.h)
-```
 
 Again, experimental symbols are not covered; see [below](#not_covered) for
 details.

site/en/hub/tutorials/s3gan_generation_with_tf_hub.ipynb
Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@
 "2. Click **Runtime > Run all** to run each cell in order.\n",
 "  * Afterwards, the interactive visualizations should update automatically when you modify the settings using the sliders and dropdown menus.\n",
 "\n",
-"Note: if you run into any issues, youn can try restarting the runtime and rerunning all cells from scratch by clicking **Runtime > Restart and run all...**.\n",
+"Note: if you run into any issues, you can try restarting the runtime and rerunning all cells from scratch by clicking **Runtime > Restart and run all...**.\n",
 "\n",
 "[1] Mario Lucic\*, Michael Tschannen\*, Marvin Ritter\*, Xiaohua Zhai, Olivier\n",
 "    Bachem, Sylvain Gelly, [High-Fidelity Image Generation With Fewer Labels](https://arxiv.org/abs/1903.02271), ICML 2019."

site/en/hub/tutorials/wiki40b_lm.ipynb
Lines changed: 1 addition & 1 deletion

@@ -214,7 +214,7 @@
 "  # Generate the tokens from the language model\n",
 "  generation_outputs = module(generation_input_dict, signature=\"prediction\", as_dict=True)\n",
 "\n",
-"  # Get the probablities and the inputs for the next steps\n",
+"  # Get the probabilities and the inputs for the next steps\n",
 "  probs = generation_outputs[\"probs\"]\n",
 "  new_mems = [generation_outputs[\"new_mem_{}\".format(i)] for i in range(n_layer)]\n",
 "\n",

site/en/install/source.md
Lines changed: 43 additions & 48 deletions

@@ -34,8 +34,7 @@ Install the TensorFlow *pip* package dependencies (if using a virtual
 environment, omit the `--user` argument):
 
 <pre class="prettyprint lang-bsh">
-<code class="devsite-terminal">pip install -U --user pip numpy wheel packaging requests opt_einsum</code>
-<code class="devsite-terminal">pip install -U --user keras_preprocessing --no-deps</code>
+<code class="devsite-terminal">pip install -U --user pip</code>
 </pre>
 
 Note: A `pip` version >19.0 is required to install the TensorFlow 2 `.whl`
@@ -60,30 +59,32 @@ file.
 
 Clang is a C/C++/Objective-C compiler that is compiled in C++ based on LLVM. It
 is the default compiler to build TensorFlow starting with TensorFlow 2.13. The
-current supported version is LLVM/Clang 16.
+current supported version is LLVM/Clang 17.
 
 [LLVM Debian/Ubuntu nightly packages](https://apt.llvm.org) provide an automatic
 installation script and packages for manual installation on Linux. Make sure you
 run the following command if you manually add llvm apt repository to your
 package sources:
 
 <pre class="prettyprint lang-bsh">
-<code class="devsite-terminal">sudo apt-get update && sudo apt-get install -y llvm-16 clang-16</code>
+<code class="devsite-terminal">sudo apt-get update && sudo apt-get install -y llvm-17 clang-17</code>
 </pre>
 
+Now that `/usr/lib/llvm-17/bin/clang` is the actual path to clang in this case.
+
 Alternatively, you can download and unpack the pre-built
-[Clang + LLVM 16](https://github.com/llvm/llvm-project/releases/tag/llvmorg-16.0.0).
+[Clang + LLVM 17](https://github.com/llvm/llvm-project/releases/tag/llvmorg-17.0.2).
 
 Below is an example of steps you can take to set up the downloaded Clang + LLVM
-16 binaries on Debian/Ubuntu operating systems:
+17 binaries on Debian/Ubuntu operating systems:
 
 1. Change to the desired destination directory: `cd <desired directory>`
 
 1. Load and extract an archive file...(suitable to your architecture):
    <pre class="prettyprint lang-bsh">
-   <code class="devsite-terminal">wget https://github.com/llvm/llvm-project/releases/download/llvmorg-16.0.0/clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
+   <code class="devsite-terminal">wget https://github.com/llvm/llvm-project/releases/download/llvmorg-17.0.2/clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04.tar.xz
    </code>
-   <code class="devsite-terminal">tar -xvf clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz
+   <code class="devsite-terminal">tar -xvf clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04.tar.xz
    </code>
   </pre>
 
@@ -93,10 +94,10 @@ Below is an example of steps you can take to set up the downloaded Clang + LLVM
   have to replace anything, unless you have a previous installation, in which
   case you should replace the files:
   <pre class="prettyprint lang-bsh">
-  <code class="devsite-terminal">cp -r clang+llvm-16.0.0-x86_64-linux-gnu-ubuntu-18.04/* /usr</code>
+  <code class="devsite-terminal">cp -r clang+llvm-17.0.2-x86_64-linux-gnu-ubuntu-22.04/* /usr</code>
   </pre>
 
-1. Check the obtained Clang + LLVM 16 binaries version:
+1. Check the obtained Clang + LLVM 17 binaries version:
   <pre class="prettyprint lang-bsh">
   <code class="devsite-terminal">clang --version</code>
   </pre>
@@ -240,19 +241,6 @@ There are some preconfigured build configs available that can be added to the
 
 ## Build and install the pip package
 
-The pip package is build in two steps. A `bazel build` commands creates a
-"package-builder" program. You then run the package-builder to create the
-package.
-
-### Build the package-builder
-Note: GPU support can be enabled with `cuda=Y` during the `./configure` stage.
-
-Use `bazel build` to create the TensorFlow 2.x package-builder:
-
-<pre class="devsite-terminal devsite-click-to-copy">
-bazel build [--config=option] //tensorflow/tools/pip_package:build_pip_package
-</pre>
-
 #### Bazel build options
 
 Refer to the Bazel
@@ -268,33 +256,42 @@ that complies with the manylinux2014 package standard.
 
 ### Build the package
 
-The `bazel build` command creates an executable named `build_pip_package`—this
-is the program that builds the `pip` package. Run the executable as shown
-below to build a `.whl` package in the `/tmp/tensorflow_pkg` directory.
+To build pip package, you need to specify `--repo_env=WHEEL_NAME` flag.
+depending on the provided name, package will be created, e.g:
 
-To build from a release branch:
+To build tensorflow CPU package:
+<pre class="devsite-terminal devsite-click-to-copy">
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu
+</pre>
 
+To build tensorflow GPU package:
 <pre class="devsite-terminal devsite-click-to-copy">
-./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda
 </pre>
 
-To build from master, use `--nightly_flag` to get the right dependencies:
+To build tensorflow TPU package:
+<pre class="devsite-terminal devsite-click-to-copy">
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_tpu --config=tpu
+</pre>
 
+To build nightly package, set `tf_nightly` instead of `tensorflow`, e.g.
+to build CPU nightly package:
 <pre class="devsite-terminal devsite-click-to-copy">
-./bazel-bin/tensorflow/tools/pip_package/build_pip_package --nightly_flag /tmp/tensorflow_pkg
+bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tf_nightly_cpu
</pre>
 
-Although it is possible to build both CUDA and non-CUDA configurations under the
-same source tree, it's recommended to run `bazel clean` when switching between
-these two configurations in the same source tree.
+As a result, generated wheel will be located in
+<pre class="devsite-terminal devsite-click-to-copy">
+bazel-bin/tensorflow/tools/pip_package/wheel_house/
+</pre>
 
 ### Install the package
 
 The filename of the generated `.whl` file depends on the TensorFlow version and
 your platform. Use `pip install` to install the package, for example:
 
 <pre class="devsite-terminal prettyprint lang-bsh">
-pip install /tmp/tensorflow_pkg/tensorflow-<var>version</var>-<var>tags</var>.whl
+pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl
 </pre>
 
 Success: TensorFlow is now installed.
@@ -344,26 +341,23 @@ virtual environment:
 
 1. Optional: Configure the build—this prompts the user to answer build
    configuration questions.
-2. Build the tool used to create the *pip* package.
-3. Run the tool to create the *pip* package.
-4. Adjust the ownership permissions of the file for outside the container.
+2. Build the *pip* package.
+3. Adjust the ownership permissions of the file for outside the container.
 
 <pre class="devsite-disable-click-to-copy prettyprint lang-bsh">
 <code class="devsite-terminal tfo-terminal-root">./configure  # if necessary</code>
 
-<code class="devsite-terminal tfo-terminal-root">bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package</code>
-
-<code class="devsite-terminal tfo-terminal-root">./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt  # create package</code>
-
-<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
+<code class="devsite-terminal tfo-terminal-root">bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu --config=opt</code>
+`
+<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
 </pre>
 
 Install and verify the package within the container:
 
 <pre class="prettyprint lang-bsh">
 <code class="devsite-terminal tfo-terminal-root">pip uninstall tensorflow  # remove current version</code>
 
-<code class="devsite-terminal tfo-terminal-root">pip install /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
+<code class="devsite-terminal tfo-terminal-root">pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
 <code class="devsite-terminal tfo-terminal-root">cd /tmp  # don't import from source directory</code>
 <code class="devsite-terminal tfo-terminal-root">python -c "import tensorflow as tf; print(tf.__version__)"</code>
 </pre>
@@ -401,19 +395,17 @@ with GPU support:
 <pre class="devsite-disable-click-to-copy prettyprint lang-bsh">
 <code class="devsite-terminal tfo-terminal-root">./configure  # if necessary</code>
 
-<code class="devsite-terminal tfo-terminal-root">bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package</code>
-
-<code class="devsite-terminal tfo-terminal-root">./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt  # create package</code>
+<code class="devsite-terminal tfo-terminal-root">bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=opt</code>
 
-<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
+<code class="devsite-terminal tfo-terminal-root">chown $HOST_PERMS bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
 </pre>
 
 Install and verify the package within the container and check for a GPU:
 
 <pre class="prettyprint lang-bsh">
 <code class="devsite-terminal tfo-terminal-root">pip uninstall tensorflow  # remove current version</code>
 
-<code class="devsite-terminal tfo-terminal-root">pip install /mnt/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
+<code class="devsite-terminal tfo-terminal-root">pip install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-<var>version</var>-<var>tags</var>.whl</code>
 <code class="devsite-terminal tfo-terminal-root">cd /tmp  # don't import from source directory</code>
 <code class="devsite-terminal tfo-terminal-root">python -c "import tensorflow as tf; print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))"</code>
 </pre>
@@ -430,6 +422,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
+<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang 17.0.6</td><td>Bazel 6.5.0</td></tr>
 <tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td></tr>
 <tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td></tr>
 <tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td></tr>
@@ -468,6 +461,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th><th>cuDNN</th><th>CUDA</th></tr>
+<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang 17.0.6</td><td>Bazel 6.5.0</td><td>8.9</td><td>12.3</td></tr>
 <tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td><td>8.9</td><td>12.2</td></tr>
 <tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang 16.0.0</td><td>Bazel 6.1.0</td><td>8.7</td><td>11.8</td></tr>
 <tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td><td>8.6</td><td>11.8</td></tr>
@@ -508,6 +502,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
+<tr><td>tensorflow-2.16.1</td><td>3.9-3.12</td><td>Clang from xcode 13.6</td><td>Bazel 6.5.0</td></tr>
 <tr><td>tensorflow-2.15.0</td><td>3.9-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 6.1.0</td></tr>
 <tr><td>tensorflow-2.14.0</td><td>3.9-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 6.1.0</td></tr>
 <tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 5.3.0</td></tr>

site/en/r1/guide/autograph.ipynb
Lines changed: 1 addition & 1 deletion

@@ -241,7 +241,7 @@
 "id": "m-jWmsCmByyw"
 },
 "source": [
-"AutoGraph supports common Python statements like `while`, `for`, `if`, `break`, and `return`, with support for nesting. Compare this function with the complicated graph verson displayed in the following code blocks:"
+"AutoGraph supports common Python statements like `while`, `for`, `if`, `break`, and `return`, with support for nesting. Compare this function with the complicated graph version displayed in the following code blocks:"
 ]
 },
 {

site/en/r1/guide/distribute_strategy.ipynb
Lines changed: 2 additions & 2 deletions

@@ -118,7 +118,7 @@
 "## Types of strategies\n",
 "`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:\n",
 "\n",
-"* Syncronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.\n",
+"* Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.\n",
 "* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.\n",
 "\n",
 "In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF."
@@ -371,7 +371,7 @@
 "id": "hQv1lm9UPDFy"
 },
 "source": [
-"So far we've talked about what are the different stategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end."
+"So far we've talked about what are the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end."
 ]
 },
 {
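The corrected passage above contrasts synchronous training (workers train on different data slices and gradients are aggregated each step, typically via all-reduce) with asynchronous updates. A small numeric sketch of the synchronous case in plain Python, with the all-reduce reduced to a simple average (the model, `sync_step`, and the data shards are invented for illustration, not `tf.distribute` API):

```python
# Sync data parallelism in miniature: each worker computes a gradient on its
# own slice; an all-reduce (here: a plain average) combines them; every
# replica then applies the same update, so weights stay identical everywhere.

def grad(w, xs):
    # d/dw of mean squared error for the model y = w*x with targets y = 2*x
    return sum(2 * (w * x - 2 * x) * x for x in xs) / len(xs)

def sync_step(w, slices, lr=0.1):
    grads = [grad(w, s) for s in slices]  # per-worker gradients, in sync
    g = sum(grads) / len(grads)           # "all-reduce": average the gradients
    return w - lr * g                     # identical update on every replica

w = 0.0
slices = [[1.0, 2.0], [3.0, 4.0]]         # data sharded across two workers
for _ in range(100):
    w = sync_step(w, slices)
assert abs(w - 2.0) < 1e-3                # converges to the true weight
```

In the asynchronous variant, each worker would instead apply its own gradient to a shared parameter server as soon as it finishes, without waiting for the average.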
