**CODE_OF_CONDUCT.md** (+6 −2)

# TensorFlow Code of Conduct

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
**Here's why we have that policy**: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
------------------------

### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**:
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device**:
- **TensorFlow installed from (source or binary)**:
- **TensorFlow version (use command below)**:
- **Python version**:
- **Bazel version (if compiling from source)**:
- **GCC/Compiler version (if compiling from source)**:
- **CUDA/cuDNN version**:
- **GPU model and memory**:
- **Exact command to reproduce**:
You can collect some of this information using our environment capture script:
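The capture script itself is not reproduced in this excerpt. As a rough, stdlib-only illustration of gathering a couple of the checklist fields above (the dictionary keys here are hypothetical and are not the script's actual output format):

```python
import platform
import sys

# Hypothetical sketch: collect a small subset of the checklist fields using
# only the standard library. The real capture script gathers much more
# (TensorFlow version, CUDA/cuDNN, GPU model, Bazel/GCC versions, etc.).
env_info = {
    "os_platform_and_distribution": platform.platform(),
    "python_version": platform.python_version(),
    "python_executable": sys.executable,
}

for field, value in env_info.items():
    print(f"{field}: {value}")
```

Pasting this kind of structured output into the template fields above is usually more useful to triagers than free-form descriptions.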
**RELEASE.md** (+144 −83)

Coinciding with this change, new releases of TensorFlow's Docker images …

* The current TensorFlow release now **requires** [gast](https://pypi.org/project/gast/) version 0.3.3.
## Bug Fixes and Other Changes
* `tf.data`:
  * Removed `autotune_algorithm` from experimental optimization options.
* TF Core:
  * `tf.constant` always creates CPU tensors, irrespective of the current device context.
  * Eager `TensorHandles` maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
  * For `tf.Tensor` & `tf.Variable`, `.experimental_ref()` is no longer experimental and is available as simply `.ref()`.
  * `pfor/vectorized_map`: Added support for vectorizing 56 more ops. Vectorizing `tf.cond` is also supported now.
  * Set as much partial shape as we can infer statically within the gradient impl of the gather op.
  * The gradient of `tf.while_loop` emits a `StatelessWhile` op if `cond` and the body function are stateless. This allows multiple gradient while-ops to run in parallel under a distribution strategy.
  * Speed up `GradientTape` in eager mode by auto-generating the list of op inputs/outputs which are unused and hence not cached for gradient functions.
  * Support `back_prop=False` in `while_v2`, but mark it as deprecated.
  * Improve the error message when attempting to use `None` in data-dependent control flow.
  * Add `RaggedTensor.numpy()`.
  * Update `RaggedTensor.__getitem__` to preserve uniform dimensions and allow indexing into uniform dimensions.
  * Update `tf.expand_dims` to always insert the new dimension as a non-ragged dimension.
  * Update `tf.embedding_lookup` to use `partition_strategy` and `max_norm` when `ids` is ragged.
  * Allow `batch_dims == rank(indices)` in `tf.gather`.
  * Add support for bfloat16 in `tf.print`.
* `tf.distribute`:
  * Support `embedding_column` with variable-length input features for `MultiWorkerMirroredStrategy`.
* `tf.keras`:
  * Added an `experimental_aggregate_gradients` argument to `tf.keras.optimizers.Optimizer.apply_gradients`. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
  * Allow `pathlib.Path` paths for loading models via the Keras API.
* `tf.function`/AutoGraph:
  * AutoGraph is now available in `ReplicaContext.merge_call`, `Strategy.extended.update`, and `Strategy.extended.update_non_slot`.
  * Experimental support for shape invariants has been enabled in `tf.function`. See the API docs for `tf.autograph.experimental.set_loop_options` for additional info.
  * AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
  * Improve shape inference for `tf.function` input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
  * Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
  * Fix the execution order of multiple stateful calls to `experimental_run_v2` in `tf.function`.
  * You can now iterate over `RaggedTensors` using a for loop inside `tf.function`.
* `tf.lite`:
  * Migrated the `tf.lite` C inference API out of experimental into lite/c.
  * Add an option to disallow `NNAPI` CPU / partial acceleration on Android 10.
  * TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
  * Refactors the delegate and delegate kernel sources to allow usage in the linter.
  * Limit delegated ops to actually supported ones if a device name is specified or `NNAPI` CPU fallback is disabled.
  * TFLite now supports the `tf.math.reciprocal` op by lowering it to the `tf.div` op.
  * TFLite's unpack op now supports boolean tensor inputs.
  * Microcontroller and embedded code moved from experimental to the main TensorFlow Lite folder.
  * Check for large TFLite tensors.
  * Fix a GPU delegate crash with C++17.
  * Add 5D support to TFLite `strided_slice`.
  * Fix an error in the delegation of `DEPTH_TO_SPACE` to `NNAPI` that caused the op not to be accelerated.
  * Fix a segmentation fault when running a model with LSTM nodes using the `NNAPI` delegate.
  * Fix an `NNAPI` delegate failure when an operand for a Maximum/Minimum operation is a scalar.
  * Fix an `NNAPI` delegate failure when the Axis input for a reduce operation is a scalar.
  * Expose an option to limit the number of partitions that will be delegated to `NNAPI`.
  * If a target accelerator is specified, use its feature level to determine which operations to delegate, instead of the SDK version.
* `tf.random`:
  * Various random number generation improvements:
    * Add a fast path for the default `random_uniform`.
    * `random_seed` documentation improvement.
    * `RandomBinomial` broadcasts and appends the sample shape to the left rather than the right.
* Add a check for memory alignment to `MemoryAllocation::MemoryAllocation()` on 32-bit ARM. This ensures a deterministic early exit instead of a hard-to-debug bus error later.
* `saved_model_cli aot_compile_cpu` allows you to compile saved models to XLA header+object files and include them in your C++ programs.
* Enable `Igamma` and `Igammac` for XLA.
* Deterministic Op Functionality:
  * The XLA reduction emitter is deterministic when the environment variable `TF_DETERMINISTIC_OPS` is set to "true" or "1". This extends deterministic `tf.nn.bias_add` back-prop functionality (and therefore also deterministic back-prop of bias addition in Keras layers) to include when XLA JIT compilation is enabled.
  * Fix a problem in which, when running on a CUDA GPU with either the environment variable `TF_DETERMINISTIC_OPS` or the environment variable `TF_CUDNN_DETERMINISTIC` set to "true" or "1", some layer configurations led to an exception with the message "No algorithm worked!".
* Tracing and Debugging:
  * Add source and destination names to the `_send` traceme to allow easier debugging.
  * Add a traceme event to `fastpathexecute`.
* Other:
  * Fix an issue with `AUC.reset_states` for multi-label AUC ([#35852](https://github.com/tensorflow/tensorflow/issues/35852)).
  * Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is `in-place`.
  * Move `tensorflow/core:framework/*_pyclif` rules to `tensorflow/core/framework:*_pyclif`.
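A few of the TF Core items in the list above (CPU placement of `tf.constant`, `.experimental_ref()` graduating to `.ref()`, and `RaggedTensor.numpy()`) are easiest to see in a few lines of eager-mode code. A minimal sketch, assuming TensorFlow 2.2 or later is installed:

```python
import tensorflow as tf

# `tf.constant` now always creates a CPU tensor, whatever device context
# is currently active.
x = tf.constant([1.0, 2.0])
print(x.device)  # device string ends with ".../device:CPU:0"

# `.experimental_ref()` is now simply `.ref()`: a hashable, stable reference
# that lets Tensors and Variables be used as dict keys or set members.
v = tf.Variable(3.0)
seen = {v.ref(): "first"}
print(seen[v.ref()])  # "first"

# `RaggedTensor.numpy()` converts a ragged tensor into a numpy array of
# per-row arrays.
rt = tf.ragged.constant([[1, 2], [3]])
print(rt.numpy())
```

Using `.ref()` as a dictionary key is the supported replacement for hashing a `tf.Variable` directly, which raises a `TypeError` in TF 2.x eager mode.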