This is an attempt to gather commonly discussed topics when doing API
reviews. It’ll hopefully be a useful resource to both API owners and people
proposing API changes. [TF API Owners](https://github.com/orgs/tensorflow/teams/api-owners)
meet twice weekly to discuss changes. We try to get to PRs by the next meeting,
but we don’t always make it all the way through. If your change is particularly
urgent, please ping the PR to let us know.

## Process

…and in general breaking serialized GraphDefs is worse than breaking the python
APIs.

Forward compatibility is more subtle: we should avoid changing the graph
produced by currently correct python code without three weeks’ notice. This
comes up most frequently when adding new ops, but also applies to non-obvious
things such as the graph emitted by gradients or pfor.
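
To make the three-week window concrete, here is a minimal sketch of how
graph-changing behavior is commonly gated with `tf.compat.forward_compatible`
(a real TensorFlow helper); the op choice and cutoff date below are invented
for illustration:

```python
import tensorflow as tf

def log_sum_exp(x, name=None):
  # Hypothetical cutoff date: at least three weeks after the change lands.
  # forward_compatible() returns True only once that date has passed, so
  # binaries up to three weeks old never consume the new graph pattern.
  if tf.compat.forward_compatible(2024, 7, 1):
    # New behavior: emit the fused op.
    return tf.math.reduce_logsumexp(x, name=name)
  # Old behavior: emit the graph that existing consumers already understand.
  return tf.math.log(tf.math.reduce_sum(tf.math.exp(x)), name=name)
```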

…made public) and should refer to public API symbols by their full exported
name.

### Common names

Prefer `keepdims` over `keep_dims`. Prefer `axis` over `dim`. Data types are
called `dtype`. `name` is a common last argument of ops, but backward
compatibility mandates that new arguments are added after the last existing
argument, even if that…
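
As an illustration of these conventions, a hypothetical reimplementation (not
the real kernel) of a reduction-style signature might look like:

```python
import tensorflow as tf

def count_nonzero(input, axis=None, keepdims=False, dtype=tf.int64, name=None):
  # `axis` rather than `dim`, `keepdims` rather than `keep_dims`, the data
  # type argument is called `dtype`, and `name` comes last. A new argument
  # would have to be appended after `name` to stay backward compatible.
  with tf.name_scope(name or "count_nonzero"):
    return tf.reduce_sum(tf.cast(tf.not_equal(input, 0), dtype),
                         axis=axis, keepdims=keepdims)
```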

…endpoint is that the “experimental” will simply be removed. If you don’t
believe that’ll work, it should probably not be added in its current form.

### Style

Generally, follow Google style.

Avoid redundancy. Do not write arguments of the form `function(...,` …
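
The example above is truncated; a sketch of the rule, using hypothetical
argument names, is that a boolean flag should not restate information already
carried by another argument:

```python
# Two alternative signatures for the same hypothetical function.

# Redundant: the boolean merely restates whether a config was supplied.
def normalize(x, enable_scaling=False, scaling_config=None):
  ...

# Preferred: passing a non-None `scaling_config` is itself the opt-in.
def normalize(x, scaling_config=None):
  ...
```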

…new ops if possible, but performance, hardware compatibility, and other
concerns often do require new ops.

When adding new ops, look for:

- closure under automatic differentiation (i.e. we avoid ops which are
  differentiable but not twice-differentiable, or which are technically
  differentiable but not marked as such; see the sketch after this list)
- performant kernels (it’s better not to have an op than to have an op with a
  suboptimal kernel; we need to make sure kernel experts have reviewed the
  code)
- broadcasting (all numerical ops should broadcast using numpy rules)
- does support for this op have to be added to pfor/vectorized_map?
- dtype support (in general all numerical ops should support the common
  integer, floating point, and complex dtypes, if they all make sense; we need
  to watch out for int32 on GPUs though)
- …it CPU, GPU, XLA, TPU, etc.) and prefer to have at least a plan for writing
  device-agnostic code
- should the python layer for this operation support
  `RaggedTensor`/`SparseTensor`?
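
A few of these checks can be exercised directly from python. The sketch below
(the shapes, dates, and the choice of `tf.sin` and `tf.math.top_k` are
arbitrary examples, not from the original document) probes closure under
differentiation, numpy-style broadcasting, and `tf.vectorized_map` coverage:

```python
import tensorflow as tf

x = tf.constant(2.0)

# Closure under automatic differentiation: the op should still have a
# registered gradient when differentiated twice.
with tf.GradientTape() as outer:
  outer.watch(x)
  with tf.GradientTape() as inner:
    inner.watch(x)
    y = tf.sin(x)
  dy = inner.gradient(y, x)    # cos(x)
d2y = outer.gradient(dy, x)    # -sin(x); must not be None

# Broadcasting: numerical ops should follow numpy rules.
z = tf.add(tf.ones([3, 1]), tf.ones([4]))   # broadcasts to shape [3, 4]

# pfor/vectorized_map: an op without a pfor converter falls back to a
# slower tf.while_loop (with a warning), which is worth catching in review.
batch = tf.random.normal([8, 5])
top2 = tf.vectorized_map(lambda row: tf.math.top_k(row, k=2).values, batch)
```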