 
 This is an attempt to gather commonly discussed topics when doing API
 reviews. It’ll hopefully be a useful resource to both API owners and people
-proposing API changes. TF API Owners meet twice weekly to discuss
-changes. We try to get to PRs on the next meeting, but we don’t always make it
-all the way through. If your change is particularly urgent, please ping the PR
-to notify us of any urgency.
+proposing API changes. [TF API Owners](https://github.com/orgs/tensorflow/teams/api-owners)
+meet twice weekly to discuss changes. We try to get to PRs on the next meeting,
+but we don’t always make it all the way through. If your change is particularly
+urgent, please ping the PR to notify us of any urgency.
 
 ## Process
 
@@ -33,7 +33,7 @@ and in general breaking serialized GraphDefs is worse than breaking the python
 APIs.
 
 Forward compatibility is more subtle: we should avoid changing the graph
-produced by currently correct python code without a three weeks notice . This
+produced by currently correct python code without three weeks’ notice. This
 comes up most frequently when adding new ops, but also applies to non-obvious
 things such as the graph emitted by gradients or pfor.
 
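The usual way to honor this notice period is a date gate: the new graph-construction path is only taken once the window has elapsed, so consumers of serialized graphs have time to update. TensorFlow exposes this as `tf.compat.forward_compatible`; below is a minimal pure-Python sketch of the same gating logic (the dates and op names are illustrative, not from the source):

```python
import datetime

_FORWARD_COMPAT_WINDOW = datetime.timedelta(days=21)  # the three-week notice

def forward_compatible(year, month, day, today=None):
    # Sketch of the semantics of tf.compat.forward_compatible: True once
    # the given landing date is more than three weeks in the past, i.e.
    # graph consumers have had the full notice period to update.
    today = today or datetime.date.today()
    horizon = today - _FORWARD_COMPAT_WINDOW
    return horizon > datetime.date(year, month, day)

def build_graph(today=None):
    # Gate new graph output behind the window (hypothetical change
    # landed on 2019-08-01; both branches must stay correct).
    if forward_compatible(2019, 8, 1, today=today):
        return "new_op"    # improved graph, emitted after the window
    return "legacy_op"     # unchanged graph during the notice period
```

Once the window has passed everywhere, the gate and the legacy branch can be deleted.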
@@ -67,7 +67,6 @@ made public) and should refer to public API symbols by their full exported name.
 
 ### Common names
 
-
 Prefer keepdims over keep_dims. Prefer axis over dim. Data types are called
 dtype. name is a common last argument of ops but backward compatibility mandates
 that new arguments are added after the last existing argument, even if that
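A signature following these naming conventions might look like the sketch below (the reduction function itself is hypothetical and implemented with numpy so it runs standalone; real ops dispatch to kernels):

```python
import numpy as np

def reduce_mean(input_tensor, axis=None, keepdims=False, name=None):
    # Conventions from the section above: `axis` rather than `dim`,
    # `keepdims` rather than `keep_dims`, and `name` kept as the last
    # argument; any future argument would be appended before nothing,
    # i.e. after `name`, only if backward compatibility demands it.
    del name  # unused in this numpy stand-in
    return np.mean(np.asarray(input_tensor), axis=axis, keepdims=keepdims)
```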
@@ -93,6 +92,7 @@ endpoint is that the “experimental” will simply be removed. If you don’t b
 that’ll work, it should probably not be added in its current form.
 
 ### Style
+
 Generally, follow Google style.
 
 Avoid redundancy. Do not write arguments of the form `function(...,
@@ -175,14 +175,15 @@ new ops if possible, but performance, hardware compatibility, and other concerns
 often do require new ops.
 
 When adding new ops, look for:
+
  - closure under automatic differentiation (i.e. we avoid ops which are
    differentiable but not twice-differentiable, or which are technically
    differentiable but not marked as such)
  - performant kernels (it’s better not to have an op than to have an op with a
    suboptimal kernel; we need to make sure kernel experts have reviewed the
    code)
  - broadcasting (all numerical ops should broadcast using numpy rules)
- - does support for this op have to be added to pfor?
+ - does support for this op have to be added to pfor/vectorized_map?
  - dtype support (in general all numerical ops should support the common
    integer, floating point, and complex dtypes, if they all make sense; we need
    to watch out for int32 on GPUs though)
@@ -199,14 +200,3 @@ When adding new ops, look for:
    it CPU, GPU, XLA, TPU, etc) and prefer to have at least a plan for writing
    device-agnostic code
  - should the python layer for this operation support raggedtensor/sparsetensor?
-
-
-## Keras changes
-
-Keras has its own external API and maintainers, as well as an internal
-team. Keras API changes tend to be cross-cutting as core Keras classes are base
-classes to many TF API endpoints. We generally only look at keras changes after
-a keras owner (say, fchollet or karmel) has already approved it on their
-end. fchollet has guidelines for changes to the Keras API (location TBD). These
-include many generally good practices that are useful to keep in mind outside of
-Keras as well.
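The broadcasting bullet in the new-ops checklist above is directly checkable: numpy aligns shapes from the trailing dimension and stretches size-1 axes, and a new op's python tests can assert the same behavior. A small numpy illustration (the shapes are arbitrary):

```python
import numpy as np

# Numpy broadcasting rules, which new numerical ops are expected to follow:
# shapes align from the trailing dimension, and axes of size 1 stretch.
a = np.ones((3, 1))   # column
b = np.ones((1, 4))   # row
c = a + b             # broadcasts to shape (3, 4)

# Incompatible shapes (mismatched axes, neither of size 1) must raise
# rather than silently guessing an alignment.
try:
    np.ones((3, 2)) + np.ones((3, 4))
    broadcast_error = False
except ValueError:
    broadcast_error = True
```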