# tensorflow/api-owners review practices

## Overview

This is an attempt to gather commonly discussed topics when doing API
reviews. It’ll hopefully be a useful resource to both API owners and people
proposing API changes. [TF API Owners](https://github.com/orgs/tensorflow/teams/api-owners)
meet twice weekly to discuss changes. We try to get to PRs by the next meeting,
but we don’t always make it all the way through. If your change is particularly
urgent, please ping the PR to let us know.

## Process

We only look at changes which have already been approved by other reviewers. If
there are major outstanding comments, we will wait to do the API review until
those are resolved. If you have questions for the API owners, raise them
explicitly in the comments to get an answer.


## High level points

### Backward and forward compatibility

We avoid backwards-incompatible API changes. We also avoid
backwards-incompatible behavior changes, such as restricting the set of valid
inputs to a function or extending the set of valid outputs of a function. Adding
support for previously unsupported behavior is okay, as are changes to
explicitly experimental APIs (within reason). When we need to provide a new or
different behavior, we strongly prefer a new version of the API over breaking
backwards compatibility. Note that we are free to deprecate APIs; we just cannot
break code which relies on their documented behavior. We need to worry about
backward compatibility both of our Python APIs and of the serialized GraphDefs,
and in general breaking serialized GraphDefs is worse than breaking the Python
APIs.

Forward compatibility is more subtle: we should avoid changing the graph
produced by currently correct Python code without three weeks’ notice. This
comes up most frequently when adding new ops, but it also applies to non-obvious
things such as the graph emitted by gradients or pfor.

Including the name “experimental” in an API endpoint allows you to break
backwards compatibility in the future, as noted above. However, we still prefer
to mark the experimental API as deprecated for one release before removing it in
the subsequent release. Please do not use the experimental namespace as an
escape hatch for bad code or for functionality you
don’t-really-want-but-need-for-now; experimental APIs should be APIs that we
intend to make standard in the near future, but that have some lingering
questions to resolve first.


### Docstrings

TF APIs should have comprehensive documentation in the form of docstrings. If at
all possible, these docstrings should have runnable examples, and these examples
should form a doctest so they stay correct. The examples should demonstrate an
end-to-end user workflow, such that it’s clear how to generate the necessary
inputs for the API and what to do with the outputs. The docstring should be
understandable by someone who is not familiar with TF. See the [guide to writing
TF docstrings](https://www.tensorflow.org/community/contribute/docs_ref) for
more information.
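
As an illustration of the doctest requirement, here is a minimal sketch using a hypothetical `scale` function (not a real TF API); the `>>>` example in the docstring is a doctest, so it can be verified mechanically:

```python
import doctest


def scale(values, factor):
    """Multiplies each element of `values` by `factor`.

    Hypothetical API used only to illustrate the docstring style. The example
    below is a doctest, so it is checked for correctness automatically.

    Example:

    >>> scale([1.0, 2.0, 3.0], factor=2.0)
    [2.0, 4.0, 6.0]

    Args:
      values: A list of numbers to scale.
      factor: The number to multiply each element by.

    Returns:
      A list with each element of `values` multiplied by `factor`.
    """
    return [v * factor for v in values]


# Running the module through doctest keeps the example honest: if the code
# and the documented output ever drift apart, this reports a failure.
failures = doctest.testmod().failed
```
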

Our documentation generator for classes only sees methods, so prefer defining
members as properties instead of assigning them in `__init__`.
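
For example (hypothetical class, assuming the method-only view of the doc generator described above):

```python
class BucketSpec:
    """Hypothetical class illustrating properties over raw attributes."""

    def __init__(self, boundaries):
        self._boundaries = list(boundaries)

    @property
    def boundaries(self):
        """The bucket boundaries, as a tuple.

        Because this is a property (a method under the hood), its docstring is
        visible to the documentation generator; a bare `self.boundaries`
        attribute assigned in `__init__` would not be.
        """
        return tuple(self._boundaries)
```
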

Docstrings should only refer to other public TF API symbols (i.e., do not refer
to other symbols defined in the same file as a function which is just now being
made public), and should refer to public API symbols by their full exported
name.

### Common names

Prefer `keepdims` over `keep_dims`. Prefer `axis` over `dim`. Data types are
called `dtype`. `name` is a common last argument of ops, but backward
compatibility mandates that new arguments are added after the last existing
argument, even if that results in `name` not being the last argument.
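
A sketch of that ordering, with an invented `reduce_sum`-style signature; `fast_path` stands in for a newly added argument and is hypothetical:

```python
# `name` was the last argument when the op shipped; the newly added
# `fast_path` (hypothetical) must go after it so that existing positional
# calls keep working.
def reduce_sum(input_tensor, axis=None, keepdims=False, name=None,
               fast_path=False):
    del name, fast_path  # Unused in this sketch.
    if axis is not None:
        raise NotImplementedError("This sketch only reduces all elements.")
    total = sum(input_tensor)
    return [total] if keepdims else total
```
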

We generally prefer spelling things out over using abbreviations, except when
abbreviations are more standard than the spelled-out form (e.g., don’t spell out
linalg or svd). When in doubt, we ask domain experts or use web search to see
which spelling is most common.

If possible we prefer to name things in a similar way to numpy (e.g., we would
not pick einsum as a name, but numpy and others before it have, and that
precedent is very strong).

We prefer experimental namespaces (e.g., `tf.data.experimental.foobar`) over
experimental-prefixed names (e.g., `tf.data.experimental_foobar`), except when
adding an experimental class method or an experimental argument. Experimental
endpoints should be deprecated in a minor release before they can be removed in
the next. We would like new experimental symbols to be things which will
eventually end up in core TF, as opposed to things we expect will be phased out
with no clear replacement. The best expectation to have for an experimental
endpoint is that the “experimental” will simply be removed. If you don’t believe
that will work, the API should probably not be added in its current form.

### Style

Generally, follow Google style.

Avoid redundancy. Do not write arguments of the form `function(...,
enable_feature=False, feature_config=None)` if you can also write `function(...,
feature_config=None)`, where implicitly `enable_feature = feature_config is not
None`.
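
A sketch of the preferred form (names hypothetical): the flag is derived from whether a config was passed, rather than being a separate argument:

```python
# Hypothetical function: the presence of `feature_config` itself signals
# whether the feature is enabled, so no separate `enable_feature` flag is
# needed in the signature.
def process(data, feature_config=None):
    enable_feature = feature_config is not None  # Implicit, not an argument.
    if enable_feature:
        return [x * feature_config.get("scale", 1) for x in data]
    return list(data)
```
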

Try to embed well with the ambient language. Think about how your API interacts
with language idioms (e.g., in Python: can it be hashed, i.e., used as a dict
key? Is it iterable? Is it a mapping? Can it be compared for equality? Can it
be ordered?). Think about how your API interacts with other pieces of the
Python ecosystem as well: is there an analogue in NumPy or PyTorch that we
should consider aligning with?
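
For instance, a hypothetical value type can answer several of those questions by implementing the standard dunder methods:

```python
# Hypothetical value type showing the Python-idiom questions from the text:
# equality comparison, hashing (usable as a dict key), and iteration.
class Interval:
    def __init__(self, low, high):
        self.low, self.high = low, high

    def __eq__(self, other):
        return (isinstance(other, Interval)
                and (self.low, self.high) == (other.low, other.high))

    def __hash__(self):
        # Required for dict-key use; must agree with __eq__.
        return hash((self.low, self.high))

    def __iter__(self):
        # Allows tuple-style unpacking: low, high = interval.
        yield self.low
        yield self.high
```
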

Use language-native constructs wherever you can. In Python, a tuple should be a
tuple. The bar for custom configuration objects is relatively high; a dict or
namedtuple goes a long way.
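
For example, a hypothetical checkpointing configuration as a namedtuple rather than a bespoke config class:

```python
from collections import namedtuple

# A lightweight configuration object (field names are hypothetical); the
# `defaults` argument covers the common case without a custom class.
CheckpointConfig = namedtuple(
    "CheckpointConfig", ["directory", "max_to_keep"], defaults=[5])

config = CheckpointConfig(directory="/tmp/ckpt")
```

Being a tuple, it also unpacks and compares the way Python users expect.
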

In particular, do not expose protobufs directly as part of an API. You can use
protobufs for serialization or to encode network traffic, but protobufs should
always be an implementation detail, never visible on the API surface. Use
language-native constructs (dicts or classes for Python, structs for C/C++) if
you need configuration objects.

Avoid global (or any non-local) state as much as possible (this includes Python
`with` scopes). If you need global context, consider whether it can be made
thread-local. The TF API is supposed to be thread-safe. Avoid stateful
operation (mutability) if you can. Both global state and mutability make it
hard to reason about code, and make composability harder to achieve.
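
A sketch of the thread-local alternative to a process-wide global (names hypothetical):

```python
import threading

# Hypothetical context: each thread sees its own `device` value, so
# concurrent threads cannot interfere with each other's settings.
_context = threading.local()


def set_device(device):
    _context.device = device


def get_device():
    # Threads that never called set_device fall back to the default.
    return getattr(_context, "device", "cpu")
```
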

We prefer strings (`"auto"`, `"never"`, etc.) over enums (`tf.namespace.AUTO`,
etc.). Strings are easier to type, and they force us to document all possible
values and their semantics in the docstrings of every place which accepts the
string, as opposed to only in the enum definition, which is a little
friendlier.
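
A sketch of the string-argument pattern, with the allowed values validated and documented where the argument is accepted (function and values are hypothetical):

```python
_VALID_MODES = ("auto", "always", "never")


def configure(mode="auto"):
    """Configures the feature (hypothetical).

    Args:
      mode: One of "auto" (pick based on hardware), "always", or "never".
    """
    if mode not in _VALID_MODES:
        # Fail loudly on typos, which enums would have caught statically.
        raise ValueError(f"mode must be one of {_VALID_MODES}, got {mode!r}")
    return mode
```
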

### Orthogonality and integration with the existing APIs

Is the new API implementable in terms of existing APIs? If so, we might want to
consider pointing users to the existing APIs instead. Does the new API add
enough value over a combination of existing APIs? Does the API solve only a
specific problem? (That’s usually a sign that combinations of existing APIs
would be preferred.)

If not, are existing APIs implementable in terms of the new API? If this is
simple, we might want to steer users towards the new API and away from the old
one (possibly deprecating the old APIs along with introducing the new API).

If neither is the case, it might be that there is a more general API which
makes both the existing API and the new API easy to express. We try to keep
global consistency of our API in mind when reviewing new changes.

How will this API work together with others? Does it do something slightly
differently than others? Does it expect inputs which match what other parts of
TensorFlow produce? Does its output match what other parts of TensorFlow can
consume?

Does it do things the same way other similar pieces of TensorFlow do? E.g., if
the common pattern to achieve a behavior is an extra argument, don’t use a
function decorator to achieve the same thing in a different area of the API.

Two wrongs don’t make a right. That is, if a bad API already exists in TF, that
does not give new APIs license to be bad in the same way. Improvement must be
balanced with consistency, however, and sometimes it’s okay to carry small
imperfections into new APIs for the sake of consistency with old APIs.

### Does it belong in TF at all?

As TF evolves there’s a tendency to put everything inside of it, with costs
compounding over the long term. If there is a reasonable home for a new API
outside core TF (say in addons, io, TFP, or other projects entirely), that can
be strongly preferable. If new code can be released as independent libraries,
it should be. This is especially true for APIs that are actively evolving; core
TF imposes many restrictions, so it’s far better to trial new APIs outside of
the core library.

## Adding new ops

Adding new ops to TF should be done with care. We generally prefer not to add
new ops if possible, but performance, hardware compatibility, and other
concerns often do require new ops.

When adding new ops, look for:

 - closure under automatic differentiation (i.e., we avoid ops which are
   differentiable but not twice-differentiable, or which are technically
   differentiable but not marked as such)
 - performant kernels (it’s better not to have an op than to have an op with a
   suboptimal kernel; we need to make sure kernel experts have reviewed the
   code)
 - broadcasting (all numerical ops should broadcast using numpy rules)
 - does support for this op have to be added to `pfor`/`vectorized_map`?
 - dtype support (in general all numerical ops should support the common
   integer, floating point, and complex dtypes, if they all make sense; we need
   to watch out for int32 on GPUs, though)
 - device support (CUDA kernels should be implemented if possible, and
   similarly a TF/XLA bridge entry should be added if it makes sense)
 - attributes versus inputs (anything which can be an input to an operation
   should be an input, and attributes should only be used to parametrize the
   operation in ways that affect the output dtypes or sometimes shapes)
 - state management (is the op stateful? Can it instead be made stateless and
   rely on optimizations like memory reuse for performance? Can it be made to
   keep its state using one of the existing mechanisms like variables? If not,
   its state should be encapsulated using resource handles if at all possible)
 - we generally don’t like ops which are supported only on a single device (be
   it CPU, GPU, XLA, TPU, etc.) and prefer to have at least a plan for writing
   device-agnostic code
 - should the Python layer for this operation support
   `RaggedTensor`/`SparseTensor`?
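
The broadcasting bullet above refers to numpy’s rule: align the two shapes from the right, and each pair of dimensions must either be equal or contain a 1. A pure-Python sketch of the resulting shape computation:

```python
# Sketch of numpy-style shape broadcasting, for illustration only.
def broadcast_shape(shape_a, shape_b):
    ndim = max(len(shape_a), len(shape_b))
    # Left-pad the shorter shape with 1s so the shapes align from the right.
    padded_a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    padded_b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    result = []
    for a, b in zip(padded_a, padded_b):
        if a != b and a != 1 and b != 1:
            raise ValueError(
                f"Shapes {shape_a} and {shape_b} are not broadcastable")
        # A size-1 dimension stretches to match the other dimension.
        result.append(max(a, b))
    return tuple(result)
```
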