Tensor API Refactoring #8

Open
karllessard wants to merge 446 commits into tensor-base from master

Conversation

@karllessard
Owner

No description provided.

@Craigacp Craigacp left a comment

I've gone through it with some initial comments. Do you have any of this refactoring implemented? I think it might be useful to look at some examples of the changes it implies in user code.

accessing a tensor instance. Actually, this information is carried by the [`DataType` enum class](https://github.com/karllessard/tensorflow/blob/master/tensorflow/java/src/main/java/org/tensorflow/DataType.java),
which converts back and forth between a type alias (such as `Integer`) and a TF data type (such as `INT32`).

Again, this conversion can sometimes be painful and could be avoided by changing the `DataType` enum class to a

I'm confused as to how this is better than an enum. The enum can carry all this information already, and is just as accessible from JNI as an anonymous class is if not more. Do you have an example which shows the benefit of this approach over the current one? The placeholder factory could be modified to ask the enum what the ordinal is (in the way it would need to be updated to call DataType.ordinal).

Owner Author

Ok, maybe I mixed two different concepts here.

The main idea was not to get rid of the enum but to get rid of the conversion between a Java type and an enum, using this method, like here.

Now why use a static class instead of an enum? Well, since we already have a specific class per supported tensor type (in this RFC, I mean), it sounds more intuitive to me to store all information related to that type inside this class, so the user does not have to deduce the corresponding value in an enum (e.g. the type info of TInt32 is TInt32.DTYPE, instead of the type of TInt32 is DataType.INT32).
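For illustration, a minimal toy sketch of the contrast (the DataType enum and TInt32 class below are simplified stand-ins written for this example, not the actual TF Java types):

enum DataType { INT32, FLOAT, BOOL }

final class TInt32 {
  // the type information lives on the tensor-type class itself
  static final DataType DTYPE = DataType.INT32;
}

class DTypeLookupDemo {
  public static void main(String[] args) {
    DataType proposed = TInt32.DTYPE;    // proposed: no Java-type-to-enum mapping needed
    DataType current = DataType.INT32;   // current: the caller must know that Integer maps to INT32
    System.out.println(proposed == current);  // true
  }
}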


Ok. That approach precludes switching over the types or other things which need to know about all the Tensor types in the C API. Which might be fine, as you point out later it's much more extensible, but I'm not sure if it's worth exposing types in Java that aren't understood by the C API underneath.

Owner Author

but I'm not sure if it's worth exposing types in Java that aren't understood by the C API underneath.

That is not the plan either; all tensor classes such as TInt32 would match a type supported by the C API. Or maybe you mean the "custom" types I was proposing later? Yes, those need more rethinking for sure.


Yeah, I mean the custom types proposed at the end of the document.


However there are things like fp16 and bf16 which aren't easily representable in Java at the moment, but are probably useful.

return allocate(DTYPE, Shape.make()).set(value);
}

public static TInt32 vector(Integer... values) {

The boxing here makes this very inefficient, and also it's not possible to pass in an int[]. We'd need a signature that is int... in addition to these methods.

Owner Author

Can you please explain a bit more what makes the boxed varargs not performant, other than the regular required conversion? I'm lacking a bit of knowledge in this field and I think you know a lot more about it.

I'm just trying to avoid supporting primitive arrays in the NdArray API, like I used to, because it greatly complicates the hierarchy of objects required if we want to keep control over which tensor supports which primitive array (i.e. you can pass an int[] to a TInt32 but not to a TBool).


The varargs construction allocates an array and writes all the values into it. So for an f(Integer... inputs) called as obj.f(1000,2000,3000,4000) it first creates an Integer object which contains the field storing 1000, then an Integer representing 2000, etc. Then it stores a reference to each of those Integers into the created Integer[4]. This means that the call allocated 5 objects on the heap. For an entry point g(int... inputs) called as obj.g(1000,2000,3000,4000) then it allocates a single int[4] and writes all the values into that array. So that's only a single object allocation. This isn't too important for short arrays like in your example, but it's really important for longer arrays.

The integers used are important too, as the JDK has a cache of the integers in the range [-127, 127] so those integers shouldn't trigger allocation. But values outside those ranges (and Floats, Doubles etc) will allocate a fresh object.

The code to convert an array of primitives into an array of boxed primitives is a single line, IntStream.of(intArr).boxed().toArray(Integer[]::new), and there isn't a FloatStream or BooleanStream to do the equivalent transformation for those types quickly.

I think it's important to be able to pass in a primitive array, as that's what people will be using for efficiency reasons, and if we require the intermediate boxing then it's going to generate a lot of garbage and object allocations.

Either that or we get the NDArray set(T value, int... indices) method to be extremely fast and remove all ways of creating things with arrays or varargs, so the natural way of using it is generating a blank one and then filling it with the feature extractor. If it's backed by a ByteBuffer then it's probably a quick method, but if it's writing directly into the TF Tensor then we pay the JNI cost every single time.

The split between primitives and objects is painful in current Java, as you need to use the primitives to get any kind of speed, but the lack of generics over primitives leads to ugly APIs with edges. Once specialised generics lands we should be able to get around this, but that's still a while away.
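To make the allocation difference concrete, here is a small self-contained sketch (f and g are placeholder names for this example, not proposed API methods):

import java.util.stream.IntStream;

class VarargsDemo {
  static int f(Integer... inputs) { return inputs.length; }  // boxed: one Integer object per value, plus the Integer[]
  static int g(int... inputs) { return inputs.length; }      // primitive: a single int[] allocation

  public static void main(String[] args) {
    f(1000, 2000, 3000, 4000);  // allocates four Integer objects plus an Integer[4]
    g(1000, 2000, 3000, 4000);  // allocates only an int[4]

    // Bridging an existing primitive array to the boxed signature needs an explicit copy:
    int[] intArr = {1000, 2000, 3000, 4000};
    f(IntStream.of(intArr).boxed().toArray(Integer[]::new));
  }
}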

Owner Author

@karllessard karllessard Sep 6, 2019

Thanks a lot, that was very helpful. And yes, your last comment says it all:

The split between primitives and objects is painful in current Java, as you need to use the primitives to get any kind of speed, but the lack of generics over primitives leads to ugly APIs with edges.

I guess we can start with an easy, clean API principally relying on boxed types, while being careful to avoid allocation overhead like you stated, and then add the primitive arrays if the desired performance is not reached. But like you also said:

 This isn't too important for short arrays like in your example, but it's really important for longer arrays.

which is probably most of the cases when initializing an NdArray using one of those methods (vector(), set(), ...). For bulk initialization of a large set of data, NdArray.write(DataBuffer) will probably be a better pick.

If it's backed by a ByteBuffer then it's probably a quick method, but if it's writing directly into the TF Tensor then we pay the JNI cost every single time.

There should not be any difference in performance between writing to an NdArray or a Tensor, as both are backed by a ByteBuffer (currently JNI is only involved to return a reference to the ByteBuffer of the tensor, and then everything can be done in Java space).
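As a rough illustration of what the Java-side writes look like once a direct ByteBuffer is in hand (here simply allocated locally, rather than mapped onto a real tensor by JNI):

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

class DirectBufferDemo {
  public static void main(String[] args) {
    ByteBuffer raw = ByteBuffer.allocateDirect(4 * Integer.BYTES);  // stand-in for the tensor's mapped buffer
    IntBuffer ints = raw.asIntBuffer();
    ints.put(new int[] { 100, 101, 102, 103 });  // pure Java writes, no JNI call per element
  }
}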


When does the ByteBuffer get handed to the TF C API?

Owner Author

Actually, when the tensor is allocated by the native layer, its buffer is mapped to a ByteBuffer (see here) and returned to the Java runtime.

So it is just by design that a user cannot write directly to the tensor memory right now, something we'll change with the new Tensor NIO interface.

BTW, @EronWright, who wrote that code a few years ago, has joined the SIG recently to offer his help on this.


Ah, cool, I didn't realize that. Does that interfere with any of the automatic memory management it's doing under the hood (e.g. CUDA unified memory, or whatever magic happens with TPUs)?

Owner Author

My blind guess is that a tensor in GPU memory is always copied to host memory before being mapped to a ByteBuffer, but Eron probably knows better about it.

If that's the case, then yes there would be a performance impact caused by the copy so we should make sure to map that memory only when required (which is the case right now... I think).

@EronWright EronWright Sep 7, 2019

I wouldn't be surprised if a direct ByteBuffer could manipulate a GPU's mapped memory. Not an expert on that.

About the whole concept of an NDArray that is backed by a direct byte buffer, we might find inspiration in the V8TypedArray in the Java bindings for V8 (article).

Regarding the copying of buffer data that TF Java performs today, I believe the rationale is that TF assumes that it owns the memory. There's some reference counting there but I haven't looked at it recently. Meanwhile, I am very keen on introducing Netty-style reference counting to TF Java, and intend to write a proposal to that effect.


It has been reported that in some cases, supporting compile-time type safety can become a nightmare when
testing a network using different data types. Actually, this can require a lot of search & replace on each
experiment (e.g. `FLOAT` vs `DOUBLE`).

If we expanded the set of input and output methods (i.e. so Tensor had methods that accepted and returned all Java types and did the expected conversions) we could use a generic form of this, store the required enum or DataType implementation in a field and then make all tensor construction either accept that field or infer the type from other tensors. That would make it a single line change to change data types.

Owner Author

Yeah, I'm wondering if the Tensor should accept all Java types and handle the conversion... right now, it does expose all input/output methods but fails at runtime if the type is not compatible with the tensor data type, like here.

There might be other issues as well; think of "UInt32"... how do you pass such a value in a 32-bit Integer? It sounds like we might need to do some conversion in the end...

@Craigacp Craigacp Sep 6, 2019

The problem with accepting all types is when we have value types and can more easily express uint32 it'll be irritating to deal with it. But the type system will be quite different then, and we're unlikely to upgrade to that Java version for a while after release (plus it's not ready yet).

Thinking more about this, maybe there should be a static factory on Tensor that accepts a DataType and an NDArray or other mass type and performs the appropriate conversions. That way the only thing that needs to change is the data type which can be a variable stored in a single place. That's pretty close to what exists in the current version, but allowing it to make type conversions rather than just checking for type equality.
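A rough sketch of that pattern, with toy stand-in types (DataType, ToyTensor and Experiment are made up for this example; a real factory would write into the native tensor buffer rather than a plain ByteBuffer):

import java.nio.ByteBuffer;

enum DataType { FLOAT, DOUBLE }

class ToyTensor {
  final DataType dtype;
  final ByteBuffer buffer;

  private ToyTensor(DataType dtype, ByteBuffer buffer) { this.dtype = dtype; this.buffer = buffer; }

  // The factory converts the caller's values into whatever the requested dtype needs.
  static ToyTensor of(DataType dtype, double[] values) {
    ByteBuffer buf = ByteBuffer.allocate(values.length * (dtype == DataType.FLOAT ? 4 : 8));
    for (double v : values) {
      if (dtype == DataType.FLOAT) buf.putFloat((float) v); else buf.putDouble(v);
    }
    buf.flip();
    return new ToyTensor(dtype, buf);
  }
}

class Experiment {
  // Switching the whole experiment from FLOAT to DOUBLE is a one-line change:
  static final DataType DTYPE = DataType.FLOAT;

  public static void main(String[] args) {
    ToyTensor t = ToyTensor.of(DTYPE, new double[] { 1.0, 2.0, 3.0 });
    System.out.println(t.dtype + " tensor of " + t.buffer.remaining() + " bytes");
  }
}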

Owner Author

I still don’t have a solution for supporting those kinds of DataTypes... again, letting the user pick the representation of their choice for initializing the data, instead of forcing them into a specific one, might help a bit (e.g. a user could decide to use ints, floats, doubles or even BigInteger/BigDecimal to feed a TFloat tensor, instead of being forced to use floats only). We’ll then need a converter to map the provided values to the physical buffer...

That’s something that wasn’t foreseen though and we should work on it if we all think that’s the right approach.


Note also that an advantage of using `NdArray` is that the type and the shape of the tensor are explicit and can be easily
retrieved by calling `t.dataType()` and `t.shape()` respectively, while with standard Java arrays those need
to be inferred and discovered by the TF Java client using costly reflections and navigating through the array.

It's not a reflective operation that takes time, it's just the physical traversal of the Java multidimensional array, as they could be stored anywhere on the heap, and a 4d array could be made up of hundreds of thousands of arrays.
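A quick back-of-the-envelope illustration of that object count (dimensions picked arbitrarily for the example):

class ArrayCountDemo {
  public static void main(String[] args) {
    int d0 = 32, d1 = 32, d2 = 32, d3 = 32;
    int[][][][] arr = new int[d0][d1][d2][d3];
    arr[5][6][7][8] = 42;  // every indexing step dereferences a separate array object on the heap
    long arrayObjects = 1L + d0 + (long) d0 * d1 + (long) d0 * d1 * d2;
    System.out.println(arrayObjects + " array objects holding " + ((long) d0 * d1 * d2 * d3) + " ints");
    // prints "33825 array objects holding 1048576 ints" -- versus a single flat buffer for an NdArray
  }
}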

Owner Author

ok I'll change the text to avoid the confusion.

Of course, standard Java arrays give a better view of where a given row is located, by embedding it
through a series of brackets. The current proposal focuses on simplicity and performance more than readability.
*Please, feel free to suggest any better solution.*

I'm not too fond of this implicit row wise construction as I got completely lost trying to figure out where I was in the Tensor at any given call, plus it means the TensorBuilder is carrying around implicit state saying where it wants to write to next.

At the expense of readability I think it would be preferable to use something like

TInt32.ofShape(3,2,2)
.set(0,0,new NDArray<>(1,2))
.set(0,1,new NDArray<>(3,4))
.set(1,0,new NDArray<>(5,6))
.set(1,1,new NDArray<>(7,8))
.set(2,0,new NDArray<>(9,10))
.set(2,1,new NDArray<>(11,12));

Replacing new NDArray<>() with whatever the appropriate constructor is. To make the varargs work we might have to permute the order of arguments (i.e. new NDArray<>(), int...). It could also be a TensorBuilder and require a done() method to construct too, I've no strong feelings either way.

Owner Author

@karllessard karllessard Sep 6, 2019

hehe, yes, me neither to be honest, I just dropped the basic idea here to let others suggest something better :)

The example you gave is pretty much already supported by the NdArray API, except that the coordinates are provided after the value (because it's a vararg that can conflict with an int scalar value). For example, you can set a scalar value like this:

TInt32 t  = ...;
t.set(100, 0, 0).set(200, 1, 0);

We could add something similar for vectors, which would accept a vector, something like

TInt32.ofShape(3, 2, 2)
    .set(vector(1, 2), 0, 0)
    .set(vector(3, 4), 0, 1)
    .set(vector(5, 6), 1, 0)
    ...

where vector instantiates a rank-1 array... or simply a primitive or auto-boxed array. What do you think? I understand that specifying the coordinates for each vector is more explicit but more verbose at the same time. I wouldn't mind though, personally.


Yeah, I think that's better. I'm in favour of explicitness when constructing values.

@karllessard
Owner Author

I've gone through it with some initial comments. Do you have any of this refactoring implemented? I think it might be useful to look at some examples of the changes it implies in user code.

I partially do, in the tensor-refactoring branch of that repository, which is incomplete at this moment. I'll try to do a full conversion of a whole example instead and let you know.

@EronWright

It strikes me that Tensor<T> behaves somewhat like a Scala type class, when we think of it as representing behavior (e.g. math ops) applied to NDArray. Might be worth adopting such idea explicitly, and/or looking at how "type classes" are typically modeled in Java. (random thought)

@karllessard
Owner Author

karllessard commented Sep 7, 2019

It strikes me that Tensor<T> behaves somewhat like a Scala type class, when we think of it as representing behavior (e.g. math ops) applied to NDArray. Might be worth adopting such idea explicitly, and/or looking at how "type classes" are typically modeled in Java. (random thought)

In the current proposal, a Tensor is simply an NdArray that has been allocated by TensorFlow, so they are quite interchangeable. Just to illustrate it, NdArray could have been renamed to Tensor while Tensor could have been renamed TFTensor.

@karllessard karllessard closed this Sep 7, 2019
@karllessard karllessard reopened this Sep 7, 2019
@karllessard
Owner Author

karllessard commented Sep 7, 2019

Thinking of it, with that new proposal, if we opt for concrete classes per type (TInt32, TString, ...), we could probably just rename NdArray to Tensor (a name that I prefer) and simply drop the current Tensor<> class.

@karllessard
Owner Author

@Craigacp : here is a little sample of what the migration would be from the current Tensor API to the new one. Note that this code does not compile yet and also assumes that we will find a way to avoid those try-with-resources everywhere, like we are discussing.

It's not the best example though as there is not much happening here but well, at least there is one.

@Craigacp

Craigacp commented Sep 7, 2019

WRT Scala type classes, there was some discussion of typing tensors in Java, Scala and Kotlin at the JVMLS after Erik Meijer's talk, and I think the consensus was that the Java type system isn't powerful enough. That said, my type theory isn't too strong, and John Rose was talking about something like that on Twitter during it (https://twitter.com/JohnRose00/status/1156308406585532416?s=19).

We're looking at if the Panama memory access API plus var handles will be enough to express tensor slicing, but that's not too useful for the SIG as it's targeting Java 8.

@karllessard
Owner Author

@jxtps, do you have some opinion/suggestions on this topic?

I think it’s pretty close to what you were discussing before on the SIG mailing list.

@jxtps

jxtps commented Sep 12, 2019

Hey, sorry I have been neck-deep in some other projects, will try to take a look later this week.

@EronWright

I think that @Craigacp has a good point that the Java type system isn't powerful enough (and in some cases may even be counter-productive).

To @karllessard's point about NDArray vs Tensor, I prefer to keep the tensor (view) vs buffer (storage) separation. 2 cents.

May I suggest you take a look at tensorflow-scala, which has a fairly well-developed set of types. At least for comparison sake.

@Craigacp

Craigacp commented Sep 12, 2019

@EronWright, as parts of OpenJDK Valhalla start to land in Java then the barrier between primitive types and references wrt things like generics will go away, so it would be nice if we can avoid surfacing too many of these issues until the JVM is ready to help us. I think we'll need to for performance reasons though, which is disappointing.

One of the issues I've had when coding in Scala is that it's difficult to know when it's going to box your primitives, and if it does that then your performance wanders off a cliff. As the unification between primitives and references only exists in the Scala language and not in the class files I had to resort to running javap on the class files emitted from scalac to figure out what was going to be boxed and what was a primitive.

@jxtps

jxtps commented Sep 20, 2019

Nice work on the RFCs @karllessard et al! I have a couple of somewhat open-ended questions / comments (sorry I'm a little all over the place):

  1. The creation of "small" tensors could use a fluency boost. Tensors.ofDouble(new long[]{2,2}) could instead be Tensors.ofDouble(2, 2), i.e. using long... dims, right? (This was from https://github.com/karllessard/community/blob/master/sigs/jvm/rfcs/20190606-java-tensor-io.md )
    1. Since everything needs to start from a Shape (or long[]), maybe add convenience methods on there to easily create constants, variables & tensors?
    2. Or is that too many ways to do the same thing? There already seems to be quite a few...
  2. Feels like we should enumerate a couple of use-cases and make sure the APIs work smoothly for them.
    1. Do people really construct "small-to-medium" sized tensors as constants in their code? I.e. is TInt32.ofShape(3, 2, 2).row(... likely to see significant use?
    2. So far I've been working mostly with images, and I get them into tensorflow by creating a large, flat float[] that I have carefully populated, then attach a size and create a tensor. That works reasonably well, but obviously has some memcopy overhead.
    3. Could obviously switch to using the NdArray write primitives.
      1. I'd be a little wary of calling a set(float value, long... indices) type method since then you'd have to recompute the flat array index on every call.
      2. The Iterator<> is probably fine, but then there's no "indexing help" (not super-needed, so this may be fine - probably better to start with a slim interface and then we can add conveniences as we discover the real need?)
  3. I'm not 100% clear on what the best way is to handle the various <Type> tags.
    1. For starters, I still doubt the usefulness of making the dtype part of the type signature.
    2. If we're going to keep it, your proposal sounds broadly feasible, though there's something that nags at me but I can't quite put my finger on it. I think I'd prefer to have a class DataType<T_ForNdArray, T_ForTfInternals> but I don't think that would really work since the corresponding Tensor type would still need to independently specify the T_ForNdArray and we'd want to avoid that duplication.
      1. Note that users would be able to create new DataTypes if there's a non-private DataType.create method, though that's probably not a big deal in reality.
  4. Avoid boxing like the plague. Use int instead of Integer wherever possible - there's boxing overhead, there are some java <-> scala interop issues (weird corner cases), and you never actually want an Integer, you always want an int.
    1. If we're going to create all the separate primitive-type-specific classes, then there's really no need to use Integer et al basically anywhere.

@jxtps

jxtps commented Sep 21, 2019

  1. With tf.math.add(c, c) returning a specific tensor type (TInt32), you'll have to overload the implementation of tf.math.add at least once per tensor type.

    1. That's going to add up fast - if there are N ops and M data types, then you'll end up with O(N*M) implementations. Sure, that's in generated code, but still - you're putting a big burden on the framework code / devs for marginal user benefit, and e.g. IntelliJ auto-complete will offer dropdowns that repeat the same function M times, cluttering it up and greatly reducing its value.
  2. With public abstract class Tensor<T> implements NdArray<T> { ... } and public final class TInt32 extends Tensor<Integer> implements Numeric { ... }, how will TInt32 offer direct IntNdArray functionality?

  3. What is the full mapping between T_ForNdArray and dtype?

The Tensor type-safety features you're looking to create could maybe work in Scala where there are typedefs, primitive specialization of generic classes, and multiple inheritance via traits.

In Java, which is a great language in so many ways, you will be banging your head against the type system at every turn.

Framework developer time and effort is a very limited resource. Tensor dtype safety is not that big of a deal in practice. Let's not create an enormous type edifice that we will be struggling to design and maintain when doing:

public class Tensor {
   final public DataType dtype;
   ... 
   public IntNdArray asInts() {...} // Throws exception if dtype isn't compatible
   ...
}

would get the job done in a fraction of the time/effort and be easy to use.

The only substantive loss I see in shedding the <T> from Tensor is that some operations change the data type, e.g. myFloats > 0.5f produces a boolean tensor. Similarly tf.image.decode_png produces a uint8 tensor. The first time around those can come as surprises.

A potential workaround for that is to default to auto-casting and issuing a warning. The user can then choose to either turn off auto-cast (making it an error that throws an exception), or accept auto-cast (silencing the warning).

@karllessard
Owner Author

Thanks @jxtps, there is a lot to reply to right now but let me start with your latest post:

With tf.math.add(c, c) returning a specific tensor type (TInt32), you'll have to overload the implementation of tf.math.add at least once per tensor type.

There is no need to overload the operation per tensor type. The type of the operation (and, in the case of math.add, the type of its output) is still inferred from the input operands. So the signature remains Add<T> add(Operand<T> x, Operand<T> y); the only thing that changes is that T, instead of being Integer, will be TInt32, which you can access in eager mode via something like Operand.data(), i.e.

  • tf.math.add(x, y) returns Add<T>, which implements Operand<T>.
  • tf.math.add(x, y).data() returns a TInt32
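To illustrate, a toy model of that single generic signature (Operand, Add, TInt32 and Ops below are simplified placeholders written for this example, not the generated TF Java classes):

interface Operand<T> {
  T data();  // in eager mode, gives access to the underlying tensor data
}

final class TInt32 { /* stands in for the proposed tensor type */ }

final class Add<T> implements Operand<T> {
  private final T result;
  Add(T result) { this.result = result; }
  public T data() { return result; }
}

class Ops {
  // One generic signature covers all tensor types: given two Operand<TInt32>, T is inferred
  // as TInt32 and add(x, y).data() returns a TInt32 -- no per-type overload of add() is needed.
  static <T> Add<T> add(Operand<T> x, Operand<T> y) {
    return new Add<>(x.data());  // toy body: a real implementation would run the native op
  }
}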

With public abstract class Tensor<T> implements NdArray<T> { ... } and public final class TInt32 extends Tensor<Integer> implements Numeric { ... }, how will TInt32 offer direct IntNdArray functionality?

That's because in this new RFC I dropped the primitive-based NdArray interfaces, which turned out to be a bad decision. I restored them in a new prototype I'm working on right now to prevent the problem you reported.

In this new prototype, tensor types like TInt32 do not extend Tensor anymore. Instead, Tensor<T> remains a reference to a native TF tensor, like now, but T is a tensor type instead of a Java type. And it is the tensor type T that extends from NdArray, which you can access by calling Tensor.data(). For example, something like

Tensor<TInt32> t = TInt32.tensorOfShape(2, 2);  // TInt32 extends from IntNdArray
t.data().at(0).write(new int[] { 100, 101 })    // possible because IntNdArray accepts writing an int[]
        .at(1).write(new int[] { 102, 103 });

What is the full mapping between T_ForNdArray and dtype?

In this new prototype, it is up to the dtype to decide how it maps tensor data to an NdArray. In the case of TInt32, it is pretty simple: it can work directly with integers, because both are 32 bits. But, for example, TFloat16 could still implement the FloatNdArray interface while working on a ByteNdArray, mapping each float value to only 16 bits. I added support for this as well in the code (to be discussed, of course).

The Tensor type-safety features you're looking to create could maybe work in Scala where there are typedefs, primitive specialization of generic classes, and multiple inheritance via traits.

If the trend is to remove the type-safety feature completely, we sure can, but we need to ask more broadly first because I think some users enjoy this aspect of TF Java. Nonetheless, the snippet you provided as an alternative is pretty close to what my new prototype is exposing, except that by keeping type-safety present, only one method is required (data()), which already returns the right type of NdArray.

To be continued...

@karllessard
Owner Author

The creation of "small" tensors could use a fluency boost. Tensors.ofDouble(new long[]{2,2}) could instead be Tensors.ofDouble(2, 2), i.e. using long... dims, right?

I tried to simplify it too by moving the factory methods to the datatype class instead of Tensors (so you don't need to specify it). In the RFC, it is TInt32.ofShape(2, 2), but now I would change it to TInt32.tensorOfShape(2, 2), since the return value would be a Tensor and not a TInt32.

Also, we could have some shortcuts for rank-0 and rank-1 tensors, such as TInt32.scalar(1) and TInt32.vector(1, 2, 3, 4).

Or is that too many ways to do the same thing? There already seems to be quite a few...

That is a very valid point; there is also the Constant class that builds up tensors... ideally, it should be clear for the user which one to use in which case, without giving them too many options...

@EronWright

Suggestion that you close this PR if it is obsolete.

@karllessard
Owner Author

Thanks @EronWright for raising this; while a lot of it is still accurate with respect to what has been merged lately, some of it must be updated. Instead of just closing it, it could also be interesting to make changes and merge it in the TF community repo so it gains some visibility to all TF developers. I can take a look at it.

penpornk and others added 29 commits June 22, 2020 15:01
More importantly, add an official guide on how TF1 Estimator users with feature columns can convert their code into TF2 Keras users with KPL.
Update Keras categorical input design, mark it as complete.
RFC: Introducing Transactions extension to Modular Filesystems
RFC: Migrate gelu activation from Addons to Core
Update two proposed changes to the existing Attention layer
RFC: Multihead Attention and EinsumDense on Keras
We want to require all new public APIs to have publicly documented arguments and return values.
RFC: TensorFlow Extension Types