Lower aten::_unique2 #4661


Open · ymwangg wants to merge 4 commits into master

Conversation

@ymwangg (Contributor) commented on Feb 21, 2023

This PR adds a lowering for aten::_unique2. Like aten::nonzero, this op is currently marked as experimental. The algorithm is similar to the native PyTorch CUDA implementation.
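The native CUDA implementation is essentially sort-based; as a rough eager-mode illustration of that idea (a sketch only, not this PR's lowering, and the helper name here is made up), unique values, inverse indices, and counts can be derived from a sort followed by adjacent comparisons and a prefix sum:

```python
import torch

def unique2_sketch(t):
  """Sort-based unique: sort, flag value boundaries, prefix-sum for dense ids."""
  flat = t.flatten()
  sorted_vals, sort_order = torch.sort(flat)
  # A position starts a new value if it differs from its left neighbor.
  is_new = torch.ones_like(sorted_vals, dtype=torch.bool)
  is_new[1:] = sorted_vals[1:] != sorted_vals[:-1]
  # Dense ids in sorted order: 0, 0, 1, 2, 2, 2, ...
  dense_ids = torch.cumsum(is_new, dim=0) - 1
  # Scatter the dense ids back to the original element order.
  inverse = torch.empty_like(dense_ids)
  inverse[sort_order] = dense_ids
  values = sorted_vals[is_new]
  counts = torch.bincount(dense_ids, minlength=values.numel())
  return values, inverse.view(t.shape), counts

# Should match torch._unique2(t, sorted=True, return_inverse=True,
#                             return_counts=True)
t = torch.randint(4, 10, size=(10,))
print(unique2_sketch(t))
```

In the lowering the number of unique values is data-dependent, which is why, like aten::nonzero, the op is marked experimental.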

@JackCaoG added the dynamism (Dynamic Shape Features) label on Feb 21, 2023
@JackCaoG self-requested a review on Feb 21, 2023, 19:59
@JackCaoG added the lowering (ATen Operation lowering) label on Feb 21, 2023
            t, sorted=True, return_inverse=True, return_counts=True))
      return results

    self.runAtenTest([torch.randint(4, 10, size=(10,)) for _ in range(2)] +
Collaborator commented:
Can we also check that the result of unique is dynamic?
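A sketch of what such a check could look like (the `_get_xla_tensor_dimension_size` binding is assumed here based on how other dynamic-shape tests query runtime sizes; it is not part of this PR):

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()
t = torch.randint(4, 10, size=(10,), device=device)
values, inverse, counts = torch._unique2(
    t, sorted=True, return_inverse=True, return_counts=True)

# The static shape of `values` is the upper bound (t.numel()); if the lowering
# produces a dynamic dimension, the runtime size can be smaller.
# NOTE: this binding is an assumption borrowed from other dynamic-shape tests.
runtime_size = torch_xla._XLAC._get_xla_tensor_dimension_size(values, 0)
print(values.shape[0], runtime_size)
```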

@JackCaoG (Collaborator) left a comment:
Mostly LGTM, thanks @ymwangg !

@@ -321,6 +321,7 @@ supported:
- triangular_solve
- unbind.int
- uniform_
- _unique2
Collaborator commented:
I wonder how _unique2 differs from unique

@ymwangg (Contributor, author) replied:

The _unique function does not support return_counts. It also looks like it's not accessible to a normal user, because torch.unique is dispatched to either unique_dim or _unique2 here.
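For reference, a small eager-mode illustration of the two entry points (nothing here is specific to this PR; it only shows that _unique2 is the counts-capable variant behind torch.unique):

```python
import torch

t = torch.randint(4, 10, size=(10,))

# Public API: torch.unique dispatches internally to unique_dim (when dim is
# given) or _unique2 (when it is not).
vals, inv, cnt = torch.unique(t, sorted=True, return_inverse=True,
                              return_counts=True)

# The underlying overload this PR lowers; unlike the older _unique, it
# supports return_counts.
vals2, inv2, cnt2 = torch._unique2(t, sorted=True, return_inverse=True,
                                   return_counts=True)

assert torch.equal(vals, vals2) and torch.equal(cnt, cnt2)
```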

@@ -3190,4 +3190,25 @@ at::Tensor XLANativeFunctions::_cdist_forward(
      bridge::GetXlaTensor(x1), bridge::GetXlaTensor(x2), p));
}

std::tuple<at::Tensor, at::Tensor, at::Tensor> XLANativeFunctions::_unique2(
Collaborator commented:
FYI: I don't think this PR will be impacted, but just in case: if you need to develop dynamic models, you'd need to develop on the functionization_dynamic_shape branch, which has the functionalization feature enabled.


torch::lazy::NodePtr Clone(torch::lazy::OpList operands) const override;

XlaOpVector Lower(LoweringContext* loctx) const override;
Collaborator commented:
nit: mind adding a ToString()?

@ymwangg (Contributor, author) replied:
It appears the ToString() override is only needed when the op has input types other than lazy::Value, so I guess the default should be fine.

xla::Shape NodeOutputShape(const torch::lazy::Value& input) {
  xla::Shape input_shape = GetXlaShape(input);
  int64_t num_elements = xla::ShapeUtil::ElementsIn(input_shape);
  xla::PrimitiveType indices_type = GetShapeDimensionType(/*device=*/nullptr);
@ymwangg (Contributor, author) commented:
Same question here, do we want the indices type to be S32 or S64? Though returning S32 does not break any tests.
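For reference, eager PyTorch returns int64 for both the inverse indices and the counts, which is where the S64 question comes from (a quick check, not part of this PR):

```python
import torch

t = torch.randint(4, 10, size=(10,))
values, inverse, counts = torch._unique2(
    t, sorted=True, return_inverse=True, return_counts=True)
print(inverse.dtype, counts.dtype)  # torch.int64 torch.int64 in eager mode
```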

Collaborator replied:
I would leave most of the ops in S32 to avoid extra cost lol. I think eventually we want to make setDimensionSize and getDimensionSize handle S64, which I don't think is super complicated, but someone needs to work on it...
