
[spike] evaluate + prototype interaction of unified memory abstraction with custom_ops #1556

Open · 3 tasks

Titus-von-Koeller opened this issue Mar 5, 2025 · 0 comments

Labels: Optimizers (issues or feature requests relating to optimizers), spike

Titus-von-Koeller (Collaborator) commented:

Unified memory isn't supported in PyTorch and was considered a potential blocker for the custom ops refactor.

We found a workaround at the time, along with a simple proof of viability.

However, it's not yet clear how this fits together with the currently open PR #1544 and RFC #1545; this needs to be fleshed out.

Questions:

  • Are the needed changes to the codebase deeply rooted or relatively superficial?
  • Is it impactful to work on this right now, or should we focus on finalizing the non-optimizer-related custom_ops first?
  • Is it perhaps straightforward to implement this already while prototyping? If so, we can open a PR right away or make it part of the open PR.
@Titus-von-Koeller Titus-von-Koeller self-assigned this Mar 5, 2025
@Titus-von-Koeller Titus-von-Koeller added Optimizers Issues or feature requests relating to optimizers spike labels Mar 5, 2025