[ROCm] Hipify changes #398
base: main
Changes from all commits: d6337fa, 528bcac, 39ee458, 86323d8
```diff
@@ -31,6 +31,12 @@ endmacro()
 
 # TODO: Default to ON if CUDA available.
 option(TP_USE_CUDA "Enable support for CUDA tensors" OFF)
+option(TP_USE_ROCM "Enable support for ROCM tensors" OFF)
+
+# if both TP_USE_CUDA and TP_USE_ROCM is set then break
+if(TP_USE_CUDA AND TP_USE_ROCM)
+  message(FATAL_ERROR "Tensorpipe can be built either for CUDA or ROCm, TP_USE_CUDA and TP_USE_ROCM both are set, erroring out!!!!")
+endif()
```
Comment on lines +37 to +39:

> I am not suggesting to do this now, but how difficult would that limitation be to lift?

> For the first version of ROCm support we intend to still use the

> We are thinking the tensorpipe builds for CUDA and ROCm will be mutually exclusive, since the PyTorch lib is also mutually exclusive.

> I think the question here was motivated by the fact that in principle there shouldn't be any hard blocker to having both CUDA and ROCm (once we fix the name conflicts), right? Hence this is mainly a comment about "code style". In practice, though, yes: when used from PyTorch they will be mutually exclusive, so no worries about this.
```diff
 
 # Optional features
 option(TP_BUILD_BENCHMARK "Build benchmarks" OFF)
```
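With this guard in place, a build selects exactly one GPU backend at configure time. A hypothetical out-of-tree configure session (source path and build directory names are illustrative, not from the PR):

```shell
# Configure TensorPipe for one backend at a time (paths are illustrative).
cmake -S tensorpipe -B build-cuda -DTP_USE_CUDA=ON
cmake -S tensorpipe -B build-rocm -DTP_USE_ROCM=ON

# Enabling both options trips the FATAL_ERROR guard above and
# aborts configuration immediately:
cmake -S tensorpipe -B build-bad -DTP_USE_CUDA=ON -DTP_USE_ROCM=ON
```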
> OOC, this dependency does not seem to exist in PyTorch's `.gitmodules`. Any reason for this difference?

> Yes, that's because PyTorch currently has the hipify logic as part of its source code at https://github.com/pytorch/pytorch/blob/master/torch/utils/hipify. We can't use that for Tensorpipe as it would create a circular dependency.

> Yes, in PyTorch hipify is not used as a git submodule. For new projects to hipify, we are using the hipify-torch repo as a git submodule, so that all the hipification code lives in a single place and provides many interfaces.

> Do we plan to change PyTorch's hipify to use the same strategy?

> Yes, we would like to in the long run, but PyTorch, being a much bigger codebase with a more complex hipification strategy, will require much more coordination and effort.
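For context on what the hipify-torch submodule does: hipify is, at its core, a source-to-source translation that rewrites CUDA API identifiers to their HIP equivalents. A minimal sketch of that idea, with a tiny illustrative mapping table (the real tool ships large generated tables and also handles headers, kernel-launch syntax, and more):

```python
# Minimal sketch of a hipify-style rename pass. The mapping below is a
# hand-picked illustrative subset, not the actual hipify-torch tables.
import re

CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaStream_t": "hipStream_t",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    # Match longer identifiers first so e.g. "cudaMalloc" is not split
    # by a shorter overlapping key.
    pattern = re.compile(
        "|".join(re.escape(k) for k in sorted(CUDA_TO_HIP, key=len, reverse=True))
    )
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

print(hipify('#include <cuda_runtime.h>\ncudaStream_t s; cudaMalloc(&p, n);'))
```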