This issue collects tasks that block porting rir/rir.cpp and rir/ray_tracing.cpp to the torch stable ABI.
- implement `mutable_data_ptr<T>()` and `const_data_ptr<T>()` in `torch/csrc/stable/tensor_struct.h`. For instance, this simplifies porting expressions like `tensor.data_ptr<scalar_t>()`. Currently, one needs to rewrite this as `reinterpret_cast<scalar_t*>(tensor.data_ptr())`, where `tensor` is a `torch::stable::Tensor`. Not really a blocker, but would be nice to have.
  Fix available: [STABLE ABI] Add mutable_data_ptr() and const_data_ptr() methods to torch::stable::Tensor. pytorch#161891
- import `arange` as a `stable/ops.h` factory function
- implement `torch::fft::fftshift` and `torch::fft::irfft` as `stable/ops.h` operations
  Resolution: delete rir/ray_tracing.cpp as unused
- implement `index` as a `torch::stable::Tensor` method. Can we use `torch::indexing::Slice()` in torch stable ABI code?
- expose `AT_DISPATCH_FLOATING_TYPES_AND_HALF` and `AT_DISPATCH_FLOATING_TYPES` to the stable ABI. Not really a blocker, but would be nice to have.
  For a workaround, see [STABLE ABI] Porting forced_align #4078
- implement `zeros` and `full` as `stable/ops.h` factory functions. Currently, one can use `new_empty` and `fill_` to mimic them. Not really a blocker, but would be nice to have.
- implement `tensor` as a `stable/ops.h` factory function. Currently, one can use `new_empty`, but it is really clumsy to mimic `tensor`, especially for CUDA tensors.
- implement `dot`, `norm`, and `max` as `torch::stable::Tensor` methods or `stable/ops.h` operations
- implement `item<T>()` as a `torch::stable::Tensor` template method.
  For a workaround, see [STABLE ABI] Porting forced_align #4078
janeyx99