
torch.expand fails to create tensor with dynamic shape #3897

Open
@miladm

Description


🐛 Bug

I am running into the following error when lowering torch.expand with dynamic inputs. It looks like the tensor creation logic in upstream PyTorch does not expect the observed shape values: the requested sizes contain huge negative numbers in place of the dynamic dimensions.

C++ exception with description "Trying to create tensor with negative dimension -9223372036769472112: [-9223372036769472112, -9223372036769472112, 3, 4]
Exception raised from check_size_nonnegative at /workspace/pytorch/aten/src/ATen/EmptyTensor.h:14 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x7d (0x7fc16106cbad in /workspace/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xde (0x7fc16106b29e in /workspace/pytorch/torch/lib/libc10.so)
frame #2: at::detail::empty_strided_generic(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType) + 0x2c4 (0x7fc134fc7284 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #3: at::detail::empty_strided_cpu(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0x84 (0x7fc134fc7674 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #4: at::native::empty_strided_cpu(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0x29 (0x7fc1355b8319 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x1ddbc18 (0x7fc136158c18 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #6: at::_ops::empty_strided::redispatch(c10::DispatchKeySet, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0xb3 (0x7fc135d19533 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x1d1e55e (0x7fc13609b55e in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #8: at::_ops::empty_strided::call(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) + 0x14b (0x7fc135d1918b in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #9: at::native::_to_copy(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x1106 (0x7fc135599366 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x1f958c0 (0x7fc1363128c0 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #11: at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x90 (0x7fc135a1ac30 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x1d2da83 (0x7fc1360aaa83 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #13: at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x90 (0x7fc135a1ac30 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #14: <unknown function> + 0x32f3a25 (0x7fc137670a25 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #15: at::_ops::_to_copy::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) + 0x174 (0x7fc135a1a8d4 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #16: at::native::to(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, bool, c10::optional<c10::MemoryFormat>) + 0xf5 (0x7fc135599ed5 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x21475f5 (0x7fc1364c45f5 in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #18: at::_ops::to_dtype_layout::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, bool, c10::optional<c10::MemoryFormat>) + 0x17e (0x7fc135bd7dae in /workspace/pytorch/torch/lib/libtorch_cpu.so)
frame #19: torch_xla::cpp_test::CloseValues(at::Tensor, at::Tensor, double, double) + 0x14c (0x593fbc in test/cpp/build/test_ptxla)
frame #20: test/cpp/build/test_ptxla() [0x660c07]
frame #21: test/cpp/build/test_ptxla() [0x6bd834]
frame #22: torch_xla::cpp_test::ForEachDevice(absl::lts_20211102::Span<torch_xla::DeviceType const>, std::function<void (c10::Device const&)> const&) + 0x140 (0x593d80 in test/cpp/build/test_ptxla)
frame #23: torch_xla::cpp_test::AtenXlaTensorTest_TestExpandSymInt_Test::TestBody() + 0x100 (0x609540 in test/cpp/build/test_ptxla)
frame #24: void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x7e (0x7c38ee in test/cpp/build/test_ptxla)
frame #25: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x7b (0x7a826b in test/cpp/build/test_ptxla)
frame #26: testing::Test::Run() + 0xd9 (0x781e79 in test/cpp/build/test_ptxla)
frame #27: testing::TestInfo::Run() + 0x10d (0x782c2d in test/cpp/build/test_ptxla)
frame #28: testing::TestSuite::Run() + 0x110 (0x783490 in test/cpp/build/test_ptxla)
frame #29: testing::internal::UnitTestImpl::RunAllTests() + 0x473 (0x794153 in test/cpp/build/test_ptxla)
frame #30: bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x7e (0x7c74fe in test/cpp/build/test_ptxla)
frame #31: bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) + 0x7b (0x7aabfb in test/cpp/build/test_ptxla)
frame #32: testing::UnitTest::Run() + 0xd4 (0x793c94 in test/cpp/build/test_ptxla)
frame #33: main + 0x1c (0x5918dc in test/cpp/build/test_ptxla)
frame #34: __libc_start_main + 0xe7 (0x7fc1339f0c87 in /lib/x86_64-linux-gnu/libc.so.6)
frame #35: _start + 0x2a (0x5917fa in test/cpp/build/test_ptxla)
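
For context, here is a minimal standalone sketch (not the torch_xla test itself) of the check that fires: if sizes containing these unresolved negative values reach at::empty_strided as concrete dimensions, check_size_nonnegative raises exactly this error. The sizes below are copied from the shape printed in the trace; the strides are hypothetical.

```cpp
#include <ATen/ATen.h>
#include <c10/util/Exception.h>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  // Sizes mirroring the values in the trace above: the huge negative
  // numbers stand in for dynamic dimensions that were never resolved
  // to concrete extents. Strides are made up for illustration.
  std::vector<int64_t> sizes = {-9223372036769472112LL, -9223372036769472112LL, 3, 4};
  std::vector<int64_t> strides = {12, 12, 4, 1};

  try {
    // empty_strided rejects negative sizes via check_size_nonnegative,
    // which is the check that fires in the trace above.
    at::Tensor t = at::empty_strided(sizes, strides,
                                     at::TensorOptions().dtype(at::kFloat));
  } catch (const c10::Error& e) {
    // Prints "Trying to create tensor with negative dimension ..."
    std::cout << e.what() << std::endl;
  }
  return 0;
}
```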

Labels: dynamism (Dynamic Shape Features)