Replies: 2 comments
-
From the video, it looks like this code:

```python
import numpy as np

array = np.arange(1.0, 8.0)
print(array)
print(f"NumPy array location: {hex(id(array))}")

array[0] = 9
print(array)
print(f"NumPy array location: {hex(id(array))}")

array = array + 1
print(array)
print(f"NumPy array location: {hex(id(array))}")
```

gives an output like this (the array values are fixed by the code; the exact addresses will vary per run):
I'm guessing that when you create a new array with `array = array + 1` (instead of modifying the existing one in place), NumPy allocates a new object at a new memory location and the name `array` is rebound to it.
-
Hey @Arcane-WD, here's what I found in the documentation of `torch.Tensor.numpy()`:

> If force is False (the default), the conversion is performed only if the tensor is on the CPU, does not require grad, does not have its conjugate bit set, and is a dtype and layout that NumPy supports. The returned ndarray and the tensor will share their storage, so changes to the tensor will be reflected in the ndarray and vice versa.

I hope that resolves your doubt.
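For example, a minimal sketch of that "shared storage, and vice versa" behaviour (variable names are mine, not from the course):

```python
import torch

t = torch.ones(7)    # CPU float32 tensor: a dtype/layout NumPy supports
n = t.numpy()        # no copy: the ndarray shares t's storage

t.add_(1)            # in-place change to the tensor...
print(n)             # ...shows up in the ndarray: [2. 2. 2. 2. 2. 2. 2.]

n[0] = 99            # and vice versa: write through the ndarray...
print(t[0])          # ...and the tensor sees it: tensor(99.)
```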
-
Hello, I have a small doubt about the video part "PyTorch and NumPy". I create a new ones tensor, then create a variable to hold the NumPy version of said tensor, then increment the tensor using

```python
tensor = tensor + 1
```

After this, no data is shared between `tensor` and `numpy_tensor`. But if I use

```python
tensor += 1
```

the data is shared and both show the same output.
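For reference, a minimal sketch that reproduces what I'm describing (variable names as above):

```python
import torch

tensor = torch.ones(3)
numpy_tensor = tensor.numpy()   # NumPy version of the tensor

tensor += 1                     # in-place increment
print(numpy_tensor)             # [2. 2. 2.] -- the change is visible here too

tensor = tensor + 1             # reassignment instead of in-place
print(numpy_tensor)             # still [2. 2. 2.] -- no longer follows `tensor`
print(tensor)                   # tensor([3., 3., 3.])
```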