EDIT: Sorry, I think this is already addressed in this issue. I'm not sure whether it has the same cause. Apologies if this issue is a duplicate (I can't find a delete button!)
Hi,
I noticed a small issue when I slice numpy arrays and send them over to torch using pytorch: some of the values get set to inf on the torch side. Here's a small piece of code that reproduces what I observed:
```lua
-- echo.lua : a simple class with a method to just echo back a tensor
require 'torch'
require 'nn'

local Echo = torch.class('Echo')

function Echo:__init()
  -- Dummy init
end

function Echo:echo(nparray)
  return nparray
end
```
```python
# test.py: script to generate a bunch of random tensors,
# slice them, and see if they're echo'd back normally
import PyTorchHelpers
import numpy as np

Echo = PyTorchHelpers.load_lua_class('echo.lua', 'Echo')
net = Echo()

for i in range(1000):
    arr = np.random.rand(10, 100)
    arr_slice = arr[:, 1:]  # arbitrary slice
    echo = net.echo(arr_slice)
    print np.sum(arr_slice), np.sum(echo.asNumpyTensor())
```
Note that if I either (1) don't slice the array, or (2) slice but also multiply by 1.0 (`arr_slice = 1.0*arr[:,1:]`), the issue disappears. Any idea why? (I'm using Python 2.7.)
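One plausible explanation (an assumption on my part, not confirmed against the bridge's source) is memory layout: `arr[:, 1:]` returns a strided *view* that is not C-contiguous, while `1.0*arr[:,1:]` allocates a fresh contiguous array, which would explain why the multiply makes the symptom disappear if the bridge assumes contiguous input. A quick check on the numpy side:

```python
import numpy as np

arr = np.random.rand(10, 100)
arr_slice = arr[:, 1:]  # a view into arr: strided, NOT C-contiguous

assert not arr_slice.flags['C_CONTIGUOUS']
assert (1.0 * arr_slice).flags['C_CONTIGUOUS']  # the multiply allocates a fresh, contiguous array

# explicit workaround: force a contiguous copy before handing the array to torch
contig = np.ascontiguousarray(arr_slice)
assert contig.flags['C_CONTIGUOUS']
assert np.array_equal(contig, arr_slice)  # same values, compact layout
```

If contiguity is indeed the problem, wrapping slices in `np.ascontiguousarray(...)` before crossing the bridge should be a cheaper and more explicit workaround than multiplying by 1.0.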
PS: I've been juggling between fbtorch, several forks of lunatic-python, and putting up http servers in lua and querying from python. It's been a nightmare so far. Thank you so much for putting up this repo!
Hi, I've recently observed something weird which might be related.
I'm defining a network, and in forward() I have a slicing operation separating the first half of my channels:
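(The actual network code isn't shown above; purely for illustration, with hypothetical shapes, a channel split like that produces a non-contiguous view in numpy terms, so it could hit the same code path as the sliced arrays in the original report:)

```python
import numpy as np

# hypothetical shapes: batch of 4, 8 channels, 5x5 feature maps
x = np.random.rand(4, 8, 5, 5)
first_half = x[:, :4]  # first half of the channels: a strided view, not a copy

assert first_half.shape == (4, 4, 5, 5)
# for batch size > 1 this view is not C-contiguous
assert not first_half.flags['C_CONTIGUOUS']
```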