Description
I am working with PyTorch 1.13.1 and Python 3.10.12.
When using the CleverHans CW attack for PyTorch, the attack script runs into three errors.
- On line 108 of the attack's .py file:
const = x.new_ones(len(x), 1) * initial_const
The following error comes up:
TypeError: new_ones(): argument 'dtype' must be torch.dtype, not int
To solve this, I assumed the 1 was supposed to denote a dimension of the tensor rather than a dtype, so I wrapped the size arguments in an extra set of parentheses to pass them as a single tuple (see the first sketch after this list):
const = x.new_ones((len(x), 1)) * initial_const
- On line 134:
optimizer = torch.optim.Adam([modifier], lr=lr)
I get the following error:
ValueError: can't optimize a non-leaf Tensor
This is due to line 123:
modifier = torch.zeros_like(x, requires_grad=True)
This call returns a non-leaf tensor. I was thinking maybe a version update at some point changed this function so it no longer returns a leaf tensor, but regardless, I fixed it using PyTorch's zeros function, since the zeros_like documentation describes the two as equivalent when given the corresponding parameters. The edited line looks like this (see the second sketch after this list):
modifier = torch.zeros(x.size(), requires_grad=True, dtype=x.dtype, layout=x.layout, device=x.device)
- On line 155:
loss.backward()
I received this error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
To solve this, I just specified retain_graph=True:
loss.backward(retain_graph=True)
But I'm not sure if this is the most efficient fix (see the third sketch after this list).
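For reference, here is a minimal sketch of the first fix outside the attack script; the input shape and the value standing in for initial_const are illustrative only:

```python
import torch

# Tuple-shape fix for new_ones; the shape and constant are illustrative stand-ins.
x = torch.randn(4, 3)
initial_const = 0.5  # placeholder for the attack's initial_const
const = x.new_ones((len(x), 1)) * initial_const
print(const.shape, const.dtype)  # torch.Size([4, 1]), same dtype/device as x
```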
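Similarly, a minimal sketch of the second fix; x here is a stand-in for the attack's input batch and the learning rate is arbitrary:

```python
import torch

# Leaf-tensor fix: build the modifier with torch.zeros instead of zeros_like.
x = torch.randn(2, 3)
modifier = torch.zeros(x.size(), requires_grad=True,
                       dtype=x.dtype, layout=x.layout, device=x.device)
print(modifier.is_leaf)  # True, so Adam accepts it as a parameter
optimizer = torch.optim.Adam([modifier], lr=5e-3)  # lr value is illustrative
```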
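And for the third error, a standalone sketch of what retain_graph=True changes, using a toy loss rather than the attack's: the first backward() call normally frees the saved intermediate values of the graph, so a second backward through the same graph fails unless the graph is retained (which keeps that memory alive until the graph is released):

```python
import torch

# Double-backward reproduction with a toy loss; w stands in for the attack's variables.
w = torch.ones(3, requires_grad=True)
loss = (w * w).sum()  # the multiplication saves its inputs for the backward pass

loss.backward(retain_graph=True)  # keep the saved intermediates alive
loss.backward()                   # a second backward through the same graph now works
# Without retain_graph=True on the first call, the second backward raises the
# "Trying to backward through the graph a second time" RuntimeError.
```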
While the solutions I implemented make the attack script run and seemingly work, I am not sure whether I am missing something or have unknowingly changed the code's functionality in some way, so I would really appreciate any feedback and/or guidance.