Conversation


@galv galv commented Apr 26, 2023

Fixes:

```
[W] pytorch-runner-N0-04/26/23-22:50:28 | `inference_time` was not set. Inference time will be incorrect! To correctly compare runtimes, please set the `inference_time` attribute in `infer_impl()`
```

Also fixes `RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.`
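The warning refers to Polygraphy's runner interface, where `infer_impl()` is expected to set `self.inference_time`. A minimal sketch of the idea (the class and names below are illustrative, not the actual Polygraphy code):

```python
import time
from collections import OrderedDict

import torch


class PyTorchRunnerSketch:
    """Illustrative sketch of a Polygraphy-style PyTorch runner."""

    def __init__(self, model, output_names):
        self.model = model
        self.output_names = output_names
        self.inference_time = None

    def infer_impl(self, feed_dict):
        start = time.time()
        with torch.no_grad():
            outputs = self.model(*feed_dict.values())
        end = time.time()

        if isinstance(outputs, torch.Tensor):
            outputs = (outputs,)

        out_dict = OrderedDict()
        for name, output in zip(self.output_names, outputs):
            # detach() before numpy() so grad-tracking tensors convert cleanly
            out_dict[name] = output.detach().cpu().numpy()

        # Set inference_time so runtime comparisons are correct
        self.inference_time = end - start
        return out_dict
```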
```diff
@@ -68,8 +68,9 @@ def infer_impl(self, feed_dict):

         out_dict = OrderedDict()
         for name, output in zip(self.output_names, outputs):
-            out_dict[name] = output.cpu().numpy()
+            out_dict[name] = output.detach().cpu().numpy()
         return out_dict, end - start
```
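The error this fixes is easy to reproduce; a minimal sketch of the failure and the fix:

```python
import torch

# A non-leaf tensor that tracks gradients, like a model output
output = 2.0 * torch.ones(3, requires_grad=True)

try:
    output.numpy()  # raises: Can't call numpy() on Tensor that requires grad
except RuntimeError as e:
    print(e)

arr = output.detach().cpu().numpy()  # detach first, then convert
print(arr)  # [2. 2. 2.]
```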


This could probably also just be `output.numpy(force=True)`.
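For context, `Tensor.numpy(force=True)` (available since PyTorch 1.13) detaches the tensor and moves it to CPU before converting, so it is a one-call alternative to the chained form. A minimal sketch:

```python
import torch

t = torch.ones(3, requires_grad=True)

# t.numpy() would raise a RuntimeError because t tracks gradients;
# force=True detaches and moves to CPU before converting
arr = t.numpy(force=True)
print(arr)  # [1. 1. 1.]
```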

Collaborator


Actually we could just return PyTorch tensors now, i.e.

Suggested change:

```diff
-            out_dict[name] = output.detach().cpu().numpy()
+            out_dict[name] = output
```
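Under that suggestion, downstream consumers convert to NumPy only when they actually need it. A hypothetical sketch (the helper name is illustrative):

```python
from collections import OrderedDict

import torch


def collect_outputs(output_names, outputs):
    # Keep outputs as torch.Tensors; callers can convert lazily,
    # e.g. with out.numpy(force=True), only when NumPy is required
    out_dict = OrderedDict()
    for name, output in zip(output_names, outputs):
        out_dict[name] = output
    return out_dict
```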

@kevinch-nv kevinch-nv requested a review from a team as a code owner July 9, 2025 17:14
@kevinch-nv kevinch-nv requested review from yuanyao-nv and removed request for a team July 9, 2025 17:14