
Conversation

daher928

This pull request adds a check for torch.cuda.is_available() before torch.cuda.set_device(rank) to prevent runtime errors in non-CUDA environments.

Changes Made

  • Added a torch.cuda.is_available() check before calling torch.cuda.set_device(rank)
  • Falls back to the CPU device when CUDA is not available
  • Added the same availability check before the torch.cuda.synchronize() call
  • Prevents runtime errors on systems without CUDA support

Context

This change prevents crashes when the distributed initializer is called on systems that don't have CUDA available, making the code more robust across different deployment environments.
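For reference, a minimal sketch of the guarded pattern described above, assuming the initializer receives a rank argument. The helper names and surrounding structure here are illustrative, not the repository's actual code:

```python
import torch

def setup_device(rank: int) -> torch.device:
    # Hypothetical helper showing the guarded device setup; names are
    # illustrative and not taken from the repository.
    if torch.cuda.is_available():
        # Bind this process to its GPU, as before.
        torch.cuda.set_device(rank)
        return torch.device(f"cuda:{rank}")
    # Fall back to the CPU device on hosts without CUDA support.
    return torch.device("cpu")

def synchronize_if_cuda() -> None:
    # torch.cuda.synchronize() raises on CPU-only builds, so guard it too.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
```

With these guards in place, the same initialization path runs unchanged on both GPU and CPU-only hosts.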
