[wip] Multimem allreduce cutlass dsl #1169
Conversation
Summary of Changes
Hello @Amir-19, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a high-performance multi-memory allreduce operation by integrating NVIDIA NVSHMEM with CUTLASS DSL. The changes enable direct GPU-to-GPU communication and custom kernel generation, aiming to provide a more efficient collective communication primitive for distributed deep learning workloads.
Highlights
- **NVSHMEM Integration**: Introduced C++ bindings for NVIDIA NVSHMEM APIs, exposing functionality such as memory allocation (`nvshmem_malloc`), process group information (`my_pe`, `n_pes`), and collective operations (`barrier_all`, `alltoall`, `multicast_ptr`) to PyTorch.
- **Custom AllReduce Kernel with CUTLASS DSL**: Developed a `MultimemAllReduce` Python class that orchestrates a custom allreduce operation. This includes a CUDA kernel written using CUTLASS DSL, which performs direct multi-memory load-reduce and store operations (`multimem_ld_reduce`, `multimem_st`) and uses multi-memory atomic operations for synchronization (`signal_multimem`, `wait_loop`).
- **JIT System Enhancements**: Modified the JIT compilation infrastructure to support device-side linking for CUDA code, specifically enabling the compilation and linking of NVSHMEM-dependent kernels.
- **New Utilities for Multi-Memory Operations**: Added `flashinfer/cute_utils.py` containing low-level CUTLASS DSL user operations that wrap PTX assembly instructions for multi-memory load-reduce, store, and barrier signaling/waiting.
- **Dependency Management**: Updated `setup.py` to include `nvidia-nvshmem-cu12` and `nvidia-cutlass-dsl` as required packages, ensuring the necessary libraries are available.
- **Comprehensive Testing**: Included new multi-process tests (`tests/test_multimem_allreduce.py`) to verify the correctness of the `MultimemAllReduce` implementation against `torch.distributed.all_reduce` across various data sizes and ranks (see the usage sketch after this list).
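To make the testing setup concrete, here is a minimal sketch of how a multi-process test might drive the new allreduce and check it against `torch.distributed.all_reduce`. The `MultimemAllReduce` import path, constructor arguments, and method names below are assumptions for illustration; only the `torch.distributed` calls are standard PyTorch APIs.

```python
# Hypothetical usage sketch, not the PR's actual test code.
import os
import torch
import torch.distributed as dist


def run(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # Assumed import path and API for the class introduced by this PR.
    from flashinfer.comm import MultimemAllReduce

    x = torch.randn(1 << 20, dtype=torch.float32, device="cuda")
    expected = x.clone()
    dist.all_reduce(expected)  # reference result via NCCL

    comm = MultimemAllReduce(rank=rank, world_size=world_size)  # assumed API
    comm.all_reduce(x)  # assumed API: in-place multimem allreduce

    torch.testing.assert_close(x, expected, rtol=1e-2, atol=1e-2)
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(run, args=(world_size,), nprocs=world_size)
```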
Code Review
The pull request introduces a `MultimemAllReduce` operation using `nvshmem` and CUTLASS DSL. The review focuses on correctness and robustness, identifying issues like error handling, undefined attributes, and typos. The suggestions aim to improve the stability of the new feature.
```diff
@@ -51,7 +51,7 @@ def generate_build_meta(aot_build_meta: dict) -> None:

 ext_modules = []
 cmdclass = {}
-install_requires = ["numpy", "torch", "ninja", "requests"]
+install_requires = ["numpy", "torch", "ninja", "requests", "nvidia-nvshmem-cu12", "nvidia-cutlass-dsl"]
```
Currently `nvidia-cutlass-dsl` only supports Python 3.12, so we should only make it a default dependency once wheels for more Python versions are ready; otherwise the flashinfer installation might fail for users who are not on Python 3.12.
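One way to avoid the installation failure the reviewer describes, shown here as a sketch rather than a change this PR actually makes, is to gate the dependency with a PEP 508 environment marker so pip only installs it on Python 3.12:

```python
# Sketch of a guarded dependency list for setup.py (not part of this PR).
# The environment marker restricts nvidia-cutlass-dsl to Python 3.12,
# where wheels currently exist; other interpreters simply skip it.
install_requires = [
    "numpy",
    "torch",
    "ninja",
    "requests",
    "nvidia-nvshmem-cu12",
    'nvidia-cutlass-dsl; python_version == "3.12"',
]
```

With this marker, pip on, say, Python 3.11 skips the requirement instead of failing to resolve a wheel, at the cost of the multimem allreduce feature being unavailable there.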
📌 Description
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.
🧪 Tests
- All tests are passing (`unittest`, etc.).
Reviewer Notes