
Decompose aten.channel_shuffle op (#4243) #4259


Open · wants to merge 20 commits into main
Conversation

@ivangarcia44 ivangarcia44 commented Jul 8, 2025

Support for the channel shuffle operator is added via a torch-dialect-level decomposition (similar to the existing pixel_shuffle decomposition).

The decomposition is based on this specification:
https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
and implementation:
aten/src/ATen/native/ChanelShuffle.cpp
https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4

Note that the operator consists of an expansion of the channel dimension, a permute of the expanded group dimensions, and a contraction of the channel dimensions back to the original size. For example, an input tensor of shape 1x8x4x4 with a group size of 2 generates the torch-dialect MLIR below.

module {
  func.func @channel_shuffle(%arg0: !torch.vtensor<[1, 8, 4, 4], f32>) -> !torch.vtensor<[1, 8, 4, 4], f32> {
    %c0 = torch.constant.int 0
    %c1 = torch.constant.int 1
    %c2 = torch.constant.int 2
    %c3 = torch.constant.int 3
    %c4 = torch.constant.int 4
    %dims = torch.prim.ListConstruct %c0, %c2, %c1, %c3, %c4 : (!torch.int, !torch.int, !torch.int, !torch.int, !torch.int) -> !torch.list<int>

    %reshaped = torch.prims.split_dim %arg0, %c1, %c2 : !torch.vtensor<[1, 8, 4, 4], f32>, !torch.int, !torch.int -> !torch.vtensor<[1, 4, 2, 4, 4], f32>

    %permuted = torch.aten.permute %reshaped, %dims : !torch.vtensor<[1, 4, 2, 4, 4], f32>, !torch.list<int> -> !torch.vtensor<[1, 2, 4, 4, 4], f32>

    %collapsed = torch.prims.collapse %permuted, %c1, %c2 : !torch.vtensor<[1, 2, 4, 4, 4], f32>, !torch.int, !torch.int -> !torch.vtensor<[1, 8, 4, 4], f32>

    return %collapsed : !torch.vtensor<[1, 8, 4, 4], f32>
  }
}
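For intuition, the same three steps (split, permute, collapse) can be sketched in plain Python. This is an illustrative reference following the torch.nn.ChannelShuffle semantics (groups = number of groups), not the PR's code; the function name and nested-list tensor representation are assumptions for the example.

```python
def channel_shuffle(x, groups):
    # x: nested list of shape (N, C, H, W); groups must divide C.
    c = len(x[0])
    cpg = c // groups  # channels per group
    out = []
    for batch in x:
        # split: view the C channels as (groups, channels_per_group)
        split = [[batch[g * cpg + i] for i in range(cpg)] for g in range(groups)]
        # permute: swap the group and per-group axes -> (channels_per_group, groups)
        permuted = [[split[g][i] for g in range(groups)] for i in range(cpg)]
        # collapse: flatten back to C channels
        out.append([ch for grp in permuted for ch in grp])
    return out


# Usage: with C=8 and groups=2, channel order 0..7 becomes 0,4,1,5,2,6,3,7.
x = [[[[c]] for c in range(8)]]  # shape (1, 8, 1, 1); channel index as value
shuffled = channel_shuffle(x, 2)
```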
References:
PyTorch ChannelShuffle definition:
https://docs.pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices (2017):
https://arxiv.org/pdf/1707.01083
A Lightweight Dendritic ShuffleNet for Medical Image Classification (2025):
https://www.jstage.jst.go.jp/article/transinf/advpub/0/advpub_2024EDP7059/_pdf
PyTorch implementation:
aten/src/ATen/native/ChanelShuffle.cpp
https://github.com/pytorch/pytorch/blob/23491519d288dedb2a54cfad5fef7fcb2ad8eade/aten/src/ATen/native/ChanelShuffle.cpp#L4

Resolves #4243

@ivangarcia44 ivangarcia44 marked this pull request as draft July 8, 2025 21:03
@ivangarcia44 ivangarcia44 marked this pull request as ready for review July 8, 2025 21:58
@ivangarcia44 ivangarcia44 requested a review from sahas3 July 17, 2025 22:38

ivangarcia44 commented Jul 22, 2025

Hi all, this is a reminder to please provide feedback on this PR, which adds support for lowering the channel shuffle operation from torch to linalg. Thank you!

@newling @silvasean @rsuderman @zjgarvey @penguin-wwy @rafaelubalmw @sahas3 @vinitdeodhar @alaa-ali @dixinzhou @ramiro050 @qedawkins

Member

@sahas3 sahas3 left a comment


LGTM with a minor nit.

Please wait for other reviewers to review before merging. Thanks!



class ChannelShuffleDynamicDims(torch.nn.Module):
    # Basic test case for ChannelShuffle operation.
Basic test case for ChannelShuffle operation. -> Basic test case for ChannelShuffle operation for dynamic dimensions

Development

Successfully merging this pull request may close these issues.

Provide torch to linalg lowering for the torch.aten.channel_shuffle operation
3 participants