justanhduc/involution

Involution

A native CUDA/C++ PyTorch implementation, with a Python wrapper, of Involution: Inverting the Inherence of Convolution for Visual Recognition.

Features

  • This implementation matches the official version, as all the CUDA code is taken from there; only a minimal C++ wrapper has been added.
  • This implementation does not require CuPy as a dependency.
  • This implementation supports dilation with same padding.
  • This implementation supports Half floating point (experimental).
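
With dilation, "same" padding for an odd kernel works out to dilation * (kernel_size - 1) // 2, because the effective receptive field grows to dilation * (kernel_size - 1) + 1. A quick sanity check (plain Python, illustrative only):

```python
def same_padding(kernel_size, dilation=1):
    # Padding that preserves spatial size at stride 1 (odd kernels only)
    return dilation * (kernel_size - 1) // 2

for k, d in [(3, 1), (3, 2), (5, 3), (7, 2)]:
    pad = same_padding(k, d)
    # At stride 1 the output size is n + 2*pad - (d*(k-1) + 1) + 1,
    # which equals the input size n exactly when 2*pad == d*(k-1).
    assert 2 * pad == d * (k - 1)
```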

Usage

First, set the CUDA_HOME environment variable to point to your CUDA root.

export CUDA_HOME=/path/to/your/CUDA/root
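
If you are not sure where your CUDA root is, one common way to derive it is from the location of nvcc (this assumes nvcc is on your PATH; adjust the path otherwise):

```shell
# The CUDA root is two levels above the nvcc binary
# (e.g. /usr/local/cuda/bin/nvcc -> /usr/local/cuda)
export CUDA_HOME="$(dirname "$(dirname "$(command -v nvcc)")")"
echo "$CUDA_HOME"
```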

Then clone this repo and copy the package into your code base. Import it from the package:

from involution import Involution
import torch as T

inv = Involution(16, 3, 1, dilation=1).cuda()  # channels, kernel size, stride
input = T.randn(8, 16, 64, 64).cuda()  # (batch, channels, height, width)
print(inv(input).shape)

The first import JIT-compiles the extension, so it takes a while; subsequent imports are fast.
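
To see what the operator computes, here is a minimal CPU reference sketch of the involution aggregation step built on torch.nn.functional.unfold (illustrative only; the function name and kernel-tensor layout are assumptions, and this repo's actual compute path is the JIT-compiled CUDA kernel):

```python
import torch
import torch.nn.functional as F

def involution2d(x, kernel, kernel_size, stride=1, dilation=1):
    """Reference involution aggregation (pure PyTorch, CPU).

    x:      (B, C, H, W) input feature map
    kernel: (B, G, K*K, H_out, W_out) per-pixel kernels; each of the G
            groups shares one kernel across its C // G channels
    """
    b, c, h, w = x.shape
    g = kernel.shape[1]
    k = kernel_size
    pad = dilation * (k - 1) // 2  # "same" padding for odd kernels
    # Extract a dilated K*K patch around every output location
    patches = F.unfold(x, k, dilation=dilation, padding=pad, stride=stride)
    h_out = (h + 2 * pad - dilation * (k - 1) - 1) // stride + 1
    w_out = (w + 2 * pad - dilation * (k - 1) - 1) // stride + 1
    patches = patches.view(b, g, c // g, k * k, h_out, w_out)
    # Weight each patch element by its location-specific kernel and sum
    out = (patches * kernel.unsqueeze(2)).sum(dim=3)
    return out.view(b, c, h_out, w_out)
```

Each output pixel is a weighted sum over a K-by-K (dilated) neighbourhood, with one kernel per spatial location shared across all channels in a group; that spatial-specific, channel-agnostic weighting is what distinguishes involution from convolution.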

Testing

cd involution
pytest test.py

Note: The fp16 tests are likely to fail.

Note: Any dilation value larger than 1 will fail the tests.

License

MIT.

Reference

Official code
