xlite-dev

Develops ML/AI toolkits and ML/AI/CUDA learning resources.

Pinned

  1. LeetCUDA Public

    📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 (see the kernel-binding sketch after this list)

    Cuda · 5.5k stars · 582 forks

  2. lite.ai.toolkit Public

    🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc.🎉

    C++ · 4.2k stars · 749 forks

  3. Awesome-LLM-Inference Public

    📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉

    Python · 4.2k stars · 293 forks

  4. Awesome-DiT-Inference Public

    📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉

    Python · 323 stars · 17 forks

  5. torchlm Public

    💎An easy-to-use PyTorch library for face landmarks detection: training, evaluation, inference, and 100+ data augmentations.🎉 (see the usage sketch after this list)

    Python · 260 stars · 24 forks

  6. ffpa-attn Public

    ⚡️FFPA: extends FlashAttention-2 with Split-D to achieve ~O(1) SRAM complexity for large headdim; 1.8x~3x↑ vs SDPA.🎉 (see the baseline sketch after this list)

    Cuda · 192 stars · 8 forks
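
A few usage sketches for the pinned projects follow. First, LeetCUDA: the notes teach writing CUDA kernels and binding them into PyTorch. Below is a minimal toy sketch of that workflow via torch.utils.cpp_extension.load_inline, assuming a local CUDA toolchain; the elementwise-add kernel is my own illustration, not one of the repo's 200+ kernels:

    import torch
    from torch.utils.cpp_extension import load_inline

    # Toy CUDA kernel plus a C++ wrapper that launches it on PyTorch tensors.
    cuda_src = r"""
    __global__ void add_kernel(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];  // one thread per element
    }

    torch::Tensor elementwise_add(torch::Tensor a, torch::Tensor b) {
        auto c = torch::empty_like(a);
        int n = a.numel();
        int threads = 256;
        int blocks = (n + threads - 1) / threads;  // ceiling division for the grid
        add_kernel<<<blocks, threads>>>(
            a.data_ptr<float>(), b.data_ptr<float>(), c.data_ptr<float>(), n);
        return c;
    }
    """

    ext = load_inline(
        name="toy_add",
        cpp_sources="torch::Tensor elementwise_add(torch::Tensor a, torch::Tensor b);",
        cuda_sources=cuda_src,
        functions=["elementwise_add"],
    )

    a = torch.randn(1 << 20, device="cuda")
    b = torch.randn_like(a)
    assert torch.allclose(ext.elementwise_add(a, b), a + b)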
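
For torchlm, a usage sketch reconstructed from memory of its README (a face detector and a landmark model bound into one runtime; exact parameter names may differ in the current release):

    import cv2
    import torchlm
    from torchlm.tools import faceboxesv2
    from torchlm.models import pipnet

    # Bind a face detector and a 98-point landmark model, then run end to end.
    torchlm.runtime.bind(faceboxesv2())
    torchlm.runtime.bind(pipnet(backbone="resnet18", pretrained=True,
                                num_nb=10, num_lms=98, net_stride=32,
                                input_size=256, meanface_type="wflw"))

    image = cv2.imread("face.jpg")                      # any BGR image with a face
    landmarks, bboxes = torchlm.runtime.forward(image)  # detection + landmarks
    image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
    image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)
    cv2.imwrite("face_landmarks.jpg", image)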
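
For ffpa-attn, I won't guess its Python API here; the sketch below is only the PyTorch SDPA baseline that its 1.8x~3x numbers are measured against, at the large-headdim shapes (D > 256) that FlashAttention-2 kernels typically do not cover:

    import torch
    import torch.nn.functional as F

    # Illustrative shapes: headdim 512 is beyond FlashAttention-2's usual
    # 256 limit, which is exactly the regime FFPA's Split-D targets.
    B, H, N, D = 1, 8, 4096, 512
    q = torch.randn(B, H, N, D, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    out = F.scaled_dot_product_attention(q, k, v)  # the SDPA baseline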

Repositories

Showing 10 of 30 repositories
  • flux-faster Public

    A fork of flux-fast that makes it even faster with cache-dit; 3.3x speedup on an NVIDIA L20 (see the sketch at the end of this list).

    Python · 11 stars · 0 forks · 0 issues · 0 pull requests · Updated Jul 15, 2025
  • flux-fast Public Forked from huggingface/flux-fast

    A fork of flux-fast that makes it even faster with cache-dit.

    Python · 5 stars · 8 forks · 0 issues · 0 pull requests · Updated Jul 15, 2025
  • cache-dit Public Forked from vipshop/cache-dit

    🤗CacheDiT: A Training-free and Easy-to-use Cache Acceleration Toolbox for Diffusion Transformers🔥 (see the toy caching sketch at the end of this list)

    Python · 4 stars · 4 forks · 0 issues · 0 pull requests · Updated Jul 15, 2025
  • SpargeAttn Public Forked from thu-ml/SpargeAttn

    SpargeAttention: training-free sparse attention that can accelerate inference for any model.

    Cuda · 6 stars · 48 forks · 0 issues · 0 pull requests · Apache-2.0 · Updated Jul 14, 2025
  • SageAttention Public Forked from thu-ml/SageAttention

    Quantized attention that achieves 2.1-3.1x and 2.7-5.1x speedups over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models (see the drop-in sketch at the end of this list).

    Cuda · 0 stars · 155 forks · 0 issues · 0 pull requests · Apache-2.0 · Updated Jul 14, 2025
  • nunchaku Public Forked from mit-han-lab/nunchaku

    [ICLR2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models

    Python · 2 stars · 123 forks · 0 issues · 0 pull requests · Apache-2.0 · Updated Jul 14, 2025
  • Awesome-DiT-Inference Public

    📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉

    Python · 323 stars · 17 forks · 0 issues · 0 pull requests · GPL-3.0 · Updated Jul 14, 2025
  • Awesome-LLM-Inference Public

    📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉

    Python · 4,235 stars · 293 forks · 0 issues · 0 pull requests · GPL-3.0 · Updated Jul 14, 2025
  • lite.ai.toolkit Public

    🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion, Face-Fusion, etc.🎉

    C++ · 4,163 stars · 749 forks · 1 issue · 0 pull requests · GPL-3.0 · Updated Jul 14, 2025
  • LeetCUDA Public

    📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉

    Cuda · 5,512 stars · 582 forks · 6 issues · 0 pull requests · GPL-3.0 · Updated Jul 14, 2025
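
Usage sketches for a few of the repositories above. For flux-fast / flux-faster, a hypothetical baseline showing the kind of pipeline they optimize: diffusers' FluxPipeline plus torch.compile. The repos' actual recipes layer on more (quantization, and cache-dit in flux-faster):

    import torch
    from diffusers import FluxPipeline

    # Plain Flux pipeline; flux-fast stacks torch.compile, quantization, etc.
    # on top of this, and flux-faster additionally applies cache-dit.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to("cuda")

    # One generic trick from that family: compile the DiT backbone.
    pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune")

    image = pipe("a cat holding a sign that says hello world",
                 guidance_scale=0.0, num_inference_steps=4,
                 max_sequence_length=256).images[0]
    image.save("flux.png")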
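
For cache-dit, rather than guessing its exact API, here is a toy sketch of the underlying training-free idea: reuse a transformer block's output across nearby diffusion steps instead of recomputing it at every step. The CachedBlock wrapper is hypothetical; the real toolbox uses smarter refresh policies:

    import torch

    class CachedBlock(torch.nn.Module):
        """Recompute the wrapped block only every `refresh` steps; reuse otherwise."""
        def __init__(self, block: torch.nn.Module, refresh: int = 2):
            super().__init__()
            self.block = block
            self.refresh = refresh
            self.step = 0
            self.cache = None

        def forward(self, x):
            if self.cache is None or self.step % self.refresh == 0:
                self.cache = self.block(x)   # full compute on refresh steps
            self.step += 1
            return self.cache                # stale-but-close output in between

    blk = CachedBlock(torch.nn.Linear(64, 64))
    outs = [blk(torch.randn(1, 64)) for _ in range(4)]  # steps 1 and 3 hit the cache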
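
For SageAttention, the upstream README (as I recall it; check the repo for the current signature) exposes a drop-in replacement for PyTorch's scaled_dot_product_attention:

    import torch
    from sageattention import sageattn

    # (batch, heads, seq, headdim) layout, i.e. tensor_layout="HND".
    q = torch.randn(1, 8, 2048, 128, device="cuda", dtype=torch.float16)
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    # Quantized attention in place of F.scaled_dot_product_attention.
    out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)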