🏠 Working from home
Pinned
Tiny-LLaMa-swichback-int8-acceleration · Public · Forked from LeftGoga/NLP_project
Accelerate Tiny Llama training using quantized int8 operations via the bitsandbytes library. Includes setup tooling, fine-tuning examples on datasets, and performance evaluation metrics. Focused…
Jupyter Notebook
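The repository's description centers on int8-quantized operations for faster, lighter training. As background, a minimal sketch of absmax int8 quantization, the per-block scheme that bitsandbytes builds on, is shown below; this is illustrative code written for this note, not code taken from the repository:

```python
import numpy as np

def quantize_int8(w):
    """Absmax quantization: scale weights so the largest magnitude maps to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 codes and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.dtype, np.max(np.abs(w - w_hat)))  # int8 codes, small reconstruction error
```

Storing weights as int8 plus one float scale per block cuts memory roughly 4x versus float32, which is what makes fine-tuning a model like Tiny Llama feasible on modest hardware.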