   name: "gemma-3-12b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-12b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-12b-it-qat-GGUF
   description: |
     This model corresponds to the 12B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

...
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-12b-it-q4_0.gguf
+      model: google_gemma-3-12b-it-qat-Q4_0.gguf
     files:
-      - filename: gemma-3-12b-it-q4_0.gguf
-        sha256: 6f1bb5f455414f7b46482bda51cbfdbf19786e21a5498c4403fdfc03d09b045c
-        uri: huggingface://vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf/gemma-3-12b-it-q4_0.gguf
+      - filename: google_gemma-3-12b-it-qat-Q4_0.gguf
+        sha256: 2ad4c9ce431a2d5b80af37983828c2cfb8f4909792ca5075e0370e3a71ca013d
+        uri: huggingface://bartowski/google_gemma-3-12b-it-qat-GGUF/google_gemma-3-12b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "gemma-3-4b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-4b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-4b-it-qat-GGUF
   description: |
     This model corresponds to the 4B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

...
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-4b-it-q4_0.gguf
+      model: google_gemma-3-4b-it-qat-Q4_0.gguf
     files:
-      - filename: gemma-3-4b-it-q4_0.gguf
-        sha256: 2ca493d426ffcb43db27132f183a0230eda4a3621e58b328d55b665f1937a317
-        uri: huggingface://vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf/gemma-3-4b-it-q4_0.gguf
+      - filename: google_gemma-3-4b-it-qat-Q4_0.gguf
+        sha256: 0231e2cba887f4c7834c39b34251e26b2eebbb71dfac0f7e6e2b2c2531c1a583
+        uri: huggingface://bartowski/google_gemma-3-4b-it-qat-GGUF/google_gemma-3-4b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "gemma-3-27b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-27b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-27b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-27b-it-qat-GGUF
   description: |
     This model corresponds to the 27B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

...
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-27b-it-q4_0.gguf
+      model: google_gemma-3-27b-it-qat-Q4_0.gguf
     files:
       - filename: gemma-3-27b-it-q4_0.gguf
-        sha256: 45e586879bc5f5d7a5b6527e812952057ce916d9fc7ba16f7262ec9972c9e2a2
-        uri: huggingface://vinimuchulski/gemma-3-27b-it-qat-q4_0-gguf/gemma-3-27b-it-q4_0.gguf
+        sha256: 4f1e32db877a9339df2d6529c1635570425cbe81f0aa3f7dd5d1452f2e632b42
+        uri: huggingface://bartowski/google_gemma-3-27b-it-qat-GGUF/google_gemma-3-27b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "qgallouedec_gemma-3-27b-it-codeforces-sft"
   urls:
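Each `files` entry in the diff pairs a download `uri` with a `sha256` digest, so a downloaded GGUF can be checked against the gallery before use. A minimal verification sketch in Python, assuming the file has already been downloaded locally (the constant below is the 12B Q4_0 digest from the entry above; the file path in the usage comment is hypothetical):

```python
import hashlib

# Digest copied from the gemma-3-12b-it-qat gallery entry above (Q4_0 file).
EXPECTED_SHA256 = "2ad4c9ce431a2d5b80af37983828c2cfb8f4909792ca5075e0370e3a71ca013d"


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks so multi-GB GGUF
    files are hashed with constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """True if the local file matches the checksum listed in the gallery."""
    return sha256_of(path) == expected


# Hypothetical usage after downloading the quantized model:
# verify("google_gemma-3-12b-it-qat-Q4_0.gguf")
```

The same check applies to the 4B and 27B entries by swapping in their respective digests.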