Commit 4bc4b1e (parent: e495b89)

chore(model gallery) update gemma3 qat models

Signed-off-by: Ettore Di Giacinto <[email protected]>

1 file changed, +14 -14 lines changed
gallery/index.yaml (+14, -14)

@@ -82,7 +82,7 @@
   name: "gemma-3-12b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-12b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-12b-it-qat-GGUF
   description: |
     This model corresponds to the 12B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

@@ -91,16 +91,16 @@
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-12b-it-q4_0.gguf
+      model: google_gemma-3-12b-it-qat-Q4_0.gguf
   files:
-    - filename: gemma-3-12b-it-q4_0.gguf
-      sha256: 6f1bb5f455414f7b46482bda51cbfdbf19786e21a5498c4403fdfc03d09b045c
-      uri: huggingface://vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf/gemma-3-12b-it-q4_0.gguf
+    - filename: google_gemma-3-12b-it-qat-Q4_0.gguf
+      sha256: 2ad4c9ce431a2d5b80af37983828c2cfb8f4909792ca5075e0370e3a71ca013d
+      uri: huggingface://bartowski/google_gemma-3-12b-it-qat-GGUF/google_gemma-3-12b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "gemma-3-4b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-4b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-4b-it-qat-GGUF
   description: |
     This model corresponds to the 4B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

@@ -109,16 +109,16 @@
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-4b-it-q4_0.gguf
+      model: google_gemma-3-4b-it-qat-Q4_0.gguf
   files:
-    - filename: gemma-3-4b-it-q4_0.gguf
-      sha256: 2ca493d426ffcb43db27132f183a0230eda4a3621e58b328d55b665f1937a317
-      uri: huggingface://vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf/gemma-3-4b-it-q4_0.gguf
+    - filename: google_gemma-3-4b-it-qat-Q4_0.gguf
+      sha256: 0231e2cba887f4c7834c39b34251e26b2eebbb71dfac0f7e6e2b2c2531c1a583
+      uri: huggingface://bartowski/google_gemma-3-4b-it-qat-GGUF/google_gemma-3-4b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "gemma-3-27b-it-qat"
   urls:
     - https://huggingface.co/google/gemma-3-27b-it
-    - https://huggingface.co/vinimuchulski/gemma-3-27b-it-qat-q4_0-gguf
+    - https://huggingface.co/bartowski/google_gemma-3-27b-it-qat-GGUF
   description: |
     This model corresponds to the 27B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.

@@ -127,11 +127,11 @@
     You can find the half-precision version here.
   overrides:
     parameters:
-      model: gemma-3-27b-it-q4_0.gguf
+      model: google_gemma-3-27b-it-qat-Q4_0.gguf
   files:
     - filename: gemma-3-27b-it-q4_0.gguf
-      sha256: 45e586879bc5f5d7a5b6527e812952057ce916d9fc7ba16f7262ec9972c9e2a2
-      uri: huggingface://vinimuchulski/gemma-3-27b-it-qat-q4_0-gguf/gemma-3-27b-it-q4_0.gguf
+      sha256: 4f1e32db877a9339df2d6529c1635570425cbe81f0aa3f7dd5d1452f2e632b42
+      uri: huggingface://bartowski/google_gemma-3-27b-it-qat-GGUF/google_gemma-3-27b-it-qat-Q4_0.gguf
 - !!merge <<: *gemma3
   name: "qgallouedec_gemma-3-27b-it-codeforces-sft"
   urls:

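For reference, the updated 12B entry reads roughly as follows once the change above is applied. The YAML indentation, the enclosing "- !!merge <<: *gemma3" list item, and the placement of "files:" at the entry level are inferred from the diff context, and the description is abbreviated to the lines visible in the diff, so treat this as a sketch rather than a verbatim copy of gallery/index.yaml:

# Sketch reconstructed from the diff above; indentation and entry layout are assumed.
- !!merge <<: *gemma3
  name: "gemma-3-12b-it-qat"
  urls:
    - https://huggingface.co/google/gemma-3-12b-it
    - https://huggingface.co/bartowski/google_gemma-3-12b-it-qat-GGUF
  description: |
    This model corresponds to the 12B instruction-tuned version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT). The GGUF corresponds to Q4_0 quantization.
  overrides:
    parameters:
      model: google_gemma-3-12b-it-qat-Q4_0.gguf
  files:
    - filename: google_gemma-3-12b-it-qat-Q4_0.gguf
      sha256: 2ad4c9ce431a2d5b80af37983828c2cfb8f4909792ca5075e0370e3a71ca013d
      uri: huggingface://bartowski/google_gemma-3-12b-it-qat-GGUF/google_gemma-3-12b-it-qat-Q4_0.gguf

The 4B and 27B entries follow the same shape, each pointing at its own bartowski repository and sha256 checksum.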