Commit 204cd48

docs: fix qat description in README.md (#3212)
1 parent f3fc5e7 commit 204cd48

File tree

1 file changed: +1 −1 lines changed

torchao/quantization/qat/README.md

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 
 Quantization-Aware Training (QAT) refers to applying fake quantization during the
 training or fine-tuning process, such that the final quantized model will exhibit
-higher accuracies and perplexities. Fake quantization refers to rounding the float
+higher accuracies and lower perplexities. Fake quantization refers to rounding the float
 values to quantized values without actually casting them to dtypes with lower
 bit-widths, in contrast to post-training quantization (PTQ), which does cast the
 quantized values to lower bit-width dtypes, e.g.:
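The distinction the README paragraph draws (fake quantization rounds values but keeps the float dtype, while PTQ actually casts to a lower bit-width dtype) can be sketched as follows. This is a minimal illustration, not torchao's API: the helper names `fake_quantize` and `ptq_quantize` are hypothetical, and it assumes simple symmetric int8 quantization with a fixed scale.

```python
import torch

def fake_quantize(x: torch.Tensor, scale: float,
                  quant_min: int = -128, quant_max: int = 127) -> torch.Tensor:
    # QAT-style fake quantization: snap values onto the int8 grid,
    # but return a float tensor -- the dtype is never lowered.
    q = torch.clamp(torch.round(x / scale), quant_min, quant_max)
    return q * scale  # still torch.float32

def ptq_quantize(x: torch.Tensor, scale: float,
                 quant_min: int = -128, quant_max: int = 127) -> torch.Tensor:
    # PTQ-style quantization: same rounding, but the result is
    # actually cast to the lower bit-width dtype.
    q = torch.clamp(torch.round(x / scale), quant_min, quant_max)
    return q.to(torch.int8)

x = torch.tensor([0.034, -0.512, 1.240])
print(fake_quantize(x, scale=0.01).dtype)  # torch.float32
print(ptq_quantize(x, scale=0.01).dtype)   # torch.int8
```

Because fake quantization stays in floating point, the model can keep training through it (typically with a straight-through estimator for the non-differentiable round), which is what lets QAT recover accuracy that PTQ loses.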
