-
I want to use QAT for my model, but I can only find a PTQ quantizer in ExecuTorch. Are there any examples of how to implement Quantization Aware Training (QAT) for the QNN backend?
-
QAT goes through the same code path as PTQ. If you don't need any special handling for conv-bn, you can start by adding a QAT configuration to the quantizer, like: https://github.com/pytorch/pytorch/blob/a0429c01ad665ffb2faa04a411913ecee9962566/torch/ao/quantization/quantizer/xnnpack_quantizer.py#L116
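A minimal sketch of the PT2E QAT flow, assuming a recent PyTorch/ExecuTorch setup. It uses the `XNNPACKQuantizer` with the linked `get_symmetric_quantization_config(is_qat=True)` config since that is the example referenced above; for the QNN backend you would swap in the Qualcomm quantizer with an equivalent QAT config. `TinyConvNet` and the training-loop placeholder are illustrative only, not part of any ExecuTorch API.

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_qat_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


class TinyConvNet(torch.nn.Module):
    """Toy conv-bn-relu model, used only to illustrate the flow."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.bn = torch.nn.BatchNorm2d(8)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))


model = TinyConvNet().train()
example_inputs = (torch.randn(1, 3, 32, 32),)

# Capture a pre-autograd ATen graph (newer PyTorch versions use
# torch.export.export_for_training(...).module() instead).
exported = capture_pre_autograd_graph(model, example_inputs)

# Same quantizer API as PTQ, but with a QAT (fake-quant) config.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config(is_qat=True))

# Insert fake-quant modules; conv-bn pairs get QAT-specific handling here.
prepared = prepare_qat_pt2e(exported, quantizer)

# ... run your usual training loop on `prepared` to fine-tune with fake quant ...

# Convert to a quantized graph, then lower to ExecuTorch exactly as in the PTQ flow.
quantized = convert_pt2e(prepared)
```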
-
@cccclai can you tag the Qualcomm POC to see if they want to support QAT?
-
Yeah, QAT is supported. cc: @shewu-quic @chunit-quic @haowhsu-quic @winskuo-quic, let's add some docs.