Fix: Attach _linear_extra_repr to quantized linear layers #1546
This PR resolves the `AttributeError: 'Linear' object has no attribute '_linear_extra_repr'` raised during quantization. All changes are in `torchchat/utils/quantize.py`:

- Added the imports `import types` and `from torchao.quantization.quant_api import _linear_extra_repr`, which are needed to dynamically bind `_linear_extra_repr` as a method.
- Added a recursive helper, `_attach_extra_repr`, that traverses all modules in the model and, for each `nn.Linear` missing the `extra_repr` attribute, attaches `_linear_extra_repr` using `types.MethodType`.
- Invoked the helper at the beginning of `quantize_model`, so every `nn.Linear` exposes `extra_repr` before quantization begins. A sketch of the change follows below.
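For reviewers, here is a minimal sketch of the shape of the change. It assumes the helper checks for an instance-level `extra_repr` in `__dict__` and that `quantize_model` takes the model as its first argument; the actual code in `torchchat/utils/quantize.py` may differ in details.

```python
import types

import torch.nn as nn
from torchao.quantization.quant_api import _linear_extra_repr


def _attach_extra_repr(module: nn.Module) -> None:
    """Recursively bind _linear_extra_repr to nn.Linear submodules.

    Sketch of the helper described above: walks the module tree and, for
    any nn.Linear without an instance-level extra_repr, binds torchao's
    _linear_extra_repr to the instance so repr(model) works after
    quantization replaces the weight.
    """
    for child in module.children():
        if isinstance(child, nn.Linear) and "extra_repr" not in child.__dict__:
            # Instance attributes shadow the class method, so this call
            # takes effect the next time the module is printed.
            child.extra_repr = types.MethodType(_linear_extra_repr, child)
        _attach_extra_repr(child)


def quantize_model(model: nn.Module, *args, **kwargs):
    # Ensure every nn.Linear has the method before quantization begins.
    _attach_extra_repr(model)
    ...  # existing quantization logic, unchanged
```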