Update builder.py
kunal-vaishnavi authored Feb 20, 2025
1 parent 6892779 commit 48152da
Showing 1 changed file with 2 additions and 2 deletions.
src/python/py/models/builder.py
@@ -3324,7 +3324,7 @@ def get_args():
 3 is bf16.
 2 is fp16.
 1 is fp32.
-Default is 4 for CPU and 0 for non-CPU.
+Default is 4 for the CPU EP and 0 for non-CPU EPs.
 int4_block_size = 16/32/64/128/256: Specify the block_size for int4 quantization.
 int4_is_symmetric = Quantize the weights symmetrically. Default is true.
 If true, quantization is done to int4. If false, quantization is done to uint4.
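The int4_block_size and int4_is_symmetric options in the help text above can be illustrated with a standalone sketch of block-wise 4-bit quantization. This is not the builder's actual implementation (which lives in ONNX Runtime's quantization code); it only shows the semantics: symmetric quantization maps each block to signed int4 ([-8, 7]) with a fixed zero point, while asymmetric quantization maps to uint4 ([0, 15]) with a per-block zero point.

```python
import numpy as np

def quantize_int4_blockwise(weights, block_size=32, symmetric=True):
    """Sketch of block-wise 4-bit quantization (not ONNX Runtime's code).

    symmetric=True  -> signed int4 range [-8, 7], zero point fixed at 0
    symmetric=False -> unsigned uint4 range [0, 15], per-block zero point
    """
    w = weights.reshape(-1, block_size)
    if symmetric:
        # Scale maps the largest magnitude in each block onto the int4 range.
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
        scale = np.maximum(scale, 1e-12)  # guard against all-zero blocks
        q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
        zero = np.zeros_like(scale)
    else:
        # Scale and zero point map each block's [min, max] onto [0, 15].
        lo = w.min(axis=1, keepdims=True)
        hi = w.max(axis=1, keepdims=True)
        scale = np.maximum((hi - lo) / 15.0, 1e-12)
        zero = np.round(-lo / scale)
        q = np.clip(np.round(w / scale) + zero, 0, 15).astype(np.uint8)
    return q, scale, zero

def dequantize(q, scale, zero):
    # Invert the quantization: shift by the zero point, rescale to float.
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s, z = quantize_int4_blockwise(w, block_size=32, symmetric=True)
w_hat = dequantize(q, s, z).reshape(-1)
print(np.abs(w - w_hat).max())  # per-element error is bounded by scale / 2
```

Smaller block sizes give each block its own scale and so track local weight ranges more tightly, at the cost of storing more scales; that is the trade-off behind the 16/32/64/128/256 choices.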
@@ -3355,7 +3355,7 @@ def get_args():
 If enabled, all nodes being placed on the CUDA EP is the prerequisite for the CUDA graph to be used correctly.
 It is not guaranteed that CUDA graph be enabled as it depends on the model and the graph structure.
 use_8bits_moe = Use 8-bit quantization for MoE layers. Default is false.
-If true, the QMoE op will use 4-bit quantization. If false, the QMoE op will use 8-bits quantization.
+If true, the QMoE op will use 8-bit quantization. If false, the QMoE op will use 4-bit quantization.
 use_qdq = Use the QDQ decomposition for ops.
 Use this option when you want to use quantize-dequantize ops. For example, you will have a quantized MatMul op instead of the MatMulNBits op.
 adapter_path = Path to folder on disk containing the adapter files (adapter_config.json and adapter model weights).
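The use_8bits_moe trade-off described above (8-bit vs. 4-bit weights for the QMoE op) comes down to reconstruction error versus storage. A minimal sketch, again not ONNX Runtime's QMoE implementation, comparing per-tensor symmetric quantization at the two bit widths:

```python
import numpy as np

def quantize_symmetric(w, bits):
    # Symmetric quantization to a signed range, e.g. [-8, 7] for 4 bits.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized reconstruction

rng = np.random.default_rng(1)
w = rng.standard_normal(4096).astype(np.float32)

err4 = np.abs(w - quantize_symmetric(w, bits=4)).max()
err8 = np.abs(w - quantize_symmetric(w, bits=8)).max()
print(err4, err8)  # 8-bit error is much smaller, at twice the storage
```

The same principle holds block-wise: each extra bit roughly halves the quantization step, which is why 8-bit MoE weights preserve accuracy better while doubling the size of the quantized expert weights.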
