Commit 3b86b9f

meenchen authored and kevalmorabia97 committed
Fix AWQ export when quantization of some layers is disabled (#721)
## What does this PR do?

**Type of change:** Bug fix

**Overview:** Fix AWQ export when quantization of some layers is disabled.

## Before your PR is "*Ready for review*"

- **Make sure you read and follow [Contributor guidelines](https://github.com/NVIDIA/Model-Optimizer/blob/main/CONTRIBUTING.md)** and your commits are signed.
- **Is this change backward compatible?**: Yes/No
- **Did you write any new necessary tests?**: Yes/No
- **Did you add or update any necessary documentation?**: Yes/No
- **Did you update [Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?**: Yes/No

Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>
1 parent 8bc5699 · commit 3b86b9f

File tree

1 file changed: +3 −1 lines changed

1 file changed

+3
-1
lines changed

modelopt/torch/export/unified_export_hf.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -117,7 +117,9 @@ def _output_hook(module, input, output):
         module_names.add(name)
 
     # For MoE models update pre_quant_scale to average pre_quant_scale amongst experts
-    if is_moe(module) and ("awq" in quantization_format):
+    if is_moe(module) and (
+        quantization_format is not QUANTIZATION_NONE and "awq" in quantization_format
+    ):
         # update_experts_avg_prequant_scale(module)
         grouped_experts = get_experts_list(module, model_type)
         for modules in grouped_experts:
```
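A minimal standalone sketch of why the extra guard matters. It assumes `QUANTIZATION_NONE` is a `None`-like sentinel returned when a layer's quantization is disabled (the helper name `needs_expert_scale_averaging` is hypothetical, not part of the modelopt API): without the guard, the bare membership test `"awq" in quantization_format` raises a `TypeError` on that sentinel.

```python
# Hypothetical sentinel for "quantization disabled on this layer";
# assumed here to be None, matching the `is not` check in the fix.
QUANTIZATION_NONE = None


def needs_expert_scale_averaging(is_moe_module: bool, quantization_format) -> bool:
    """Return True only for MoE modules whose format is an AWQ variant.

    The `is not QUANTIZATION_NONE` guard short-circuits before the
    membership test, so a disabled layer never triggers a TypeError.
    """
    return is_moe_module and (
        quantization_format is not QUANTIZATION_NONE
        and "awq" in quantization_format
    )


print(needs_expert_scale_averaging(True, "int4_awq"))         # True
print(needs_expert_scale_averaging(True, QUANTIZATION_NONE))  # False, no crash
```

Short-circuiting `and` is what makes this safe: once the sentinel check fails, Python never evaluates the `in` expression against `None`.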
