[TorchFX] Documentation update #2917
base: develop
Conversation
Force-pushed from 339d95f to ba2afd4
README.md (Outdated)
# Using capture_pre_autograd_graph to export torch.fx.GraphModule
# the same way it is done for PyTorch 2 Export Post Training Quantization:
# https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html
fx_model = capture_pre_autograd_graph(model.eval(), args=(torch.ones(input_shape),))
Torch 2.5.0 introduced the export_for_training method, which is used in the quantization pipeline: https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html. I expect NNCF to support torch 2.5.0 in the upcoming release, so I would suggest merging this PR only after the status of torch 2.5.0 support in NNCF is clarified.
@anzr299, @MaximProshin
Done, please check
Force-pushed from ba2afd4 to e691e61
Force-pushed from e691e61 to 9d0a87f
Changes
Reason for changes
Related tickets
#2766