Conversation

@y-yang42 commented Sep 3, 2025

Description

When exporting to ONNX, the current code prints

PyTorch inference output shapes - Boxes: torch.Size([1, 3900, 4]), Labels: torch.Size([1, 3900, 2])

instead of

PyTorch inference output shapes - Boxes: torch.Size([1, 300, 4]), Labels: torch.Size([1, 300, 2])

because eval() is called on self.model instead of on model.
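
As a minimal illustration (plain PyTorch, not rfdetr's actual export code; the variable names are hypothetical, and it assumes the exported model is a separate copy of the persistent one), calling eval() on one reference does not change the mode of a copy made earlier, so the object that is actually exported has to be put into eval mode itself:

import copy
import torch.nn as nn

# Hypothetical stand-in for the network; Dropout makes train/eval behavior observable.
persistent = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
exported = copy.deepcopy(persistent)   # the object that actually gets run/exported

persistent.eval()          # analogous to the buggy self.model.eval()
print(exported.training)   # True  -- the exported copy is still in train mode

exported.eval()            # analogous to the fix: model.eval()
print(exported.training)   # False -- export-time inference now runs in eval mode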

Type of change

  • Bug fix (non-breaking change which fixes an issue)

How has this change been tested? Please provide a test case or example of how you tested the change.

from rfdetr import RFDETRBase

model = RFDETRBase()
model.export()
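# With the fix, the printed shapes are Boxes: torch.Size([1, 300, 4]), Labels: torch.Size([1, 300, 2])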

Any specific deployment considerations

For example, documentation changes, usability, usage/costs, secrets, etc.

Docs

  • Docs updated? What were the changes:

@CLAassistant commented Sep 3, 2025

CLA assistant check
All committers have signed the CLA.

@isaacrob-roboflow (Collaborator)

It's just the print statement that's wrong, correct? The ONNX graph actually does have 300 outputs for you?

@y-yang42 (Author) commented Sep 8, 2025

It's just the print statement that's wrong, correct? The ONNX graph actually does have 300 outputs for you?

Yes, only the print statement is wrong; the ONNX graph does have 300 outputs.

When running two exports consecutively:

from rfdetr import RFDETRBase

model = RFDETRBase()
model.export()
model.export()

The first export prints 3900 and the second prints 300; the model is being switched to eval mode somewhere during the conversion process.
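
A hypothetical, self-contained sketch of how that carry-over can happen, assuming the export path takes a copy of a persistent module and the eval() call lands on the persistent attribute instead of on the copy (the class and attribute names below are illustrative, not rfdetr's actual internals):

import copy
import torch.nn as nn

class Exporter:
    # Illustrative stand-in for a wrapper holding a persistent model.
    def __init__(self):
        self.model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

    def export(self):
        export_copy = copy.deepcopy(self.model)  # copy taken before any eval() call
        self.model.eval()                        # bug: eval() on the persistent attribute
        return export_copy.training              # mode of the copy that would be exported

w = Exporter()
print(w.export())  # True  -> first copy is still in train mode (analogous to the 3900 print)
print(w.export())  # False -> second copy inherits eval mode (analogous to the 300 print)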
