
Questions on quantitative indicators APLS and TOPO #36

Open
Sysilia opened this issue Oct 20, 2024 · 2 comments

Comments

Sysilia commented Oct 20, 2024

I ran the project on my own 512x512 dataset and got quite good visualization results. But when I quantified the results using SpaceNet's evaluation metrics, I got TOPO results where a large portion of the precision values were 1.0 and many others were close to 1, which seems unreasonable. I don't know what's wrong. Do any parameters need to be adjusted during evaluation?


TCXX commented Jan 9, 2025

Thank you for trying SAM-Road!

Yes, I think you need to adjust the parameters for your 512x512 dataset:

Buffer Size: The TOPO metric uses a buffer around roads to determine matches. For 512x512 images, try reducing the buffer size to match your image scale; start with a buffer size of ~2-3x your road width in pixels.

Matching Parameters: Also adjust max_path_length, min_length_px, and matching_threshold to your resolution; a rough sketch of how these might scale is below.
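
As a starting point, here is a minimal sketch of how these parameters might scale with image resolution. The suggest_topo_params helper and its default ratios are illustrative assumptions, not part of SAM-Road or the SpaceNet tooling; only the parameter names come from this thread.

```python
# Illustrative sketch only: suggest_topo_params and all default ratios are
# assumptions for discussion, not SAM-Road's or SpaceNet's actual API.

def suggest_topo_params(road_width_px: float, meters_per_px: float) -> dict:
    """Derive TOPO evaluation parameters from the image scale.

    road_width_px: typical road width in your imagery, in pixels.
    meters_per_px: ground sampling distance of your 512x512 tiles.
    """
    # Rule of thumb from above: buffer ~2-3x the road width in pixels.
    buffer_px = 2.5 * road_width_px
    return {
        "buffer_size_meters": buffer_px * meters_per_px,
        # The values below are placeholders that need tuning per dataset.
        "max_path_length": 512 * meters_per_px,  # cap paths at ~one tile width
        "min_length_px": road_width_px,          # drop segments shorter than a road width
        "matching_threshold": buffer_px,         # match within the buffer radius
    }

# Example: 6 px wide roads at 0.5 m/px ground resolution.
print(suggest_topo_params(road_width_px=6, meters_per_px=0.5))
```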

It would be helpful if you could provide:

  • The current buffer_size_meters you're using
  • Your road width in pixels
  • A sample of your evaluation configuration

htcr (Owner) commented Jan 17, 2025

> I ran the project on my own 512x512 dataset and got quite good visualization results. But when I quantified the results using SpaceNet's evaluation metrics, I got TOPO results where a large portion of the precision values were 1.0 and many others were close to 1, which seems unreasonable. I don't know what's wrong. Do any parameters need to be adjusted during evaluation?

TOPO essentially works by sampling subgraph patches from the predictions and the ground truth and comparing them. If the prediction is accurate, you might indeed get a bunch of 1.0 (full) scores. Were you looking at the intermediate per-patch outputs or the aggregated final score?
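
To make the intermediate-vs-aggregated distinction concrete, here is a toy illustration (not SAM-Road's evaluation code, and assuming a micro-averaged final score for the sake of the example) of how many per-patch precisions of 1.0 can coexist with a lower aggregate:

```python
# Toy numbers: (matched_edges, proposed_edges) per sampled subgraph patch.
# These counts are made up for illustration.
patches = [(10, 10), (8, 8), (3, 9)]

# Intermediate outputs: per-patch precision. Accurate patches score 1.0.
per_patch = [m / p for m, p in patches]

# Aggregated score (micro-average over all patches).
aggregated = sum(m for m, _ in patches) / sum(p for _, p in patches)

print(per_patch)   # [1.0, 1.0, 0.333...] -- many 1.0s are expected
print(aggregated)  # ~0.778 -- the final score tells a different story
```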
