Feature/pyright benchmarking #942
base: main
Conversation
# Conflicts:
#	benchmark/mnist_benchmark.py
@@ -61,9 +61,9 @@ def benchmark_coreset_algorithms(
     reshaped_data = raw_data.reshape(raw_data.shape[0], -1)

     umap_model = umap.UMAP(densmap=True, n_components=25)
-    umap_data = umap_model.fit_transform(reshaped_data)
+    umap_data = jnp.asarray(umap_model.fit_transform(reshaped_data))
Remove corresponding jnp.asarray in line 75.
Needs to be data = Data(umap_data).
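A minimal sketch of how both review points above could land together, assuming Data is coreax.data.Data and that line 75 is where the array is currently re-wrapped; the stand-in data below is not the benchmark's actual input:

```python
import jax.numpy as jnp
import numpy as np
import umap

from coreax.data import Data  # assumed import path for the Data container

# Stand-in for the flattened MNIST images used in the benchmark script.
raw_data = np.random.rand(1_000, 28, 28)
reshaped_data = raw_data.reshape(raw_data.shape[0], -1)

umap_model = umap.UMAP(densmap=True, n_components=25)
# Convert to a JAX array once, at the point the embedding is produced...
umap_data = jnp.asarray(umap_model.fit_transform(reshaped_data))
# ...so the later construction (line 75) no longer needs its own jnp.asarray:
data = Data(umap_data)
```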
@@ -24,13 +24,11 @@ def plot_benchmarking_results(data):
     """
     Visualise the benchmarking results.

-    :param data: A dictionary where the first key is the original sample size
-        and the rest of the keys are the coreset sizes (as strings) and values
+    :param data: A dictionary where keys are the coreset sizes (as strings) and values
Need to make the same updates (including to the example) in print_metrics_table.
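For illustration, a hedged sketch of the dictionary shape the updated docstring describes; the metric names and values are placeholders, not taken from this PR:

```python
# Keys are coreset sizes (as strings); values hold the per-metric results.
# Both plot_benchmarking_results and print_metrics_table consume this shape,
# so their docstrings and docstring examples need the same update.
results = {
    "100": {"unweighted_mmd": 0.021, "time_taken": 1.3},
    "1000": {"unweighted_mmd": 0.008, "time_taken": 9.7},
}
```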
@@ -66,8 +61,7 @@ def plot_benchmarking_results(data):
     for i, metric in enumerate(metrics):
         ax = axs[i]
         ax.set_title(
-            f"{metric.replace('_', ' ').title()} vs "
-            f"Coreset Size (n_samples = {n_samples})",
+            f"{metric.replace('_', ' ').title()} vs Coreset Size (n_samples = {1_000})",
Don't hardcode values. If it would be useful to display n_samples, either include it in the results dictionary as metadata or import a constant (probably not such a good idea).
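A hedged sketch of the metadata option: store n_samples in the results dictionary when it is written, then read it back in the plotting code instead of hardcoding 1_000. The key name is an assumption:

```python
# When the benchmark writes its results, include the original sample size as metadata.
results = {
    "n_samples": 1_000,  # metadata entry, assumed key name
    "100": {"unweighted_mmd": 0.021},
    "1000": {"unweighted_mmd": 0.008},
}

# In plot_benchmarking_results, pop the metadata so only coreset-size keys remain,
# then build the title from it rather than from a literal.
n_samples = results.pop("n_samples", None)
title_suffix = f" (n_samples = {n_samples})" if n_samples is not None else ""
```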
General musings not requiring any action
I'm still uneasy about having different JSON formats for each benchmarking script.
The format of the JSON could change in future. A useful piece of metadata to add might be a version specifier. We do this for the performance data, although we place it in the file name rather than inside the file. I don't think it's particularly important here as both scripts will be run manually. So long as the code is in sync between the scripts, all is fine.
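A minimal sketch of placing a version specifier inside the file rather than in the file name; the field name, value, and file name here are assumptions, not part of the PR:

```python
import json

results = {
    "format_version": 1,  # assumed field name; bump when the JSON layout changes
    "n_samples": 1_000,
    "100": {"unweighted_mmd": 0.021},
}

# Assumed output path, for illustration only.
with open("benchmark_results.json", "w") as f:
    json.dump(results, f, indent=2)

# A reader could then check compatibility before parsing the rest of the file.
with open("benchmark_results.json") as f:
    loaded = json.load(f)
assert loaded.get("format_version") == 1
```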
#912
PR Type
Description
Static type check fixes
How Has This Been Tested?
Existing tests pass as expected.
Pyright passes on the benchmark directory with 0 errors, 0 warnings, and 0 informations.
Does this PR introduce a breaking change?
Checklist before requesting a review