
spring cleaning: removing bugs and code smells #261

Merged: 20 commits merged into master on Apr 2, 2024

Conversation

@1b15 (Collaborator) commented Mar 5, 2024

@1b15 self-assigned this Mar 5, 2024

codecov bot commented Mar 5, 2024

Codecov Report

Attention: Patch coverage is 88.05970%, with 8 lines in your changes missing coverage. Please review.

Project coverage is 91.07%. Comparing base (ec2434e) to head (19212bb).

Files                                 Patch %   Lines
neurolib/models/model.py              87.76%    6 Missing ⚠️
neurolib/models/multimodel/model.py   77.78%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #261      +/-   ##
==========================================
+ Coverage   90.93%   91.07%   +0.13%     
==========================================
  Files          65       65              
  Lines        5206     5172      -34     
==========================================
- Hits         4734     4710      -24     
+ Misses        472      462      -10     
Flag        Coverage Δ
unittests   91.07% <88.06%> (+0.13%) ⬆️


@caglorithm (Member) left a comment


Thank you very much for this awesome PR. Please have a look at the comments I left.

The only thing I can see is that continue_run might not behave as expected and that there is a change of the time axis in storeOutputsAndStates.

Comment on lines -70 to -82
if self.BOLD.shape[1] == 0:
    # add new data
    self.t_BOLD = t_BOLD_resampled
    self.BOLD = BOLD_resampled
elif append is True:
    # append new data to old data
    self.t_BOLD = np.hstack((self.t_BOLD, t_BOLD_resampled))
    self.BOLD = np.hstack((self.BOLD, BOLD_resampled))
else:
    # overwrite old data
    self.t_BOLD = t_BOLD_resampled
    self.BOLD = BOLD_resampled


Member


You seem to have removed the append=True case, why?

Collaborator Author


The functionality for saving outputs is already in models/model.py.
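
For readers outside the diff, a minimal sketch of what such centralized output storage can look like (simplified names; the real method in neurolib/models/model.py does more than this):

import numpy as np

# Simplified sketch of centralized output storage, not the exact code in
# neurolib/models/model.py.
def store_output(outputs, name, data, append=False):
    if append and name in outputs:
        # append the new chunk to the existing output along the time axis
        outputs[name] = np.hstack((outputs[name], data))
    else:
        # first chunk, or an explicit overwrite
        outputs[name] = data
    return outputs

outputs = {}
store_output(outputs, "BOLD", np.zeros((2, 5)))
store_output(outputs, "BOLD", np.ones((2, 3)), append=True)
assert outputs["BOLD"].shape == (2, 8)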

@@ -311,7 +299,7 @@ def clearModelState(self):
         self.state = dotdict({})
         self.outputs = dotdict({})
         # reinitialize bold model
-        if self.params.get("bold"):
+        if self.boldInitialized:
Member


Does this do the same thing as before? Reads like: if it is initialized, initialize it.

Collaborator Author


The difference is that BOLD is now only initialized at the beginning of the first run. One only needs to clear the bold state with re-initialization if it has been initialized before.
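
A rough sketch of the intended lifecycle (a simplified stand-in, not the actual Model class; initializeBold here is a placeholder for whatever the model really does to set up its BOLD model):

# Simplified stand-in for the described lifecycle.
class ModelSketch:
    def __init__(self):
        self.boldInitialized = False

    def initializeBold(self):
        self.boldInitialized = True

    def run(self, bold=False):
        if bold and not self.boldInitialized:
            # BOLD is set up once, at the beginning of the first run
            self.initializeBold()

    def clearModelState(self):
        # re-initialize BOLD only if it was initialized before
        if self.boldInitialized:
            self.initializeBold()

m = ModelSketch()
m.clearModelState()   # BOLD was never initialized, nothing to clear
m.run(bold=True)      # first run sets up BOLD
m.clearModelState()   # now the BOLD state is actually reset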

Comment on lines 326 to 327
if not hasattr(self, "t"):
    raise ValueError("You tried using continue_run=True on the first run.")
Member


Why do we have to error here? The user could use continue_run=True in the first call as well.

Collaborator Author


addressed this in the latest commit

    data = data[:, :: self.sample_every]
else:
    raise ValueError(f"Don't know how to subsample data of shape {data.shape}.")
if data.shape[-1] >= self.params["duration"] - self.startindt:
Member


Does this if clause prevent any previous case from running?

Collaborator Author


This catches unintended subsampling of BOLD.
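
To make the intent concrete, a standalone sketch of the idea behind the guard (illustrative only, using a hypothetical maybe_subsample helper rather than neurolib's setOutput):

import numpy as np

# Hypothetical helper: only subsample data that still has the full
# integration length; shorter arrays (e.g. BOLD, which is already on a
# coarser time grid) are passed through untouched.
def maybe_subsample(data, sample_every, full_length):
    if data.shape[-1] == full_length:
        return data[..., ::sample_every]
    return data

raw = np.zeros((2, 10000))    # variable sampled at the integration step
bold = np.zeros((2, 50))      # already resampled to the BOLD time grid
assert maybe_subsample(raw, 10, 10000).shape == (2, 1000)
assert maybe_subsample(bold, 10, 10000).shape == (2, 50)   # left alone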

Comment on lines +136 to +138
if continue_run:
    self.setInitialValuesToLastState()
else:
Member


We would like to allow continue_run even if there hasn't been any previous run.

Collaborator Author


addressed this in the latest commit
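
For the record, one way the first-run case can be allowed (a self-contained sketch of the idea, not necessarily the exact fix that was committed):

# Sketch: continue_run=True only continues if there is previous state,
# otherwise it quietly behaves like a normal first run.
class RunSketch:
    def __init__(self):
        self.state = None              # no previous run yet

    def run(self, continue_run=False):
        if continue_run and self.state is not None:
            t0 = self.state["t"]       # pick up where the last run ended
        else:
            t0 = 0.0                   # fresh start (first run or restart)
        self.state = {"t": t0 + 1.0}   # pretend we simulated one chunk
        return t0

m = RunSketch()
assert m.run(continue_run=True) == 0.0   # first call: no error, fresh start
assert m.run(continue_run=True) == 1.0   # second call: continues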

Comment on lines +231 to +232
self.setOutput("t", results.time.values, append=append, removeICs=False)
self.setStateVariables("t", results.time.values)
Member


This changes the time axis.

Collaborator Author

@1b15 Mar 11, 2024


We previously had different time axis behaviours for model and multimodel:

  • for regular models, the time axis always started at 0 because it was reset for every chunk unless it was appended
  • for multimodels, the time axis started at self.start_t from continued runs

We adopted the multimodel behaviour with a more elegant implementation without self.start_t.
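
As a usage-level illustration of the adopted behaviour (exact values depend on the model and its dt):

from neurolib.models.aln import ALNModel

model = ALNModel()
model.params["duration"] = 1000  # ms

model.run()                       # first chunk: time axis starts near 0
t_end = model.t[-1]

model.run(continue_run=True)      # second chunk: time axis keeps increasing
# With the adopted behaviour, model.t now starts after t_end instead of
# being reset to 0, and no separate self.start_t bookkeeping is needed.
print(t_end, model.t[0])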

# set start t for next run for the last value now
self.start_t = self.t[-1]
if not hasattr(self, "t"):
    raise ValueError("You tried using continue_run=True on the first run.")


Suggested change
- raise ValueError("You tried using continue_run=True on the first run.")
+ raise ValueError("You tried using `continue_run=True` on the first run.")

Collaborator Author


thank you Vasco, I forgot to make the suggested change in the multimodel

@1b15 requested review from caglorithm and removed request for jajcayn and caglarcakan on March 12, 2024 15:08
@caglorithm (Member) left a comment


LGTM, amazing work, thank you

@caglorithm merged commit 6be0d37 into master on Apr 2, 2024
12 checks passed
@caglorithm deleted the spring_cleaning branch on April 2, 2024 13:06