
Imp/tsmixer basic #2555

Open · wants to merge 22 commits into master

Conversation

eschibli (Contributor) commented Oct 8, 2024

Checklist before merging this PR:

  • Mentioned all issues that this PR fixes or addresses.
  • Summarized the updates of this PR under Summary.
  • Added an entry under Unreleased in the Changelog.

Implements #2510

Summary

Adds the option to project to the output temporal space at the end of TSMixer rather than at the beginning. This is how most of the results in the original Google Research paper were achieved (i.e., the architecture in Fig. 1 of the paper). It may allow higher performance in cases where past covariates are important, by allowing a more direct series of residual connections along the input time dimension.

I added support for future covariates by instead projecting them into the lookback temporal space, but this probably won't perform well in cases where they are more important than the historical targets and past covariates.
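For illustration, here is a minimal PyTorch sketch of the two placements of the temporal projection (all names here, such as `TinyTSMixer`, `time_proj`, and `mixing`, are illustrative stand-ins rather than the actual darts internals):

```python
import torch
import torch.nn as nn

class TinyTSMixer(nn.Module):
    def __init__(self, input_len, output_len, hidden_size, project_first_layer=True):
        super().__init__()
        self.project_first_layer = project_first_layer
        # linear layer acting on the time axis (applied to the transposed tensor)
        self.time_proj = nn.Linear(input_len, output_len)
        self.mixing = nn.Linear(hidden_size, hidden_size)  # stand-in for the mixing blocks

    def forward(self, x):  # x: (batch, input_len, hidden_size)
        if self.project_first_layer:
            # current darts behaviour: move to the output time dimension up front
            x = self.time_proj(x.transpose(1, 2)).transpose(1, 2)
        x = x + self.mixing(x)  # residual mixing in whichever time dimension we are in
        if not self.project_first_layer:
            # Fig. 1 of the paper: residual blocks stay in the input time
            # dimension; project to the output length only at the end
            x = self.time_proj(x.transpose(1, 2)).transpose(1, 2)
        return x
```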

Other Information

The original paper and source code do not clarify whether the final temporal projection should go before or after the final feature projection, as they hardcoded hidden_size to output_dim and therefore did not need a final feature projection. I erred on the side of putting the temporal projection first, as otherwise the common output_dim == 1 case could lead to unexpected, catastrophic compression before the temporal projection step.
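As a rough shape check (variable names and sizes here are illustrative, not the actual implementation):

```python
import torch
import torch.nn as nn

batch, input_len, output_len, hidden_size, output_dim = 8, 24, 12, 64, 1
x = torch.randn(batch, input_len, hidden_size)

time_proj = nn.Linear(input_len, output_len)  # temporal projection
fc_out = nn.Linear(hidden_size, output_dim)   # final feature projection

# chosen order: temporal projection first, so time mixing sees all hidden_size features
x = time_proj(x.transpose(1, 2)).transpose(1, 2)  # (batch, output_len, hidden_size)
x = fc_out(x)                                     # (batch, output_len, output_dim)
# reversing the order would compress the features to output_dim (often 1)
# before the temporal projection, discarding most of the representation
```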


madtoinou (Collaborator) commented:

Hi @eschibli,

First of all, thanks for opening this PR!

For the linting, it will make your life much easier if you follow these instructions, or you can also run the linter manually.

codecov bot commented Oct 27, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 94.10%. Comparing base (71a1902) to head (f9c0d15).

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2555      +/-   ##
==========================================
- Coverage   94.15%   94.10%   -0.05%     
==========================================
  Files         139      139              
  Lines       14992    15006      +14     
==========================================
+ Hits        14116    14122       +6     
- Misses        876      884       +8     


eschibli (Contributor, Author) commented:

> Hi @eschibli,
> ...

Thanks @madtoinou. I was not able to get Gradle running on my machine and didn't realize ruff was that easy to set up, so sorry for spamming your test pipeline.

I don't believe the failing mac build is a result of my changes, so it should be good for review now.

dennisbader (Collaborator) commented:

Hi @eschibli, thanks for the PR. Yes, the failing mac tests are unrelated to your PR, we're working on it :).
Also, give us some time to review, our capacity is currently a bit limited 🙏

eschibli (Contributor, Author) commented Nov 3, 2024

Understood, Dennis.

madtoinou (Collaborator) left a comment:

It looks great, thank you for this nice PR @eschibli! Some minor comments about the order of the operations/projections to make the flow more intuitive.

Could you also extend the TSMixer notebook to include a section where the difference in performance with project_first_layer=True/False and future covariates can be visualized?
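A possible shape for that notebook section, as a hedged sketch (it assumes this branch, where project_first_layer is the newly added option; the dataset and hyperparameters are placeholders):

```python
from darts.datasets import AirPassengersDataset
from darts.models import TSMixerModel

series = AirPassengersDataset().load()
train, val = series[:-36], series[-36:]

preds = {}
for project_first in (True, False):
    model = TSMixerModel(
        input_chunk_length=24,
        output_chunk_length=12,
        project_first_layer=project_first,  # option added in this PR
        n_epochs=20,
    )
    model.fit(train)
    preds[project_first] = model.predict(n=len(val))

# e.g. plot `val` against both forecasts to visualize the difference
```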

x = _time_to_feature(x)

# Otherwise, encoder-style model with residual blocks in input time dimension
# In the original paper this was not implimented for future covariates,

Suggested change:
- # In the original paper this was not implimented for future covariates,
+ # In the original paper this was not implemented for future covariates,

# In the original paper this was not implimented for future covariates,
# but rather than ignoring them or raising an error we remap them to the input time dimension.
# Suboptimal but may be useful in some cases.
elif self.future_cov_dim:

To make it a bit more intuitive, I would move this code below, inside the if self.future_cov_dim block, and change the condition to if not self.project_first_layer, in order to group the operations on each kind of feature (see the sketch after this list):

  1. "target"; project to output time dimension in the first layer if project_first_layer = True otherwise we stay in input time dimension
  2. "target"; do the feature_mixing_hist (not changed)
  3. "fut_cov"; project the future covariates to input time dimension if project_first_layer=False (the logic you added)
  4. concatenate the future covariates to the target features (not changed)
  5. static covariates (not changed)
  6. "target"; projection to the output time dimension if it did not occur earlier
  7. "target"; application of fc_out, critical for probabilistic forecasts

x = mixing_layer(x, x_static=x_static)

# If we are in the input time dimension, we need to project to the output time dimension.
# The original paper did not a fc_out layer (as hidden_size == output_dim)

Suggested change:
- # The original paper did not a fc_out layer (as hidden_size == output_dim)
+ # The original paper did not use a fc_out layer (as hidden_size == output_dim)

if project_first_layer:
assert model.model.sequence_length == output_len
else:
assert model.model.sequence_length == input_len

Can the test also include a call to predict() to make sure it works as well (even though the forward pass already occurs in the call to fit())?
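For example, a hedged sketch of such an extension (assuming the test already fits `model` on some series, and reusing output_len from the assertions above):

```python
# possible extension of the test: also exercise predict() after fit()
preds = model.predict(n=output_len)
assert len(preds) == output_len
```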
