Question about model evaluation #41

Open
myalos opened this issue Sep 4, 2023 · 2 comments

Comments

@myalos

myalos commented Sep 4, 2023

Thanks for sharing the code, I learned a lot from it.
I see that in eval.py the evaluation is performed on the target after it has been scaled with StandardScaler.
I think evaluating this way makes the reported MSE smaller than the MSE on the original scale.
I'm new to time series forecasting and don't know whether it is reasonable to evaluate on standardized data.

@JulesBelveze
Owner

Hey @myalos thanks for your interest!
Well, my idea behind using the scaled values to compute the MSE is that if you are trying to predict a target that takes values in the range [0, 0.1], you would get a much smaller MSE than if you are trying to predict a target that ranges between 0 and 1000, right? So by scaling the values you make sure that you end up with comparable numbers across targets.
What do you think?
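
For illustration, here is a toy sketch (not taken from the repo) of that effect: the same ~5% relative error gives very different raw MSE values depending on the target's range, while the MSE on standardized targets stays comparable.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

for hi in (0.1, 1000.0):  # targets in [0, 0.1] vs. [0, 1000]
    y_true = rng.uniform(0, hi, size=1000)
    # predictions with roughly 5% relative error
    y_pred = y_true * (1 + rng.normal(0, 0.05, size=1000))

    raw_mse = mean_squared_error(y_true, y_pred)

    # standardize both with a scaler fitted on the ground truth
    scaler = StandardScaler().fit(y_true.reshape(-1, 1))
    scaled_mse = mean_squared_error(
        scaler.transform(y_true.reshape(-1, 1)),
        scaler.transform(y_pred.reshape(-1, 1)),
    )
    print(f"range [0, {hi}]: raw MSE={raw_mse:.6f}, scaled MSE={scaled_mse:.6f}")
```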

@myalos
Author

myalos commented Sep 8, 2023

Thanks for the reply. I agree with what you said. I think MAE could be a better metric for comparing models, and RMSE in the original scale could help people better understand the power of the model, since values in [0, 1] are not very intuitive.
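
For reference, a minimal sketch (assuming a StandardScaler already fitted on the target, as in eval.py; the helper name is just for illustration) of how the predictions could be inverse-transformed to report MAE and RMSE in the original units:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def original_scale_metrics(y_true_scaled, y_pred_scaled, scaler):
    """Undo the standardization and compute MAE / RMSE in the target's units."""
    y_true = scaler.inverse_transform(y_true_scaled.reshape(-1, 1)).ravel()
    y_pred = scaler.inverse_transform(y_pred_scaled.reshape(-1, 1)).ravel()
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return mae, rmse
```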
