Thanks for sharing the code, I learned a lot from it.
I see that in eval.py the evaluation is performed on the target after it has been scaled with StandardScaler.
I think evaluating on the scaled target will make the reported MSE much smaller than the MSE in the original units.
I'm new to time series forecasting and don't know whether it's reasonable to evaluate on standardized data.
Hey @myalos thanks for your interest!
Well, my idea behind using the scaled values to compute the MSE is that if you are trying to predict a target that takes values in the range [0, 0.1], you would get a much smaller MSE than if you were predicting a target that ranges between 0 and 1000, right? So by scaling the values you make sure you end up with MSEs that are comparable across targets.
What do you think?
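For what it's worth, here is a minimal sketch (independent of eval.py; the synthetic data and the ~5% relative error model are made up for illustration) showing why the raw MSE depends heavily on the target's range while the MSE on standardized targets stays comparable:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

for scale in (0.1, 1000.0):  # targets roughly in [0, 0.1] vs [0, 1000]
    y_true = rng.uniform(0, scale, size=(500, 1))
    # mock predictions with ~5% relative error on both targets
    y_pred = y_true * (1 + rng.normal(0, 0.05, size=y_true.shape))

    raw_mse = mean_squared_error(y_true, y_pred)

    scaler = StandardScaler().fit(y_true)  # fit on the targets
    scaled_mse = mean_squared_error(scaler.transform(y_true),
                                    scaler.transform(y_pred))

    print(f"range [0, {scale}]: raw MSE = {raw_mse:.6g}, scaled MSE = {scaled_mse:.6g}")
```

The raw MSEs differ by many orders of magnitude even though the relative error is the same, while the standardized MSEs are of the same order.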
Thanks for the reply. I agree with what you said. I think MAE could be a better metric for comparing models, and RMSE in the original scale would help people better understand the power of the model, since values in [0, 1] are not very intuitive.
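To illustrate what I mean, here is a hedged sketch (the variable names and synthetic data are mine, not from eval.py): predictions made on the standardized scale can be mapped back with inverse_transform before reporting MAE/RMSE, so the numbers are in the original units:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
y_train = rng.uniform(0, 1000, size=(1000, 1))   # synthetic training targets
y_true  = rng.uniform(0, 1000, size=(200, 1))    # synthetic test targets

scaler = StandardScaler().fit(y_train)           # fit on training targets only
y_true_scaled = scaler.transform(y_true)
# mock model output: predictions on the standardized scale
y_pred_scaled = y_true_scaled + rng.normal(0, 0.1, size=y_true_scaled.shape)

# undo the scaling before computing metrics, so they are in original units
y_pred = scaler.inverse_transform(y_pred_scaled)

mae  = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f} (original units)")
```

Reporting both the standardized MSE (for cross-dataset comparability) and MAE/RMSE in the original units (for interpretability) might be a reasonable compromise.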