Hello,
Thank you for sharing your code. I am trying to reproduce the results of the paper using the code provided in this repository, without any modifications. However, I noticed that the results I obtained are inconsistent with those reported in the paper.
I ran the experiments using the following configuration:
Datasets: traffic, power
The results I obtained are as follows:
Power dataset:

lstm_32 (best epoch, train_loss, train_mae, val_loss, val_mae, test_loss, test_mae):
186, 0.00054288, 0.01556623, 0.00057405, 0.01547826, 0.00053597, 0.01518926
(Paper reported: 0.628 ± 0.003)

ltc_32 (best epoch, train_loss, train_mae, val_loss, val_mae, test_loss, test_mae):
186, 0.0007, 0.0171, 0.0007, 0.0165, 0.0006, 0.0162
(Paper reported: 0.642 ± 0.021)

ctgru_32 (best epoch, train_loss, train_mae, val_loss, val_mae, test_loss, test_mae):
144, 0.00052888, 0.01522892, 0.00056067, 0.01497820, 0.00054100, 0.01477734
(Paper reported: 0.389 ± 0.076)
Results on the traffic dataset are similar to those on the power dataset: all methods perform far better than the numbers reported in the paper. However, for datasets such as Gesture and Occupancy, whose metric is accuracy, the results I obtained are consistent with those reported in the paper.
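One possible explanation I considered (purely a guess on my side, not something the README confirms): if the paper reports errors on the raw series while this code evaluates on min-max-normalized data, the MAE shrinks by exactly the normalization range, which could account for a gap like 0.6 vs. 0.015 on the regression datasets while leaving accuracy-based datasets untouched. A minimal sketch with synthetic data, where all names and values are my own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "power" readings and noisy predictions (illustrative only)
raw = rng.normal(loc=500.0, scale=100.0, size=1000)
pred_raw = raw + rng.normal(scale=10.0, size=1000)

# MAE on raw values
mae_raw = np.mean(np.abs(pred_raw - raw))

# MAE after min-max scaling both series with the same statistics
lo, hi = raw.min(), raw.max()
scaled = (raw - lo) / (hi - lo)
pred_scaled = (pred_raw - lo) / (hi - lo)
mae_scaled = np.mean(np.abs(pred_scaled - scaled))

# Min-max scaling is linear, so the MAE shrinks by exactly the data range
assert np.isclose(mae_scaled * (hi - lo), mae_raw)
```

If the paper's metric were computed before normalization and the repository's after, a constant-factor gap like the one above would be expected; accuracy, being scale-free, would be unaffected.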
Could you clarify whether there are any additional dataset-processing steps or hyperparameters not mentioned in the README that could affect the results?
Are there any updates to the code or specific random seeds used in the paper's experiments?
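For reference, this is the kind of seeding I used when checking run-to-run variance; `set_global_seed` is my own hypothetical helper (not part of this repository), and the TensorFlow calls are left as comments because the correct one depends on the TF version:

```python
import os
import random

import numpy as np


def set_global_seed(seed: int = 42) -> None:
    """Fix the RNG sources under our control (hypothetical helper)."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # TensorFlow 1.x: tf.set_random_seed(seed)
    # TensorFlow 2.x: tf.random.set_seed(seed)


# Sanity check: identical seeds give identical draws
set_global_seed(7)
a = np.random.rand(3)
set_global_seed(7)
b = np.random.rand(3)
assert np.allclose(a, b)
```

Even with all seeds fixed, GPU kernels can remain nondeterministic, so knowing the exact seeds and hardware used for the paper would still help.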
Environment Details:
OS: Linux & macOS; Python: 3.6 & 3.12; TensorFlow: 1.14 & 2.15; CPU:
Thank you in advance for your help! Please let me know if additional details are required.