- Potentially we could use different weights for each batch (a sketch of that pattern follows below): https://stackoverflow.com/questions/60927247/tensorflow-lstm-how-to-use-different-weights-for-each-batch
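As a rough illustration of that idea (my sketch, not taken from the linked answer; `make_lstm`, `weight_sets`, and the batch data are hypothetical), per-batch weights can be loaded with the standard Keras `get_weights()`/`set_weights()` calls inside a custom loop:

```python
# Minimal sketch: swap LSTM weights before each batch in a custom loop.
# make_lstm / weight_sets are illustrative names, not from the thread.
import numpy as np
import tensorflow as tf

def make_lstm(n_features, n_units=16):
    # Simple single-layer LSTM regression model.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_features)),
        tf.keras.layers.LSTM(n_units),
        tf.keras.layers.Dense(1),
    ])

model = make_lstm(n_features=1)

# Suppose each batch (e.g., each ensemble member) has its own weight set,
# stored as a list of arrays matching model.get_weights().
weight_sets = [model.get_weights() for _ in range(3)]  # placeholder copies
batches = [np.random.rand(8, 30, 1).astype("float32") for _ in range(3)]

for weights, x in zip(weight_sets, batches):
    model.set_weights(weights)        # load this batch's weights
    y_hat = model(x, training=False)  # predict with them
```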
- Depending on how good the LSTM model is, we may need to update the model parameters with the EnKF as well as the states. Below is a comparison of a bad model (air temp only, 5 epochs of training) vs. a better model (air temp only, 50 epochs of training) in a data assimilation scheme. The blue lines are the mean predictions from the LSTM without assimilating observations, the black line is the mean of the ensemble members, and the grey lines are the LSTM ensemble predictions with DA (n_ensembles = 100). The red points are the stream temperature observations.
I did not assimilate observations from the beginning of September to the end of October, just to see how the models would behave without observations. Both models fairly quickly return to the mean of the LSTM predictions, but the bad model has worse mean predictions and higher uncertainty.
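For concreteness, here is a minimal sketch of a stochastic EnKF analysis step with an augmented state vector, so selected parameters are updated alongside the states. This is my illustration, not the code behind the figures; the dimensions, observation operator `H`, and error variance `R` are placeholders:

```python
# Minimal sketch of a stochastic EnKF analysis step with state augmentation.
# Dimensions, H, R, and the observation value are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_ens = 100          # ensemble size, as in the experiments above
n_state = 5          # e.g., LSTM hidden-state elements being updated
n_param = 2          # parameters appended to the state (augmentation)
n_aug = n_state + n_param

# Ensemble matrix: each column is one augmented member [states; params].
X = rng.normal(size=(n_aug, n_ens))

# Observation operator: the observed stream temperature maps to the first
# state element; parameters are updated only through their covariances.
H = np.zeros((1, n_aug))
H[0, 0] = 1.0

obs = np.array([12.3])      # observed stream temperature (illustrative)
R = np.array([[0.5**2]])    # observation error variance

def enkf_update(X, obs, H, R, rng):
    """Stochastic EnKF analysis: perturb obs, apply the Kalman gain."""
    n_aug, n_ens = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                                 # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # One perturbed observation per member (stochastic EnKF).
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_ens).T
    return X + K @ (obs_pert - H @ X)

X_a = enkf_update(X, obs, H, R, rng)
```

Because the parameters sit in the augmented state, they get nudged through their sample covariance with the observed temperature, which is the standard state-augmentation trick for joint state/parameter estimation.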
[Figure: LSTM trained with air temp, 50 epochs, 100 ensembles]
[Figure: LSTM trained with air temp, 5 epochs, 100 ensembles]