The way that TF does checkpointing with:
tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)
seems to result in a lot of IO overhead: it saves the parameters to disk after every epoch, runs validation from that checkpoint, then restores the model and repeats.
Is there an easier way to keep the model in memory (as other frameworks such as PyTorch do) and save to disk only once at the end?
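As far as I can tell, Estimator has no in-memory checkpoint option (evaluation always restores from the latest checkpoint on disk), but the save/restore cycle can at least be made less frequent through RunConfig and EvalSpec. A minimal sketch; model_fn, train_input_fn, and eval_input_fn are placeholder names, not from the benchmark code:

import tensorflow as tf

# Checkpoint every 1000 steps instead of the default time-based schedule,
# so the save -> evaluate -> restore cycle triggers less often.
config = tf.estimator.RunConfig(save_checkpoints_steps=1000)
nn = tf.estimator.Estimator(model_fn=model_fn, config=config)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn)
# throttle_secs is the minimum wall-clock time between evaluations.
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, throttle_secs=600)

tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)

This only reduces the frequency of the disk round-trips, though; it doesn't eliminate them.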
For example, training directly on NumPy arrays:
nn.train(tf.estimator.inputs.numpy_input_fn(
    x=fake_X,
    y=fake_y,
    shuffle=False,
    num_epochs=EPOCHS,
    batch_size=BATCHSIZE))
takes 14min30s with TF and 16min52s with Keras. However, the train_and_evaluate loop takes 21min49s with TF and 20min16s with Keras.